Dataset columns: input (string, 2.6k to 28.8k characters); output (string, 4 to 150 characters).
Context: as subjects perceive the sensory world, different stimuli elicit a number of neural representations. here, a subjective distance between stimuli is defined, measuring the degree of similarity between the underlying representations. as an example, the subjective distance between different locations in space is calculated from the activity of rodent hippocampal place cells and lateral septal cells. such a distance is compared to the real distance between locations. as the number of sampled neurons increases, the subjective distance shows a tendency to resemble the metrics of real space. two planetary nebulae are shown to belong to the sagittarius dwarf galaxy, on the basis of their radial velocities. this is only the second dwarf spheroidal galaxy, after fornax, found to contain planetary nebulae. their existence confirms that this galaxy is at least as massive as the fornax dwarf spheroidal, which has a single planetary nebula, and suggests a mass of a few times 10^7 solar masses. the two planetary nebulae are located along the major axis of the galaxy, near the base of the tidal tail. there is a further candidate, situated at a very large distance along the direction of the tidal tail, for which no velocity measurement is available. the location of the planetary nebulae and globular clusters of the sagittarius dwarf galaxy suggests that a significant fraction of its mass is contained within the tidal tail. oscillations of the sun have been used to understand its interior structure. the extension of similar studies to more distant stars has raised many difficulties despite the strong efforts of the international community over the past decades. the corot (convection, rotation and planetary transits) satellite, launched in december 2006, has now measured oscillations and the stellar granulation signature in three main-sequence stars that are noticeably hotter than the sun. the oscillation amplitudes are about 1.5 times as large as those in the sun; the stellar granulation is up to three times as high. the stellar amplitudes are about 25% below the theoretical values, providing a measurement of the nonadiabaticity of the process ruling the oscillations in the outer layers of the stars. recent surveys have revealed a lack of close-in planets around evolved stars more massive than 1.2 msun. such planets are common around solar-mass stars. we have calculated the orbital evolution of planets around stars with a range of initial masses, and have shown how planetary orbits are affected by the evolution of the stars all the way to the tip of the red giant branch (rgb). we find that tidal interaction can lead to the engulfment of close-in planets by evolved stars. the engulfment is more efficient for more-massive planets and less-massive stars. these results may explain the observed semi-major axis distribution of planets around evolved stars with masses larger than 1.5 msun. our results also suggest that massive planets may form more efficiently around intermediate-mass stars. "... for that reason, they constructed brass globes, as though after the figure of the universe." the influential theologian and philosopher saint augustine, one of the four great church fathers of the western church, similarly objected to the "fable" of antipodes: but as to the fable that there are antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours: that is on no ground credible.
and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture, on the ground that the earth is suspended within the concavity of the sky, and that it has as much room on the one side of it as on the other : hence they say that the part that is beneath must also be inhabited. but they do not remark that, although it be supposed or scientifically demonstrated that the world is of a round and spherical form, yet it does not follow that the other side of the earth is bare of water ; nor even, though it be bare, does it immediately follow that it is peopled. for scripture, which proves the truth of its historical statements by the accomplishment of its prophecies, gives no false information ; and it is too absurd to say, that some men might have taken ship and traversed the whole wide ocean, and crossed from this side of the world to the other, and that thus even the inhabitants of that distant region are descended from that one first man. some historians do not view augustine ' s scriptural commentaries as endorsing any particular cosmological model, endorsing instead the view that augustine shared the common view of his contemporaries that the earth is spherical, in line with his endorsement of science in de genesi ad litteram. c. p. e. nothaft, responding to writers like leo ferrari who described augustine as endorsing a flat earth, says that "... other recent writers on the subject treat augustine ' s acceptance of the earth ' s spherical shape as a well - established fact ". while it always remained a minority view, from the mid - fourth to the seventh centuries ad, the flat - earth view experienced a revival, around the time when diodorus of tarsus founded the exegetical school known as the school of antioch, which sought to counter what he saw as the pagan cosmology of the greeks with a return to the traditional cosmology. the writings beacon transmits two signals simultaneously on different frequencies. a directional antenna transmits a beam of radio waves that rotates like a lighthouse at a fixed rate, 30 times per second. when the directional beam is facing north, an omnidirectional antenna transmits a pulse. by measuring the difference in phase of these two signals, an aircraft can determine its bearing ( or " radial " ) from the station accurately. by taking a bearing on two vor beacons an aircraft can determine its position ( called a " fix " ) to an accuracy of about 90 metres ( 300 ft ). most vor beacons also have a distance measuring capability, called distance measuring equipment ( dme ) ; these are called vor / dme ' s. the aircraft transmits a radio signal to the vor / dme beacon and a transponder transmits a return signal. from the propagation delay between the transmitted and received signal the aircraft can calculate its distance from the beacon. this allows an aircraft to determine its location " fix " from only one vor beacon. since line - of - sight vhf frequencies are used vor beacons have a range of about 200 miles for aircraft at cruising altitude. tacan is a similar military radio beacon system which transmits in 962 – 1213 mhz, and a combined vor and tacan beacon is called a vortac. the number of vor beacons is declining as aviation switches to the rnav system that relies on global positioning system satellite navigation. instrument landing system ( ils ) - a short range radio navigation aid at airports which guides aircraft landing in low visibility conditions. 
it consists of multiple antennas at the end of each runway that radiate two beams of radio waves along the approach to the runway : the localizer ( 108 to 111. 95 mhz frequency ), which provides horizontal guidance, a heading line to keep the aircraft centered on the runway, and the glideslope ( 329. 15 to 335 mhz ) for vertical guidance, to keep the aircraft descending at the proper rate for a smooth touchdown at the correct point on the runway. each aircraft has a receiver instrument and antenna which receives the beams, with an indicator to tell the pilot whether he is on the correct horizontal and vertical approach. the ils beams are receivable for at least 15 miles, and have a radiated power of 25 watts. ils systems at airports are being replaced by systems that use satellite navigation. non - directional beacon ( ndb ) – legacy fixed radio beacons used before the vo we bring you, as usual, the sun and moon and stars, plus some galaxies and a new section on astrobiology. some highlights are short ( the newly identified class of gamma - ray bursts, and the deep impact on comet 9p / tempel 1 ), some long ( the age of the universe, which will be found to have the earth at its center ), and a few metonymic, for instance the term " down - sizing " to describe the evolution of star formation rates with redshift. observed solar neutrino fluxes are employed to constrain the interior composition of the sun. including the effects of neutrino flavor mixing, the results from homestake, sudbury, and gallium experiments constrain the mg, si, and fe abundances in the solar interior to be within a factor 0. 89 to 1. 34 of the surface values with 68 % confidence. if the o and / or ne abundances are increased in the interior to resolve helioseismic discrepancies with recent standard solar models, then the nominal interior mg, si, and fe abundances are constrained to a range of 0. 83 to 1. 24 relative to the surface. additional research is needed to determine whether the sun ' s interior is metal poor relative to its surface. the luminosity variation of a stellar source due to the gravitational microlensing effect can be considered also if the light rays are defocused ( instead of focused ) toward the observer. in this case, we should detect a gap instead of a peak in the light curve of the source. actually, we describe how the phenomenon depends on the relative position of source and lens with respect to the observer : if the lens is between, we have focusing, if the lens is behind, we have defocusing. it is shown that the number of events with predicted gaps is equal to the number of events with peaks in the light curves. a 4mj planet with a 15. 8day orbital period has been detected from very precise radial velocity measurements with the coralie echelle spectrograph. a second remote and more massive companion has also been detected. all the planetary companions so far detected in orbit closer than 0. 08 au have a parent star with a statistically higher metal content compared to the metallicity distribution of other stars with planets. different processes occuring during their formation may provide a possible explanation for this observation. Question: The distance between the Sun and the next closest star, Proxima Centauri, is most accurately measured in A) magnitudes. B) light years. C) perigees. D) red shifts.
B) light years.
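The VOR and DME descriptions in the context above reduce to two small calculations: the bearing ("radial") is the phase difference between the 30-per-second rotating directional signal and the north-referenced omnidirectional pulse, and the DME distance is half the round-trip propagation delay multiplied by the speed of light. The sketch below illustrates both under those assumptions; the function names are ours, and the fixed 50 microsecond transponder reply delay is a standard DME detail that is not stated in the passage.

```python
C = 299_792_458.0  # speed of light in m/s


def vor_radial_deg(reference_phase_deg: float, variable_phase_deg: float) -> float:
    """Bearing ("radial") from a VOR station, in degrees from north.

    The station radiates an omnidirectional 30 Hz reference signal and a
    rotating directional signal; the receiver's bearing is the phase
    difference between the two, wrapped into the range 0-360 degrees.
    """
    return (variable_phase_deg - reference_phase_deg) % 360.0


def dme_slant_range_m(round_trip_delay_s: float,
                      transponder_reply_delay_s: float = 50e-6) -> float:
    """Slant range to a DME transponder from the interrogation round-trip time.

    Real DME ground stations insert a fixed reply delay (nominally 50 us,
    an added assumption here, not stated in the passage) that is subtracted
    before halving the remaining light-travel time.
    """
    one_way_s = (round_trip_delay_s - transponder_reply_delay_s) / 2.0
    return C * one_way_s


if __name__ == "__main__":
    # A 90 degree phase difference puts the aircraft due east of the station.
    print(vor_radial_deg(0.0, 90.0))                     # 90.0
    # 250 us round trip = 200 us of travel after the reply delay, ~30 km out.
    print(round(dme_slant_range_m(250e-6) / 1000.0, 1))  # ~30.0
```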
Context: subsea engineering and the ability to detect, track and destroy submarines ( anti - submarine warfare ) required the parallel development of a host of marine scientific instrumentation and sensors. visible light is not transferred far underwater, so the medium for transmission of data is primarily acoustic. high - frequency sound is used to measure the depth of the ocean, determine the nature of the seafloor, and detect submerged objects. the higher the frequency, the higher the definition of the data that is returned. sound navigation and ranging or sonar was developed during the first world war to detect submarines, and has been greatly refined through to the present day. submarines similarly use sonar equipment to detect and target other submarines and surface ships, and to detect submerged obstacles such as seamounts that pose a navigational obstacle. simple echo - sounders point straight down and can give an accurate reading of ocean depth ( or look up at the underside of sea - ice ). more advanced echo sounders use a fan - shaped beam or sound, or multiple beams to derive highly detailed images of the ocean floor. high power systems can penetrate the soil and seabed rocks to give information about the geology of the seafloor, and are widely used in geophysics for the discovery of hydrocarbons, or for engineering survey. for close - range underwater communications, optical transmission is possible, mainly using blue lasers. these have a high bandwidth compared with acoustic systems, but the range is usually only a few tens of metres, and ideally at night. as well as acoustic communications and navigation, sensors have been developed to measure ocean parameters such as temperature, salinity, oxygen levels and other properties including nitrate levels, levels of trace chemicals and environmental dna. the industry trend has been towards smaller, more accurate and more affordable systems so that they can be purchased and used by university departments and small companies as well as large corporations, research organisations and governments. the sensors and instruments are fitted to autonomous and remotely - operated systems as well as ships, and are enabling these systems to take on tasks that hitherto required an expensive human - crewed platform. manufacture of marine sensors and instruments mainly takes place in asia, europe and north america. products are advertised in specialist journals, and through trade shows such as oceanology international and ocean business which help raise awareness of the products. = = = environmental engineering = = = in every coastal and offshore project, environmental sustainability is an important consideration for the preservation of ocean ecosystems and natural resources. instances in which marine engineers benefit from knowledge of environmental engineering include creation of fisheries, clean missiles, ships, vehicles, and also to map weather patterns and terrain. a radar set consists of a transmitter and receiver. the transmitter emits a narrow beam of radio waves which is swept around the surrounding space. when the beam strikes a target object, radio waves are reflected back to the receiver. the direction of the beam reveals the object ' s location. since radio waves travel at a constant speed close to the speed of light, by measuring the brief time delay between the outgoing pulse and the received " echo ", the range to the target can be calculated. the targets are often displayed graphically on a map display called a radar screen. 
doppler radar can measure a moving object ' s velocity, by measuring the change in frequency of the return radio waves due to the doppler effect. radar sets mainly use high frequencies in the microwave bands, because these frequencies create strong reflections from objects the size of vehicles and can be focused into narrow beams with compact antennas. parabolic ( dish ) antennas are widely used. in most radars the transmitting antenna also serves as the receiving antenna ; this is called a monostatic radar. a radar which uses separate transmitting and receiving antennas is called a bistatic radar. airport surveillance radar – in aviation, radar is the main tool of air traffic control. a rotating dish antenna sweeps a vertical fan - shaped beam of microwaves around the airspace and the radar set shows the location of aircraft as " blips " of light on a display called a radar screen. airport radar operates at 2. 7 – 2. 9 ghz in the microwave s band. in large airports the radar image is displayed on multiple screens in an operations room called the tracon ( terminal radar approach control ), where air traffic controllers direct the aircraft by radio to maintain safe aircraft separation. secondary surveillance radar – aircraft carry radar transponders, transceivers which when triggered by the incoming radar signal transmit a return microwave signal. this causes the aircraft to show up more strongly on the radar screen. the radar which triggers the transponder and receives the return beam, usually mounted on top of the primary radar dish, is called the secondary surveillance radar. since radar cannot measure an aircraft ' s altitude with any accuracy, the transponder also transmits back the aircraft ' s altitude measured by its altimeter, and an id number identifying the aircraft, which is displayed on the radar screen. electronic countermeasures ( ecm ) – military defensive electronic systems designed to degrade enemy radar effectiveness, or deceive it the group velocity of light has been measured at eight different wavelengths between 385 nm and 532 nm in the mediterranean sea at a depth of about 2. 2 km with the antares optical beacon systems. a parametrisation of the dependence of the refractive index on wavelength based on the salinity, pressure and temperature of the sea water at the antares site is in good agreement with these measurements. beacon transmits two signals simultaneously on different frequencies. a directional antenna transmits a beam of radio waves that rotates like a lighthouse at a fixed rate, 30 times per second. when the directional beam is facing north, an omnidirectional antenna transmits a pulse. by measuring the difference in phase of these two signals, an aircraft can determine its bearing ( or " radial " ) from the station accurately. by taking a bearing on two vor beacons an aircraft can determine its position ( called a " fix " ) to an accuracy of about 90 metres ( 300 ft ). most vor beacons also have a distance measuring capability, called distance measuring equipment ( dme ) ; these are called vor / dme ' s. the aircraft transmits a radio signal to the vor / dme beacon and a transponder transmits a return signal. from the propagation delay between the transmitted and received signal the aircraft can calculate its distance from the beacon. this allows an aircraft to determine its location " fix " from only one vor beacon. since line - of - sight vhf frequencies are used vor beacons have a range of about 200 miles for aircraft at cruising altitude. 
tacan is a similar military radio beacon system which transmits in 962 – 1213 mhz, and a combined vor and tacan beacon is called a vortac. the number of vor beacons is declining as aviation switches to the rnav system that relies on global positioning system satellite navigation. instrument landing system ( ils ) - a short range radio navigation aid at airports which guides aircraft landing in low visibility conditions. it consists of multiple antennas at the end of each runway that radiate two beams of radio waves along the approach to the runway : the localizer ( 108 to 111. 95 mhz frequency ), which provides horizontal guidance, a heading line to keep the aircraft centered on the runway, and the glideslope ( 329. 15 to 335 mhz ) for vertical guidance, to keep the aircraft descending at the proper rate for a smooth touchdown at the correct point on the runway. each aircraft has a receiver instrument and antenna which receives the beams, with an indicator to tell the pilot whether he is on the correct horizontal and vertical approach. the ils beams are receivable for at least 15 miles, and have a radiated power of 25 watts. ils systems at airports are being replaced by systems that use satellite navigation. non - directional beacon ( ndb ) – legacy fixed radio beacons used before the vo ocean, determine the nature of the seafloor, and detect submerged objects. the higher the frequency, the higher the definition of the data that is returned. sound navigation and ranging or sonar was developed during the first world war to detect submarines, and has been greatly refined through to the present day. submarines similarly use sonar equipment to detect and target other submarines and surface ships, and to detect submerged obstacles such as seamounts that pose a navigational obstacle. simple echo - sounders point straight down and can give an accurate reading of ocean depth ( or look up at the underside of sea - ice ). more advanced echo sounders use a fan - shaped beam or sound, or multiple beams to derive highly detailed images of the ocean floor. high power systems can penetrate the soil and seabed rocks to give information about the geology of the seafloor, and are widely used in geophysics for the discovery of hydrocarbons, or for engineering survey. for close - range underwater communications, optical transmission is possible, mainly using blue lasers. these have a high bandwidth compared with acoustic systems, but the range is usually only a few tens of metres, and ideally at night. as well as acoustic communications and navigation, sensors have been developed to measure ocean parameters such as temperature, salinity, oxygen levels and other properties including nitrate levels, levels of trace chemicals and environmental dna. the industry trend has been towards smaller, more accurate and more affordable systems so that they can be purchased and used by university departments and small companies as well as large corporations, research organisations and governments. the sensors and instruments are fitted to autonomous and remotely - operated systems as well as ships, and are enabling these systems to take on tasks that hitherto required an expensive human - crewed platform. manufacture of marine sensors and instruments mainly takes place in asia, europe and north america. products are advertised in specialist journals, and through trade shows such as oceanology international and ocean business which help raise awareness of the products. 
= = = environmental engineering = = = in every coastal and offshore project, environmental sustainability is an important consideration for the preservation of ocean ecosystems and natural resources. instances in which marine engineers benefit from knowledge of environmental engineering include creation of fisheries, clean - up of oil spills, and creation of coastal solutions. = = = offshore systems = = = a number of systems designed fully or in part by marine engineers are used offshore - far away from coastlines. = = = = offshore oil platforms = = = = the design of offshore oil platforms involves a number of an important question of theoretical physics is whether sound is able to propagate in vacuums at all and if this is the case, then it must lead to the reinterpretation of one zero - restmass particle which corresponds to vacuum - sound waves. taking the electron - neutrino as the corresponding particle, its observed non - vanishing rest - energy may only appear for neutrino - propagation inside material media. the idea may also influence the physics of dense matter, restricting the maximum speed of sound, both in vacuums and in matter to the speed of light. even artillery shells to their target, and handheld gps receivers are produced for hikers and the military. radio beacon – a fixed location terrestrial radio transmitter which transmits a continuous radio signal used by aircraft and ships for navigation. the locations of beacons are plotted on navigational maps used by aircraft and ships. vhf omnidirectional range ( vor ) – a worldwide aircraft radio navigation system consisting of fixed ground radio beacons transmitting between 108. 00 and 117. 95 mhz in the very high frequency ( vhf ) band. an automated navigational instrument on the aircraft displays a bearing to a nearby vor transmitter. a vor beacon transmits two signals simultaneously on different frequencies. a directional antenna transmits a beam of radio waves that rotates like a lighthouse at a fixed rate, 30 times per second. when the directional beam is facing north, an omnidirectional antenna transmits a pulse. by measuring the difference in phase of these two signals, an aircraft can determine its bearing ( or " radial " ) from the station accurately. by taking a bearing on two vor beacons an aircraft can determine its position ( called a " fix " ) to an accuracy of about 90 metres ( 300 ft ). most vor beacons also have a distance measuring capability, called distance measuring equipment ( dme ) ; these are called vor / dme ' s. the aircraft transmits a radio signal to the vor / dme beacon and a transponder transmits a return signal. from the propagation delay between the transmitted and received signal the aircraft can calculate its distance from the beacon. this allows an aircraft to determine its location " fix " from only one vor beacon. since line - of - sight vhf frequencies are used vor beacons have a range of about 200 miles for aircraft at cruising altitude. tacan is a similar military radio beacon system which transmits in 962 – 1213 mhz, and a combined vor and tacan beacon is called a vortac. the number of vor beacons is declining as aviation switches to the rnav system that relies on global positioning system satellite navigation. instrument landing system ( ils ) - a short range radio navigation aid at airports which guides aircraft landing in low visibility conditions. 
it consists of multiple antennas at the end of each runway that radiate two beams of radio waves along the approach to the runway : the localizer ( 108 to 111. 95 mhz frequency ), which provides horizontal guidance, a heading line to keep the aircraft centered on beam reveals the object ' s location. since radio waves travel at a constant speed close to the speed of light, by measuring the brief time delay between the outgoing pulse and the received " echo ", the range to the target can be calculated. the targets are often displayed graphically on a map display called a radar screen. doppler radar can measure a moving object ' s velocity, by measuring the change in frequency of the return radio waves due to the doppler effect. radar sets mainly use high frequencies in the microwave bands, because these frequencies create strong reflections from objects the size of vehicles and can be focused into narrow beams with compact antennas. parabolic ( dish ) antennas are widely used. in most radars the transmitting antenna also serves as the receiving antenna ; this is called a monostatic radar. a radar which uses separate transmitting and receiving antennas is called a bistatic radar. airport surveillance radar – in aviation, radar is the main tool of air traffic control. a rotating dish antenna sweeps a vertical fan - shaped beam of microwaves around the airspace and the radar set shows the location of aircraft as " blips " of light on a display called a radar screen. airport radar operates at 2. 7 – 2. 9 ghz in the microwave s band. in large airports the radar image is displayed on multiple screens in an operations room called the tracon ( terminal radar approach control ), where air traffic controllers direct the aircraft by radio to maintain safe aircraft separation. secondary surveillance radar – aircraft carry radar transponders, transceivers which when triggered by the incoming radar signal transmit a return microwave signal. this causes the aircraft to show up more strongly on the radar screen. the radar which triggers the transponder and receives the return beam, usually mounted on top of the primary radar dish, is called the secondary surveillance radar. since radar cannot measure an aircraft ' s altitude with any accuracy, the transponder also transmits back the aircraft ' s altitude measured by its altimeter, and an id number identifying the aircraft, which is displayed on the radar screen. electronic countermeasures ( ecm ) – military defensive electronic systems designed to degrade enemy radar effectiveness, or deceive it with false information, to prevent enemies from locating local forces. it often consists of powerful microwave transmitters that can mimic enemy radar signals to create false target indications on the enemy radar screens. marine radar – an s or x band radar on ships used to detect nearby ships and obstructions like bridges. a rotating antenna sweeps a vertical ##directional range ( vor ) – a worldwide aircraft radio navigation system consisting of fixed ground radio beacons transmitting between 108. 00 and 117. 95 mhz in the very high frequency ( vhf ) band. an automated navigational instrument on the aircraft displays a bearing to a nearby vor transmitter. a vor beacon transmits two signals simultaneously on different frequencies. a directional antenna transmits a beam of radio waves that rotates like a lighthouse at a fixed rate, 30 times per second. when the directional beam is facing north, an omnidirectional antenna transmits a pulse. 
by measuring the difference in phase of these two signals, an aircraft can determine its bearing ( or " radial " ) from the station accurately. by taking a bearing on two vor beacons an aircraft can determine its position ( called a " fix " ) to an accuracy of about 90 metres ( 300 ft ). most vor beacons also have a distance measuring capability, called distance measuring equipment ( dme ) ; these are called vor / dme ' s. the aircraft transmits a radio signal to the vor / dme beacon and a transponder transmits a return signal. from the propagation delay between the transmitted and received signal the aircraft can calculate its distance from the beacon. this allows an aircraft to determine its location " fix " from only one vor beacon. since line - of - sight vhf frequencies are used vor beacons have a range of about 200 miles for aircraft at cruising altitude. tacan is a similar military radio beacon system which transmits in 962 – 1213 mhz, and a combined vor and tacan beacon is called a vortac. the number of vor beacons is declining as aviation switches to the rnav system that relies on global positioning system satellite navigation. instrument landing system ( ils ) - a short range radio navigation aid at airports which guides aircraft landing in low visibility conditions. it consists of multiple antennas at the end of each runway that radiate two beams of radio waves along the approach to the runway : the localizer ( 108 to 111. 95 mhz frequency ), which provides horizontal guidance, a heading line to keep the aircraft centered on the runway, and the glideslope ( 329. 15 to 335 mhz ) for vertical guidance, to keep the aircraft descending at the proper rate for a smooth touchdown at the correct point on the runway. each aircraft has a receiver instrument and antenna which receives the beams, with an indicator to tell the pilot whether he is radio waves. the radio waves carry the information to the receiver location. at the receiver, the radio wave induces a tiny oscillating voltage in the receiving antenna – a weaker replica of the current in the transmitting antenna. this voltage is applied to the radio receiver, which amplifies the weak radio signal so it is stronger, then demodulates it, extracting the original modulation signal from the modulated carrier wave. the modulation signal is converted by a transducer back to a human - usable form : an audio signal is converted to sound waves by a loudspeaker or earphones, a video signal is converted to images by a display, while a digital signal is applied to a computer or microprocessor, which interacts with human users. the radio waves from many transmitters pass through the air simultaneously without interfering with each other because each transmitter ' s radio waves oscillate at a different frequency, measured in hertz ( hz ), kilohertz ( khz ), megahertz ( mhz ) or gigahertz ( ghz ). the receiving antenna typically picks up the radio signals of many transmitters. the receiver uses tuned circuits to select the radio signal desired out of all the signals picked up by the antenna and reject the others. a tuned circuit acts like a resonator, similar to a tuning fork. it has a natural resonant frequency at which it oscillates. the resonant frequency of the receiver ' s tuned circuit is adjusted by the user to the frequency of the desired radio station ; this is called tuning. the oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. 
radio signals at other frequencies are blocked by the tuned circuit and not passed on. === bandwidth === a modulated radio wave, carrying an information signal, occupies a range of frequencies. the information in a radio signal is usually concentrated in narrow frequency bands called sidebands (sb) just above and below the carrier frequency. the width in hertz of the frequency range that the radio signal occupies, the highest frequency minus the lowest frequency, is called its bandwidth (bw). for any given signal-to-noise ratio, a given bandwidth can carry the same amount of information regardless of where in the radio frequency spectrum it is located; bandwidth is a measure of information-carrying capacity. the bandwidth required by a radio transmission depends on the data rate of Question: Which of these uses sound waves to locate underwater objects? A) radar B) sonar C) telescope D) microscope
B) sonar
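The sonar and radar descriptions above share the same echo-ranging arithmetic: range is half the two-way echo time multiplied by the propagation speed (roughly 1500 m/s for sound in seawater, the speed of light for radio), and Doppler radar recovers a target's radial speed from the fractional frequency shift of the return. A minimal illustrative sketch, with names and example numbers of our own choosing:

```python
C = 299_792_458.0            # speed of light in m/s (radar, radio)
SOUND_IN_SEAWATER = 1500.0   # typical speed of sound in seawater, m/s (sonar)


def echo_range_m(round_trip_time_s: float, wave_speed_m_s: float) -> float:
    """Distance to a reflecting target from the two-way echo delay."""
    return wave_speed_m_s * round_trip_time_s / 2.0


def doppler_radial_speed_m_s(transmit_freq_hz: float, freq_shift_hz: float) -> float:
    """Radial speed of a radar target from the Doppler shift of its echo.

    For a monostatic radar the echo is shifted by about 2*v*f/c, so
    v is roughly c * df / (2 * f); positive means the target is closing.
    """
    return C * freq_shift_hz / (2.0 * transmit_freq_hz)


if __name__ == "__main__":
    # Sonar: a 2.0 s echo in seawater means the seafloor is about 1500 m down.
    print(echo_range_m(2.0, SOUND_IN_SEAWATER))              # 1500.0
    # Radar: a 1.2 ms echo corresponds to roughly 180 km of range.
    print(round(echo_range_m(1.2e-3, C) / 1000.0, 1))        # ~179.9
    # A 2.9 GHz airport radar seeing a +540 Hz shift implies ~28 m/s closing speed.
    print(round(doppler_radial_speed_m_s(2.9e9, 540.0), 1))  # ~27.9
```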
Context: have evolved from the earliest emergence of life to present day. earth formed about 4. 5 billion years ago and all life on earth, both living and extinct, descended from a last universal common ancestor that lived about 3. 5 billion years ago. geologists have developed a geologic time scale that divides the history of the earth into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became ##sphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. 
resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as ##ning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. 
stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) – including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. the new photosynthetic plants ( along with their algal relatives ) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen - free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years. among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life ' s basic ingredients : energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability. = = = human nutrition = = = virtually all staple foods come either directly from primary production by plants, or indirectly from animals that eat them. plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. this is what ecologists call the first trophic level. the modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most ##rozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. 
biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokar which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. 
during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures , tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. 
bacteria inhabit soil, water, acidic hot springs, radioactive ranks varying from family to subgenus have terms for their study, including agrostology ( or graminology ) for the study of grasses, synantherology for the study of composites, and batology for the study of brambles. study can also be divided by guild rather than clade or grade. for example, dendrology is the study of woody plants. many divisions of biology have botanical subfields. these are commonly denoted by prefixing the word plant ( e. g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics ), or prefixing or substituting the prefix phyto - ( e. g. phytochemistry, phytogeography ). the study of fossil plants is called palaeobotany. other fields are denoted by adding or substituting the word botany ( e. g. systematic botany ). phytosociology is a subfield of plant ecology that classifies and studies communities of plants. the intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. different parts of plants also give rise to their own subfields, including xylology, carpology ( or fructology ), and palynology, these being the study of wood, fruit and pollen / spores respectively. botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing . microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. 
ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of . the first major technologies were tied to survival, hunting, and food preparation. stone tools and weapons, fire, and clothing were technological developments of major importance during this period. human ancestors have been using stone and other tools since long before the emergence of homo sapiens approximately 300, 000 years ago. the earliest direct evidence of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. 
the upper paleolithic, beginning approximately 40,000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10,000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic/mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, Question: Paleontologists are scientists who study evidence of past life on Earth. Which method do paleontologists most likely use to determine the forms of life that existed millions of years ago? A) examine current species of plants and animals B) research past species in the library C) interview older scientists D) examine fossil records
D) examine fossil records
Context: this process may release or absorb energy. when the resulting nucleus is lighter than that of iron, energy is normally released ; when the nucleus is heavier than that of iron, energy is generally absorbed. this process of fusion occurs in stars, which derive their energy from hydrogen and helium. they form, through stellar nucleosynthesis, the light elements ( lithium to calcium ) as well as some of the heavy elements ( beyond iron and nickel, via the s - process ). the remaining abundance of heavy elements, from nickel to uranium and beyond, is due to supernova nucleosynthesis, the r - process. of course, these natural processes of astrophysics are not examples of nuclear " technology ". because of the very strong repulsion of nuclei, fusion is difficult to achieve in a controlled fashion. hydrogen bombs, formally known as thermonuclear weapons, obtain their enormous destructive power from fusion, but their energy cannot be controlled. controlled fusion is achieved in particle accelerators ; this is how many synthetic elements are produced. a fusor can also produce controlled fusion and is a useful neutron source. however, both of these devices operate at a net energy loss. controlled, viable fusion power has proven elusive, despite the occasional hoax. technical and theoretical difficulties have hindered the development of working civilian fusion technology, though research continues to this day around the world. nuclear fusion was initially pursued only in theoretical stages during world war ii, when scientists on the manhattan project ( led by edward teller ) investigated it as a method to build a bomb. the project abandoned fusion after concluding that it would require a fission reaction to detonate. it took until 1952 for the first full hydrogen bomb to be detonated, so - called because it used reactions between deuterium and tritium. fusion reactions are much more energetic per unit mass of fuel than fission reactions, but starting the fusion chain reaction is much more difficult. = = nuclear weapons = = a nuclear weapon is an explosive device that derives its destructive force from nuclear reactions, either fission or a combination of fission and fusion. both reactions release vast quantities of energy from relatively small amounts of matter. even small nuclear devices can devastate a city by blast, fire and radiation. nuclear weapons are considered weapons of mass destruction, and their use and control has been a major aspect of international policy since their debut. the design of a nuclear weapon is more complicated than it might seem. such a weapon must hold one or more subcritical fissile masses stable for deployment, then induce criticality nuclear jets containing relativistic ` ` hot ' ' particles close to the central engine cool dramatically by producing high energy radiation. the radiative dissipation is similar to the famous compton drag acting upon ` ` cold ' ' thermal particles in a relativistic bulk flow. highly relativistic protons induce anisotropic showers raining electromagnetic power down onto the putative accretion disk. thus, the radiative signature of hot hadronic jets is x - ray irradiation of cold thermal matter. the synchrotron radio emission of the accelerated electrons is self - absorbed due to the strong magnetic fields close to the magnetic nozzle. the mechanism of stabilization of neutron - excess nuclei in stars is considered. this mechanism must produce the neutronisation process in hot stars in the same way as it occurs in the dwarfs. 
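the fusion energetics described in the context above ( energy released when the product nucleus is lighter than iron, absorbed when it is heavier ) can be illustrated with a rough numerical sketch. the code below is not from the source ; it uses approximate, rounded textbook values for binding energy per nucleon, so the numbers are indicative only.

```python
# rough illustration ( not from the source ) : energy balance of a fusion step,
# using approximate binding energies per nucleon in mev ( textbook ballpark values ).
BINDING_ENERGY_PER_NUCLEON_MEV = {
    "h2": 1.11,    # deuterium
    "he4": 7.07,
    "fe56": 8.79,  # near the peak of the binding - energy curve
    "u238": 7.57,
}
MASS_NUMBER = {"h2": 2, "he4": 4, "fe56": 56, "u238": 238}

def total_binding_mev(nucleus: str) -> float:
    """total binding energy of one nucleus, in mev."""
    return BINDING_ENERGY_PER_NUCLEON_MEV[nucleus] * MASS_NUMBER[nucleus]

# fusing two deuterons into helium - 4 : the product is more tightly bound,
# so energy is released ( positive q value ).
q_fusion = total_binding_mev("he4") - 2 * total_binding_mev("h2")
print(f"d + d -> he4 releases roughly {q_fusion:.1f} mev")

# beyond iron the binding energy per nucleon falls again, so building heavier
# nuclei by fusion absorbs energy rather than releasing it.
per_nucleon_drop = (BINDING_ENERGY_PER_NUCLEON_MEV["fe56"]
                    - BINDING_ENERGY_PER_NUCLEON_MEV["u238"])
print(f"binding energy per nucleon drops by about {per_nucleon_drop:.2f} mev from fe56 to u238")
```

running the sketch gives roughly 24 mev for the deuterium example, consistent with the passage's point that fusion of light nuclei releases energy while fusion beyond iron does not.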
while the modern stellar imf shows a rapid decline with increasing mass, theoretical investigations suggest that very massive stars ( > 100 solar masses ) may have been abundant in the early universe. other calculations also indicate that, lacking metals, these same stars reach their late evolutionary stages without appreciable mass loss. after central helium burning, they encounter the electron - positron pair instability, collapse, and burn oxygen and silicon explosively. if sufficient energy is released by the burning, these stars explode as brilliant supernovae with energies up to 100 times that of an ordinary core collapse supernova. they also eject up to 50 solar masses of radioactive ni56. stars less massive than 140 solar masses or more massive than 260 solar masses should collapse into black holes instead of exploding, thus bounding the pair - creation supernovae with regions of stellar mass that are nucleosynthetically sterile. pair - instability supernovae might be detectable in the near infrared out to redshifts of 20 or more and their ashes should leave a distinctive nucleosynthetic pattern. possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole – dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. for example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. the existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. different kinds of spectra are often used in chemical spectroscopy, e. g. ir, microwave, nmr, esr, etc. spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by analyzing their radiation spectra. the term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. 
= = = reaction = = = when a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. a chemical reaction is therefore a concept related to the " reaction " of a substance when it comes in close contact with another, whether as a mixture or a solution ; exposure to some form of energy, or both. it results in some energy exchange between the constituents of the reaction as well as in steady state, the fuel cycle of a fusion plasma requires inward particle fluxes of fuel ions. these particle flows are also accompanied by heating. in the case of classical transport in a rotating cylindrical plasma, this heating can proceed through several distinct channels depending on the physical mechanisms involved. some channels directly heat the fuel ions themselves, whereas others heat electrons. which channel dominates depends, in general, on the details of the temperature, density, and rotation profiles of the plasma constituents. however, remarkably, under relatively few assumptions concerning these profiles, if the alpha particles, the byproducts of the fusion reaction, can be removed directly by other means, a hot - ion mode tends to emerge naturally. observations of the ly - alpha forest at z ~ 3 reveal an average metallicity z ~ 0. 01 z _ solar. the high - redshift supernovae that polluted the igm also accelerated relativistic electrons. since the energy density of the cmb scales as ( 1 + z ) ^ 4, at high redshift these electrons cool via inverse compton scattering. thus, the first star clusters emit x - rays. unlike stellar uv ionizing photons, these x - rays can escape easily from their host galaxies. this has a number of important physical consequences : ( i ) due to their large mean free path, these x - rays can quickly establish a universal ionizing background and partially reionize the universe in a gradual, homogeneous fashion. if x - rays formed the dominant ionizing background, the universe would have more closely resembled a single - phase medium, rather than a two - phase medium. ( ii ) x - rays can reheat the universe to higher temperatures than possible with uv radiation. ( iii ) x - rays counter the tendency of uv radiation to photo - dissociate h2, an important coolant in the early universe, by promoting gas phase h2 formation. the x - ray production efficiency is calibrated to local observations of starburst galaxies, which imply that ~ 10 % of the supernova energy is converted to x - rays. while direct detection of sources in x - ray emission is difficult, the presence of relativistic electrons at high redshift and thus a minimal level of x - ray emission may be inferred by synchrotron emission observations with the square kilometer array. these sources may constitute a significant fraction of the unresolved hard x - ray background, and can account for both the shape and amplitude of the gamma - ray background. this paper discusses the existence and observability of high - redshift x - ray sources, while a companion paper models the detailed reionization physics and chemistry. factor e ^ ( - e / kt ) – that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation.
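the exponential temperature dependence just mentioned can be made concrete with a short numerical sketch. the arrhenius form k = a Β· e ^ ( - ea / rt ) is standard ; the pre - exponential factor and activation energy below are made - up illustrative values, not data from the source.

```python
import math

R = 8.314  # gas constant, j / ( mol k )

def arrhenius_rate(a: float, ea_j_per_mol: float, t_kelvin: float) -> float:
    """rate constant k = a * exp(-ea / (r * t)) from the arrhenius equation."""
    return a * math.exp(-ea_j_per_mol / (R * t_kelvin))

# purely illustrative numbers ( not from the source ) :
A = 1.0e13      # pre - exponential factor, 1 / s
EA = 75_000.0   # activation energy, j / mol

k_300 = arrhenius_rate(A, EA, 300.0)
k_310 = arrhenius_rate(A, EA, 310.0)

# the boltzmann factor e^(-ea/rt) grows steeply with temperature,
# so even a 10 k increase multiplies the rate noticeably.
print(f"k(300 k) = {k_300:.3e} 1/s")
print(f"k(310 k) = {k_310:.3e} 1/s")
print(f"rate ratio k(310)/k(300) = {k_310 / k_300:.2f}")
```

with these assumed values the rate roughly two - and - a - half - folds for a 10 k temperature rise, which is the behaviour the passage describes qualitatively.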
the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Ξ΄ g ≀ 0 ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole – dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. for example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. the existence of characteristic an electron inside liquid helium forms a bubble of 17 Γ… in radius. in an external magnetic field, the two - level system of a spin 1 / 2 electron is ideal for the implementation of a qubit for quantum computing. the electron spin is well isolated from other thermal reservoirs so that the qubit should have very long coherence time. by confining a chain of single electron bubbles in a linear rf quadrupole trap, a multi - bit quantum register can be implemented. all spins in the register can be initialized to the ground state either by establishing thermal equilibrium at a temperature around 0. 1 k and at a magnetic field of 1 t or by sorting the bubbles to be loaded into the trap with magnetic separation. schemes are designed to address individual spins and to do two - qubit cnot operations between the neighboring spins. the final readout can be carried out through a measurement similar to the stern - gerlach experiment. or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds.
whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole – dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. for example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. the existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. different kinds of spectra are often used in chemical spectroscopy, e. g. ir, microwave, nmr, esr, etc. spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by analyzing their radiation spectra. the term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. = = = reaction = = = when a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. a chemical reaction is therefore a concept related to the " reaction " of a substance when it comes in close contact with another, whether as a mixture or a solution ; exposure to some form of energy, or both. it results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels β€” often laboratory glassware. chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. chemical reactions usually involve the making or breaking of chemical bonds. oxidation, reduction, dissociation, acid – base neutralization and molecular rearrangement are some examples of common chemical reactions. a chemical reaction can be symbolically depicted through a chemical equation. while in a non - nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for Question: The temperature in a hot star is high enough to pull electrons away from atoms. What state of matter results from this process? A) gas B) solid C) liquid D) plasma
D) plasma
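the context above states the feasibility criterion Ξ΄ g ≀ 0 for a chemical reaction. as a small illustration ( not part of the source ), the sketch below evaluates Ξ΄ g = Ξ΄ h - t Β· Ξ΄ s for assumed enthalpy and entropy changes and reports whether the reaction would be spontaneous at a given temperature ; the numbers are hypothetical.

```python
def gibbs_free_energy_change(dh_j_per_mol: float, ds_j_per_mol_k: float,
                             t_kelvin: float) -> float:
    """delta g = delta h - t * delta s, in j / mol."""
    return dh_j_per_mol - t_kelvin * ds_j_per_mol_k

# hypothetical reaction data for illustration only : endothermic ( dh > 0 )
# but with a large entropy gain ( ds > 0 ), so feasibility depends on temperature.
DH = 40_000.0   # j / mol
DS = 150.0      # j / ( mol k )

for t in (200.0, 300.0, 400.0):
    dg = gibbs_free_energy_change(DH, DS, t)
    verdict = "feasible (dg <= 0)" if dg <= 0 else "not feasible (dg > 0)"
    print(f"t = {t:5.1f} k : dg = {dg:9.1f} j/mol -> {verdict}")
```

for these assumed values the reaction becomes feasible only above roughly 267 k, illustrating how the sign of Ξ΄ g, not Ξ΄ h alone, decides feasibility.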
Context: maya were great, even by today ' s standards. an example of this exceptional engineering is the use of pieces weighing upwards of one ton in their stonework placed together so that not even a blade can fit into the cracks. inca villages used irrigation canals and drainage systems, making agriculture very efficient. while some claim that the incas were the first inventors of hydroponics, their agricultural technology was still soil based, if advanced. though the maya civilization did not incorporate metallurgy or wheel technology in their architectural constructions, they developed complex writing and astronomical systems, and created beautiful sculptural works in stone and flint. like the inca, the maya also had command of fairly advanced agricultural and construction technology. the maya are also responsible for creating the first pressurized water system in mesoamerica, located in the maya site of palenque. the main contribution of the aztec rule was a system of communications between the conquered cities and the ubiquity of the ingenious agricultural technology of chinampas. in mesoamerica, without draft animals for transport ( nor, as a result, wheeled vehicles ), the roads were designed for travel on foot, just as in the inca and mayan civilizations. the aztec, subsequently to the maya, inherited many of the technologies and intellectual advancements of their predecessors : the olmec ( see native american inventions and innovations ). = = = medieval to early modern = = = one of the most significant developments of the medieval were economies in which water and wind power were more significant than animal and human muscle power. : 38 most water and wind power was used for milling grain. water power was also used for blowing air in blast furnace, pulping rags for paper making and for felting wool. the domesday book recorded 5, 624 water mills in great britain in 1086, being about one per thirty families. = = = = east asia = = = = = = = = indian subcontinent = = = = = = = = islamic world = = = = the muslim caliphates united in trade large areas that had previously traded little, including the middle east, north africa, central asia, the iberian peninsula, and parts of the indian subcontinent. the science and technology of previous empires in the region, including the mesopotamian, egyptian, persian, hellenistic and roman empires, were inherited by the muslim world, where arabic replaced syriac, persian and greek as the lingua franca of the region. significant advances were made in the region during the islamic golden age ( 8th – 16th centuries ( create a critical mass ) for detonation. it also is quite difficult to ensure that such a chain reaction consumes a significant fraction of the fuel before the device flies apart. the procurement of a nuclear fuel is also more difficult than it might seem, since sufficiently unstable substances for this process do not currently occur naturally on earth in suitable amounts. one isotope of uranium, namely uranium - 235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium - 238. the latter accounts for more than 99 % of the weight of natural uranium. therefore, some method of isotope separation based on the weight of three neutrons must be performed to enrich ( isolate ) uranium - 235. alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. 
terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. ultimately, the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. a radiological weapon has never been deployed. while considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. there have been over 2, 000 nuclear tests conducted since 1945. in 1963, all nuclear and many non - vr healthcare solutions are not meant to be a competitor to traditional therapies, as research shows that when coupled together physical therapy is more effective. research into vr rehabilitation continues to expand with new research into haptic developing, which would allow the user to feel their environments and to incorporate their hands and feet into their recovery plan. additionally, there are more sophisticated vr systems being developed which allow the user to use their entire body in their recovery. it also has sophisticated sensors that would allow medical professionals to collect data on muscle engagement and tension. it uses electrical impedance tomography, a form of noninvasive imaging to view muscle usage. another concern is the lack of major funding by big companies and the government into the field. many of these vr sets are off the shelf items, and not properly made for medical use. external add - ones are usually 3d printed or made from spare parts from other electronics. this lack of support means that patients who want to try this method have to be technically savvy, which is unlikely as many ailments only appear later in life. additionally, certain parts of vr like haptic feedback and tracking are still not advanced enough to be used reliably in a medical setting. another issue is the amount of vr devices that are available for purchase. while this does increase the options available, the differences between vr systems could impact patient recovery. the vast number of vr devices also makes it difficult for medical professionals to give and interpret information, as they might not have had practice with the specific model, which could lead to faulty advice being given out. 
= = = applications = = = currently other applications within healthcare are being explored, such as : applications for monitoring of glucose, alcohol, and lactate or blood oxygen, breath monitoring, heartbeat, heart rate and its variability, electromyography ( emg ), electrocardiogram ( ecg ) and electroencephalogram ( eeg ), body temperature, pressure ( e. g. in shoes ), sweat rate or sweat loss, levels of uric acid and ions – e. g. for preventing fatigue or injuries or for optimizing training patterns, including via " human - integrated electronics " forecasting changes in mood, stress, and health measuring blood alcohol content measuring athletic performance monitoring how sick the user is detecting early signs of infection long - term monitoring of patients with heart and circulatory problems that records an electrocardiogram and is self - moistening health risk assessment applications, including measures of frailty and risks of age - dependent etc technology is viable it does offer an example that it is possible. etc requires much less energy input from outside sources, like a battery, than a railgun or a coilgun would. tests have shown that energy output by the propellant is higher than energy input from outside sources on etc guns. in comparison, a railgun currently cannot achieve a higher muzzle velocity than the amount of energy input. even at 50 % efficiency a rail gun launching a projectile with a kinetic energy of 20 mj would require an energy input into the rails of 40 mj, and 50 % efficiency has not yet been achieved. to put this into perspective, a rail gun launching at 9 mj of energy would need roughly 32 mj worth of energy from capacitors. current advances in energy storage allow for energy densities as high as 2. 5 mj / dm3, which means that a battery delivering 32 mj of energy would require a volume of 12. 8 dm3 per shot ; this is not a viable volume for use in a modern main battle tank, especially one designed to be lighter than existing models. there has even been discussion about eliminating the necessity for an outside electrical source in etc ignition by initiating the plasma cartridge through a small explosive force. furthermore, etc technology is not only applicable to solid propellants. to increase muzzle velocity even further electrothermal - chemical ignition can work with liquid propellants, although this would require further research into plasma ignition. etc technology is also compatible with existing projects to reduce the amount of recoil delivered to the vehicle while firing. understandably, recoil of a gun firing a projectile at 17 mj or more will increase directly with the increase in muzzle energy in accordance to newton ' s third law of motion and successful implementation of recoil reduction mechanisms will be vital to the installation of an etc powered gun in an existing vehicle design. for example, oto melara ' s new lightweight 120 mm l / 45 gun has achieved a recoil force of 25 t by using a longer recoil mechanism ( 550 mm ) and a pepperpot muzzle brake. reduction in recoil can also be achieved through mass attenuation of the thermal sleeve. the ability of etc technology to be applied to existing gun designs means that for future gun upgrades there ' s no longer the necessity to redesign the turret to include a larger breech or caliber gun barrel. several countries have already determined that etc technology is viable for the future and have funded indigenous projects considerably. 
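the railgun figures quoted in the passage above ( a 20 mj shot at 50 % efficiency needing 40 mj of input, roughly 32 mj of stored energy for a 9 mj shot, and 2. 5 mj / dm3 storage density implying 12. 8 dm3 per shot ) follow from simple ratios. the sketch below just reproduces that arithmetic ; the numbers come from the passage and are not independently verified.

```python
def required_input_energy(muzzle_energy_mj: float, efficiency: float) -> float:
    """electrical energy that must be delivered for a given muzzle energy."""
    return muzzle_energy_mj / efficiency

def storage_volume_dm3(stored_energy_mj: float, energy_density_mj_per_dm3: float) -> float:
    """capacitor / battery volume needed to hold the stored energy."""
    return stored_energy_mj / energy_density_mj_per_dm3

# figures quoted in the passage above
print(required_input_energy(20.0, 0.50))   # 40.0 mj of input for a 20 mj shot at 50 % efficiency
print(storage_volume_dm3(32.0, 2.5))       # 12.8 dm3 of storage for 32 mj at 2.5 mj / dm3

# implied overall efficiency of the quoted 9 mj shot drawing 32 mj from capacitors
print(f"{9.0 / 32.0:.0%}")                 # roughly 28 %
```

the last line makes explicit why the passage treats a 9 mj railgun shot as needing about 32 mj of stored energy : the implied system efficiency is well under 50 %.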
these include the united states, germany and the united kingdom, amongst others. while virtual reality ( vr ) was originally developed for gaming, it also can be used for rehabilitation. virtual reality headsets are given to patients and the patients instructed to complete a series of tasks, but in a game format. this has significant benefits compared to traditional therapies. for one, it is more controllable ; the operator can change their environment to anything they desire including areas that may help them conquer their fear, like in the case of ptsd. another benefit is the price. on average, traditional therapies are several hundred dollars per hour, whereas vr headsets are only several hundred dollars and can be used whenever desired. in patients with neurological disorders like parkinson ' s, therapy in a game format allows multiple different skills to be utilized at the same time, thus simultaneously stimulating several different parts of the brain. vr ' s usage in physical therapy is still limited as there is insufficient research. some research has pointed to the occurrence of motion sickness while performing intensive tasks, which can be detrimental to the patient ' s progress. detractors also point out that a total dependence on vr can lead to self - isolation and becoming overly dependent on technology, preventing patients from interacting with their friends and family. there are concerns about privacy and safety, as the vr software would need patient data and information to be effective, and this information could be compromised during a data breach, like in the case of 23andme. the lack of proper medical experts, coupled with the longer learning curve involved with the recovery project, may result in patients not realizing their mistakes and recovery taking longer than expected. cost and accessibility are another issue ; while vr headsets are significantly cheaper than traditional physical therapy, there may be many add - ons that could raise the price, making it inaccessible to many. base models may be less effective compared to higher end models, which may lead to a digital divide. overall, vr healthcare solutions are not meant to be a competitor to traditional therapies, as research shows that when coupled together physical therapy is more effective. research into vr rehabilitation continues to expand with new research into haptic feedback, which would allow the user to feel their environments and to incorporate their hands and feet into their recovery plan. additionally, there are more sophisticated vr systems being developed which allow the user to use their entire body in their recovery. it also has sophisticated sensors that would allow medical professionals to collect data on muscle engagement and tension. it uses electrical impedance tomography, a form of noninvasive imaging to view muscle usage. another concern is the lack of major funding by big companies and the government into the field. many of these vr sets are off the shelf items, and not properly made for medical use. external add - ons are usually 3d printed or made from spare parts from other electronics.
this lack of support means that patients who want to try this method have to be technically savvy, which is unlikely as many ailments only appear later in life. additionally, certain parts of vr like haptic feedback and tracking are still not advanced enough to be used reliably in a medical setting. another issue is the amount of vr devices that are available for purchase. while this does increase the options available, the differences between vr systems could impact patient recovery. the vast number of vr devices also makes it difficult for medical professionals to give and interpret information, as they might not have had practice with the specific model, which could lead to faulty advice being given out. = = = applications = = = currently other applications within healthcare are being explored, such as : applications for monitoring of glucose, alcohol, and lactate or blood oxygen, breath monitoring, heartbeat, heart rate and its variability, electromyography ( emg ), electrocardiogram ( ecg ) and electroencephalogram ( eeg ), body temperature, pressure ( e. g. in shoes ), sweat rate or sweat loss, levels of uric acid and ions – e. g. for preventing fatigue or injuries or for optimizing training patterns, including via " human - integrated electronics " forecasting changes in mood, stress, and health measuring blood alcohol content measuring athletic performance monitoring how sick the user is detecting early signs of infection long - term monitoring of patients with heart and circulatory problems that records an electrocardiogram and is self - moistening health risk assessment applications, including measures of frailty and risks of age - dependent diseases automatic documentation of care activities days - long continuous imaging of diverse organs via a wearable bioadhesive stretchable high - resolution ultrasound imaging patch or e. g. a wearable continuous heart ultrasound imager. ( potential novel diagnostic and monitoring tools ) sleep tracking cortisol monitoring for measuring stress measuring relaxation or alert issue of cost and accessibility is also another issue ; while vr headsets are significantly cheaper than traditional physical therapy, there may be many ad - ons that could raise the price, making it inaccessible to many. base models may be less effective compared to higher end models, which may lead to a digital divide. overall, vr healthcare solutions are not meant to be a competitor to traditional therapies, as research shows that when coupled together physical therapy is more effective. research into vr rehabilitation continues to expand with new research into haptic developing, which would allow the user to feel their environments and to incorporate their hands and feet into their recovery plan. additionally, there are more sophisticated vr systems being developed which allow the user to use their entire body in their recovery. it also has sophisticated sensors that would allow medical professionals to collect data on muscle engagement and tension. it uses electrical impedance tomography, a form of noninvasive imaging to view muscle usage. another concern is the lack of major funding by big companies and the government into the field. many of these vr sets are off the shelf items, and not properly made for medical use. external add - ones are usually 3d printed or made from spare parts from other electronics. this lack of support means that patients who want to try this method have to be technically savvy, which is unlikely as many ailments only appear later in life. 
additionally, certain parts of vr like haptic feedback and tracking are still not advanced enough to be used reliably in a medical setting. another issue is the amount of vr devices that are available for purchase. while this does increase the options available, the differences between vr systems could impact patient recovery. the vast number of vr devices also makes it difficult for medical professionals to give and interpret information, as they might not have had practice with the specific model, which could lead to faulty advice being given out. = = = applications = = = currently other applications within healthcare are being explored, such as : applications for monitoring of glucose, alcohol, and lactate or blood oxygen, breath monitoring, heartbeat, heart rate and its variability, electromyography ( emg ), electrocardiogram ( ecg ) and electroencephalogram ( eeg ), body temperature, pressure ( e. g. in shoes ), sweat rate or sweat loss, levels of uric acid and ions – e. g. for preventing fatigue or injuries or for optimizing training patterns, including via " human - integrated electronics " forecasting changes in mood, stress more readily than they could participate in hunter - gatherer activities. with this increase in population and availability of labor came an increase in labor specialization. what triggered the progression from early neolithic villages to the first cities, such as uruk, and the first civilizations, such as sumer, is not specifically known ; however, the emergence of increasingly hierarchical social structures and specialized labor, of trade and war among adjacent cultures, and the need for collective action to overcome environmental challenges such as irrigation, are all thought to have played a role. the invention of writing led to the spread of cultural knowledge and became the basis for history, libraries, schools, and scientific research. continuing improvements led to the furnace and bellows and provided, for the first time, the ability to smelt and forge gold, copper, silver, and lead – native metals found in relatively pure form in nature. the advantages of copper tools over stone, bone and wooden tools were quickly apparent to early humans, and native copper was probably used from near the beginning of neolithic times ( about 10 kya ). native copper does not naturally occur in large amounts, but copper ores are quite common and some of them produce metal easily when burned in wood or charcoal fires. eventually, the working of metals led to the discovery of alloys such as bronze and brass ( about 4, 000 bce ). the first use of iron alloys such as steel dates to around 1, 800 bce. = = = ancient = = = after harnessing fire, humans discovered other forms of energy. the earliest known use of wind power is the sailing ship ; the earliest record of a ship under sail is that of a nile boat dating to around 7, 000 bce. from prehistoric times, egyptians likely used the power of the annual flooding of the nile to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and " catch " basins. the ancient sumerians in mesopotamia used a complex system of canals and levees to divert water from the tigris and euphrates rivers for irrigation. archaeologists estimate that the wheel was invented independently and concurrently in mesopotamia ( in present - day iraq ), the northern caucasus ( maykop culture ), and central europe. 
time estimates range from 5, 500 to 3, 000 bce with most experts putting it closer to 4, 000 bce. the oldest artifacts with drawings depicting wheeled carts date from about 3, 500 bce. more recently, the oldest - known wooden wheel in the world as of 2024 was found in the ljubljana marsh of slovenia on earth in suitable amounts. one isotope of uranium, namely uranium - 235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium - 238. the latter accounts for more than 99 % of the weight of natural uranium. therefore, some method of isotope separation based on the weight of three neutrons must be performed to enrich ( isolate ) uranium - 235. alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. ultimately, the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. a radiological weapon has never been deployed. while considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. there have been over 2, 000 nuclear tests conducted since 1945. in 1963, all nuclear and many non - nuclear states signed the limited test ban treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. the treaty permitted underground nuclear testing. france continued atmospheric testing until 1974, while china continued up until 1980. the last underground test by the united states was in 1992, the soviet union made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. 
if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axis ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called muck ) from the edge of the workspace to a water - filled pit, connected by a tube ( called the muck tube ) to the surface. a crane at the surface removes the soil with a clamshell bucket. the water pressure in the tube balances the air pressure, with excess air escaping up Question: Which resource, abundant in Nevada, is nonrenewable? A) copper B) wind C) sunlight D) wood
A) copper
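the pneumatic caisson passage above notes that compressed air keeps water and mud out of the working chamber by balancing the water pressure. as a rough illustration ( not from the source ), the required gauge air pressure is approximately the hydrostatic pressure at the working depth, p β‰ˆ ρ Β· g Β· h ; the sketch below evaluates this for a few hypothetical depths, assuming fresh water.

```python
RHO_WATER = 1000.0  # kg / m^3, fresh water ( assumption for illustration )
G = 9.81            # m / s^2

def required_gauge_pressure_kpa(depth_m: float) -> float:
    """approximate air overpressure needed to balance hydrostatic pressure at a given depth."""
    return RHO_WATER * G * depth_m / 1000.0  # convert pa to kpa

for depth in (5.0, 15.0, 30.0):
    print(f"depth {depth:4.1f} m -> about {required_gauge_pressure_kpa(depth):6.1f} kpa of air overpressure")
```

even 15 m of water already calls for roughly one and a half atmospheres of overpressure, which is why depth limits and airlocks matter for the sandhogs described in the passage.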
Context: scientists look through telescopes, study images on electronic screens, record meter readings, and so on. generally, on a basic level, they can agree on what they see, e. g., the thermometer shows 37. 9 degrees c. but, if these scientists have different ideas about the theories that have been developed to explain these basic observations, they may disagree about what they are observing. for example, before albert einstein ' s general theory of relativity, observers would have likely interpreted an image of the einstein cross as five different objects in space. in light of that theory, however, astronomers will tell you that there are actually only two objects, one in the center and four different images of a second object around the sides. alternatively, if other scientists suspect that something is wrong with the telescope and only one object is actually being observed, they are operating under yet another theory. observations that cannot be separated from theoretical interpretation are said to be theory - laden. all observation involves both perception and cognition. that is, one does not make an observation passively, but rather is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. therefore, observations are affected by one ' s underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. in this sense, it can be argued that all observation is theory - laden. = = = the purpose of science = = = should science aim to determine ultimate truth, or are there questions that science cannot answer? scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. conversely, scientific anti - realists argue that science does not aim ( or at least does not succeed ) at truth, especially truth about unobservables like electrons or other universes. instrumentalists argue that scientific theories should only be evaluated on whether they are useful. in their view, whether theories are true or not is beside the point, because the purpose of science is to make predictions and enable effective technology. realists often point to the success of recent scientific theories as evidence for the truth ( or near truth ) of current theories. antirealists point to either the many false theories in the history of science, epistemic morals, the success of false modeling assumptions, or widely termed postmodern criticisms of objectivity as evidence against scientific realism. antirealists attempt to explain the success of scientific theories without reference to truth. some antirealists claim that scientific , including objects we can see with our naked eyes. it is one of the oldest sciences. astronomers of early civilizations performed methodical observations of the night sky, and astronomical artifacts have been found from much earlier periods. there are two types of astronomy : observational astronomy and theoretical astronomy. observational astronomy is focused on acquiring and analyzing data, mainly using basic principles of physics. in contrast, theoretical astronomy is oriented towards developing computer or analytical models to describe astronomical objects and phenomena. this discipline is the science of celestial objects and phenomena that originate outside the earth ' s atmosphere. 
it is concerned with the evolution, physics, chemistry, meteorology, geology, and motion of celestial objects, as well as the formation and development of the universe. astronomy includes examining, studying, and modeling stars, planets, and comets. most of the information used by astronomers is gathered by remote observation. however, some laboratory reproduction of celestial phenomena has been performed ( such as the molecular chemistry of the interstellar medium ). there is considerable overlap with physics and in some areas of earth science. there are also interdisciplinary fields such as astrophysics, planetary sciences, and cosmology, along with allied disciplines such as space physics and astrochemistry. while the study of celestial features and phenomena can be traced back to antiquity, the scientific methodology of this field began to develop in the middle of the 17th century. a key factor was galileo ' s introduction of the telescope to examine the night sky in more detail. the mathematical treatment of astronomy began with newton ' s development of celestial mechanics and the laws of gravitation. however, it was triggered by earlier work of astronomers such as kepler. by the 19th century, astronomy had developed into formal science, with the introduction of instruments such as the spectroscope and photography, along with much - improved telescopes and the creation of professional observatories. = = interdisciplinary studies = = the distinctions between the natural science disciplines are not always sharp, and they share many cross - discipline fields. physics plays a significant role in the other natural sciences, as represented by astrophysics, geophysics, chemical physics and biophysics. likewise chemistry is represented by such fields as biochemistry, physical chemistry, geochemistry and astrochemistry. a particular example of a scientific discipline that draws upon multiple natural sciences is environmental science. this field studies the interactions of physical, chemical, geological, and biological components of the environment, with particular regard to the effect of human activities and the impact on biodiversity and sustainability. this science also draws upon expertise from other fields, such necessary and sufficient conditions for a term to apply to an object. for example : " a platonic solid is a convex, regular polyhedron in three - dimensional euclidean space. " an extensional definition instead lists all objects where the term applies. for example : " a platonic solid is one of the following : tetrahedron, cube, octahedron, dodecahedron, or icosahedron. " in logic, the extension of a predicate is the set of all objects for which the predicate is true. further, the logical principle of extensionality judges two objects to objects to be equal if they satisfy the same external properties. since, by the axiom, two sets are defined to be equal if they satisfy membership, sets are extentional. jose ferreiros credits richard dedekind for being the first to explicitly state the principle, although he does not assert it as a definition : it very frequently happens that different things a, b, c... considered for any reason under a common point of view, are collected together in the mind, and one then says that they form a system s ; one calls the things a, b, c... the elements of the system s, they are contained in s ; conversely, s consists of these elements. 
such a system s ( or a collection, a manifold, a totality ), as an object of our thought, is likewise a thing ; it is completely determined when, for every thing, it is determined whether it is an element of s or not. = = = background = = = around the turn of the 20th century, mathematics faced several paradoxes and counter - intuitive results. for example, russell ' s paradox showed a contradiction of naive set theory, it was shown that the parallel postulate cannot be proved, the existence of mathematical objects that cannot be computed or explicitly described, and the existence of theorems of arithmetic that cannot be proved with peano arithmetic. the result was a foundational crisis of mathematics. the resolution of this crisis involved the rise of a new mathematical discipline called mathematical logic, which studies formal logic within mathematics. subsequent discoveries in the 20th century then stabilized the foundations of mathematics into a coherent framework valid for all mathematics. this framework is based on a systematic use of axiomatic method and on set theory, specifically zermelo – fraenkel set theory, developed by ernst zermelo and abraham fraenkel. this set theory ( and set theory in general ) is now considered the most common foundation of mathematics early data taken during commissioning of the sdss have resulted in the discovery of a very cool white dwarf. it appears to have stronger collision induced absorption from molecular hydrogen than any other known white dwarf, suggesting it has a cooler temperature than any other. while its distance is presently unknown, it has a surprisingly small proper motion, making it unlikely to be a halo star. an analysis of white dwarf cooling times suggests that this object may be a low - mass star with a helium core. the sdss imaging and spectroscopy also recovered lhs 3250, the coolest previously known white dwarf, indicating that the sdss will be an effective tool for identifying these extreme objects. general modes : static failure, and fatigue failure. static structural failure occurs when, upon being loaded ( having a force applied ) the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. fatigue failure occurs because of imperfections in the object : a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle ( propagation ) until the crack is large enough to cause ultimate failure. failure is not simply defined as when a part breaks, however ; it is defined as when a part does not operate as intended. some systems, such as the perforated top sections of some plastic bags, are designed to break. if these systems do not break, failure analysis might be employed to determine the cause. structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure. engineers often use online documents and books such as those published by asm to aid them in determining the type of failure and possible causes. once theory is applied to a mechanical design, physical testing is often performed to verify calculated results. structural analysis may be used in an office when designing parts, in the field to analyze failed parts, or in laboratories where parts might undergo controlled failure tests. 
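the structural - failure discussion above distinguishes static failure ( breaking or plastic deformation on loading, judged against a failure criterion ) from fatigue failure. a minimal sketch of one common static criterion, a von mises yield check with made - up material and load numbers, is given below ; it is illustrative only and not the specific method of the passage.

```python
import math

def von_mises_stress(sx: float, sy: float, txy: float) -> float:
    """equivalent ( von mises ) stress for a plane - stress state, same units as the inputs."""
    return math.sqrt(sx**2 - sx * sy + sy**2 + 3.0 * txy**2)

def static_check(sx: float, sy: float, txy: float, yield_strength: float) -> str:
    """compare equivalent stress with the yield strength to flag static failure."""
    sigma_eq = von_mises_stress(sx, sy, txy)
    factor_of_safety = yield_strength / sigma_eq
    status = "ok" if factor_of_safety >= 1.0 else "predicted to yield (static failure)"
    return f"sigma_eq = {sigma_eq:.1f} mpa, fos = {factor_of_safety:.2f} -> {status}"

# hypothetical load case and material ( illustrative values only )
print(static_check(sx=180.0, sy=40.0, txy=60.0, yield_strength=250.0))
```

a fatigue check would instead track the stress range over many load cycles against an s - n curve, which is why the passage treats the two failure modes separately.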
= = = thermodynamics and thermo - science = = = thermodynamics is an applied science used in several branches of engineering, including mechanical and chemical engineering. at its simplest, thermodynamics is the study of energy, its use and transformation through a system. typically, engineering thermodynamics is concerned with changing energy from one form to another. as an example, automotive engines convert chemical energy ( enthalpy ) from the fuel into heat, and then into mechanical work that eventually turns the wheels. thermodynamics principles are used by mechanical engineers in the fields of heat transfer, thermofluids, and energy conversion. mechanical engineers use thermo - science to design engines and power plants, heating, ventilation, and air - conditioning ( hvac ) systems, heat exchangers, heat sinks, radiators, refrigeration, insulation, and others. = = = design and drafting = = = drafting or technical drawing is the means by which mechanical engineers design products and create instructions for manufacturing parts. a technical drawing can be a computer model or hand - drawn schematic showing all the dimensions necessary to manufacture a , cash flow statement. forensic aerial photography is the study and interpretation of aerial photographic evidence. forensic anthropology is the application of physical anthropology in a legal setting, usually for the recovery and identification of skeletonized human remains. forensic archaeology is the application of a combination of archaeological techniques and forensic science, typically in law enforcement. forensic astronomy uses methods from astronomy to determine past celestial constellations for forensic purposes. forensic botany is the study of plant life in order to gain information regarding possible crimes. forensic chemistry is the study of detection and identification of illicit drugs, accelerants used in arson cases, explosive and gunshot residue. forensic dactyloscopy is the study of fingerprints. forensic document examination or questioned document examination answers questions about a disputed document using a variety of scientific processes and methods. many examinations involve a comparison of the questioned document, or components of the document, with a set of known standards. the most common type of examination involves handwriting, whereby the examiner tries to address concerns about potential authorship. forensic dna analysis takes advantage of the uniqueness of an individual ' s dna to answer forensic questions such as paternity / maternity testing and placing a suspect at a crime scene, e. g. in a rape investigation. forensic engineering is the scientific examination and analysis of structures and products relating to their failure or cause of damage. forensic entomology deals with the examination of insects in, on and around human remains to assist in determination of time or location of death. it is also possible to determine if the body was moved after death using entomology. forensic geology deals with trace evidence in the form of soils, minerals and petroleum. forensic geomorphology is the study of the ground surface to look for potential location ( s ) of buried object ( s ). forensic geophysics is the application of geophysical techniques such as radar for detecting objects hidden underground or underwater. forensic intelligence process starts with the collection of data and ends with the integration of results within into the analysis of crimes under investigation. 
forensic interviews are conducted using the science of professionally using expertise to conduct a variety of investigative interviews with victims, witnesses, suspects or other sources to determine the facts regarding suspicions, allegations or specific incidents in either public or private sector settings. forensic histopathology is the application of histological techniques and examination to forensic pathology practice. forensic limnology is the analysis of evidence collected from crime scenes in or around fresh - water sources. examination of biological organisms, in particular diatoms, can be useful in connecting suspects with victims. forensic linguistics deals supermassive stars ( sms ) are massive hydrogen objects, slowly radiating their gravitational binding energy. such hypothetical primordial objects may have been the seed of the massive black holes ( bhs ) observed at the centre of galaxies. under the standard picture, these objects can be approximately described as n = 3 polytropes, and they are expected to shine extremely close to their eddington luminosity. once however, one considers the porosity induced by instabilities near the eddington limit, which give rise to super - eddington states, the standard picture should be modified. we study the structure, evolution and mass loss of these objects. we find the following. first, the evolution of smss is hastened due to their increased energy release. they accelerate continuum driven winds. if there is no rotational stabilization, these winds are insufficient to " evaporate " the objects, such that they can collapse to form a supermassive bhs, however, they do prevent smss from emitting a copious amount of ionizing radiation. if the smss are rotationally stabilized, the winds " evaporate " the objects until a normal sub - eddington star remains, having a mass of a few 100msun. intense research in the materials science community due to the unique properties that they exhibit. nanostructure deals with objects and structures that are in the 1 – 100 nm range. in many materials, atoms or molecules agglomerate to form objects at the nanoscale. this causes many interesting electrical, magnetic, optical, and mechanical properties. in describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale. nanotextured surfaces have one dimension on the nanoscale, i. e., only the thickness of the surface of an object is between 0. 1 and 100 nm. nanotubes have two dimensions on the nanoscale, i. e., the diameter of the tube is between 0. 1 and 100 nm ; its length could be much greater. finally, spherical nanoparticles have three dimensions on the nanoscale, i. e., the particle is between 0. 1 and 100 nm in each spatial dimension. the terms nanoparticles and ultrafine particles ( ufp ) often are used synonymously although ufp can reach into the micrometre range. the term ' nanostructure ' is often used, when referring to magnetic technology. nanoscale structure in biology is often called ultrastructure. = = = = microstructure = = = = microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25Γ— magnification. it deals with objects from 100 nm to a few cm. 
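the nanostructure paragraph above classifies objects by how many of their dimensions fall in the nanoscale ; a minimal sketch of that rule ( python, illustrative only, using the 0.1 – 100 nm range quoted in the text ) :

def classify_nanostructure(dims_nm):
    # dims_nm : the three spatial dimensions of an object, in nanometres
    n = sum(1 for d in dims_nm if 0.1 <= d <= 100.0)
    labels = {0: "no nanoscale dimension",
              1: "nanotextured surface ( one nanoscale dimension )",
              2: "nanotube - like ( two nanoscale dimensions )",
              3: "nanoparticle ( three nanoscale dimensions )"}
    return labels[n]

print(classify_nanostructure((5.0, 1e6, 1e6)))     # only the thickness is nanoscale
print(classify_nanostructure((20.0, 20.0, 1e4)))   # a thin, long tube
print(classify_nanostructure((50.0, 50.0, 50.0)))  # nanoscale in every direction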
the microstructure of a material ( which can be broadly classified into metallic, polymeric, ceramic and composite ) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high / low temperature behavior, wear resistance, and so on. most of the traditional materials ( such as metals and ceramics ) are microstructured. the manufacture of a perfect crystal of a material is physically impossible. for example, any crystalline material will contain defects such as precipitates, grain boundaries ( hall – petch relationship ), vacancies, interstitial atoms or substitutional atoms. the microstructure of materials reveals these larger defects and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties. = = = = macrostructure = = = = macrostructure is the appearance of a material in the scale millimeters to meters, it is the structure of the versatility of pvc is due to the wide range of plasticisers and other additives that it accepts. the term " additives " in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties. polycarbonate would be normally considered an engineering plastic ( other examples include peek, abs ). such plastics are valued for their superior strengths and other special material properties. they are usually not used for disposable applications, unlike commodity plastics. specialty plastics are materials with unique characteristics, such as ultra - high strength, electrical conductivity, electro - fluorescence, high thermal stability, etc. the dividing lines between the various types of plastics is not based on material but rather on their properties and applications. for example, polyethylene ( pe ) is a cheap, low friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium - density polyethylene ( mdpe ) is used for underground gas and water pipes, and another variety called ultra - high - molecular - weight polyethylene ( uhmwpe ) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low - friction socket in implanted hip joints. = = = metal alloys = = = the alloys of iron ( steel, stainless steel, cast iron, tool steel, alloy steels ) make up the largest proportion of metals today both by quantity and commercial value. iron alloyed with various proportions of carbon gives low, mid and high carbon steels. an iron - carbon alloy is only considered steel if the carbon level is between 0. 01 % and 2. 00 % by weight. for steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. heat treatment processes such as quenching and tempering can significantly change these properties, however. in contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. cast iron is defined as an iron – carbon alloy with more than 2. 00 %, but less than 6. 67 % carbon. stainless steel is defined as a regular steel alloy with greater than 10 % by weight alloying content of chromium. nickel and molybdenum are typically also added in stainless steels. other significant metallic alloys are those of aluminium, titanium, copper and magnesium. 
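the composition thresholds quoted above ( steel at 0.01 – 2.00 % carbon, cast iron at 2.00 – 6.67 % carbon, stainless steel at over 10 % chromium ) can be read as a small decision rule ; the sketch below ( python ) is illustrative only and ignores the many other alloying elements a real specification would consider.

def classify_ferrous_alloy(carbon_wt_pct, chromium_wt_pct=0.0):
    # thresholds taken from the passage above ; everything else is simplified away
    if 0.01 <= carbon_wt_pct <= 2.00:
        return "stainless steel" if chromium_wt_pct > 10.0 else "steel"
    if 2.00 < carbon_wt_pct < 6.67:
        return "cast iron"
    return "outside the steel / cast iron definitions"

print(classify_ferrous_alloy(0.4))          # plain carbon steel
print(classify_ferrous_alloy(0.08, 18.0))   # stainless steel
print(classify_ferrous_alloy(3.5))          # cast iron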
copper alloys have been known for a astronomy uses methods from astronomy to determine past celestial constellations for forensic purposes. forensic botany is the study of plant life in order to gain information regarding possible crimes. forensic chemistry is the study of detection and identification of illicit drugs, accelerants used in arson cases, explosive and gunshot residue. forensic dactyloscopy is the study of fingerprints. forensic document examination or questioned document examination answers questions about a disputed document using a variety of scientific processes and methods. many examinations involve a comparison of the questioned document, or components of the document, with a set of known standards. the most common type of examination involves handwriting, whereby the examiner tries to address concerns about potential authorship. forensic dna analysis takes advantage of the uniqueness of an individual ' s dna to answer forensic questions such as paternity / maternity testing and placing a suspect at a crime scene, e. g. in a rape investigation. forensic engineering is the scientific examination and analysis of structures and products relating to their failure or cause of damage. forensic entomology deals with the examination of insects in, on and around human remains to assist in determination of time or location of death. it is also possible to determine if the body was moved after death using entomology. forensic geology deals with trace evidence in the form of soils, minerals and petroleum. forensic geomorphology is the study of the ground surface to look for potential location ( s ) of buried object ( s ). forensic geophysics is the application of geophysical techniques such as radar for detecting objects hidden underground or underwater. forensic intelligence process starts with the collection of data and ends with the integration of results within into the analysis of crimes under investigation. forensic interviews are conducted using the science of professionally using expertise to conduct a variety of investigative interviews with victims, witnesses, suspects or other sources to determine the facts regarding suspicions, allegations or specific incidents in either public or private sector settings. forensic histopathology is the application of histological techniques and examination to forensic pathology practice. forensic limnology is the analysis of evidence collected from crime scenes in or around fresh - water sources. examination of biological organisms, in particular diatoms, can be useful in connecting suspects with victims. forensic linguistics deals with issues in the legal system that requires linguistic expertise. forensic meteorology is a site - specific analysis of past weather conditions for a point of loss. forensic metrology is the application of metrology to assess the reliability of scientific evidence obtained through measurements forensic microbiology is the study of the necrobiome. forensic nursing Question: A student reaches one hand into a bag filled with smooth objects. The student feels the objects but does not look into the bag. Which property of the objects can the student most likely identify? A) shape B) color C) ability to reflect light D) ability to conduct electricity
A) shape
Context: organic compounds, such as sugars, to ammonia, metal ions or even hydrogen gas. salt - tolerant archaea ( the haloarchaea ) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both. archaea reproduce asexually by binary fission, fragmentation, or budding ; unlike bacteria, no known species of archaea form endospores. the first observed archaea were extremophiles, living in extreme environments, such as hot springs and salt lakes with no other organisms. improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. archaea are particularly numerous in the oceans, and the archaea in plankton may be one of the most abundant groups of organisms on the planet. archaea are a major part of earth ' s life. they are part of the microbiota of all organisms. in the human microbiome, they are important in the gut, mouth, and on the skin. their morphological, metabolic, and geographical diversity permits them to play multiple ecological roles : carbon fixation ; nitrogen cycling ; organic compound turnover ; and maintaining microbial symbiotic and syntrophic communities, for example. = = = eukaryotes = = = eukaryotes are hypothesized to have split from archaea, which was followed by their endosymbioses with bacteria ( or symbiogenesis ) that gave rise to mitochondria and chloroplasts, both of which are now part of modern - day eukaryotic cells. the major lineages of eukaryotes diversified in the precambrian about 1. 5 billion years ago and can be classified into eight major clades : alveolates, excavates, stramenopiles, plants, rhizarians, amoebozoans, fungi, and animals. five of these clades are collectively known as protists, which are mostly microscopic eukaryotic organisms that are not plants, fungi, or animals. while it is likely that protists share a common ancestor ( the last eukaryotic common ancestor ), protists by themselves do not constitute a separate clade as some protists may be more closely related to plants, fungi, or animals than they are to other protists. like groupings such as algae, more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols. archaea use more energy sources than eukaryotes : these range from organic compounds, such as sugars, to ammonia, metal ions or even hydrogen gas. salt - tolerant archaea ( the haloarchaea ) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both. archaea reproduce asexually by binary fission, fragmentation, or budding ; unlike bacteria, no known species of archaea form endospores. the first observed archaea were extremophiles, living in extreme environments, such as hot springs and salt lakes with no other organisms. improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. archaea are particularly numerous in the oceans, and the archaea in plankton may be one of the most abundant groups of organisms on the planet. archaea are a major part of earth ' s life. they are part of the microbiota of all organisms. in the human microbiome, they are important in the gut, mouth, and on the skin. 
their morphological, metabolic, and geographical diversity permits them to play multiple ecological roles : carbon fixation ; nitrogen cycling ; organic compound turnover ; and maintaining microbial symbiotic and syntrophic communities, for example. = = = eukaryotes = = = eukaryotes are hypothesized to have split from archaea, which was followed by their endosymbioses with bacteria ( or symbiogenesis ) that gave rise to mitochondria and chloroplasts, both of which are now part of modern - day eukaryotic cells. the major lineages of eukaryotes diversified in the precambrian about 1. 5 billion years ago and can be classified into eight major clades : alveolates, excavates, stramenopiles, plants, rhizarians, amoebozoans, fungi, and animals. five of these clades are collectively known as protists, which are mostly microscopic eukaryotic organisms that are not plants, fungi, or animals. while it is waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria ( in the archaebacteria kingdom ), a term that has fallen out of use. archaeal cells have unique properties separating them from the other two domains, bacteria and eukaryota. archaea are further divided into multiple recognized phyla. archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat and square cells of haloquadratum walsbyi. despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols. archaea use more energy sources than eukaryotes : these range from organic compounds, such as sugars, to ammonia, metal ions or even hydrogen gas. salt - tolerant archaea ( the haloarchaea ) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both. archaea reproduce asexually by binary fission, fragmentation, or budding ; unlike bacteria, no known species of archaea form endospores. the first observed archaea were extremophiles, living in extreme environments, such as hot springs and salt lakes with no other organisms. improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. archaea are particularly numerous in the oceans, and the archaea in plankton may be one of the most abundant groups of organisms on the planet. archaea are a major part of earth ' s life. they are part of the microbiota of all organisms. in the human microbiome, they are important in the gut, mouth, and on the skin. their morphological, metabolic, and geographical diversity permits them to play multiple ecological roles : carbon fixation ; nitrogen cycling ; organic compound turnover ; and maintaining microbial and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. 
this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. other organisms form mutually beneficial relationships with plants. for example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. = = = plants, climate and environmental change = = = plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. for example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. estimates of atmospheric co2 concentrations since the palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. ozone depletion can expose plants to higher levels of ultraviolet radiation - b ( uv - b ), resulting in lower growth rates. moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. = = genetics = = inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. gregor mendel discovered the genetic laws of inheritance by studying superdielectric behavior was observed in pastes made of high surface area alumina filled to the level of incipient wetness with water containing dissolved sodium chloride ( table salt ). in some cases the dielectric constants were greater than 10 ^ 10. onset of electro - chemical corrosion. similar problems are encountered in coastal and offshore structures. = = = anti - fouling = = = anti - fouling is the process of eliminating obstructive organisms from essential components of seawater systems. depending on the nature and location of marine growth, this process is performed in a number of different ways : marine organisms may grow and attach to the surfaces of the outboard suction inlets used to obtain water for cooling systems. 
electro - chlorination involves running high electrical current through sea water, altering the water ' s chemical composition to create sodium hypochlorite, purging any bio - matter. an electrolytic method of anti - fouling involves running electrical current through two anodes ( scardino, 2009 ). these anodes typically consist of copper and aluminum ( or alternatively, iron ). the first metal, copper anode, releases its ion into the water, creating an environment that is too toxic for bio - matter. the second metal, aluminum, coats the inside of the pipes to prevent corrosion. other forms of marine growth such as mussels and algae may attach themselves to the bottom of a ship ' s hull. this growth interferes with the smoothness and uniformity of the ship ' s hull, causing the ship to have a less hydrodynamic shape that causes it to be slower and less fuel - efficient. marine growth on the hull can be remedied by using special paint that prevents the growth of such organisms. = = = pollution control = = = = = = = sulfur emission = = = = the burning of marine fuels releases harmful pollutants into the atmosphere. ships burn marine diesel in addition to heavy fuel oil. heavy fuel oil, being the heaviest of refined oils, releases sulfur dioxide when burned. sulfur dioxide emissions have the potential to raise atmospheric and ocean acidity causing harm to marine life. however, heavy fuel oil may only be burned in international waters due to the pollution created. it is commercially advantageous due to the cost effectiveness compared to other marine fuels. it is prospected that heavy fuel oil will be phased out of commercial use by the year 2020 ( smith, 2018 ). = = = = oil and water discharge = = = = water, oil, and other substances collect at the bottom of the ship in what is known as the bilge. bilge water is pumped overboard, but must pass a pollution threshold test of 15 ppm ( parts per million ) of oil to be discharged. water is tested aquatic and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience as not all algae are closely related. algae comprise several distinct clades such as glaucophytes, which are microscopic freshwater algae that may have resembled in form to the early unicellular ancestor of plantae. unlike glaucophytes, the other algal clades such as red and green algae are multicellular. green algae comprise three major clades : chlorophytes, coleochaetophytes, and stoneworts. fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. animals are multicellular eukaryotes. with few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. over 1. 5 million living animal species have been described β€” of which around 1 million are insects β€” but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. 
more than 6, 000 virus species have been described in detail. viruses are found in almost every ecosystem on earth and are the most numerous type of biological entity. the origins of viruses in the evolutionary history of life are unclear : some may have evolved from plasmids – pieces of dna that can move between cells – while others may have evolved from bacteria. in evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. because viruses possess some but not all characteristics of life, they have been described as " organisms at the edge of life ", and as self - replicators. = = ecology = = ecology is the study of the distribution and abundance of life, the interaction between organisms and their environment. = = = ecosystems = = = the community of living ( biotic ) organisms in conjunction with the nonliving ( abiotic ) components ( e. equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. if a finite group g acts topologically and faithfully on r ^ 3, then g is a subgroup of o ( 3 ). deepening the channel depends on the nature of the shoals.
a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models Question: An abiotic factor that most influences the organisms living in a salt marsh is A) fish. B) water. C) predators. D) grasses.
B) water.
Context: others, such as the essential oils peppermint oil and lemon oil are useful for their aroma, as flavourings and spices ( e. g., capsaicin ), and in medicine as pharmaceuticals as in opium from opium poppies. many medicinal and recreational drugs, such as tetrahydrocannabinol ( active ingredient in cannabis ), caffeine, morphine and nicotine come directly from plants. others are simple derivatives of botanical natural products. for example, the pain killer aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. most alcoholic beverages come from fermentation of carbohydrate - rich plant products such as barley ( beer ), rice ( sake ) and grapes ( wine ). native americans have used various plants as ways of treating illness or disease for thousands of years. this knowledge native americans have on plants has been recorded by enthnobotanists and then in turn has been used by pharmaceutical companies as a way of drug discovery. plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly ferment reaction to proceed more rapidly without being consumed by it β€” by reducing the amount of activation energy needed to convert reactants into products. enzymes also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell ' s environment or to signals from other cells. = = = cellular respiration = = = cellular respiration is a set of metabolic reactions and processes that take place in cells to convert chemical energy from nutrients into adenosine triphosphate ( atp ), and then release waste products. the reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, releasing energy. respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. the overall reaction occurs in a series of biochemical steps, some of which are redox reactions. although cellular respiration is technically a combustion reaction, it clearly does not resemble one when it occurs in a cell because of the slow, controlled release of energy from the series of reactions. sugar in the form of glucose is the main nutrient used by animal and plant cells in respiration. 
cellular respiration involving oxygen is called aerobic respiration, which has four stages : glycolysis, citric acid cycle ( or krebs cycle ), electron transport chain, and oxidative phosphorylation. glycolysis is a metabolic process that occurs in the cytoplasm whereby glucose is converted into two pyruvates, with two net molecules of atp being produced at the same time. each pyruvate is then oxidized into acetyl - coa by the pyruvate dehydrogenase complex, which also generates nadh and carbon dioxide. acetyl - coa enters the citric acid cycle, which takes places inside the mitochondrial matrix. at the end of the cycle, the total yield from 1 glucose ( or 2 pyruvates ) is 6 nadh, 2 fadh2, and 2 atp molecules. finally, the next stage is oxidative phosphorylation, which in eukaryotes, occurs in the mitochondrial cristae. oxidative phosphorylation comprises the electron transport chain, which is a series of four protein complexes that transfer electrons from one complex to another, thereby releasing energy from nadh and fadh2 that is coupled to the pumping of protons ( hydrogen ions ) across the inner mitochondrial membrane ( chemiosmosis ), which generates a proton motive force. energy in space, can adversely affect the earth ' s environment. some hypergolic rocket propellants, such as hydrazine, are highly toxic prior to combustion, but decompose into less toxic compounds after burning. rockets using hydrocarbon fuels, such as kerosene, release carbon dioxide and soot in their exhaust. carbon dioxide emissions are insignificant compared to those from other sources ; on average, the united states consumed 803 million us gal ( 3. 0 million m3 ) of liquid fuels per day in 2014, while a single falcon 9 rocket first stage burns around 25, 000 us gallons ( 95 m3 ) of kerosene fuel per launch. even if a falcon 9 were launched every single day, it would only represent 0. 006 % of liquid fuel consumption ( and carbon dioxide emissions ) for that day. additionally, the exhaust from lox - and lh2 - fueled engines, like the ssme, is almost entirely water vapor. nasa addressed environmental concerns with its canceled constellation program in accordance with the national environmental policy act in 2011. in contrast, ion engines use harmless noble gases like xenon for propulsion. an example of nasa ' s environmental efforts is the nasa sustainability base. additionally, the exploration sciences building was awarded the leed gold rating in 2010. on may 8, 2003, the environmental protection agency recognized nasa as the first federal agency to directly use landfill gas to produce energy at one of its facilities β€” the goddard space flight center, greenbelt, maryland. in 2018, nasa along with other companies including sensor coating systems, pratt & whitney, monitor coating and utrc launched the project caution ( coatings for ultra high temperature detection ). this project aims to enhance the temperature range of the thermal history coating up to 1, 500 Β°c ( 2, 730 Β°f ) and beyond. the final goal of this project is improving the safety of jet engines as well as increasing efficiency and reducing co2 emissions. = = = climate change = = = nasa also researches and publishes on climate change. its statements concur with the global scientific consensus that the climate is warming. bob walker, who has advised former us president donald trump on space issues, has advocated that nasa should focus on space exploration and that its climate study operations should be transferred to other agencies such as noaa. 
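as a bookkeeping aid, the per - glucose yields quoted in the aerobic respiration passage above can be tallied directly ; the sketch below ( python ) only restates the figures given in the text ( the glycolytic nadh, not mentioned there, is left out ).

yields_per_glucose = {
    "glycolysis": {"atp": 2, "pyruvate": 2},
    "pyruvate oxidation": {"nadh": 2, "co2": 2},   # one nadh and one co2 per pyruvate
    "citric acid cycle": {"nadh": 6, "fadh2": 2, "atp": 2},
}

total_nadh = sum(stage.get("nadh", 0) for stage in yields_per_glucose.values())
total_atp_before_oxphos = sum(stage.get("atp", 0) for stage in yields_per_glucose.values())
print(total_nadh)                # 8 nadh carried to the electron transport chain
print(total_atp_before_oxphos)   # 4 atp before oxidative phosphorylation adds the bulk of the yield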
former nasa atmospheric scientist j. marshall shepherd countered that earth science study was built into nasa ' s mission at its creation in the 1958 national aeronautics and space act. nasa won the 2020 webby people ' s voice award for green in the category endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e ^ ( - e / kt ) – that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, δg ≤ 0 ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole – dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light ; thus the products of a reaction may have more or less energy than the reactants. a reaction is said to be exergonic if the final state is lower on the energy scale than the initial state ; in the case of endergonic reactions the situation is the reverse. a reaction is said to be exothermic if the reaction releases heat to the surroundings ; in the case of endothermic reactions, the reaction absorbs heat from the surroundings.
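a worked sketch of the boltzmann / arrhenius factor e ^ ( - e / kt ) discussed above ( python ; the activation energy value is an arbitrary example, not from the text ) shows how strongly a modest temperature change moves the reaction rate :

import math

R = 8.314        # gas constant, j / ( mol k )
E_A = 50_000.0   # example activation energy, j / mol ( assumption )

def boltzmann_factor(temperature_k, activation_energy=E_A):
    # probability weight of surmounting the activation barrier at temperature t
    return math.exp(-activation_energy / (R * temperature_k))

print(boltzmann_factor(300.0))                             # ~ 2e-9
print(boltzmann_factor(350.0) / boltzmann_factor(300.0))   # ~ 17.5 : the arrhenius speed - up for a 50 k rise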
chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e ^ ( - e / kt ) – that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, δg ≤ 0 ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation.
when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβˆ’ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid – base reactions are hydroxide ( ohβˆ’ ) and phosphate ( po43βˆ’ ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be classified as an acid or a base. there are several different theories which explain acid – base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. coal and coke were cheaper and more abundant fuel. as a result, iron production rose significantly during the last decades of the 18th century. coal converted to coke fueled higher temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century, notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress was made in water supply and sanitation and the engineering skills of the romans were largely neglected throughout europe. the first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in paisley, scotland, john gibb, installed an experimental filter, selling his unwanted surplus to the public. the first treated public water supply in the world was installed by engineer james simpson for the chelsea waterworks company in london in 1829. the first screw - down water tap was patented in 1845 by guest and chrimes, a brass foundry in rotherham. the practice of water treatment soon became mainstream, and the virtues of the system were made starkly apparent after the investigations of the physician john snow during the 1854 broad street cholera outbreak demonstrated the role of the water supply in spreading the cholera epidemic. 
= = = second industrial revolution ( 1860s – 1914 ) = = = the 19th century saw astonishing developments in transportation, construction, is also higher at high temperature, as shown by carnot ' s theorem. in a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. such engines are possible in laboratory settings, but mass - production is not feasible with current technology. work is being done in developing ceramic parts for gas turbine engines. currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic, but naturally occurring, bone mineral. ultimately these ceramic materials may be used as bone replacements or with the incorporation of protein collagens, synthetic bones. durable actinide - containing ceramic materials have many applications such as in nuclear fuels for burning excess pu and in chemically - inert sources of alpha irradiation for power supply of unmanned space vehicles or to produce electricity for microelectronic devices. both use and disposal of radioactive actinides require their immobilization in a durable host material. nuclear waste long - lived radionuclides such as actinides are immobilized using chemical masculinity and warmth. the five phases – fire, earth, metal, wood, and water – described a cycle of transformations in nature. the water turned into wood, which turned into the fire when it burned. the ashes left by fire were earth. using these principles, chinese philosophers and doctors explored human anatomy, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the west. little evidence survives of how ancient indian cultures around the indus river understood nature, but some of their perspectives may be reflected in the vedas, a set of sacred hindu texts. 
they reveal a conception of the universe as ever - expanding and constantly being recycled and reformed. surgeons in the ayurvedic tradition saw health and illness as a combination of three humors : wind, bile and phlegm. a healthy life resulted from a balance among these humors. in ayurvedic thought, the body consisted of five elements : earth, water, fire, wind, and space. ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy. pre - socratic philosophers in ancient greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 bc. however, an element of magic and mythology remained. natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. thales of miletus, an early philosopher who lived from 625 to 546 bc, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. in the 5th century bc, leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. pythagoras applied greek innovations in mathematics to astronomy and suggested that the earth was spherical. = = = aristotelian natural philosophy ( 400 bc – 1100 ad ) = = = later socratic and platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world ; plato criticized pre - socratic thinkers as materialists and anti - religionists. aristotle, however, a student of plato who lived from 384 to 322 bc, paid closer attention to the natural world in his philosophy. in his history of animals, he described the inner workings of 110 species, including the stingray, catfish and , heat from friction during rolling can cause problems for metal bearings ; problems which are reduced by the use of ceramics. ceramics are also more chemically resistant and can be used in wet environments where steel bearings would rust. the major drawback to using ceramics is a significantly higher cost. in many cases their electrically insulating properties may also be valuable in bearings. in the early 1980s, toyota researched production of an adiabatic ceramic engine which can run at a temperature of over 6000 Β°f ( 3300 Β°c ). ceramic engines do not require a cooling system and hence allow a major weight reduction and therefore greater fuel efficiency. fuel efficiency of the engine is also higher at high temperature, as shown by carnot ' s theorem. in a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. such engines are possible in laboratory settings, but mass - production is not feasible with current technology. work is being done in developing ceramic parts for gas turbine engines. currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. 
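the efficiency argument above rests on carnot ' s theorem ; a minimal numerical sketch ( python, with illustrative temperatures that are not from the text ) :

def carnot_efficiency(t_hot_k, t_cold_k):
    # upper bound on heat - engine efficiency between reservoirs at t_hot and t_cold ( kelvin )
    return 1.0 - t_cold_k / t_hot_k

# a metal engine limited to ~ 1300 k versus a hypothetical ceramic engine near 2000 k,
# both rejecting heat at ~ 300 k ( all three temperatures are assumptions for illustration )
print(carnot_efficiency(1300.0, 300.0))   # ~ 0.77
print(carnot_efficiency(2000.0, 300.0))   # ~ 0.85

the higher the tolerable hot - side temperature, the higher the bound, which is why an uncooled ceramic hot section is attractive.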
recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials Question: When oil is burning, the reaction will A) only release energy B) only absorb energy C) neither absorb nor release energy D) sometimes release and sometimes absorb energy depending on the oil
A) only release energy
Context: dimension 3, typically r3. a surface that is contained in a projective space is called a projective surface ( see § projective surface ). a surface that is not supposed to be included in another space is called an abstract surface. = = examples = = the graph of a continuous function of two variables, defined over a connected open subset of r2 is a topological surface. if the function is differentiable, the graph is a differentiable surface. a plane is both an algebraic surface and a differentiable surface. it is also a ruled surface and a surface of revolution. a circular cylinder ( that is, the locus of a line crossing a circle and parallel to a given direction ) is an algebraic surface and a differentiable surface. a circular cone ( locus of a line crossing a circle, and passing through a fixed point, the apex, which is outside the plane of the circle ) is an algebraic surface which is not a differentiable surface. if one removes the apex, the remainder of the cone is the union of two differentiable surfaces. the surface of a polyhedron is a topological surface, which is neither a differentiable surface nor an algebraic surface. a hyperbolic paraboloid ( the graph of the function z = xy ) is a differentiable surface and an algebraic surface. it is also a ruled surface, and, for this reason, is often used in architecture. a two - sheet hyperboloid is an algebraic surface and the union of two non - intersecting differentiable surfaces. = = parametric surface = = a parametric surface is the image of an open subset of the euclidean plane ( typically r2 ) by a continuous function, in a topological space, generally a euclidean space of dimension at least three. usually the function is supposed to be continuously differentiable, and this will be always the case in this article. specifically, a parametric surface in r3 is given by three functions of two variables u and v, called parameters x = f1 ( u, v ), y = f2 ( u, v ), z = f3 ( u, v ). manifold of dimension two ( see § topological surface ). a differentiable surface is a surface that is a differentiable manifold ( see § differentiable surface ). every differentiable surface is a topological surface, but the converse is false. a " surface " is often implicitly supposed to be contained in a euclidean space of dimension 3, typically r3. a surface that is contained in a projective space is called a projective surface ( see § projective surface ). a surface that is not supposed to be included in another space is called an abstract surface. = = examples = = the graph of a continuous function of two variables, defined over a connected open subset of r2 is a topological surface. if the function is differentiable, the graph is a differentiable surface. a plane is both an algebraic surface and a differentiable surface. it is also a ruled surface and a surface of revolution. a circular cylinder ( that is, the locus of a line crossing a circle and parallel to a given direction ) is an algebraic surface and a differentiable surface. a circular cone ( locus of a line crossing a circle, and passing through a fixed point, the apex, which is outside the plane of the circle ) is an algebraic surface which is not a differentiable surface.
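before the examples continue below, a minimal numerical sketch of the parametric description just given ( python with numpy ; the particular surface, the hyperbolic paraboloid z = xy from the examples, and the grid are arbitrary choices ) :

import numpy as np

def hyperbolic_paraboloid(u, v):
    # parametrisation ( f1, f2, f3 ) : x = u, y = v, z = u * v
    return u, v, u * v

u, v = np.meshgrid(np.linspace(-1.0, 1.0, 5), np.linspace(-1.0, 1.0, 5))
x, y, z = hyperbolic_paraboloid(u, v)
print(z.shape)         # (5, 5) : one surface point for each ( u, v ) parameter pair
print(float(z.max()))  # 1.0, attained at ( u, v ) = ( 1, 1 ) and ( -1, -1 )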
if one removes the apex, the remainder of the cone is the union of two differentiable surfaces. the surface of a polyhedron is a topological surface, which is neither a differentiable surface nor an algebraic surface. a hyperbolic paraboloid ( the graph of the function z = xy ) is a differentiable surface and an algebraic surface. it is also a ruled surface, and, for this reason, is often used in architecture. a two - sheet hyperboloid is an algebraic surface and the union of two non - intersecting differentiable surfaces. = = parametric surface = = a parametric surface is the image of an open subset of the euclidean plane ( typically r 2 { \ displaystyle \ mathbb { r } ^ { 2 } } ) by a continuous function, in a topological space, generally a euclidean space of dimension at least three. usually the function is supposed to be continuously differentiable, and this will be always the case in this article. specifically, a parametric surface in r 3 { \ displaystyle \ mathbb { r } ^ { 3 } } is given by three functions of two variables u and v, called parameters x = f 1 ( u, v ), y = f 2 ( u, v ), z = f 3 sumerians in mesopotamia used a complex system of canals and levees to divert water from the tigris and euphrates rivers for irrigation. archaeologists estimate that the wheel was invented independently and concurrently in mesopotamia ( in present - day iraq ), the northern caucasus ( maykop culture ), and central europe. time estimates range from 5, 500 to 3, 000 bce with most experts putting it closer to 4, 000 bce. the oldest artifacts with drawings depicting wheeled carts date from about 3, 500 bce. more recently, the oldest - known wooden wheel in the world as of 2024 was found in the ljubljana marsh of slovenia ; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also made of steel. 
the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axis ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called muck ) from the edge of the workspace to a water - filled pit, connected by a tube ( called the muck tube ) to the surface. a crane at the surface removes the soil with a clamshell bucket. the water pressure in the tube balances the air pressure, with excess air escaping up hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. 
these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and the study of mummies. scientists and historians have been able to form significant inferences about the lifestyle and culture of various prehistoric peoples, and especially their technology. = = = ancient = = = = = = = copper and bronze ages = = = = metallic copper occurs on the surface of weathered copper ore deposits and copper it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans also had a complex system of aqueducts, which were used to transport water across long distances. the first roman aqueduct was built in 312 bce. the eleventh and final ancient roman aqueduct was built in 226 ce. put together, the roman aqueducts extended over 450 km, but less than 70 km of this was above ground and supported by arches. = = = pre - modern = = = innovations continued through the middle ages with the introduction of silk production ( in asia and later europe ), the horse collar, and horseshoes. 
simple machines ( such as the lever, the screw, and the pulley ) were combined into more complicated tools time estimates range from 5, 500 to 3, 000 bce with most experts putting it closer to 4, 000 bce. the oldest artifacts with drawings depicting wheeled carts date from about 3, 500 bce. more recently, the oldest - known wooden wheel in the world as of 2024 was found in the ljubljana marsh of slovenia ; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans ; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. 
the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans also had a complex system of aqueducts, which were used to transport water across long distances. the first roman aqueduct was built in 312 bce. the eleventh and final ancient roman aqueduct was built in 226 ce. put together, the roman aqueducts extended over 450 km, but less than 70 km of this was above ground earliest record of a ship under sail is that of a nile boat dating to around 7, 000 bce. from prehistoric times, egyptians likely used the power of the annual flooding of the nile to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and " catch " basins. the ancient sumerians in mesopotamia used a complex system of canals and levees to divert water from the tigris and euphrates rivers for irrigation. archaeologists estimate that the wheel was invented independently and concurrently in mesopotamia ( in present - day iraq ), the northern caucasus ( maykop culture ), and central europe. time estimates range from 5, 500 to 3, 000 bce with most experts putting it closer to 4, 000 bce. the oldest artifacts with drawings depicting wheeled carts date from about 3, 500 bce. more recently, the oldest - known wooden wheel in the world as of 2024 was found in the ljubljana marsh of slovenia ; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. 
in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, ##lling, pipe jacking and other operations. a caisson is sunk by self - weight, concrete or water ballast placed on top, or by hydraulic jacks. the leading edge ( or cutting shoe ) of the caisson is sloped out at a sharp angle to aid sinking in a vertical manner ; it is usually made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axis ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called Question: Fourth graders are planning a roller-skate race. Which surface would be the best for this race? A) gravel B) sand C) blacktop D) grass
C) blacktop
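the surface passage in the context above defines a parametric surface by three coordinate functions of two parameters. as a standard worked example ( not taken from the passage itself ), the unit sphere in $\mathbb{R}^3$ can be parametrized as
$$x = \sin\theta\,\cos\varphi, \qquad y = \sin\theta\,\sin\varphi, \qquad z = \cos\theta,$$
so here $f_1(\theta, \varphi) = \sin\theta\cos\varphi$, $f_2(\theta, \varphi) = \sin\theta\sin\varphi$ and $f_3(\theta, \varphi) = \cos\theta$, with $(\theta, \varphi)$ playing the role of the parameters $(u, v)$ ; restricted to the open rectangle $(0, \pi) \times (0, 2\pi)$, this covers the whole sphere apart from one meridian, matching the requirement that the domain be an open subset of the plane.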
Context: include the manufacturing of drugs, creation of model animals that mimic human conditions and gene therapy. one of the earliest uses of genetic engineering was to mass - produce human insulin in bacteria. this application has now been applied to human growth hormones, follicle stimulating hormones ( for treating infertility ), human albumin, monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. mouse hybridomas, cells fused together to create monoclonal antibodies, have been adapted through genetic engineering to create human monoclonal antibodies. genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences. genetic engineering is also used to create animal models of human diseases. genetically modified mice are the most common genetically engineered animal model. they have been used to study and model cancer ( the oncomouse ), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging and parkinson disease. potential cures can be tested against these mouse models. gene therapy is the genetic engineering of humans, generally by replacing defective genes with effective ones. clinical research using somatic gene therapy has been conducted with several diseases, including x - linked scid, chronic lymphocytic leukemia ( cll ), and parkinson ' s disease. in 2012, alipogene tiparvovec became the first gene therapy treatment to be approved for clinical use. in 2015 a virus was used to insert a healthy gene into the skin cells of a boy suffering from a rare skin disease, epidermolysis bullosa, in order to grow, and then graft healthy skin onto 80 percent of the boy ' s body which was affected by the illness. germline gene therapy would result in any change being inheritable, which has raised concerns within the scientific community. in 2015, crispr was used to edit the dna of non - viable human embryos, leading scientists of major world academies to call for a moratorium on inheritable human genome edits. there are also concerns that the technology could be used not just for treatment, but for enhancement, modification or alteration of a human beings ' appearance, adaptability, intelligence, character or behavior. the distinction between cure and enhancement can also be difficult to establish. in november 2018, he jiankui announced that he had edited the genomes of two human embryos, to attempt to disable the ccr5 gene, which codes for a receptor that hiv uses to enter cells. the work was widely condemned as unethical, dangerous, and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids are polymers of nucleotides. their function is to store, transmit, and express hereditary information. = = cells = = cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division. 
most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. there are generally two types of cells : eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. prokaryotes are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. a cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell sequences. genetic engineering is also used to create animal models of human diseases. genetically modified mice are the most common genetically engineered animal model. they have been used to study and model cancer ( the oncomouse ), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging and parkinson disease. potential cures can be tested against these mouse models. gene therapy is the genetic engineering of humans, generally by replacing defective genes with effective ones. clinical research using somatic gene therapy has been conducted with several diseases, including x - linked scid, chronic lymphocytic leukemia ( cll ), and parkinson ' s disease. in 2012, alipogene tiparvovec became the first gene therapy treatment to be approved for clinical use. in 2015 a virus was used to insert a healthy gene into the skin cells of a boy suffering from a rare skin disease, epidermolysis bullosa, in order to grow, and then graft healthy skin onto 80 percent of the boy ' s body which was affected by the illness. germline gene therapy would result in any change being inheritable, which has raised concerns within the scientific community. in 2015, crispr was used to edit the dna of non - viable human embryos, leading scientists of major world academies to call for a moratorium on inheritable human genome edits. there are also concerns that the technology could be used not just for treatment, but for enhancement, modification or alteration of a human beings ' appearance, adaptability, intelligence, character or behavior. the distinction between cure and enhancement can also be difficult to establish. in november 2018, he jiankui announced that he had edited the genomes of two human embryos, to attempt to disable the ccr5 gene, which codes for a receptor that hiv uses to enter cells. the work was widely condemned as unethical, dangerous, and premature. currently, germline modification is banned in 40 countries. 
scientists that do this type of research will often let embryos grow for a few days without allowing it to develop into a baby. researchers are altering the genome of pigs to induce the growth of human organs, with the aim of increasing the success of pig to human organ transplantation. scientists are creating " gene drives ", changing the genomes of mosquitoes to make them immune to malaria, and then looking to spread the genetically altered mosquitoes throughout the mosquito population in the hopes of eliminating the disease. = = = research = = = genetic engineering is an important tool the effect of the energy deposition inside the human body made by radioactive substances is discussed. for the first time, we stress the importance of the recoiling nucleus in such reactions, particularly concerning the damage caused on the dna structure. , there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. eukaryotic cells also have cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles. in terms of their structural composition, the microtubules are made up of tubulin ( e. g., Ξ± - tubulin and Ξ² - tubulin ) whereas intermediate filaments are made up of fibrous proteins. microfilaments are made up of actin molecules that interact with other strands of proteins. = = = metabolism = = = all cells require energy to sustain cellular processes. metabolism is the set of chemical reactions in an organism. the three main purposes of metabolism are : the conversion of food to energy to run cellular processes ; the conversion of food / fuel to monomer building blocks ; and the elimination of metabolic wastes. these enzyme - catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. metabolic reactions may be categorized as catabolic β€” the breaking down of compounds ( for example, the breaking down of glucose to pyruvate by cellular respiration ) ; or anabolic β€” the building up ( synthesis ) of compounds ( such as proteins, carbohydrates, lipids, and nucleic acids ). usually, catabolism releases energy, and anabolism consumes energy. the chemical reactions of metabolism are organized into metabolic pathways, in which monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. mouse hybridomas, cells fused together to create monoclonal antibodies, have been adapted through genetic engineering to create human monoclonal antibodies. 
genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences. genetic engineering is also used to create animal models of human diseases. genetically modified mice are the most common genetically engineered animal model. they have been used to study and model cancer ( the oncomouse ), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging and parkinson disease. potential cures can be tested against these mouse models. gene therapy is the genetic engineering of humans, generally by replacing defective genes with effective ones. clinical research using somatic gene therapy has been conducted with several diseases, including x - linked scid, chronic lymphocytic leukemia ( cll ), and parkinson ' s disease. in 2012, alipogene tiparvovec became the first gene therapy treatment to be approved for clinical use. in 2015 a virus was used to insert a healthy gene into the skin cells of a boy suffering from a rare skin disease, epidermolysis bullosa, in order to grow, and then graft healthy skin onto 80 percent of the boy ' s body which was affected by the illness. germline gene therapy would result in any change being inheritable, which has raised concerns within the scientific community. in 2015, crispr was used to edit the dna of non - viable human embryos, leading scientists of major world academies to call for a moratorium on inheritable human genome edits. there are also concerns that the technology could be used not just for treatment, but for enhancement, modification or alteration of a human beings ' appearance, adaptability, intelligence, character or behavior. the distinction between cure and enhancement can also be difficult to establish. in november 2018, he jiankui announced that he had edited the genomes of two human embryos, to attempt to disable the ccr5 gene, which codes for a receptor that hiv uses to enter cells. the work was widely condemned as unethical, dangerous, and premature. currently, germline modification is banned in 40 countries. scientists that do this type of research will often let embryos grow for a few days without allowing it to develop into a baby. researchers are altering the genome of pigs to induce the growth of human organs, with the aim of increasing the success of founded in 1976 and started the production of human proteins. genetically engineered human insulin was produced in 1978 and insulin - producing bacteria were commercialised in 1982. genetically modified food has been sold since 1994, with the release of the flavr savr tomato. the flavr savr was engineered to have a longer shelf life, but most current gm crops are modified to increase resistance to insects and herbicides. glofish, the first gmo designed as a pet, was sold in the united states in december 2003. in 2016 salmon modified with a growth hormone were sold. genetic engineering has been applied in numerous fields including research, medicine, industrial biotechnology and agriculture. in research, gmos are used to study gene function and expression through loss of function, gain of function, tracking and expression experiments. by knocking out genes responsible for certain conditions it is possible to create animal model organisms of human diseases. as well as producing hormones, vaccines and other drugs, genetic engineering has the potential to cure genetic diseases through gene therapy. chinese hamster ovary ( cho ) cells are used in industrial genetic engineering. 
additionally mrna vaccines are made through genetic engineering to prevent infections by viruses such as covid - 19. the same techniques that are used to produce drugs can also have industrial applications such as producing enzymes for laundry detergent, cheeses and other products. the rise of commercialised genetically modified crops has provided economic benefit to farmers in many different countries, but has also been the source of most of the controversy surrounding the technology. this has been present since its early use ; the first field trials were destroyed by anti - gm activists. although there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, critics consider gm food safety a leading concern. gene flow, impact on non - target organisms, control of the food supply and intellectual property rights have also been raised as potential issues. these concerns have led to the development of a regulatory framework, which started in 1975. it has led to an international treaty, the cartagena protocol on biosafety, that was adopted in 2000. individual countries have developed their own regulatory systems regarding gmos, with the most marked differences occurring between the united states and europe. = = overview = = genetic engineering is a process that alters the genetic structure of an organism by either removing or introducing dna, or modifying existing genetic material in situ. unlike traditional animal and plant breeding, which involves doing multiple crosses and then selecting for the organism with the desired phenotype, oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of the cell ' s dna, or mitochondria, which generate adenosine triphosphate ( atp ) to power cellular processes. other organelles such as endoplasmic reticulum and golgi apparatus play a role in the synthesis and packaging of proteins, respectively. biomolecules such as proteins can be engulfed by lysosomes, another specialized organelle. plant cells have additional organelles that distinguish them from animal cells such as a cell wall that provides support for the plant cell, chloroplasts that harvest sunlight energy to produce sugar, and vacuoles that provide storage and structural support as well as being involved in reproduction and breakdown of plant seeds. eukaryotic cells also have cytoskeleton that is made up of microtubules, intermediate filaments, and microfilaments, all of which provide support for the cell and are involved in the movement of the cell and its organelles. 
in terms of their structural composition, the microtubules are made up of tubulin ( e. g., Ξ± - tubulin and Ξ² - tubulin ) whereas intermediate filaments are made up of fibrous proteins. microfilaments are made up of actin molecules that interact with other strands of proteins. = = = metabolism = = = all cells require energy to sustain cellular processes. metabolism is the set of chemical reactions in an organism. the three main purposes of metabolism are : the conversion of food to energy to run cellular processes ; the conversion of food / fuel to monomer building blocks ; and used to manufacture existing medicines relatively easily and cheaply. the first genetically engineered products were medicines designed to treat human diseases. to cite one example, in 1978 genentech developed synthetic humanized insulin by joining its gene with a plasmid vector inserted into the bacterium escherichia coli. insulin, widely used for the treatment of diabetes, was previously extracted from the pancreas of abattoir animals ( cattle or pigs ). the genetically engineered bacteria are able to produce large quantities of synthetic human insulin at relatively low cost. biotechnology has also enabled emerging therapeutics like gene therapy. the application of biotechnology to basic science ( for example through the human genome project ) has also dramatically improved our understanding of biology and as our scientific knowledge of normal and disease biology has increased, our ability to develop new medicines to treat previously untreatable diseases has increased as well. genetic testing allows the genetic diagnosis of vulnerabilities to inherited diseases, and can also be used to determine a child ' s parentage ( genetic mother and father ) or in general a person ' s ancestry. in addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. genetic testing identifies changes in chromosomes, genes, or proteins. most of the time, testing is used to find changes that are associated with inherited disorders. the results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person ' s chance of developing or passing on a genetic disorder. as of 2011 several hundred genetic tests were in use. since genetic testing may open up ethical or psychological problems, genetic testing is often accompanied by genetic counseling. = = = agriculture = = = genetically modified crops ( " gm crops ", or " biotech crops " ) are plants used in agriculture, the dna of which has been modified with genetic engineering techniques. in most cases, the main aim is to introduce a new trait that does not occur naturally in the species. biotechnology firms can contribute to future food security by improving the nutrition and viability of urban agriculture. furthermore, the protection of intellectual property rights encourages private sector investment in agrobiotechnology. examples in food crops include resistance to certain pests, diseases, stressful environmental conditions, resistance to chemical treatments ( e. g. resistance to a herbicide ), reduction of spoilage, or improving the nutrient profile of the crop. examples in non - food crops include production of water - repelling ) substances. proteins are the most diverse of the macromolecules. 
they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids are polymers of nucleotides. their function is to store, transmit, and express hereditary information. = = cells = = cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division. most cells are very small, with diameters ranging from 1 to 100 micrometers and are therefore only visible under a light or electron microscope. there are generally two types of cells : eukaryotic cells, which contain a nucleus, and prokaryotic cells, which do not. prokaryotes are single - celled organisms such as bacteria, whereas eukaryotes can be single - celled or multicellular. in multicellular organisms, every cell in the organism ' s body is derived ultimately from a single cell in a fertilized egg. = = = cell structure = = = every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. a cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. cell membranes also contain membrane proteins, including integral membrane proteins that go across the membrane serving as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes shaping the cell. cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton. within the cytoplasm of a cell, there are many biomolecules such as proteins and nucleic acids. in addition to biomolecules, eukaryotic cells have specialized structures called organelles that have their own lipid bilayers or are spatially units. these organelles include the cell nucleus, which contains most of Question: Protein is used by the human body to A) build strong bones. B) absorb vitamins. C) repair cells. D) provide fiber.
C) repair cells.
Context: and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. other organisms form mutually beneficial relationships with plants. for example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. = = = plants, climate and environmental change = = = plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. for example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. estimates of atmospheric co2 concentrations since the palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. ozone depletion can expose plants to higher levels of ultraviolet radiation - b ( uv - b ), resulting in lower growth rates. moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. = = genetics = = inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. gregor mendel discovered the genetic laws of inheritance by studying enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 
9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β€” created by the internal motions of the core β€” produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the due to its location and climate, antarctica offers unique conditions for long - period observations across a broad wavelength regime, where important diagnostic lines for molecules and ions can be found, that are essential to understand the chemical properties of the interstellar medium. in addition to the natural benefits of the site, new technologies, resulting from astrophotonics, may allow miniaturised instruments, that are easier to winterise and advanced filters to further reduce the background in the infrared. s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. 
soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere ) between the surface and the exosphere ( about 1000 km ). major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. = = = earth science breakup = = = = = see also = = = = references = = = = = sources = = = = = navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. 
accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding sea mike lockwood and mathew owens discuss how eclipse observations are aiding the development of a climatology of near - earth space oscillations of the sun have been used to understand its interior structure. the extension of similar studies to more distant stars has raised many difficulties despite the strong efforts of the international community over the past decades. the corot ( convection rotation and planetary transits ) satellite, launched in december 2006, has now measured oscillations and the stellar granulation signature in three main sequence stars that are noticeably hotter than the sun. the oscillation amplitudes are about 1. 5 times as large as those in the sun ; the stellar granulation is up to three times as high. the stellar amplitudes are about 25 % below the theoretic values, providing a measurement of the nonadiabaticity of the process ruling the oscillations in the outer layers of the stars. pushes more individuals to take part. wearable technology also helps with chronic disease development and monitoring physical activity in terms of context. for example, according to the american journal of preventive medicine, " wearables can be used across different chronic disease trajectory phases ( e. g., pre - versus post - surgery ) and linked to medical record data to obtain granular data on how activity frequency, intensity, and duration changes over the disease course and with different treatments. " wearable technology can be beneficial in tracking and helping analyze data in terms of how one is performing as time goes on, and how they may be performing with different changes in their diet, workout routine, or sleep patterns. also, not only can wearable technology be helpful in measuring results pre and post surgery, but it can also help measure results as someone may be rehabbing from a chronic disease such as cancer, or heart disease, etc. wearable technology has the potential to create new and improved ways of how we look at health and how we actually interpret that science behind our health. it can propel us into higher levels of medicine and has already made a significant impact on how patients are diagnosed, treated, and rehabbed over time. however, extensive research still needs to be continued on how to properly integrate wearable technology into health care and how to best utilize it. in addition, despite the reaping benefits of wearable technology, a lot of research still also has to be completed in order to start transitioning wearable technology towards very sick high risk patients. = = = sense - making of the data = = = while wearables can collect data in aggregate form, most of them are limited in their ability to analyze or make conclusions based on this data – thus, most are used primarily for general health information. end user perception of how their data is used plays a big role in how such datasets can be fully optimized. exception include seizure - alerting wearables, which continuously analyze the wearer ' s data and make a decision about calling for help – the data collected can then provide doctors with objective evidence that they may find useful in diagnoses. 
wearables can account for individual differences, although most just collect data and apply one - size - fits - all algorithms. software on the wearables may analyze the data directly or send the data to a nearby device ( s ), such as a smartphone, which processes, displays or uses the data for analysis. for analysis and real - term sense - making, machine , including objects we can see with our naked eyes. it is one of the oldest sciences. astronomers of early civilizations performed methodical observations of the night sky, and astronomical artifacts have been found from much earlier periods. there are two types of astronomy : observational astronomy and theoretical astronomy. observational astronomy is focused on acquiring and analyzing data, mainly using basic principles of physics. in contrast, theoretical astronomy is oriented towards developing computer or analytical models to describe astronomical objects and phenomena. this discipline is the science of celestial objects and phenomena that originate outside the earth ' s atmosphere. it is concerned with the evolution, physics, chemistry, meteorology, geology, and motion of celestial objects, as well as the formation and development of the universe. astronomy includes examining, studying, and modeling stars, planets, and comets. most of the information used by astronomers is gathered by remote observation. however, some laboratory reproduction of celestial phenomena has been performed ( such as the molecular chemistry of the interstellar medium ). there is considerable overlap with physics and in some areas of earth science. there are also interdisciplinary fields such as astrophysics, planetary sciences, and cosmology, along with allied disciplines such as space physics and astrochemistry. while the study of celestial features and phenomena can be traced back to antiquity, the scientific methodology of this field began to develop in the middle of the 17th century. a key factor was galileo ' s introduction of the telescope to examine the night sky in more detail. the mathematical treatment of astronomy began with newton ' s development of celestial mechanics and the laws of gravitation. however, it was triggered by earlier work of astronomers such as kepler. by the 19th century, astronomy had developed into formal science, with the introduction of instruments such as the spectroscope and photography, along with much - improved telescopes and the creation of professional observatories. = = interdisciplinary studies = = the distinctions between the natural science disciplines are not always sharp, and they share many cross - discipline fields. physics plays a significant role in the other natural sciences, as represented by astrophysics, geophysics, chemical physics and biophysics. likewise chemistry is represented by such fields as biochemistry, physical chemistry, geochemistry and astrochemistry. a particular example of a scientific discipline that draws upon multiple natural sciences is environmental science. this field studies the interactions of physical, chemical, geological, and biological components of the environment, with particular regard to the effect of human activities and the impact on biodiversity and sustainability. this science also draws upon expertise from other fields, such weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. 
in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding seawards, so that in the po river in italy, for instance, pebbles and gravel are found for about 140 miles below turin, sand along the next 100 miles, and silt and mud in the last 110 miles ( 176 km ). = = channelization = = the removal of obstructions, natural or artificial Question: Changes in the weather are important to people living in Alaska. Which two tools best help scientists to share information about weather? A) radio and computer B) clock and notebook C) television and hand lens D) microscope and telephone
A) radio and computer
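The sense-making passage in the context above notes that most wearables simply collect data, hand it to a nearby device such as a smartphone, and apply one-size-fits-all algorithms rather than adapting to the individual wearer. Below is a minimal sketch of that pattern; all names, the simulated heart-rate values, and the 120 bpm threshold are hypothetical illustrations, not taken from any source.

```python
# Minimal sketch (all names hypothetical): a wearable that collects readings
# and applies a fixed, population-wide threshold -- the "one-size-fits-all"
# approach described in the passage -- rather than learning the wearer's baseline.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class WearableBuffer:
    """Collects raw samples; analysis happens on a paired device."""
    samples: list = field(default_factory=list)

    def record(self, value: float) -> None:
        self.samples.append(value)

def one_size_fits_all_alert(samples, threshold=120.0):
    """Flag the wearer whenever the rolling average exceeds a fixed threshold.

    The threshold is the same for every user, which is exactly the limitation
    the passage points out: no individual baseline is learned.
    """
    if len(samples) < 5:
        return False
    return mean(samples[-5:]) > threshold

if __name__ == "__main__":
    device = WearableBuffer()
    for hr in [72, 75, 80, 130, 135, 140, 138, 142]:  # simulated heart-rate stream
        device.record(hr)
        if one_size_fits_all_alert(device.samples):
            print(f"alert after {len(device.samples)} samples "
                  f"(rolling mean {mean(device.samples[-5:]):.1f} bpm)")
            break
```

A personalised device would instead learn the wearer's own resting range before deciding when to alert, which is the individual-differences gap the passage describes.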
Context: participates as a consumer, resource, or both in consumer – resource interactions, which form the core of food chains or food webs. there are different trophic levels within any food web, with the lowest level being the primary producers ( or autotrophs ) such as plants and algae that convert energy and inorganic material into organic compounds, which can then be used by the rest of the community. at the next level are the heterotrophs, which are the species that obtain energy by breaking apart organic compounds from other organisms. heterotrophs that consume plants are primary consumers ( or herbivores ) whereas heterotrophs that consume herbivores are secondary consumers ( or carnivores ). and those that eat secondary consumers are tertiary consumers and so on. omnivorous heterotrophs are able to consume at multiple levels. finally, there are decomposers that feed on the waste products or dead bodies of organisms. on average, the total amount of energy incorporated into the biomass of a trophic level per unit of time is about one - tenth of the energy of the trophic level that it consumes. waste and dead material used by decomposers as well as heat lost from metabolism make up the other ninety percent of energy that is not consumed by the next trophic level. = = = biosphere = = = in the global ecosystem or biosphere, matter exists as different interacting compartments, which can be biotic or abiotic as well as accessible or inaccessible, depending on their forms and locations. for example, matter from terrestrial autotrophs are both biotic and accessible to other organisms whereas the matter in rocks and minerals are abiotic and inaccessible. a biogeochemical cycle is a pathway by which specific elements of matter are turned over or moved through the biotic ( biosphere ) and the abiotic ( lithosphere, atmosphere, and hydrosphere ) compartments of earth. there are biogeochemical cycles for nitrogen, carbon, and water. = = = conservation = = = conservation biology is the study of the conservation of earth ' s biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions. it is concerned with factors that influence the maintenance, loss, and restoration of biodiversity and the science of sustaining evolutionary processes that engender genetic, population, species, and ecosystem diversity. the concern stems from estimates suggesting that up to 50 % of all species on the planet be a low - cost, feasible, and accessible way for promoting pa. " essentially, this insinuates that wearable technology can be beneficial to everyone and really is not cost prohibited. also, when consistently seeing wearable technology being actually utilized and worn by other people, it promotes the idea of physical activity and pushes more individuals to take part. wearable technology also helps with chronic disease development and monitoring physical activity in terms of context. for example, according to the american journal of preventive medicine, " wearables can be used across different chronic disease trajectory phases ( e. g., pre - versus post - surgery ) and linked to medical record data to obtain granular data on how activity frequency, intensity, and duration changes over the disease course and with different treatments. 
" wearable technology can be beneficial in tracking and helping analyze data in terms of how one is performing as time goes on, and how they may be performing with different changes in their diet, workout routine, or sleep patterns. also, not only can wearable technology be helpful in measuring results pre and post surgery, but it can also help measure results as someone may be rehabbing from a chronic disease such as cancer, or heart disease, etc. wearable technology has the potential to create new and improved ways of how we look at health and how we actually interpret that science behind our health. it can propel us into higher levels of medicine and has already made a significant impact on how patients are diagnosed, treated, and rehabbed over time. however, extensive research still needs to be continued on how to properly integrate wearable technology into health care and how to best utilize it. in addition, despite the reaping benefits of wearable technology, a lot of research still also has to be completed in order to start transitioning wearable technology towards very sick high risk patients. = = = sense - making of the data = = = while wearables can collect data in aggregate form, most of them are limited in their ability to analyze or make conclusions based on this data – thus, most are used primarily for general health information. end user perception of how their data is used plays a big role in how such datasets can be fully optimized. exception include seizure - alerting wearables, which continuously analyze the wearer ' s data and make a decision about calling for help – the data collected can then provide doctors with objective evidence that they may find useful in diagnoses. wearables can account for individual differences, although most in 2023, 639, 300 people died in france, 35, 900 fewer than in 2022, a year of high mortality. over the last twenty years, from 2004 to 2023, january 3rd was the deadliest day, while august 15th was the least deadly one. elderly people die significantly less often in the summer. deaths are also less frequent on public holidays and sundays. finally, the risk of dying is higher on one ' s birthday, especially for young people. pushes more individuals to take part. wearable technology also helps with chronic disease development and monitoring physical activity in terms of context. for example, according to the american journal of preventive medicine, " wearables can be used across different chronic disease trajectory phases ( e. g., pre - versus post - surgery ) and linked to medical record data to obtain granular data on how activity frequency, intensity, and duration changes over the disease course and with different treatments. " wearable technology can be beneficial in tracking and helping analyze data in terms of how one is performing as time goes on, and how they may be performing with different changes in their diet, workout routine, or sleep patterns. also, not only can wearable technology be helpful in measuring results pre and post surgery, but it can also help measure results as someone may be rehabbing from a chronic disease such as cancer, or heart disease, etc. wearable technology has the potential to create new and improved ways of how we look at health and how we actually interpret that science behind our health. it can propel us into higher levels of medicine and has already made a significant impact on how patients are diagnosed, treated, and rehabbed over time. 
however, extensive research still needs to be continued on how to properly integrate wearable technology into health care and how to best utilize it. in addition, despite the reaping benefits of wearable technology, a lot of research still also has to be completed in order to start transitioning wearable technology towards very sick high risk patients. = = = sense - making of the data = = = while wearables can collect data in aggregate form, most of them are limited in their ability to analyze or make conclusions based on this data – thus, most are used primarily for general health information. end user perception of how their data is used plays a big role in how such datasets can be fully optimized. exception include seizure - alerting wearables, which continuously analyze the wearer ' s data and make a decision about calling for help – the data collected can then provide doctors with objective evidence that they may find useful in diagnoses. wearables can account for individual differences, although most just collect data and apply one - size - fits - all algorithms. software on the wearables may analyze the data directly or send the data to a nearby device ( s ), such as a smartphone, which processes, displays or uses the data for analysis. for analysis and real - term sense - making, machine the elimination of metabolic wastes. these enzyme - catalyzed reactions allow organisms to grow and reproduce, maintain their structures, and respond to their environments. metabolic reactions may be categorized as catabolic β€” the breaking down of compounds ( for example, the breaking down of glucose to pyruvate by cellular respiration ) ; or anabolic β€” the building up ( synthesis ) of compounds ( such as proteins, carbohydrates, lipids, and nucleic acids ). usually, catabolism releases energy, and anabolism consumes energy. the chemical reactions of metabolism are organized into metabolic pathways, in which one chemical is transformed through a series of steps into another chemical, each step being facilitated by a specific enzyme. enzymes are crucial to metabolism because they allow organisms to drive desirable reactions that require energy that will not occur by themselves, by coupling them to spontaneous reactions that release energy. enzymes act as catalysts β€” they allow a reaction to proceed more rapidly without being consumed by it β€” by reducing the amount of activation energy needed to convert reactants into products. enzymes also allow the regulation of the rate of a metabolic reaction, for example in response to changes in the cell ' s environment or to signals from other cells. = = = cellular respiration = = = cellular respiration is a set of metabolic reactions and processes that take place in cells to convert chemical energy from nutrients into adenosine triphosphate ( atp ), and then release waste products. the reactions involved in respiration are catabolic reactions, which break large molecules into smaller ones, releasing energy. respiration is one of the key ways a cell releases chemical energy to fuel cellular activity. the overall reaction occurs in a series of biochemical steps, some of which are redox reactions. although cellular respiration is technically a combustion reaction, it clearly does not resemble one when it occurs in a cell because of the slow, controlled release of energy from the series of reactions. sugar in the form of glucose is the main nutrient used by animal and plant cells in respiration. 
cellular respiration involving oxygen is called aerobic respiration, which has four stages : glycolysis, citric acid cycle ( or krebs cycle ), electron transport chain, and oxidative phosphorylation. glycolysis is a metabolic process that occurs in the cytoplasm whereby glucose is converted into two pyruvates, with two net molecules of atp being produced at the same time. each pyruvate is then ##physical processes which take place in human beings as they make sense of information received through the visual system. the subject of the image. when developing an imaging system, designers must consider the observables associated with the subjects which will be imaged. these observables generally take the form of emitted or reflected energy, such as electromagnetic energy or mechanical energy. the capture device. once the observables associated with the subject are characterized, designers can then identify and integrate the technologies needed to capture those observables. for example, in the case of consumer digital cameras, those technologies include optics for collecting energy in the visible portion of the electromagnetic spectrum, and electronic detectors for converting the electromagnetic energy into an electronic signal. the processor. for all digital imaging systems, the electronic signals produced by the capture device must be manipulated by an algorithm which formats the signals so they can be displayed as an image. in practice, there are often multiple processors involved in the creation of a digital image. the display. the display takes the electronic signals which have been manipulated by the processor and renders them on some visual medium. examples include paper ( for printed, or " hard copy " images ), television, computer monitor, or projector. note that some imaging scientists will include additional " links " in their description of the imaging chain. for example, some will include the " source " of the energy which " illuminates " or interacts with the subject of the image. others will include storage and / or transmission systems. = = subfields = = subfields within imaging science include : image processing, computer vision, 3d computer graphics, animations, atmospheric optics, astronomical imaging, biological imaging, digital image restoration, digital imaging, color science, digital photography, holography, magnetic resonance imaging, medical imaging, microdensitometry, optics, photography, remote sensing, radar imaging, radiometry, silver halide, ultrasound imaging, photoacoustic imaging, thermal imaging, visual perception, and various printing technologies. = = methodologies = = acoustic imaging coherent imaging uses an active coherent illumination source, such as in radar, synthetic aperture radar ( sar ), medical ultrasound and optical coherence tomography ; non - coherent imaging systems include fluorescent microscopes, optical microscopes, and telescopes. chemical imaging, the simultaneous measurement of spectra and pictures digital imaging, creating digital images, generally by scanning or through digital photography disk image, a file which contains the exact content of a data storage medium document imaging, replicating documents commonly , social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. 
the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves ) psychiatric ( orientation, mental state, mood, evidence of abnormal perception or thought ). respiratory ( large airways and lungs ) skin vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation it is to likely focus on areas of interest highlighted in the medical history and may not include everything listed above. the treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. a follow - up may be advised. depending upon the health insurance plan and the managed care system, various forms of " utilization review ", such as prior authorization of tests, may place barriers on accessing expensive services. the medical decision - making ( mdm ) process includes the analysis and synthesis of all the above data to come up with a list of possible diagnoses ( the differential diagnoses ), along with an idea of what needs to be done to obtain a definitive diagnosis that would explain the patient ' s problem. on subsequent visits, the process may be repeated in an abbreviated manner to obtain any new history, symptoms, physical findings, lab or imaging results, or specialist consultations. = = institutions = = contemporary the received wisdom on how activity affects energy expenditure is that the more activity is undertaken, the more calories will have been burned by the end of the day. yet traditional hunter - gatherers, who lead physically hard lives, burn no more calories each day than western populations living in labour - saving environments. indeed, there is now a wealth of data, both for humans and other animals, demonstrating that long - term lifestyle changes involving increases in exercise or other physical activities do not result in commensurate increases in daily energy expenditure ( dee ). this is because humans and other animals exhibit a degree of energy compensation at the organismal level, ameliorating some of the increases in dee that would occur from the increased activity by decreasing the energy expended on other biological processes. and energy compensation can be sizable, reaching many hundreds of calories in humans. but the processes that are downregulated in the long - term to achieve energy compensation are far from clear, particularly in humans. we do not know how energy compensation is achieved. 
my review here of the literature on relevant exercise intervention studies, for both humans and other species, indicates conflict regarding the role that basal metabolic rate ( bmr ) or low level activity such as fidgeting play, if any, particularly once changes in body composition are factored out. in situations where bmr and low - level activity are not major components of energy compensation, what then drives it? i discuss how changes in mitochondrial efficiency and changes in circadian fluctuations in bmr may contribute to our understanding of energy management. currently unexplored, these mechanisms and others may provide important insights into the mystery of how energy compensation is achieved. this paper deals with a problem in which two players share a previously sliced pizza and try to eat as much amount of pizza as they can. it takes time to eat each piece of pizza and both players eat pizza at the same rate. one is allowed to take a next piece only after the person has finished eating the piece on hand. also, after the first piece is taken, one can only take a piece which is adjacent to already - taken piece. this paper shows that, in this real time setting, the starting player can always eat at least two - fifth of the total size of the pizza. however, this may not be the best possible amount the starting player can eat. it is a modified problem from an original one where two players takes piece alternatively instead. etc technology is viable it does offer an example that it is possible. etc requires much less energy input from outside sources, like a battery, than a railgun or a coilgun would. tests have shown that energy output by the propellant is higher than energy input from outside sources on etc guns. in comparison, a railgun currently cannot achieve a higher muzzle velocity than the amount of energy input. even at 50 % efficiency a rail gun launching a projectile with a kinetic energy of 20 mj would require an energy input into the rails of 40 mj, and 50 % efficiency has not yet been achieved. to put this into perspective, a rail gun launching at 9 mj of energy would need roughly 32 mj worth of energy from capacitors. current advances in energy storage allow for energy densities as high as 2. 5 mj / dm3, which means that a battery delivering 32 mj of energy would require a volume of 12. 8 dm3 per shot ; this is not a viable volume for use in a modern main battle tank, especially one designed to be lighter than existing models. there has even been discussion about eliminating the necessity for an outside electrical source in etc ignition by initiating the plasma cartridge through a small explosive force. furthermore, etc technology is not only applicable to solid propellants. to increase muzzle velocity even further electrothermal - chemical ignition can work with liquid propellants, although this would require further research into plasma ignition. etc technology is also compatible with existing projects to reduce the amount of recoil delivered to the vehicle while firing. understandably, recoil of a gun firing a projectile at 17 mj or more will increase directly with the increase in muzzle energy in accordance to newton ' s third law of motion and successful implementation of recoil reduction mechanisms will be vital to the installation of an etc powered gun in an existing vehicle design. for example, oto melara ' s new lightweight 120 mm l / 45 gun has achieved a recoil force of 25 t by using a longer recoil mechanism ( 550 mm ) and a pepperpot muzzle brake. 
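The electrothermal-chemical passage just above quotes several energy figures for railguns: a 20 MJ shot needs 40 MJ of input at 50 % efficiency, a 9 MJ shot needs roughly 32 MJ from capacitors, and at an energy density of 2.5 MJ/dm3 that 32 MJ occupies 12.8 dm3 of storage per shot. The short sketch below simply reproduces that arithmetic so the numbers can be checked; the function names and structure are ours, purely illustrative.

```python
# Back-of-the-envelope check of the railgun energy figures quoted in the text.

def input_energy(muzzle_energy_mj: float, efficiency: float) -> float:
    """Electrical energy that must be delivered for a given muzzle energy."""
    return muzzle_energy_mj / efficiency

def storage_volume(input_mj: float, density_mj_per_dm3: float = 2.5) -> float:
    """Capacitor/battery volume needed at the quoted energy density."""
    return input_mj / density_mj_per_dm3

if __name__ == "__main__":
    # 20 MJ muzzle energy at 50 % efficiency -> 40 MJ input, as in the text.
    print(input_energy(20, 0.50))   # 40.0 MJ
    # A 9 MJ shot needing ~32 MJ of stored energy implies roughly 28 % efficiency.
    print(9 / 32)                   # ~0.28
    # 32 MJ at 2.5 MJ/dm^3 -> 12.8 dm^3 of storage per shot, matching the text.
    print(storage_volume(32))       # 12.8 dm^3
```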
reduction in recoil can also be achieved through mass attenuation of the thermal sleeve. the ability of etc technology to be applied to existing gun designs means that for future gun upgrades there ' s no longer the necessity to redesign the turret to include a larger breech or caliber gun barrel. several countries have already determined that etc technology is viable for the future and have funded indigenous projects considerably. these include the united states, germany and the united kingdom, amongst others. the united Question: Which outcome is most likely if a person consumes more Calories than needed for daily activities? A) weight loss B) weight gain C) deficiency disease D) infectious disease
B) weight gain
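The context above states that, on average, about one-tenth of the energy at one trophic level is incorporated into the biomass of the next, with the remaining ninety percent lost as waste, dead material, and metabolic heat. The sketch below applies that figure to a hypothetical 10,000 kcal of primary production; the starting value and the level names are illustrative only.

```python
# Illustrative only: the ~10 % energy-transfer figure from the passage,
# propagated up a short food chain from a hypothetical producer base.

TRANSFER_EFFICIENCY = 0.10  # roughly one-tenth passes to the next trophic level

def energy_by_level(primary_production: float, levels: int) -> list[float]:
    """Energy reaching each trophic level, starting from the producers."""
    energies = [primary_production]
    for _ in range(levels - 1):
        energies.append(energies[-1] * TRANSFER_EFFICIENCY)
    return energies

if __name__ == "__main__":
    names = ["producers", "primary consumers", "secondary consumers", "tertiary consumers"]
    for name, e in zip(names, energy_by_level(10_000, 4)):
        print(f"{name:20s} {e:8.1f} kcal")   # 10000 -> 1000 -> 100 -> 10
```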
Context: is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron – carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well – not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = = the atom is the basic unit of chemistry. it consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. the nucleus is made up of positively charged protons and uncharged neutrons ( together called nucleons ), while the electron cloud consists of negatively charged electrons which orbit the nucleus. in a neutral atom, the negatively charged electrons balance out the positive charge of the protons. 
the nucleus is dense ; the mass of a nucleon is approximately 1, 836 times that of an electron, yet the radius of an atom is about 10, 000 times that of its nucleus. the atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state ( s ), coordination number, and preferred types of bonds to form ( e. g., metallic, ionic, covalent ). = = = = element = = = = a chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol z. the mass number is the sum of the number of protons and neutrons in a nucleus. although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number ; atoms of an element which have different mass numbers are known as isotopes. for example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. the standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. the periodic table is arranged in groups, or columns, and periods, or rows. the periodic table is useful in identifying periodic trends. = = = = compound = = = = a compound is a pure chemical substance composed of more than one element. the properties of a compound bear little similarity to those of its elements. the standard nomenclature of compounds is set by the international union of pure and applied chemistry ( iupac ). organic compounds are named which applies a forces that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and break up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first, heat removes water. 
this step needs careful control, as rapid heating causes cracks and surface defects. the dried part is smaller than the green part, and is brittle, necessitating careful handling, since a small impact will cause crumbling and breaking. sintering is where the dried parts pass through a controlled heating process, and the oxides are chemically changed to cause bonding and densification. the fired part will be smaller than the dried part. = = forming methods = = ceramic forming techniques include throwing, slipcasting, tape casting, freeze - casting, injection molding, dry pressing, isostatic pressing, hot isostatic pressing ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. mining may not be necessary, if the ore body and physical environment are conducive to leaching. leaching dissolves minerals in an ore body and results in an enriched solution. the solution is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron – carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications excess lightweight products of slow neutron capture in the photosphere, over the mass range of 25 to 207 amu, confirm the solar mass separation recorded by excess lightweight isotopes in the solar wind, over the mass range of 3 to 136 amu [ solar abundance of the elements, meteoritics, volume 18, 1983, pages 209 to 222 ]. 
both measurements show that major elements inside the sun are fe, o, ni, si and s, like those in rocky planets. ultramagnetized neutron stars or magnetars are magnetically powered neutron stars. their strong magnetic fields dominate the physical processes in their crusts and their surroundings. the past few years have seen several advances in our theoretical and observational understanding of these objects. in spite of a surfeit of observations, their spectra are still poorly understood. i will discuss the emission from strongly magnetized condensed matter surfaces of neutron stars, recent advances in our expectations of the surface composition of magnetars and a model for the non - thermal emission from these objects. . historically, metallurgy has predominately focused on the production of metals. metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. metal alloys are often a blend of at least two different metallic elements. however, non - metallic elements are often added to alloys in order to achieve properties suitable for an application. the study of metal production is subdivided into ferrous metallurgy ( also known as black metallurgy ) and non - ferrous metallurgy, also known as colored metallurgy. ferrous metallurgy involves processes and alloys based on iron, while non - ferrous metallurgy involves processes and alloys based on other metals. the production of ferrous metals accounts for 95 % of world metal production. modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals ( including welding, brazing, and soldering ). emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials ( semiconductors ) and surface engineering. = = etymology and pronunciation = = metallurgy derives from the ancient greek μΡταλλουργος, metallourgos, " worker in metal ", from μΡταλλον, metallon, " mine, metal " + Ρργον, ergon, " work " the word was originally an alchemist ' s term for the extraction of metals from minerals, the ending - urgy signifying a process, especially manufacturing : it was discussed in this sense in the 1797 encyclopΓ¦dia britannica. in the late 19th century, metallurgy ' s definition was extended to the more general scientific study of metals, alloys, and related processes. in english, the pronunciation is the more common one in the united kingdom. the pronunciation is the more common one in the us and is the first - listed variant in various american dictionaries, including merriam - webster collegiate and american heritage. = = history = = the earliest metal employed by humans appears to be gold, which can be found " native ". small amounts of natural gold, dating to the late paleolithic period, 40, 000 bc, have been found in spanish caves. silver, copper, tin and meteoric iron the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron – carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. 
cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales is a commonly used practice that helps better understand the metal ' s elasticity and plasticity for different applications and production processes. in a saltwater environment, most ferrous metals and some non - ferrous alloys corrode quickly. metals exposed to cold or cryogenic conditions may undergo a ductile to brittle , calorimetry, nuclear microscopy ( hefib ), rutherford backscattering, neutron diffraction, small - angle x - ray scattering ( saxs ), etc. ). besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. for example, steels are classified based on 1 / 10 and 1 / 100 weight percentages of the carbon and other alloying elements they contain. thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. solid materials are generally grouped into three basic classifications : ceramics, metals, and polymers. this broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. an item that is often made from each of these materials types is the beverage container. 
the material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. ceramic ( glass ) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. metal ( aluminum alloy ) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. however, the cans are opaque, expensive to produce, and are easily dented and punctured. polymers ( polyethylene plastic ) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. = = = ceramics and glasses = = = another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. many ceramics and glasses exhibit covalent or ionic - covalent bonding with sio2 ( silica ) as a fundamental building block. ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon a prediction and observational evidence for the mass of a dark matter particle are presented.. Question: In a mixture, a magnet is used to separate some particles from sand. The dark particles are most likely made of which element? A) sodium B) iron C) sulfur D) copper
B) iron
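The context above defines the atomic number Z, the mass number as the sum of protons and neutrons in a nucleus, and isotopes as atoms of one element with the same Z but different mass numbers, using carbon-12 and carbon-13 as the example. A minimal sketch of those definitions follows; the class and function names are ours, chosen only for illustration.

```python
# Minimal sketch of the definitions in the passage: atomic number Z, mass
# number A = protons + neutrons, and isotopes as atoms sharing Z but not A.
from dataclasses import dataclass

@dataclass(frozen=True)
class Nuclide:
    protons: int   # atomic number Z -- fixes which element this is
    neutrons: int

    @property
    def mass_number(self) -> int:
        return self.protons + self.neutrons

def are_isotopes(a: Nuclide, b: Nuclide) -> bool:
    """Same element (same Z), different mass number."""
    return a.protons == b.protons and a.mass_number != b.mass_number

if __name__ == "__main__":
    carbon12 = Nuclide(protons=6, neutrons=6)   # mass number 12
    carbon13 = Nuclide(protons=6, neutrons=7)   # mass number 13
    print(carbon12.mass_number, carbon13.mass_number)   # 12 13
    print(are_isotopes(carbon12, carbon13))             # True
```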
Context: men ' s sports include baseball, basketball, cross country, football, golf, swimming & diving, cheerleading, tennis and track & field ; while women ' s sports include basketball, cross country, softball, swimming and diving, tennis, track & field, cheerleading, and volleyball. their cheerleading squad has, in the past, only competed the national cheerleaders & dance association ( nca & nda ) college nationals along with buzz and the goldrush dance team competing here as well. however, in the 2022 season, goldrush competed at the universal cheerleaders & dance association ( uca & uda ) college nationals for the first time and in 2023 the cheer team will compete here for the first time as well. the institute mascots are buzz and the ramblin ' wreck. the institute ' s traditional football rival is the university of georgia ; the rivalry is considered one of the fiercest in college football. the rivalry is commonly referred to as clean, old - fashioned hate, which is also the title of a book about the subject. there is also a long - standing rivalry with clemson. tech has eighteen varsity sports : football, women ' s and men ' s basketball, baseball, softball, volleyball, golf, men ' s and women ' s tennis, men ' s and women ' s swimming and diving, men ' s and women ' s track and field, men ' s and women ' s cross country, and coed cheerleading. four georgia tech football teams were selected as national champions in news polls : 1917, 1928, 1952, and 1990. in may 2007, the women ' s tennis team won the ncaa national championship with a 4 – 2 victory over ucla, the first ever national title granted by the ncaa to tech. = = = fight songs = = = tech ' s fight song " i ' m a ramblin ' wreck from georgia tech " is known worldwide. first published in the 1908 blue print, it was adapted from an old drinking song ( " son of a gambolier " ) and embellished with trumpet flourishes by frank roman. then - vice president richard nixon and soviet premier nikita khrushchev sang the song together when they met in moscow in 1958 to reduce the tension between them. as the story goes, nixon did not know any russian songs, but khrushchev knew that one american song as it had been sung on the ed sullivan show. " i ' m a ramblin ' wreck " has had many other notable moments in its history so mars below means blood and war ", is a false cause fallacy. : 26 many astrologers claim that astrology is scientific. if one were to attempt to try to explain it scientifically, there are only four fundamental forces ( conventionally ), limiting the choice of possible natural mechanisms. : 65 some astrologers have proposed conventional causal agents such as electromagnetism and gravity. the strength of these forces drops off with distance. : 65 scientists reject these proposed mechanisms as implausible since, for example, the magnetic field, when measured from earth, of a large but distant planet such as jupiter is far smaller than that produced by ordinary household appliances. astronomer phil plait noted that in terms of magnitude, the sun is the only object with an electromagnetic field of note, but astrology isn ' t based just off the sun alone. : 65 while astrologers could try to suggest a fifth force, this is inconsistent with the trends in physics with the unification of electromagnetism and the weak force into the electroweak force. if the astrologer insisted on being inconsistent with the current understanding and evidential basis of physics, that would be an extraordinary claim. 
: 65 it would also be inconsistent with the other forces which drop off with distance. : 65 if distance is irrelevant, then, logically, all objects in space should be taken into account. : 66 carl jung sought to invoke synchronicity, the claim that two events have some sort of acausal connection, to explain the lack of statistically significant results on astrology from a single study he conducted. however, synchronicity itself is considered neither testable nor falsifiable. the study was subsequently heavily criticised for its non - random sample and its use of statistics and also its lack of consistency with astrology. = = psychology = = psychological studies have not found any robust relationship between astrological signs and life outcomes. for example, a study showed that zodiac signs are no more effective than random numbers in predicting subjective well - being and quality of life. it has also been shown that confirmation bias is a psychological factor that contributes to belief in astrology. : 344 : 180 – 181 : 42 – 48 confirmation bias is a form of cognitive bias. : 553 from the literature, astrology believers often tend to selectively remember those predictions that turned out to be true and do not remember those that turned out false. another, separate, form of confirmation bias also plays a role, where believers often fail to . this, he argued, would have been more persuasive and would have produced less controversy. the use of poetic imagery based on the concepts of the macrocosm and microcosm, " as above so below " to decide meaning such as edward w. james ' example of " mars above is red, so mars below means blood and war ", is a false cause fallacy. : 26 many astrologers claim that astrology is scientific. if one were to attempt to try to explain it scientifically, there are only four fundamental forces ( conventionally ), limiting the choice of possible natural mechanisms. : 65 some astrologers have proposed conventional causal agents such as electromagnetism and gravity. the strength of these forces drops off with distance. : 65 scientists reject these proposed mechanisms as implausible since, for example, the magnetic field, when measured from earth, of a large but distant planet such as jupiter is far smaller than that produced by ordinary household appliances. astronomer phil plait noted that in terms of magnitude, the sun is the only object with an electromagnetic field of note, but astrology isn ' t based just off the sun alone. : 65 while astrologers could try to suggest a fifth force, this is inconsistent with the trends in physics with the unification of electromagnetism and the weak force into the electroweak force. if the astrologer insisted on being inconsistent with the current understanding and evidential basis of physics, that would be an extraordinary claim. : 65 it would also be inconsistent with the other forces which drop off with distance. : 65 if distance is irrelevant, then, logically, all objects in space should be taken into account. : 66 carl jung sought to invoke synchronicity, the claim that two events have some sort of acausal connection, to explain the lack of statistically significant results on astrology from a single study he conducted. however, synchronicity itself is considered neither testable nor falsifiable. the study was subsequently heavily criticised for its non - random sample and its use of statistics and also its lack of consistency with astrology. 
= = psychology = = psychological studies have not found any robust relationship between astrological signs and life outcomes. for example, a study showed that zodiac signs are no more effective than random numbers in predicting subjective well - being and quality of life. it has also been shown that confirmation bias is a psychological factor that contributes to belief in astrology. : 344 : 180 – 181 : , only competed the national cheerleaders & dance association ( nca & nda ) college nationals along with buzz and the goldrush dance team competing here as well. however, in the 2022 season, goldrush competed at the universal cheerleaders & dance association ( uca & uda ) college nationals for the first time and in 2023 the cheer team will compete here for the first time as well. the institute mascots are buzz and the ramblin ' wreck. the institute ' s traditional football rival is the university of georgia ; the rivalry is considered one of the fiercest in college football. the rivalry is commonly referred to as clean, old - fashioned hate, which is also the title of a book about the subject. there is also a long - standing rivalry with clemson. tech has eighteen varsity sports : football, women ' s and men ' s basketball, baseball, softball, volleyball, golf, men ' s and women ' s tennis, men ' s and women ' s swimming and diving, men ' s and women ' s track and field, men ' s and women ' s cross country, and coed cheerleading. four georgia tech football teams were selected as national champions in news polls : 1917, 1928, 1952, and 1990. in may 2007, the women ' s tennis team won the ncaa national championship with a 4 – 2 victory over ucla, the first ever national title granted by the ncaa to tech. = = = fight songs = = = tech ' s fight song " i ' m a ramblin ' wreck from georgia tech " is known worldwide. first published in the 1908 blue print, it was adapted from an old drinking song ( " son of a gambolier " ) and embellished with trumpet flourishes by frank roman. then - vice president richard nixon and soviet premier nikita khrushchev sang the song together when they met in moscow in 1958 to reduce the tension between them. as the story goes, nixon did not know any russian songs, but khrushchev knew that one american song as it had been sung on the ed sullivan show. " i ' m a ramblin ' wreck " has had many other notable moments in its history. it is reportedly the first school song to have been played in space. gregory peck sang the song while strumming a ukulele in the movie the man in the gray flannel suit. john wayne whistled it in the high and the mighty. tim holt ' s character sings a few bars of it in the project consists to determine, mathematically, the trajectory that will take an artificial satellite to fight against the air resistance. during our work, we had to consider that our satellite will crash to the surface of our planet. we started our study by understanding the system of forces that are acting between our satellite and the earth. in this work, we had to study the second law of newton by taking knowledge of the air friction, the speed of the satellite which helped us to find the equation that relates the trajectory of the satellite itself, its speed and the density of the air depending on the altitude. finally, we had to find a mathematic relation that links the density with the altitude and then we had to put it into our movement equation. in order to verify our model, we ' ll see what happens if we give a zero velocity to the satellite. 
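The satellite abstract just above describes combining Newton's second law with an air-friction force and an air density that depends on altitude to obtain a decaying trajectory. The sketch below is a deliberately crude version of that kind of model, not the paper's own: it assumes a point mass, a drag acceleration of 0.5 * beta * rho * v^2 directed opposite the velocity, and a single exponential atmosphere, and every constant is an illustrative assumption.

```python
# Crude sketch: gravity plus altitude-dependent drag, integrated with
# semi-implicit Euler. All constants are illustrative assumptions.
import math

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m
RHO0, H_SCALE = 1.225, 8500.0   # sea-level density (kg/m^3) and scale height (m)
BETA = 5e-3          # Cd * A / m, m^2/kg -- illustrative ballistic coefficient

def density(altitude_m: float) -> float:
    """Single-exponential atmosphere; real profiles are far more structured."""
    return RHO0 * math.exp(-altitude_m / H_SCALE)

def step(x, y, vx, vy, dt):
    """One semi-implicit Euler step of gravity + velocity-dependent drag."""
    r = math.hypot(x, y)
    v = math.hypot(vx, vy)
    rho = density(r - R_EARTH)
    drag = 0.5 * BETA * rho * v          # drag acceleration per unit velocity component
    ax = -MU * x / r**3 - drag * vx      # Newton's second law: gravity + drag
    ay = -MU * y / r**3 - drag * vy
    vx, vy = vx + ax * dt, vy + ay * dt  # update velocity first (semi-implicit Euler)
    return x + vx * dt, y + vy * dt, vx, vy

if __name__ == "__main__":
    # Start 200 km up at roughly circular orbital speed, integrate with dt = 1 s.
    x, y = R_EARTH + 200e3, 0.0
    vx, vy = 0.0, math.sqrt(MU / x)
    t, dt = 0.0, 1.0
    while math.hypot(x, y) > R_EARTH and t < 2.0e5:
        x, y, vx, vy = step(x, y, vx, vy, dt)
        t += dt
    altitude_km = (math.hypot(x, y) - R_EARTH) / 1e3
    if altitude_km <= 0:
        print(f"re-entered after {t / 3600:.1f} simulated hours")
    else:
        print(f"still in orbit after {t / 3600:.1f} hours, altitude {altitude_km:.0f} km")
```

Setting the initial speed to zero reduces this model to a straight fall onto the surface, which is essentially the zero-velocity sanity check the abstract mentions.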
three separate questions of relevance to major league baseball are investigated from a physics perspective. first, can a baseball be hit farther with a corked bat? second, is there evidence that the baseball is more lively today than in earlier years? third, can storing baseballs in a temperature - or humidity - controlled environment significantly affect home run production? each of these questions is subjected to a physics analysis, including an experiment, an interpretation of the data, and a definitive answer. the answers to the three questions are no, no, and yes. , behind which are structures termed reentrant triangles. radar waves penetrating the skin get trapped in these structures, reflecting off the internal faces and losing energy. this method was first used on the blackbird series : a - 12, yf - 12a, lockheed sr - 71 blackbird. the most efficient way to reflect radar waves back to the emitting radar is with orthogonal metal plates, forming a corner reflector consisting of either a dihedral ( two plates ) or a trihedral ( three orthogonal plates ). this configuration occurs in the tail of a conventional aircraft, where the vertical and horizontal components of the tail are set at right angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth air pairs of planck - mass - scale drops of superfluid helium coated by electrons ( i. e., " millikan oil drops " ), when levitated in the presence of strong magnetic fields and at low temperatures, can be efficient quantum transducers between electromagnetic ( em ) and gravitational ( gr ) radiation. 
a hertz - like experiment, in which em waves are converted at the source into gr waves, and then back - converted at the receiver from gr waves back into em waves, should be practical to perform. this would open up observations of the gravity - wave analog of the cosmic microwave background from the extremely early big bang, and also communications directly through the interior of the earth. the gravitational poynting vector provides a mechanism for the transfer of gravitational energy to a system of falling objects. in the following we will show that the gravitational poynting vector together with the gravitational larmor theorem also provides a mechanism to explain how massive bodies acquire rotational kinetic energy when external mechanical forces are applied on them. that uses a phased array, a computer - controlled antenna that can steer the radar beam quickly to point in different directions without moving the antenna. phased - array radars were developed by the military to track fast - moving missiles and aircraft. they are widely used in military equipment and are now spreading to civilian applications. synthetic aperture radar ( sar ) – a specialized airborne radar set that produces a high - resolution map of ground terrain. the radar is mounted on an aircraft or spacecraft and the radar antenna radiates a beam of radio waves sideways at right angles to the direction of motion, toward the ground. in processing the return radar signal, the motion of the vehicle is used to simulate a large antenna, giving the radar a higher resolution. ground - penetrating radar – a specialized radar instrument that is rolled along the ground surface in a cart and transmits a beam of radio waves into the ground, producing an image of subsurface objects. frequencies from 100 mhz to a few ghz are used. since radio waves cannot penetrate very far into earth, the depth of gpr is limited to about 50 feet. collision avoidance system – a short range radar or lidar system on an automobile or vehicle that detects if the vehicle is about to collide with an object and applies the brakes to prevent the collision. radar fuze – a detonator for an aerial bomb which uses a radar altimeter to measure the height of the bomb above the ground as it falls and detonates it at a certain altitude. = = = = radiolocation = = = = radiolocation is a generic term covering a variety of techniques that use radio waves to find the location of objects, or for navigation. global navigation satellite system ( gnss ) or satnav system – a system of satellites which allows geographical location on earth ( latitude, longitude, and altitude / elevation ) to be determined to high precision ( within a few metres ) by small portable navigation instruments, by timing the arrival of radio signals from the satellites. these are the most widely used navigation systems today. the main satellite navigation systems are the us global positioning system ( gps ), russia ' s glonass, china ' s beidou navigation satellite system ( bds ) and the european union ' s galileo. global positioning system ( gps ) – the most widely used satellite navigation system, maintained by the us air force, which uses a constellation of 31 satellites in low earth orbit. the orbits of the satellites are distributed so at any time at least four satellites are above the horizon over each point on Question: A student tosses a ball into the air. Which force causes the ball to fall back to the ground? A) gravity B) magnetism C) mechanical D) friction
A) gravity
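the context above argues that any conventional force tying distant planets to events on earth would have to fall off steeply with distance, and the question itself turns on gravity. as a minimal sketch of that inverse - square falloff, the snippet below compares the newtonian pull of the earth and of jupiter on a person; the constant, masses and the rough earth - jupiter distance are standard approximate values supplied here for illustration, not figures taken from the passage.

# illustrative sketch of the inverse-square law: force = G * m1 * m2 / r**2.
# masses and distances below are rough textbook values, chosen only to show scale.
G = 6.674e-11  # gravitational constant, N m^2 / kg^2

def gravitational_force(m1, m2, r):
    """newtonian attraction between two point masses separated by r metres."""
    return G * m1 * m2 / r ** 2

person = 70.0  # kg
f_earth = gravitational_force(person, 5.97e24, 6.371e6)    # earth mass, earth radius
f_jupiter = gravitational_force(person, 1.90e27, 6.3e11)   # jupiter mass, rough earth-jupiter distance
print(f"pull of earth on a 70 kg person   : {f_earth:.0f} N")
print(f"pull of jupiter on the same person: {f_jupiter:.2e} N")
print(f"earth / jupiter ratio             : {f_earth / f_jupiter:.1e}")

the ratio of roughly seven orders of magnitude is the quantitative point behind the passage's remark that such forces drop off with distance.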
Context: as subjects perceive the sensory world, different stimuli elicit a number of neural representations. here, a subjective distance between stimuli is defined, measuring the degree of similarity between the underlying representations. as an example, the subjective distance between different locations in space is calculated from the activity of rodent hippocampal place cells, and lateral septal cells. such a distance is compared to the real distance, between locations. as the number of sampled neurons increases, the subjective distance shows a tendency to resemble the metrics of real space. the gravitational poynting vector provides a mechanism for the transfer of gravitational energy to a system of falling objects. in the following we will show that the gravitational poynting vector together with the gravitational larmor theorem also provides a mechanism to explain how massive bodies acquire rotational kinetic energy when external mechanical forces are applied on them. einstein, when he began working on the general theory of relativity, believed that energy of any kind is the source of the gravitational field. therefore, the energy of gravity, like any energy, must be the source of the field. it was previously discovered that the energy - momentum tensor of the gravitational field is already contained in the ricci tensor. this hypothesis is used to construct a new equation of the gravitational field. the gravitational inverse square law is microscopic approximation. i suggest that it should be modified for elementary particles to use the surface - to - surface separation of the particles rather than the center - to - center separations. for small particles at macroscopic separations, the ratio between the center - to - center distance d and the surface - to - surface distance d, d / d, approaches unity. at microscopic separations, this ratio grows very large. here i apply this ratio to several microscopic situations and derive the nuclear coupling constants. i will then present a model of a gluon / graviton transformation to justify my surface originating modification. . this, he argued, would have been more persuasive and would have produced less controversy. the use of poetic imagery based on the concepts of the macrocosm and microcosm, " as above so below " to decide meaning such as edward w. james ' example of " mars above is red, so mars below means blood and war ", is a false cause fallacy. : 26 many astrologers claim that astrology is scientific. if one were to attempt to try to explain it scientifically, there are only four fundamental forces ( conventionally ), limiting the choice of possible natural mechanisms. : 65 some astrologers have proposed conventional causal agents such as electromagnetism and gravity. the strength of these forces drops off with distance. : 65 scientists reject these proposed mechanisms as implausible since, for example, the magnetic field, when measured from earth, of a large but distant planet such as jupiter is far smaller than that produced by ordinary household appliances. astronomer phil plait noted that in terms of magnitude, the sun is the only object with an electromagnetic field of note, but astrology isn ' t based just off the sun alone. : 65 while astrologers could try to suggest a fifth force, this is inconsistent with the trends in physics with the unification of electromagnetism and the weak force into the electroweak force. 
if the astrologer insisted on being inconsistent with the current understanding and evidential basis of physics, that would be an extraordinary claim. : 65 it would also be inconsistent with the other forces which drop off with distance. : 65 if distance is irrelevant, then, logically, all objects in space should be taken into account. : 66 carl jung sought to invoke synchronicity, the claim that two events have some sort of acausal connection, to explain the lack of statistically significant results on astrology from a single study he conducted. however, synchronicity itself is considered neither testable nor falsifiable. the study was subsequently heavily criticised for its non - random sample and its use of statistics and also its lack of consistency with astrology. = = psychology = = psychological studies have not found any robust relationship between astrological signs and life outcomes. for example, a study showed that zodiac signs are no more effective than random numbers in predicting subjective well - being and quality of life. it has also been shown that confirmation bias is a psychological factor that contributes to belief in astrology. : 344 : 180 – 181 : grasping an object is a matter of first moving a prehensile organ at some position in the world, and then managing the contact relationship between the prehensile organ and the object. once the contact relationship has been established and made stable, the object is part of the body and it can move in the world. as any action, the action of grasping is ontologically anchored in the physical space while the correlative movement originates in the space of the body. evolution has found amazing solutions that allow organisms to rapidly and efficiently manage the relationship between their body and the world. it is then natural that roboticists consider taking inspiration of these natural solutions, while contributing to better understand their origin. distance measuring capability, called distance measuring equipment ( dme ) ; these are called vor / dme ' s. the aircraft transmits a radio signal to the vor / dme beacon and a transponder transmits a return signal. from the propagation delay between the transmitted and received signal the aircraft can calculate its distance from the beacon. this allows an aircraft to determine its location " fix " from only one vor beacon. since line - of - sight vhf frequencies are used vor beacons have a range of about 200 miles for aircraft at cruising altitude. tacan is a similar military radio beacon system which transmits in 962 – 1213 mhz, and a combined vor and tacan beacon is called a vortac. the number of vor beacons is declining as aviation switches to the rnav system that relies on global positioning system satellite navigation. instrument landing system ( ils ) - a short range radio navigation aid at airports which guides aircraft landing in low visibility conditions. it consists of multiple antennas at the end of each runway that radiate two beams of radio waves along the approach to the runway : the localizer ( 108 to 111. 95 mhz frequency ), which provides horizontal guidance, a heading line to keep the aircraft centered on the runway, and the glideslope ( 329. 15 to 335 mhz ) for vertical guidance, to keep the aircraft descending at the proper rate for a smooth touchdown at the correct point on the runway. each aircraft has a receiver instrument and antenna which receives the beams, with an indicator to tell the pilot whether he is on the correct horizontal and vertical approach. 
the ils beams are receivable for at least 15 miles, and have a radiated power of 25 watts. ils systems at airports are being replaced by systems that use satellite navigation. non - directional beacon ( ndb ) – legacy fixed radio beacons used before the vor system that transmit a simple signal in all directions for aircraft or ships to use for radio direction finding. aircraft use automatic direction finder ( adf ) receivers which use a directional antenna to determine the bearing to the beacon. by taking bearings on two beacons they can determine their position. ndbs use frequencies between 190 and 1750 khz in the lf and mf bands which propagate beyond the horizon as ground waves or skywaves much farther than vor beacons. they transmit a callsign consisting of one to 3 morse code letters as an identifier. emergency locator beacon – a portable battery powered radio quantum mechanics is interpreted by the adjacent vacuum that behaves as a virtual particle to be absorbed and emitted by its matter. as described in the vacuum universe model, the adjacent vacuum is derived from the pre - inflationary universe in which the pre - adjacent vacuum is absorbed by the pre - matter. this absorbed pre - adjacent vacuum is emitted to become the added space for the inflation in the inflationary universe whose space - time is separated from the pre - inflationary universe. this added space is the adjacent vacuum. the absorption of the adjacent vacuum as the added space results in the adjacent zero space ( no space ), quantum mechanics is the interaction between matter and the three different types of vacuum : the adjacent vacuum, the adjacent zero space, and the empty space. the absorption of the adjacent vacuum results in the empty space superimposed with the adjacent zero space, confining the matter in the form of particle. when the absorbed vacuum is emitted, the adjacent vacuum can be anywhere instantly in the empty space superimposed with the adjacent zero space where any point can be the starting point ( zero point ) of space - time. consequently, the matter that expands into the adjacent vacuum has the probability to be anywhere instantly in the form of wavefunction. in the vacuum universe model, the universe not only gains its existence from the vacuum but also fattens itself with the vacuum. during the inflation, the adjacent vacuum also generates the periodic table of elementary particles to account for all elementary particles and their masses in a good agreement with the observed values. although known for almost a century, the photophoretic force has only recently been considered in astrophysical context for the first time. in our work, we have examined the effect of photophoresis, acting together with stellar gravity, radiation pressure, and gas drag, on the evolution of solids in transitional circumstellar disks. we have applied our calculations to four different systems : the disks of hr 4796a and hd 141569a, which are several myr old ab - type stars, and two hypothetical systems that correspond to the solar nebula after disk dispersal has progressed sufficiently for the disk to become optically thin. our results suggest that solid objects migrate inward or outward, until they reach a certain size - dependent stability distance from the star. the larger the bodies, the closer to the star they tend to accumulate. photophoresis increases the stability radii, moving objects to larger distances. 
what is more, photophoresis may cause formation of a belt of objects, but only in a certain range of sizes and only around low - luminosity stars. the effects of photophoresis are noticeable in the size range from several micrometers to several centimeters ( for older transitional disks ) or even several meters ( for younger, more gaseous, ones ). we argue that due to gas damping, rotation does not substantially inhibit photophoresis. confirmation bias is a form of cognitive bias. : 553 from the literature, astrology believers often tend to selectively remember those predictions that turned out to be true and do not remember those that turned out false. another, separate, form of confirmation bias also plays a role, where believers often fail to Question: The gravitational force between two objects depends on the distance between the objects and each object's A) mass B) volume C) pressure D) temperature
A) mass
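the vor / dme description above notes that an aircraft measures its distance to a beacon from the propagation delay of a round - trip radio signal. the sketch below shows that calculation in its simplest form; the measured delay and the fixed transponder reply delay are illustrative numbers, not values from the passage.

# minimal sketch: range from round-trip propagation delay (dme-style ranging).
C = 299_792_458.0  # propagation speed of the radio signal, m/s

def slant_range_m(round_trip_s, reply_delay_s=50e-6):
    """distance = speed * (round-trip time minus fixed transponder reply delay) / 2.
    both time values here are illustrative assumptions."""
    return C * (round_trip_s - reply_delay_s) / 2.0

measured = 383e-6  # hypothetical interrogation-to-reply delay, seconds
d = slant_range_m(measured)
print(f"slant range: {d / 1000:.1f} km ({d / 1852:.1f} nautical miles)")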
Context: enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β€” created by the internal motions of the core β€” produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the higher concentrations of atmospheric nitrous oxide ( n2o ) are expected to slightly warm earth ' s surface because of increases in radiative forcing. radiative forcing is the difference in the net upward thermal radiation flux from the earth through a transparent atmosphere and radiation through an otherwise identical atmosphere with greenhouse gases. radiative forcing, normally measured in w / m ^ 2, depends on latitude, longitude and altitude, but it is often quoted for the tropopause, about 11 km of altitude for temperate latitudes, or for the top of the atmosphere at around 90 km. for current concentrations of greenhouse gases, the radiative forcing per added n2o molecule is about 230 times larger than the forcing per added carbon dioxide ( co2 ) molecule. this is due to the heavy saturation of the absorption band of the relatively abundant greenhouse gas, co2, compared to the much smaller saturation of the absorption bands of the trace greenhouse gas n2o. but the rate of increase of co2 molecules, about 2. 
5 ppm / year ( ppm = part per million by mole ), is about 3000 times larger than the rate of increase of n2o molecules, which has held steady at around 0. 00085 ppm / year since 1985. so, the contribution of nitrous oxide to the annual increase in forcing is 230 / 3000 or about 1 / 13 that of co2. if the main greenhouse gases, co2, ch4 and n2o have contributed about 0. 1 c / decade of the warming observed over the past few decades, this would correspond to about 0. 00064 k per year or 0. 064 k per century of warming from n2o. proposals to place harsh restrictions on nitrous oxide emissions because of warming fears are not justified by these facts. restrictions would cause serious harm ; for example, by jeopardizing world food supplies. and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. other organisms form mutually beneficial relationships with plants. for example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. = = = plants, climate and environmental change = = = plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. for example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. estimates of atmospheric co2 concentrations since the palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. ozone depletion can expose plants to higher levels of ultraviolet radiation - b ( uv - b ), resulting in lower growth rates. moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. = = genetics = = inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. 
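the nitrous oxide passage above works through a short calculation: each added n2o molecule forces about 230 times more than an added co2 molecule, but co2 is accumulating roughly 3000 times faster, so n2o supplies only about 1 / 13 of the annual increase in forcing. the snippet below simply reproduces that arithmetic from the figures quoted in the passage.

# reproduces the ratio worked out in the passage (all figures taken from the text).
forcing_ratio_per_molecule = 230.0   # n2o forcing per added molecule relative to co2
co2_rate = 2.5                       # ppm per year
n2o_rate = 0.00085                   # ppm per year

rate_ratio = co2_rate / n2o_rate                       # roughly 3000
n2o_share = forcing_ratio_per_molecule / rate_ratio    # roughly 1/13
warming_per_decade = 0.1                               # C per decade from co2, ch4 and n2o, per the text
print(f"co2 / n2o accumulation ratio: {rate_ratio:.0f}")
print(f"n2o share of added forcing  : {n2o_share:.3f} (about 1/{1 / n2o_share:.0f})")
print(f"implied n2o warming         : {warming_per_decade * n2o_share * 10:.3f} K per century")

the share comes out near 1 / 13, in line with the passage, and the implied warming of several hundredths of a kelvin per century is the same order as the 0. 064 k per century quoted in the text.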
gregor mendel discovered the genetic laws of inheritance by studying due to its location and climate, antarctica offers unique conditions for long - period observations across a broad wavelength regime, where important diagnostic lines for molecules and ions can be found, that are essential to understand the chemical properties of the interstellar medium. in addition to the natural benefits of the site, new technologies, resulting from astrophotonics, may allow miniaturised instruments, that are easier to winterise and advanced filters to further reduce the background in the infrared. cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface - giving birth to volcanoes. variation in total solar irradiance is thought to have little effect on the earth ' s surface temperature because of the thermal time constant - - the characteristic response time of the earth ' s global surface temperature to changes in forcing. this time constant is large enough to smooth annual variations but not necessarily variations having a longer period such as those due to solar inertial motion ; the magnitude of these surface temperature variations is estimated.
the carbon - based biosphere has generated a system ( humans ) capable of creating technology that will result in a comparable evolutionary transition. the digital information created by humans has reached a similar magnitude to biological information in the biosphere. since the 1980s, the quantity of digital information stored has doubled about every 2. 5 years, reaching about 5 zettabytes in 2014 ( 5Γ—1021 bytes ). in biological terms, there are 7. 2 billion humans on the planet, each having a genome of 6. 2 billion nucleotides. since one byte can encode four nucleotide pairs, the individual genomes of every human on the planet could be encoded by approximately 1Γ—1019 bytes. the digital realm stored 500 times more information than this in 2014 ( see figure ). the total amount of dna contained in all of the cells on earth is estimated to be about 5. 3Γ—1037 base pairs, equivalent to 1. 325Γ—1037 bytes of information. if growth in digital storage continues at its current rate of 30 – 38 % compound annual growth per year, it will rival the total information content contained in all of the dna in all of the cells on earth in about 110 years. this would represent a doubling of the amount of information stored in the biosphere across a total time period of just 150 years ". = = = implications for human society = = = in february 2009, under the auspices of the association for the advancement of artificial intelligence ( aaai ), eric horvitz chaired a meeting of leading computer scientists, artificial intelligence researchers and roboticists at the asilomar conference center in pacific grove, california. the goal was to discuss the potential impact of the hypothetical possibility that robots could become self - sufficient and able to make their own decisions. they discussed the extent to which computers and robots might be able to acquire autonomy, and to what degree they could use such abilities to pose threats or hazards. some machines are programmed with various forms of semi - autonomy, including the ability to locate their own power sources and choose targets to attack with weapons. also, some computer viruses can evade elimination and, according to scientists in attendance, could therefore be said to have reached a " cockroach " stage of machine intelligence. the conference attendees noted that self - awareness as depicted in science - fiction is probably unlikely, but that other potential hazards and pitfalls exist. frank s. robinson predicts that once humans achieve a machine with the intelligence of a human, scientific and technological problems will be tackled and solved with are the cryosphere ( corresponding to ice ) as a distinct portion of the hydrosphere and the pedosphere ( corresponding to soil ) as an active and intermixed sphere. the following fields of science are generally categorized within the earth sciences : geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. 
geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ).
" sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere ) between the surface and the exosphere ( about 1000 km ). major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. = = = earth science breakup = = = = = see also = = = = references = = = = = sources = = = = = ##hosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere Question: Scientists who have studied global climate changes have found that the average temperature of Earth has risen. There has also been an increase in the accumulation of atmospheric greenhouse gases. What is the goal of the scientific community in collecting this type of data? A) to understand how greenhouse gases are related to global warming B) to decrease the temperature using fossil fuels C) to change public attitude on using natural resources D) to increase the amount of ice in the Arctic
A) to understand how greenhouse gases are related to global warming
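the passage on digital information growth earlier in this context gives a doubling time of about 2. 5 years, a 2014 total of 5 zettabytes, and an estimate for the information content of all dna on earth; the exponents look flattened in the extracted text and are read here as 5 x 10 * * 21 bytes and 1. 325 x 10 * * 37 bytes. the sketch below checks the quoted projection that digital storage would rival the dna total in roughly 110 years at 30 - 38 % compound annual growth.

# checks the growth projection quoted in the passage. the two totals below are the
# passage's figures with the flattened exponents restored (an assumption on my part).
import math

digital_2014 = 5e21     # bytes stored digitally in 2014 (5 zettabytes)
dna_total = 1.325e37    # bytes equivalent of the ~5.3e37 dna base pairs (4 pairs per byte)

for growth in (0.30, 0.35, 0.38):  # 30-38 % compound annual growth, per the text
    years = math.log(dna_total / digital_2014) / math.log(1.0 + growth)
    print(f"{growth:.0%} annual growth -> parity in about {years:.0f} years")
print(f"a 2.5 year doubling time corresponds to {2 ** (1 / 2.5) - 1:.0%} growth per year")

the upper end of the quoted growth range lands at roughly 110 years, matching the passage, and the 2. 5 year doubling time corresponds to about 32 % growth per year, inside the quoted range.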
Context: use less energy than conventional thermal separation processes such as distillation, sublimation or crystallization. the separation process is purely physical and both fractions ( permeate and retentate ) can be obtained as useful products. cold separation using membrane technology is widely used in the food technology, biotechnology and pharmaceutical industries. furthermore, using membranes enables separations to take place that would be impossible using thermal separation methods. for example, it is impossible to separate the constituents of azeotropic liquids or solutes which form isomorphic crystals by distillation or recrystallization but such separations can be achieved using membrane technology. depending on the type of membrane, the selective separation of certain individual substances or substance mixtures is possible. important technical applications include the production of drinking water by reverse osmosis. in waste water treatment, membrane technology is becoming increasingly important. ultra / microfiltration can be very effective in removing colloids and macromolecules from wastewater. this is needed if wastewater is discharged into sensitive waters especially those designated for contact water sports and recreation. about half of the market is in medical applications such as artificial kidneys to remove toxic substances by hemodialysis and as artificial lung for bubble - free supply of oxygen in the blood. the importance of membrane technology is growing in the field of environmental protection ( nano - mem - pro ippc database ). even in modern energy recovery techniques, membranes are increasingly used, for example in fuel cells and in osmotic power plants. = = mass transfer = = two basic models can be distinguished for mass transfer through the membrane : the solution - diffusion model and the hydrodynamic model. in real membranes, these two transport mechanisms certainly occur side by side, especially during ultra - filtration. = = = solution - diffusion model = = = in the solution - diffusion model, transport occurs only by diffusion. the component that needs to be transported must first be dissolved in the membrane. the general approach of the solution - diffusion model is to assume that the chemical potential of the feed and permeate fluids are in equilibrium with the adjacent membrane surfaces such that appropriate expressions for the chemical potential in the fluid and membrane phases can be equated at the solution - membrane interface. this principle is more important for dense membranes without natural pores such as those used for reverse osmosis and in fuel cells. during the filtration process a boundary layer forms on the membrane. this concentration gradient is created by molecules which cannot pass through the membrane. the the thickness and the density of the material to be measured. the method is used for containers of liquids or of grainy substances thickness gauges : if the material is of constant density, the signal measured by the radiation detector depends on the thickness of the material. this is useful for continuous production, like of paper, rubber, etc. electrostatic control - to avoid the build - up of static electricity in production of paper, plastics, synthetic textiles, etc., a ribbon - shaped source of the alpha emitter 241am can be placed close to the material at the end of the production line. the source ionizes the air to remove electric charges on the material. 
radioactive tracers - since radioactive isotopes behave, chemically, mostly like the inactive element, the behavior of a certain chemical substance can be followed by tracing the radioactivity. examples : adding a gamma tracer to a gas or liquid in a closed system makes it possible to find a hole in a tube. adding a tracer to the surface of the component of a motor makes it possible to measure wear by measuring the activity of the lubricating oil. oil and gas exploration - nuclear well logging is used to help predict the commercial viability of new or existing wells. the technology involves the use of a neutron or gamma - ray source and a radiation detector which are lowered into boreholes to determine the properties of the surrounding rock such as porosity and lithography. [ 1 ] road construction - nuclear moisture / density gauges are used to determine the density of soils, asphalt, and concrete. typically a cesium - 137 source is used. = = = commercial applications = = = radioluminescence tritium illumination : tritium is used with phosphor in rifle sights to increase nighttime firing accuracy. some runway markers and building exit signs use the same technology, to remain illuminated during blackouts. betavoltaics. smoke detector : an ionization smoke detector includes a tiny mass of radioactive americium - 241, which is a source of alpha radiation. two ionisation chambers are placed next to each other. both contain a small source of 241am that gives rise to a small constant current. one is closed and serves for comparison, the other is open to ambient air ; it has a gridded electrode. when smoke enters the open chamber, the current is disrupted as the smoke particles attach to the charged ions and restore them to a neutral electrical state. this reduces the current in the open chamber. when the current drops below a certain threshold, the of measuring methods. x - rays and gamma rays are used in industrial radiography to make images of the inside of solid products, as a means of nondestructive testing and inspection. the piece to be radiographed is placed between the source and a photographic film in a cassette. after a certain exposure time, the film is developed and it shows any internal defects of the material. gauges - gauges use the exponential absorption law of gamma rays level indicators : source and detector are placed at opposite sides of a container, indicating the presence or absence of material in the horizontal radiation path. beta or gamma sources are used, depending on the thickness and the density of the material to be measured. the method is used for containers of liquids or of grainy substances thickness gauges : if the material is of constant density, the signal measured by the radiation detector depends on the thickness of the material. this is useful for continuous production, like of paper, rubber, etc. electrostatic control - to avoid the build - up of static electricity in production of paper, plastics, synthetic textiles, etc., a ribbon - shaped source of the alpha emitter 241am can be placed close to the material at the end of the production line. the source ionizes the air to remove electric charges on the material. radioactive tracers - since radioactive isotopes behave, chemically, mostly like the inactive element, the behavior of a certain chemical substance can be followed by tracing the radioactivity. examples : adding a gamma tracer to a gas or liquid in a closed system makes it possible to find a hole in a tube. 
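the gauges described above rely on the exponential absorption law of gamma rays: the fraction of radiation reaching the detector falls off exponentially with the thickness of material in the beam. the snippet below inverts that law to recover a thickness from a detector reading; the attenuation coefficient and count rates are illustrative values, not data for any particular material or gamma energy.

# sketch of a radiation thickness gauge: i = i0 * exp(-mu * x), solved for x.
import math

def thickness_m(i_detected, i_source, mu_per_m):
    """thickness x (metres) from detected and source intensities and attenuation mu."""
    return -math.log(i_detected / i_source) / mu_per_m

mu = 45.0                    # linear attenuation coefficient, 1/m (illustrative)
x = thickness_m(620.0, 1000.0, mu)   # arbitrary count rates, for illustration
print(f"inferred thickness: {x * 1000:.1f} mm")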
adding a tracer to the surface of the component of a motor makes it possible to measure wear by measuring the activity of the lubricating oil. oil and gas exploration - nuclear well logging is used to help predict the commercial viability of new or existing wells. the technology involves the use of a neutron or gamma - ray source and a radiation detector which are lowered into boreholes to determine the properties of the surrounding rock such as porosity and lithography. [ 1 ] road construction - nuclear moisture / density gauges are used to determine the density of soils, asphalt, and concrete. typically a cesium - 137 source is used. = = = commercial applications = = = radioluminescence tritium illumination : tritium is used with phosphor in rifle sights to increase nighttime firing accuracy. some runway markers and building exit signs use the same technology, to remain illuminated during blackouts. betavoltaics this article is withdrawn because of a mistake in the main result of the paper. the nervous system. these kinds of tests can be divided into recordings of : ( 1 ) spontaneous or continuously running electrical activity, or ( 2 ) stimulus evoked responses. subspecialties include electroencephalography, electromyography, evoked potential, nerve conduction study and polysomnography. sometimes these tests are performed by techs without a medical degree, but the interpretation of these tests is done by a medical professional. diagnostic radiology is concerned with imaging of the body, e. g. by x - rays, x - ray computed tomography, ultrasonography, and nuclear magnetic resonance tomography. interventional radiologists can access areas in the body under imaging for an intervention or diagnostic sampling. nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances ( radiopharmaceuticals ) to the body, which can then be imaged outside the body by a gamma camera or a pet scanner. each radiopharmaceutical consists of two parts : a tracer that is specific for the function under study ( e. g., neurotransmitter pathway, metabolic pathway, blood flow, or other ), and a radionuclide ( usually either a gamma - emitter or a positron emitter ). there is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the pet / ct scanner. pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic, physiologic changes produced by them. as a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence - based medicine. many modern molecular tests such as flow cytometry, polymerase chain reaction ( pcr ), immunohistochemistry, cytogenetics, gene rearrangements studies and fluorescent in situ hybridization ( fish ) fall within the territory of pathology. = = = = other major specialties = = = = the following are some major medical specialties that do not directly fit into any of the above - mentioned groups : anesthesiology ( also known as anaesthetics ) : concerned with the perioperative management of the surgical patient. the anesthesiologist ' s role during surgery is to prevent derangement in the vital organs ' ( i. e. brain, heart, kidneys ) functions and postoperative pain. outside of industry is making composite materials. these are structured materials composed of two or more macroscopic phases. 
applications range from structural elements such as steel - reinforced concrete, to the thermal insulating tiles, which play a key and integral role in nasa ' s space shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 Β°c ( 2, 750 Β°f ) and protects the space shuttle ' s wing leading edges and nose cap. rcc is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. after curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured - pyrolized to convert the furfuryl alcohol to carbon. to provide oxidation resistance for reusability, the outer layers of the rcc are converted to silicon carbide. other examples can be seen in the " plastic " casings of television sets, cell - phones and so on. these plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene ( abs ) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. these additions may be termed reinforcing fibers, or dispersants, depending on their purpose. = = = polymers = = = polymers are chemical compounds made up of a large number of identical components linked together like chains. polymers are the raw materials ( the resins ) used to make what are commonly called plastics and rubber. plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride ( pvc ), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. rubbers include natural rubber, styrene - butadiene rubber, chloroprene, and butadiene rubber. plastics are generally classified as commodity the chemistry of condensed phases ( solids, liquids, polymers ) and interfaces between different phases. neurochemistry is the study of neurochemicals ; including transmitters, peptides, proteins, lipids, sugars, and nucleic acids ; their interactions, and the roles they play in forming, maintaining, and modifying the nervous system. nuclear chemistry is the study of how subatomic particles come together and make nuclei. modern transmutation is a large component of nuclear chemistry, and the table of nuclides is an important result and tool for this field. in addition to medical applications, nuclear chemistry encompasses nuclear engineering which explores the topic of using nuclear power sources for generating energy. organic chemistry is the study of the structure, properties, composition, mechanisms, and reactions of organic compounds. an organic compound is defined as any compound based on a carbon skeleton. organic compounds can be classified, organized and understood in reactions by their functional groups, unit atoms or molecules that show characteristic chemical properties in a compound. physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. in particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. 
important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry. physical chemistry has large overlap with molecular physics. physical chemistry involves the use of infinitesimal calculus in deriving equations. it is usually associated with quantum chemistry and theoretical chemistry. physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap. theoretical chemistry is the study of chemistry via fundamental theoretical reasoning ( usually within mathematics or physics ). in particular the application of quantum mechanics to chemistry is called quantum chemistry. since the end of the second world war, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. theoretical chemistry has large overlap with ( theoretical and experimental ) condensed matter physics and molecular physics. other subdivisions include electrochemistry, femtochemistry, flavor chemistry, flow chemistry, immunohistochemistry, hydrogenation chemistry, mathematical chemistry, molecular mechanics, natural product chemistry, organometallic chemistry, petrochemistry, photochemistry, physical organic chemistry, polymer chemistry, radiochemistry, sonochemistry, supramolecular chemistry, synthetic chemistry, and many others. = = = interdisciplinary = = = interdisciplinary fields include ag paper withdrawn due to a crucial algebraic error in section 3. to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. sweetgrass was used by native americans to ward off bugs like mosquitoes. these bug repelling properties of sweetgrass were later found by the american chemical society in the molecules phytol and coumarin. = = plant ecology = = plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. 
the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiot ##al radiologists can access areas in the body under imaging for an intervention or diagnostic sampling. nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances ( radiopharmaceuticals ) to the body, which can then be imaged outside the body by a gamma camera or a pet scanner. each radiopharmaceutical consists of two parts : a tracer that is specific for the function under study ( e. g., neurotransmitter pathway, metabolic pathway, blood flow, or other ), and a radionuclide ( usually either a gamma - emitter or a positron emitter ). there is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the pet / ct scanner. pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic, physiologic changes produced by them. as a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence - based medicine. many modern molecular tests such as flow cytometry, polymerase chain reaction ( pcr ), immunohistochemistry, cytogenetics, gene rearrangements studies and fluorescent in situ hybridization ( fish ) fall within the territory of pathology. = = = = other major specialties = = = = the following are some major medical specialties that do not directly fit into any of the above - mentioned groups : anesthesiology ( also known as anaesthetics ) : concerned with the perioperative management of the surgical patient. the anesthesiologist ' s role during surgery is to prevent derangement in the vital organs ' ( i. e. brain, heart, kidneys ) functions and postoperative pain. outside of the operating room, the anesthesiology physician also serves the same function in the labor and delivery ward, and some are specialized in critical medicine. emergency medicine is concerned with the diagnosis and treatment of acute or life - threatening conditions, including trauma, surgical, medical, pediatric, and psychiatric emergencies. family medicine, family practice, general practice or primary care is, in many countries, the first port - of - call for patients with non - emergency medical problems. family physicians often provide services across a broad range of settings including office based practices, emergency department coverage, inpatient care, and nursing home care. medical genetics is concerned with the Question: Paper chromatography is a process used to separate mixtures of substances into their components. The components are carried by a mobile phase through a stationary phase made of absorbent paper. 
An investigation analyzed a sample of black ink to determine its components. Which property allows the components to separate? A) the solubility of the components in the mobile phase B) the evaporation rate of the components at a certain temperature C) the magnetic property of the components D) the thickness of the paper used as the stationary phase
A) the solubility of the components in the mobile phase
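The correct option rests on the components having different solubilities in the mobile phase, so they travel different distances up the paper. Paper-chromatography separations are conventionally summarised by the retention factor Rf (distance moved by a component divided by distance moved by the solvent front); Rf is not mentioned in the passage, so the sketch below, with made-up distances, is only an illustrative aside.

```python
def retention_factor(component_distance_cm: float, solvent_front_cm: float) -> float:
    """Rf = distance travelled by the component / distance travelled by the solvent front."""
    return component_distance_cm / solvent_front_cm

# Hypothetical distances for three dyes in a black-ink sample (illustrative values only).
solvent_front = 10.0  # cm
for dye, distance in {"dye A": 8.2, "dye B": 4.5, "dye C": 1.3}.items():
    print(f"{dye}: Rf = {retention_factor(distance, solvent_front):.2f}")
```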
Context: molecular diffusion processes give rise to significant changes in the primary microstructural features. this includes the gradual elimination of porosity, which is typically accompanied by a net shrinkage and overall densification of the component. thus, the pores in the object may close up, resulting in a denser product of significantly greater strength and fracture toughness. another major change in the body during the firing or sintering process will be the establishment of the polycrystalline nature of the solid. significant grain growth tends to occur during sintering, with this growth depending on temperature and duration of the sintering process. the growth of grains will result in some form of grain size distribution, which will have a significant impact on the ultimate physical properties of the material. in particular, abnormal grain growth in which certain grains grow very large in a matrix of finer grains will significantly alter the physical and mechanical properties of the obtained ceramic. in the sintered body, grain sizes are a product of the thermal processing parameters as well as the initial particle size, or possibly the sizes of aggregates or particle clusters which arise during the initial stages of processing. the ultimate microstructure ( and thus the physical properties ) of the final product will be limited by and subject to the form of the structural template or precursor which is created in the initial stages of chemical synthesis and physical forming. hence the importance of chemical powder and polymer processing as it pertains to the synthesis of industrial ceramics, glasses and glass - ceramics. there are numerous possible refinements of the sintering process. some of the most common involve pressing the green body to give the densification a head start and reduce the sintering time needed. sometimes organic binders such as polyvinyl alcohol are added to hold the green body together ; these burn out during the firing ( at 200 – 350 Β°c ). sometimes organic lubricants are added during pressing to increase densification. it is common to combine these, and add binders and lubricants to a powder, then press. ( the formulation of these organic chemical additives is an art in itself. this is particularly important in the manufacture of high performance ceramics such as those used by the billions for electronics, in capacitors, inductors, sensors, etc. ) a slurry can be used in place of a powder, and then cast into a desired shape, dried and then sintered. indeed, traditional pottery is done with this type of method, using a plastic mixture worked with the hands. from the insignificant drainage areas of streams rising on high ground near the coast and flowing straight down into the sea, up to immense tracts of continents, where rivers rising on the slopes of mountain ranges far inland have to traverse vast stretches of valleys and plains before reaching the ocean. the size of the largest river basin of any country depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. 
when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. 
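The claim above that, for the same fall, a larger river flows faster (because friction against its bed and banks is smaller in proportion to its volume) can be illustrated with the Manning formula, v = (1/n) · R^(2/3) · S^(1/2), in which the hydraulic radius R (flow area divided by wetted perimeter) grows with channel size. The Manning formula is not used in the passage itself; the sketch below is a rough illustration with assumed channel dimensions and roughness.

```python
def manning_velocity(area_m2: float, wetted_perimeter_m: float, slope: float, n: float = 0.035) -> float:
    """Mean velocity from the Manning formula (SI units): v = (1/n) * R^(2/3) * S^(1/2)."""
    hydraulic_radius = area_m2 / wetted_perimeter_m
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

slope = 0.0005  # the same fall (0.5 m per km) for both channels

# Assumed rectangular cross-sections: a small stream and a much larger river.
small = manning_velocity(area_m2=5.0 * 1.0, wetted_perimeter_m=5.0 + 2 * 1.0, slope=slope)
large = manning_velocity(area_m2=200.0 * 5.0, wetted_perimeter_m=200.0 + 2 * 5.0, slope=slope)
print(f"small stream: {small:.2f} m/s, large river: {large:.2f} m/s")  # the larger channel is faster
```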
in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform ##ructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. 
the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models radar signal transmit a return microwave signal. this causes the aircraft to show up more strongly on the radar screen. the radar which triggers the transponder and receives the return beam, usually mounted on top of the primary radar dish, is called the secondary surveillance radar. since radar cannot measure an aircraft ' s altitude with any accuracy, the transponder also transmits back the aircraft ' s altitude measured by its altimeter, and an id number identifying the aircraft, which is displayed on the radar screen. electronic countermeasures ( ecm ) – military defensive electronic systems designed to degrade enemy radar effectiveness, or deceive it with false information, to prevent enemies from locating local forces. it often consists of powerful microwave transmitters that can mimic enemy radar signals to create false target indications on the enemy radar screens. marine radar – an s or x band radar on ships used to detect nearby ships and obstructions like bridges. a rotating antenna sweeps a vertical fan - shaped beam of microwaves around the water surface surrounding the craft out to the horizon. weather radar – a doppler radar which maps weather precipitation intensities and wind speeds with the echoes returned from raindrops and their radial velocity by their doppler shift. phased - array radar – a radar set that uses a phased array, a computer - controlled antenna that can steer the radar beam quickly to point in different directions without moving the antenna. phased - array radars were developed by the military to track fast - moving missiles and aircraft. they are widely used in military equipment and are now spreading to civilian applications. synthetic aperture radar ( sar ) – a specialized airborne radar set that produces a high - resolution map of ground terrain. the radar is mounted on an aircraft or spacecraft and the radar antenna radiates a beam of radio waves sideways at right angles to the direction of motion, toward the ground. in processing the return radar signal, the motion of the vehicle is used to simulate a large antenna, giving the radar a higher resolution. ground - penetrating radar – a specialized radar instrument that is rolled along the ground surface in a cart and transmits a beam of radio waves into the ground, producing an image of subsurface objects. frequencies from 100 mhz to a few ghz are used. since radio waves cannot penetrate very far into earth, the depth of gpr is limited to about 50 feet. collision avoidance system – a short range radar or lidar system on an automobile or vehicle that detects if the vehicle is about to collide with an object and applies the brakes to ##ediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. 
a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river ##grade, in digital television picture quality is not affected by poor reception until, at a certain point, the receiver stops working and the screen goes black. terrestrial television, over - the - air ( ota ) television, or broadcast television – the oldest television technology, is the transmission of television signals from land - based television stations to television receivers ( called televisions or tvs ) in viewer ' s homes. terrestrial television broadcasting uses the bands 41 – 88 mhz ( vhf low band or band i, carrying rf channels 1 – 6 ), 174 – 240 mhz, ( vhf high band or band iii ; carrying rf channels 7 – 13 ), and 470 – 614 mhz ( uhf band iv and band v ; carrying rf channels 14 and up ). the exact frequency boundaries vary in different countries. propagation is by line - of - sight, so reception is limited by the visual horizon. in the us, the effective radiated power ( erp ) of television transmitters is regulated according to height above average terrain. viewers closer to the television transmitter can use a simple " rabbit ears " dipole antenna on top of the tv, but viewers in fringe reception areas typically require an outdoor antenna mounted on the roof to get adequate reception. satellite television – a set - top box which receives subscription direct - broadcast satellite television, and displays it on an ordinary television. 
a direct broadcast satellite in geostationary orbit 22, 200 miles ( 35, 700 km ) above the earth ' s equator transmits many channels ( up to 900 ) modulated on a 12. 2 to 12. 7 ghz ku band microwave downlink signal to a rooftop satellite dish antenna on the subscriber ' s residence. the microwave signal is converted to a lower intermediate frequency at the dish and conducted into the building by a coaxial cable to a set - top box connected to the subscriber ' s tv, where it is demodulated and displayed. the subscriber pays a monthly fee. = = = = time and frequency = = = = government standard frequency and time signal services operate time radio stations which continuously broadcast extremely accurate time signals produced by atomic clocks, as a reference to synchronize other clocks. examples are bpc, dcf77, jjy, msf, rtz, tdf, wwv, and yvto. one use is in radio clocks and watches, which include an automated receiver that periodically ( usually weekly ) receives and decodes the time signal and resets the watch ' s internal quartz clock to the correct time the magnetization of superconducting samples is influenced by their porosity. in addition to structural modifications and improved cooling, the presence of pores also plays a role in trapping magnetic flux. pores have an impact on the irreversibility field, the full penetration field, and the remnant magnetization. generally, as porosity increases, these parameters tend to decrease. however, in the case of mesoscopic samples or samples with low critical current densities, increased porosity can actually enhance the trapping of magnetic flux. ( e. g., trunks of trees, boulders and accumulations of gravel ) from a river bed furnishes a simple and efficient means of increasing the discharging capacity of its channel. such removals will consequently lower the height of floods upstream. every impediment to the flow, in proportion to its extent, raises the level of the river above it so as to produce the additional artificial fall necessary to convey the flow through the restricted channel, thereby reducing the total available fall. reducing the length of the channel by substituting straight cuts for a winding course is the only way in which the effective fall can be increased. this involves some loss of capacity in the channel as a whole, and in the case of a large river with a considerable flow it is difficult to maintain a straight cut owing to the tendency of the current to erode the banks and form again a sinuous channel. even if the cut is preserved by protecting the banks, it is liable to produce changes shoals and raise the flood - level in the channel just below its termination. nevertheless, where the available fall is exceptionally small, as in land originally reclaimed from the sea, such as the english fenlands, and where, in consequence, the drainage is in a great measure artificial, straight channels have been formed for the rivers. because of the perceived value in protecting these fertile, low - lying lands from inundation, additional straight channels have also been provided for the discharge of rainfall, known as drains in the fens. even extensive modification of the course of a river combined with an enlargement of its channel often produces only a limited reduction in flood damage. consequently, such floodworks are only commensurate with the expenditure involved where significant assets ( such as a town ) are under threat. 
additionally, even when successful, such floodworks may simply move the problem further downstream and threaten some other town. recent floodworks in europe have included restoration of natural floodplains and winding courses, so that floodwater is held back and released more slowly. human intervention sometimes inadvertently modifies the course or characteristics of a river, for example by introducing obstructions such as mining refuse, sluice gates for mills, fish - traps, unduly wide piers for bridges and solid weirs. by impeding flow these measures can raise the flood - level upstream. regulations for the management of rivers may include stringent prohibitions with regard to pollution, requirements for enlarging sluice - ways and the compulsory raising of their gates for the passage of floods above any tidal limit and their average freshwater discharge are proportionate to the extent of their basins and the amount of rain which, after falling over these basins, reaches the river channels in the bottom of the valleys, by which it is conveyed to the sea. the drainage basin of a river is the expanse of country bounded by a watershed ( called a " divide " in north america ) over which rainfall flows down towards the river traversing the lowest part of the valley, whereas the rain falling on the far slope of the watershed flows away to another river draining an adjacent basin. river basins vary in extent according to the configuration of the country, ranging from the insignificant drainage areas of streams rising on high ground near the coast and flowing straight down into the sea, up to immense tracts of continents, where rivers rising on the slopes of mountain ranges far inland have to traverse vast stretches of valleys and plains before reaching the ocean. the size of the largest river basin of any country depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. 
in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer Question: In one area, a large source of prey for eagles is rabbits. If the number of rabbits suddenly decreases, what effect will it most likely have on the eagles? A) Their numbers will increase. B) Their numbers will decrease. C) They will adapt new behaviors. D) They will migrate to new locations.
B) Their numbers will decrease.
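The answer follows from predator numbers being coupled to the prey supply: with fewer rabbits, fewer eagles can be supported. A toy model in which the eagle carrying capacity is proportional to the rabbit population makes the point numerically; the model and its parameter values are assumptions for illustration only and do not come from the passage.

```python
def eagle_trend(rabbits: float, eagles: float, eagles_per_rabbit: float = 0.02,
                rate: float = 0.5, years: int = 10) -> float:
    """Eagle numbers relax logistically toward a carrying capacity set by the available prey."""
    capacity = eagles_per_rabbit * rabbits
    for _ in range(years):
        eagles += rate * eagles * (1.0 - eagles / capacity)
    return eagles

print(eagle_trend(rabbits=5000, eagles=80))  # capacity 100 -> eagle numbers rise toward it
print(eagle_trend(rabbits=2000, eagles=80))  # rabbits crash, capacity 40 -> eagle numbers fall
```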
Context: becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. 
a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers – civil works program river morphology and stream restoration references - wildland hydrology at the library of congress web archives ( archived 2002 - 08 - 13 ) world made wide use of hydropower, along with early uses of tidal power, wind power, fossil fuels such as petroleum, and large factory complexes ( tiraz in arabic ). a variety of industrial mills were employed in the islamic world, including fulling mills, gristmills, hullers, sawmills, ship mills, stamp mills, steel mills, and tide mills. by the 11th century, every province throughout the islamic world had these industrial mills in operation. muslim engineers also employed water turbines and gears in mills and water - raising machines, and pioneered the use of dams as a source of water power, used to provide additional power to watermills and water - raising machines. many of these technologies were transferred to medieval europe. wind - powered machines used to grind grain and pump water, the windmill and wind pump, first appeared in what are now iran, afghanistan and pakistan by the 9th century. they were used to grind grains and draw up water, and used in the gristmilling and sugarcane industries. sugar mills first appeared in the medieval islamic world. they were first driven by watermills, and then windmills from the 9th and 10th centuries in what are today afghanistan, pakistan and iran. crops such as almonds and citrus fruit were brought to europe through al - andalus, and sugar cultivation was gradually adopted across europe. arab merchants dominated trade in the indian ocean until the arrival of the portuguese in the 16th century. the muslim world adopted papermaking from china. the earliest paper mills appeared in abbasid - era baghdad during 794 – 795. the knowledge of gunpowder was also transmitted from china via predominantly islamic countries, where formulas for pure potassium nitrate were developed. the spinning wheel was invented in the islamic world by the early 11th century. it was later widely adopted in europe, where it was adapted into the spinning jenny, a key device during the industrial revolution. 
the crankshaft was invented by al - jazari in 1206, and is central to modern machinery such as the steam engine, internal combustion engine and automatic controls. the camshaft was also first described by al - jazari in 1206. early programmable machines were also invented in the muslim world. the first music sequencer, a programmable musical instrument, was an automated flute player invented by the banu musa brothers, described in their book of ingenious devices, in the 9th century. in 1206, al - jazari invented programmable automata / robots. he described four automaton musicians, including two ##ructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. 
the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers – civil works program river morphology and stream restoration references also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. 
in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in ##ediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. 
a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform earliest record of a ship under sail is that of a nile boat dating to around 7, 000 bce. 
from prehistoric times, egyptians likely used the power of the annual flooding of the nile to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and " catch " basins. the ancient sumerians in mesopotamia used a complex system of canals and levees to divert water from the tigris and euphrates rivers for irrigation. archaeologists estimate that the wheel was invented independently and concurrently in mesopotamia ( in present - day iraq ), the northern caucasus ( maykop culture ), and central europe. time estimates range from 5, 500 to 3, 000 bce with most experts putting it closer to 4, 000 bce. the oldest artifacts with drawings depicting wheeled carts date from about 3, 500 bce. more recently, the oldest - known wooden wheel in the world as of 2024 was found in the ljubljana marsh of slovenia ; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, for inland navigation in the lower portion of their course, as, for instance, the rhine, the danube and the mississippi. river engineering works are only required to prevent changes in the course of the stream, to regulate its depth, and especially to fix the low - water channel and concentrate the flow in it, so as to increase as far as practicable the navigable depth at the lowest stage of the water level. engineering works to increase the navigability of rivers can only be advantageously undertaken in large rivers with a moderate fall and a fair discharge at their lowest stage, for with a large fall the current presents a great impediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. 
a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is Question: Moving water in a river is considered a renewable resource because it A) carries dissolved oxygen B) easily erodes sediments C) is made of natural gas D) can be recycled by nature over time
D) can be recycled by nature over time
Context: the gas giant planets in the solar system have a retinue of icy moons, and we expect giant exoplanets to have similar satellite systems. if a jupiter - like planet were to migrate toward its parent star the icy moons orbiting it would evaporate, creating atmospheres and possible habitable surface oceans. here, we examine how long the surface ice and possible oceans would last before being hydrodynamically lost to space. the hydrodynamic loss rate from the moons is determined, in large part, by the stellar flux available for absorption, which increases as the giant planet and icy moons migrate closer to the star. at some planet - star distance the stellar flux incident on the icy moons becomes so great that they enter a runaway greenhouse state. this runaway greenhouse state rapidly transfers all available surface water to the atmosphere as vapor, where it is easily lost from the small moons. however, for icy moons of ganymede ' s size around a sun - like star we found that surface water ( either ice or liquid ) can persist indefinitely outside the runaway greenhouse orbital distance. in contrast, the surface water on smaller moons of europa ' s size will only persist on timescales greater than 1 gyr at distances ranging 1. 49 to 0. 74 au around a sun - like star for bond albedos of 0. 2 and 0. 8, where the lower albedo becomes relevant if ice melts. consequently, small moons can lose their icy shells, which would create a torus of h atoms around their host planet that might be detectable in future observations. outer satellites of the planets have distant, eccentric orbits that can be highly inclined or even retrograde relative to the equatorial planes of their planets. these irregular orbits cannot have formed by circumplanetary accretion and are likely products of early capture from heliocentric orbit. the irregular satellites may be the only small bodies remaining which are still relatively near their formation locations within the giant planet region. the study of the irregular satellites provides a unique window on processes operating in the young solar system and allows us to probe possible planet formation mechanisms and the composition of the solar nebula between the rocky objects in the main asteroid belt and the very volatile rich objects in the kuiper belt. the gas and ice giant planets all appear to have very similar irregular satellite systems irrespective of their mass or formation timescales and mechanisms. water ice has been detected on some of the outer satellites of saturn and neptune whereas none has been observed on jupiter ' s outer satellites. armed with an astrolabe and kepler ' s laws one can arrive at accurate estimates of the orbits of planets. planetary systems can evolve dynamically even after the full growth of the planets themselves. there is actually circumstantial evidence that most planetary systems become unstable after the disappearance of gas from the protoplanetary disk. these instabilities can be due to the original system being too crowded and too closely packed or to external perturbations such as tides, planetesimal scattering, or torques from distant stellar companions. the solar system was not exceptional in this sense. in its inner part, a crowded system of planetary embryos became unstable, leading to a series of mutual impacts that built the terrestrial planets on a timescale of ~ 100 my. in its outer part, the giant planets became temporarily unstable and their orbital configuration expanded under the effect of mutual encounters. 
a planet might have been ejected in this phase. thus, the orbital distributions of planetary systems that we observe today, both solar and extrasolar ones, can be different from those emerging from the formation process and it is important to consider possible long - term evolutionary effects to connect the two. three major planets, venus, earth, and mercury formed out of the solar nebula. a fourth planetesimal, theia, also formed near earth where it collided in a giant impact, rebounding as the planet mars. during this impact earth lost $\approx 4\%$ of its crust and mantle that is now found on mars and the moon. at the antipode of the giant impact, $\approx 60\%$ of earth ' s crust, atmosphere, and a large amount of mantle were ejected into space forming the moon. the lost crust never reformed and became the earth ' s ocean basins. the theia impact site corresponds to the indian ocean gravitational anomaly on earth and the hellas basin on mars. the dynamics of the giant impact are consistent with the rotational rates and axial tilts of both earth and mars. the giant impact removed sufficient co$_2$ from earth ' s atmosphere to avoid a runaway greenhouse effect, initiated plate tectonics, and gave life time to form near geothermal vents at the continental margins. mercury formed near venus where on a close approach it was slingshot into the sun ' s convective zone losing 94\% of its mass, much of which remains there today. black carbon, from co$_2$ decomposed by the intense heat, is still found on the surface of mercury. arriving at 616 km / s, mercury dramatically altered the sun ' s rotational energy, explaining both its anomalously slow rotation rate and axial tilt. these results are quantitatively supported by mass balances, the current locations of the terrestrial planets, and the orientations of their major orbital axes. recent surveys have revealed a lack of close - in planets around evolved stars more massive than 1. 2 msun. such planets are common around solar - mass stars. we have calculated the orbital evolution of planets around stars with a range of initial masses, and have shown how planetary orbits are affected by the evolution of the stars all the way to the tip of the red giant branch ( rgb ). we find that tidal interaction can lead to the engulfment of close - in planets by evolved stars. the engulfment is more efficient for more - massive planets and less - massive stars. these results may explain the observed semi - major axis distribution of planets around evolved stars with masses larger than 1. 5 msun. our results also suggest that massive planets may form more efficiently around intermediate - mass stars. a 4mj planet with a 15. 8 day orbital period has been detected from very precise radial velocity measurements with the coralie echelle spectrograph. a second remote and more massive companion has also been detected. all the planetary companions so far detected in orbit closer than 0. 08 au have a parent star with a statistically higher metal content compared to the metallicity distribution of other stars with planets. different processes occurring during their formation may provide a possible explanation for this observation. three planets with minimum masses less than 10 earth masses orbit the star hd 40307, suggesting these planets may be rocky. however, with only radial velocity data, it is impossible to determine if these planets are rocky or gaseous. 
here we exploit various dynamical features of the system in order to assess the physical properties of the planets. observations allow for circular orbits, but a numerical integration shows that the eccentricities must be at least 0. 0001. also, planets b and c are so close to the star that tidal effects are significant. if planet b has tidal parameters similar to the terrestrial planets in the solar system and a remnant eccentricity larger than 0. 001, then, going back in time, the system would have been unstable within the lifetime of the star ( which we estimate to be 6. 1 + / - 1. 6 gyr ). moreover, if the eccentricities are that large and the inner planet is rocky, then its tidal heating may be an order of magnitude greater than that of the extremely volcanic io, on a per unit surface area basis. if planet b is not terrestrial, e. g. neptune - like, these physical constraints would not apply. this analysis suggests the planets are not terrestrial - like, and are more like our giant planets. in either case, we find that the planets probably formed at larger radii and migrated early - on ( via disk interactions ) into their current orbits. this study demonstrates how the orbital and dynamical properties of exoplanet systems may be used to constrain the planets ' physical properties. large scale manned space flight within the solar system is still confronted with the solution of two problems : 1. a propulsion system to transport large payloads with short transit times between different planetary orbits. 2. a cost effective lifting of large payloads into earth orbit. for the solution of the first problem a deuterium fusion bomb propulsion system is proposed where a thermonuclear detonation wave is ignited in a small cylindrical assembly of deuterium with a gigavolt - multimegampere proton beam, drawn from the magnetically insulated spacecraft acting in the ultrahigh vacuum of space as a gigavolt capacitor. for the solution of the second problem, the ignition is done by argon ion lasers driven by high explosives, with the lasers destroyed in the fusion explosion and becoming part of the exhaust. nasa also launched missions to mercury in 2004, with the messenger probe demonstrating the first use of a solar sail. nasa also launched probes to the outer solar system starting in the 1960s. pioneer 10 was the first probe to the outer planets, flying by jupiter, while pioneer 11 provided the first close - up view of saturn. both probes became the first objects to leave the solar system. the voyager program launched in 1977, conducting flybys of jupiter, saturn, uranus, and neptune on a trajectory to leave the solar system. the galileo spacecraft, deployed from the space shuttle flight sts - 34, was the first spacecraft to orbit jupiter, discovering evidence of subsurface oceans on europa and observing that the moon may hold ice or liquid water. a joint nasa - european space agency - italian space agency mission, cassini – huygens, was sent to saturn ' s moon titan, which, along with mars and europa, is among the only celestial bodies in the solar system suspected of being capable of harboring life. cassini discovered three new moons of saturn and the huygens probe entered titan ' s atmosphere. the mission discovered evidence of liquid hydrocarbon lakes on titan and subsurface water oceans on the moon of enceladus, which could harbor life. finally, launched in 2006, the new horizons mission was the first spacecraft to visit pluto and the kuiper belt. 
beyond interplanetary probes, nasa has launched many space telescopes. launched in the 1960s, the orbiting astronomical observatories were nasa ' s first orbital telescopes, providing ultraviolet, gamma - ray, x - ray, and infrared observations. nasa launched the orbiting geophysical observatory in the 1960s and 1970s to look down at earth and observe its interactions with the sun. the uhuru satellite was the first dedicated x - ray telescope, mapping 85 % of the sky and discovering a large number of black holes. launched in the 1990s and early 2000s, the great observatories program includes some of nasa ' s most powerful telescopes. the hubble space telescope was launched in 1990 on sts - 31 from the discovery and could view galaxies 15 billion light years away. a major defect in the telescope ' s mirror could have crippled the program, had nasa not used computer enhancement to compensate for the imperfection and launched five space shuttle servicing flights to replace the damaged components. the compton gamma ray observatory was launched from the atlantis on sts - 37 in 1991, discovering a possible source of antimatter at the center of the milky way and observing that the majority of gamma - ray bursts Question: When studying planetary data, Jackie found some similarities and differences among the planets that make up the solar system. Which generalization is true about the orbits of the major planets in the solar system? A) Planetary orbits have the same plane. B) Planetary orbits are mostly elliptical in shape. C) Planets orbit at the same speed around the Sun. D) Planets orbit in different directions around the Sun.
B) Planetary orbits are mostly elliptical in shape.
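The icy-moon passage in the context above quotes water-persistence distances of 1. 49 au and 0. 74 au for bond albedos of 0. 2 and 0. 8 around a sun-like star. A minimal python sketch (not taken from the quoted abstract; the blackbody energy balance and the 270 K threshold below are illustrative assumptions) shows where that factor-of-two comes from: for any fixed equilibrium-temperature threshold, the critical distance scales as sqrt(1 - albedo), and sqrt(0.2 / 0.8) = 0.5, matching the quoted 0.74 / 1.49 ≈ 0.50 ratio. The published limits come from a full runaway-greenhouse and hydrodynamic-escape calculation, not from this toy balance.

```python
# Illustrative sketch: how the Bond albedo shifts the orbital distance at which a
# moon reaches a given equilibrium temperature around a Sun-like star.
# The 270 K threshold is a stand-in, not the runaway-greenhouse limit from the paper.
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26         # solar luminosity, W
AU = 1.495978707e11      # astronomical unit, m

def equilibrium_temperature(distance_au: float, bond_albedo: float) -> float:
    """Equilibrium temperature of a rapidly rotating body around a Sun-like star."""
    flux = L_SUN / (4.0 * math.pi * (distance_au * AU) ** 2)
    return (flux * (1.0 - bond_albedo) / (4.0 * SIGMA)) ** 0.25

def distance_for_temperature(t_eq: float, bond_albedo: float) -> float:
    """Orbital distance (au) at which the equilibrium temperature equals t_eq."""
    flux = 4.0 * SIGMA * t_eq ** 4 / (1.0 - bond_albedo)
    return math.sqrt(L_SUN / (4.0 * math.pi * flux)) / AU

if __name__ == "__main__":
    # For a fixed threshold temperature, d scales as sqrt(1 - A), so
    # d(A=0.8) / d(A=0.2) = sqrt(0.2 / 0.8) = 0.5, the same ratio as 0.74 au / 1.49 au.
    t_threshold = 270.0  # K, illustrative only
    for albedo in (0.2, 0.8):
        d = distance_for_temperature(t_threshold, albedo)
        t_check = equilibrium_temperature(d, albedo)
        print(f"A = {albedo:.1f}: d = {d:.2f} au, T_eq check = {t_check:.0f} K")
```

Running the sketch prints distances near 0.95 au and 0.48 au for the assumed threshold; only the ratio, not the absolute values, is meant to line up with the numbers quoted in the passage.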
Context: ##ediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river . microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. 
ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. 
the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers – civil works program river morphology and stream restoration references ##ructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models aquatic and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience as not all algae are closely related. algae comprise several distinct clades such as glaucophytes, which are microscopic freshwater algae that may have resembled in form to the early unicellular ancestor of plantae. 
unlike glaucophytes, the other algal clades such as red and green algae are multicellular. green algae comprise three major clades : chlorophytes, coleochaetophytes, and stoneworts. fungi are eukaryotes that digest foods outside their bodies, secreting digestive enzymes that break down large food molecules before absorbing them through their cell membranes. many fungi are also saprobes, feeding on dead organic matter, making them important decomposers in ecological systems. animals are multicellular eukaryotes. with few exceptions, animals consume organic material, breathe oxygen, are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. over 1. 5 million living animal species have been described – of which around 1 million are insects – but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. more than 6, 000 virus species have been described in detail. viruses are found in almost every ecosystem on earth and are the most numerous type of biological entity. the origins of viruses in the evolutionary history of life are unclear : some may have evolved from plasmids – pieces of dna that can move between cells – while others may have evolved from bacteria. in evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. because viruses possess some but not all characteristics of life, they have been described as " organisms at the edge of life ", and as self - replicators. = = ecology = = ecology is the study of the distribution and abundance of life, the interaction between organisms and their environment. = = = ecosystems = = = the community of living ( biotic ) organisms in conjunction with the nonliving ( abiotic ) components ( e. , which would exclude fungi and some algae. plant cells were derived by endosymbiosis of a cyanobacterium into an early eukaryote about one billion years ago, which gave rise to chloroplasts. the first several clades that emerged following primary endosymbiosis were aquatic and most of the aquatic photosynthetic eukaryotic organisms are collectively described as algae, which is a term of convenience as not all algae are closely related. algae comprise several distinct clades such as glaucophytes, which are microscopic freshwater algae that may have resembled in form to the early unicellular ancestor of plantae. 
5 million living animal species have been described – of which around 1 million are insects – but it has been estimated there are over 7 million animal species in total. they have complex interactions with each other and their environments, forming intricate food webs. = = = viruses = = = viruses are submicroscopic infectious agents that replicate inside the cells of organisms. viruses infect all types of life forms, from animals and plants to microorganisms, including bacteria and archaea. more than 6, 000 virus species have been described in detail. viruses are found in almost every ecosystem on earth and are the most numerous type of biological entity. the origins of viruses in the evolutionary history of life are unclear : some may have evolved from plasmids – pieces of dna that can move between cells – while others may have evolved from bacteria. in evolution, viruses are an important means of horizontal gene transfer, which increases genetic diversity in a way analogous to sexual reproduction. because viruses possess some but not all characteristics of life, they have been described as " organisms at the edge of life ", remediation include soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. the three most common types of environmental remediation are soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important considering water is an essential natural resource. depending on the source of water there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. it is almost similar to soil remediation except it is often more sophisticated as it involves additional contaminants. to reduce the contaminants it is likely to use physical, chemical, and biological processes that help with source control, but if these processes are executed correctly, there ' s a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with both industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed of, can encourage the infestation of insects and rodents, contributing to the spread of diseases. some of the most common types of solid waste management include landfills, vermicomposting, composting, recycling, and incineration. however, a major barrier for solid waste management practices is the high costs associated with recycling and the risks of creating more pollution. 
= = = e - waste recycling = = = the recycling of electronic waste ( e - waste ) has seen significant technological advancements due to increasing environmental concerns and the growing volume of electronic product disposals. traditional e - waste recycling methods, which often involve manual disassemb also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. 
the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers – civil works program river morphology and stream restoration references - wildland hydrology at the library of congress web archives ( archived 2002 - 08 - 13 ) ##lling, pipe jacking and other operations. a caisson is sunk by self - weight, concrete or water ballast placed on top, or by hydraulic jacks. the leading edge ( or cutting shoe ) of the caisson is sloped out at a sharp angle to aid sinking in a vertical manner ; it is usually made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axis ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. 
= = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called Question: When nitrogen-rich runoff from farms enters a nearby pond, it causes abundant growth of an algal mat on the pond surface. As a result, underwater plants in the pond begin to die. Identify the limiting factor that is most responsible for causing the underwater plants to die. A) oxygen B) sunlight C) nitrogen D) carbon dioxide
B) sunlight
Context: world made wide use of hydropower, along with early uses of tidal power, wind power, fossil fuels such as petroleum, and large factory complexes ( tiraz in arabic ). a variety of industrial mills were employed in the islamic world, including fulling mills, gristmills, hullers, sawmills, ship mills, stamp mills, steel mills, and tide mills. by the 11th century, every province throughout the islamic world had these industrial mills in operation. muslim engineers also employed water turbines and gears in mills and water - raising machines, and pioneered the use of dams as a source of water power, used to provide additional power to watermills and water - raising machines. many of these technologies were transferred to medieval europe. wind - powered machines used to grind grain and pump water, the windmill and wind pump, first appeared in what are now iran, afghanistan and pakistan by the 9th century. they were used to grind grains and draw up water, and used in the gristmilling and sugarcane industries. sugar mills first appeared in the medieval islamic world. they were first driven by watermills, and then windmills from the 9th and 10th centuries in what are today afghanistan, pakistan and iran. crops such as almonds and citrus fruit were brought to europe through al - andalus, and sugar cultivation was gradually adopted across europe. arab merchants dominated trade in the indian ocean until the arrival of the portuguese in the 16th century. the muslim world adopted papermaking from china. the earliest paper mills appeared in abbasid - era baghdad during 794 – 795. the knowledge of gunpowder was also transmitted from china via predominantly islamic countries, where formulas for pure potassium nitrate were developed. the spinning wheel was invented in the islamic world by the early 11th century. it was later widely adopted in europe, where it was adapted into the spinning jenny, a key device during the industrial revolution. the crankshaft was invented by al - jazari in 1206, and is central to modern machinery such as the steam engine, internal combustion engine and automatic controls. the camshaft was also first described by al - jazari in 1206. early programmable machines were also invented in the muslim world. the first music sequencer, a programmable musical instrument, was an automated flute player invented by the banu musa brothers, described in their book of ingenious devices, in the 9th century. in 1206, al - jazari invented programmable automata / robots. he described four automaton musicians, including two substrate - level phosphorylation, which does not require oxygen. = = = photosynthesis = = = photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism ' s metabolic activities via cellular respiration. this chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product. most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. 
the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the liver glycogen. during recovery, when oxygen becomes available, nad + attaches to hydrogen from lactate to form atp. in yeast, the waste products are ethanol and carbon dioxide. this type of fermentation is known as alcoholic or ethanol fermentation. the atp generated in this process is made by substrate - level phosphorylation, which does not require oxygen. = = = photosynthesis = = = photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism ' s metabolic activities via cellular respiration. this chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product. most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. 
this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and ##ulating the liquid below from the cold air above. water has the capacity to absorb energy, giving it a higher specific heat capacity than other solvents such as ethanol. thus, a large amount of energy is needed to break the hydrogen bonds between water molecules to convert liquid water into water vapor. as a molecule, water is not completely stable as each water molecule continuously dissociates into hydrogen and hydroxyl ions before reforming into a water molecule again. in pure water, the number of hydrogen ions balances ( or equals ) the number of hydroxyl ions, resulting in a ph that is neutral. = = = organic compounds = = = organic compounds are molecules that contain carbon bonded to another element such as hydrogen. with the exception of water, nearly all the molecules that make up each organism contain carbon. carbon can form covalent bonds with up to four other atoms, enabling it to form diverse, large, and complex molecules. for example, a single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide ( co2 ), or a triple covalent bond such as in carbon monoxide ( co ). moreover, carbon can form very long chains of interconnecting carbon – carbon bonds such as octane or ring - like structures such as glucose. the simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. a hydrocarbon backbone can be substituted by other elements such as oxygen ( o ), hydrogen ( h ), phosphorus ( p ), and sulfur ( s ), which can change the chemical behavior of that compound. groups of atoms that contain these elements ( o -, h -, p -, and s - ) and are bonded to a central carbon atom or skeleton are called functional groups. there are six prominent functional groups that can be found in organisms : amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. in 1953, the miller – urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early earth, thus suggesting that complex organic molecules could have arisen spontaneously in early earth ( see abiogenesis ). = = = macromolecules = = = macromolecules are large molecules made up of smaller subunits or monomers. monomers include sugars, amino acids, power to watermills and water - raising machines. many of these technologies were transferred to medieval europe. wind - powered machines used to grind grain and pump water, the windmill and wind pump, first appeared in what are now iran, afghanistan and pakistan by the 9th century. they were used to grind grains and draw up water, and used in the gristmilling and sugarcane industries. sugar mills first appeared in the medieval islamic world. they were first driven by watermills, and then windmills from the 9th and 10th centuries in what are today afghanistan, pakistan and iran. 
crops such as almonds and citrus fruit were brought to europe through al - andalus, and sugar cultivation was gradually adopted across europe. arab merchants dominated trade in the indian ocean until the arrival of the portuguese in the 16th century. the muslim world adopted papermaking from china. the earliest paper mills appeared in abbasid - era baghdad during 794 – 795. the knowledge of gunpowder was also transmitted from china via predominantly islamic countries, where formulas for pure potassium nitrate were developed. the spinning wheel was invented in the islamic world by the early 11th century. it was later widely adopted in europe, where it was adapted into the spinning jenny, a key device during the industrial revolution. the crankshaft was invented by al - jazari in 1206, and is central to modern machinery such as the steam engine, internal combustion engine and automatic controls. the camshaft was also first described by al - jazari in 1206. early programmable machines were also invented in the muslim world. the first music sequencer, a programmable musical instrument, was an automated flute player invented by the banu musa brothers, described in their book of ingenious devices, in the 9th century. in 1206, al - jazari invented programmable automata / robots. he described four automaton musicians, including two drummers operated by a programmable drum machine, where the drummer could be made to play different rhythms and different drum patterns. the castle clock, a hydropowered mechanical astronomical clock invented by al - jazari, was an early programmable analog computer. in the ottoman empire, a practical impulse steam turbine was invented in 1551 by taqi ad - din muhammad ibn ma ' ruf in ottoman egypt. he described a method for rotating a spit by means of a jet of steam playing on rotary vanes around the periphery of a wheel. known as a steam jack, a similar device for rotating a spit was also later described by john the basis of all plant metabolism. the energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. heterotrophs including all animals, all fungi, all completely parasitic plants, and non - photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. molecules are moved within plants by transport processes that operate at a variety of spatial scales. subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. minerals and water are transported from roots to other parts of the plant in the transpiration stream. diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. in vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. 
sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes. = = = plant hormones = = = plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of mimosa pudica, the insect traps of venus flytrap and bladderworts, and the pollinia of orchids. the hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded " it is hardly an exaggeration to say that the tip of the radicle.. acts like the brain of one of the lower animals.. directing the several movements ". about the same time, the role of auxins ( from the greek auxein, to grow ) in control of plant growth was first outlined by the dutch scientist energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) – including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photos water, and used in the gristmilling and sugarcane industries. 
sugar mills first appeared in the medieval islamic world. they were first driven by watermills, and then windmills from the 9th and 10th centuries in what are today afghanistan, pakistan and iran. crops such as almonds and citrus fruit were brought to europe through al - andalus, and sugar cultivation was gradually adopted across europe. arab merchants dominated trade in the indian ocean until the arrival of the portuguese in the 16th century. the muslim world adopted papermaking from china. the earliest paper mills appeared in abbasid - era baghdad during 794 – 795. the knowledge of gunpowder was also transmitted from china via predominantly islamic countries, where formulas for pure potassium nitrate were developed. the spinning wheel was invented in the islamic world by the early 11th century. it was later widely adopted in europe, where it was adapted into the spinning jenny, a key device during the industrial revolution. the crankshaft was invented by al - jazari in 1206, and is central to modern machinery such as the steam engine, internal combustion engine and automatic controls. the camshaft was also first described by al - jazari in 1206. early programmable machines were also invented in the muslim world. the first music sequencer, a programmable musical instrument, was an automated flute player invented by the banu musa brothers, described in their book of ingenious devices, in the 9th century. in 1206, al - jazari invented programmable automata / robots. he described four automaton musicians, including two drummers operated by a programmable drum machine, where the drummer could be made to play different rhythms and different drum patterns. the castle clock, a hydropowered mechanical astronomical clock invented by al - jazari, was an early programmable analog computer. in the ottoman empire, a practical impulse steam turbine was invented in 1551 by taqi ad - din muhammad ibn ma ' ruf in ottoman egypt. he described a method for rotating a spit by means of a jet of steam playing on rotary vanes around the periphery of a wheel. known as a steam jack, a similar device for rotating a spit was also later described by john wilkins in 1648. = = = = medieval europe = = = = while medieval technology has been long depicted as a step backward in the evolution of western technology, a generation of medievalists ( like the american historian of science lynn white ) stressed from the 1940s onwards the innovative character of many medieval techniques. genuine medieval contributions include earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. 
it can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecoystems are affected by humans. glaciology is the study of the cryosphere, including glaciers and coverage of the earth by ice and snow. concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere. = = ecology = = ecology is the study of the biosphere. this includes the study of nature and of how living things interact with the earth and one another and the consequences of that. it considers how living things use resources such as oxygen, water, and nutrients from the earth to sustain themselves. it also considers how humans and other living creatures cause changes to nature. = = physical geography = = physical geography is the study of earth ' s systems and how they interact with one another as part of a single self - contained system. it incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. physical geography is distinct from human geography, which studies the human populations on earth, though it does include human effects on the environment. = = methodology = = methodologies vary depending on the nature of the subjects being studied. studies typically fall into one of three categories : observational, experimental, or theoretical. earth scientists often conduct sophisticated computer analysis or visit an interesting location to study earth phenomena ( masculinity and warmth. the five phases – fire, earth, metal, wood, and water – described a cycle of transformations in nature. the water turned into wood, which turned into the fire when it burned. the ashes left by fire were earth. using these principles, chinese philosophers and doctors explored human anatomy, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the west. little evidence survives of how ancient indian cultures around the indus river understood nature, but some of their perspectives may be reflected in the vedas, a set of sacred hindu texts. they reveal a conception of the universe as ever - expanding and constantly being recycled and reformed. surgeons in the ayurvedic tradition saw health and illness as a combination of three humors : wind, bile and phlegm. a healthy life resulted from a balance among these humors. in ayurvedic thought, the body consisted of five elements : earth, water, fire, wind, and space. ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy. pre - socratic philosophers in ancient greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 bc. however, an element of magic and mythology remained. natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. thales of miletus, an early philosopher who lived from 625 to 546 bc, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. 
in the 5th century bc, leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. pythagoras applied greek innovations in mathematics to astronomy and suggested that the earth was spherical. = = = aristotelian natural philosophy ( 400 bc – 1100 ad ) = = = later socratic and platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world ; plato criticized pre - socratic thinkers as materialists and anti - religionists. aristotle, however, a student of plato who lived from 384 to 322 bc, paid closer attention to the natural world in his philosophy. in his history of animals, he described the inner workings of 110 species, including the stingray, catfish and Question: What is the main source of energy for the water cycle? A) the Sun B) fossil fuels C) clouds D) the ocean
A) the Sun
Context: building block. ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. hot pressing provides higher density material. chemical vapor deposition can place a film of a ceramic on another material. cermets are ceramic particles containing some metals. the wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties. ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. this process involves the strategic addition of second - phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. this approach enhances fracture toughness, paving the way for the creation of advanced, high - performance ceramics in various industries. = = = composites = = = another application of materials science in industry is making composite materials. these are structured materials composed of two or more macroscopic phases. applications range from structural elements such as steel - reinforced concrete, to the thermal insulating tiles, which play a key and integral role in nasa ' s space shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 Β°c ( 2, 750 Β°f ) and protects the space shuttle ' s wing leading edges and nose cap. rcc is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. after curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a which constitutes anywhere from 30 % [ m / m ] to 90 % [ m / m ] of its composition by volume, yielding an array of materials with interesting thermomechanical properties. in the processing of glass - ceramics, molten glass is cooled down gradually before reheating and annealing. in this heat treatment the glass partly crystallizes. in many cases, so - called ' nucleation agents ' are added in order to regulate and control the crystallization process. because there is usually no pressing and sintering, glass - ceramics do not contain the volume fraction of porosity typically present in sintered ceramics. the term mainly refers to a mix of lithium and aluminosilicates which yields an array of materials with interesting thermomechanical properties. the most commercially important of these have the distinction of being impervious to thermal shock. 
thus, glass - ceramics have become extremely useful for countertop cooking. the negative thermal expansion coefficient ( tec ) of the crystalline ceramic phase can be balanced with the positive tec of the glassy phase. at a certain point ( ~ 70 % crystalline ) the glass - ceramic has a net tec near zero. this type of glass - ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °c. = = processing steps = = the traditional ceramic process generally follows this sequence : milling → batching → mixing → forming → drying → firing → assembly. milling is the process by which materials are reduced from a large size to a smaller size. milling may involve breaking up cemented material ( in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break up or particle shearing ), compression ( which applies a force that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and breaks up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to - particle attrition and compression 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 °c ( 4. 5 °f ). the specialty of processing food by ionizing radiation is the fact that the energy density per atomic transition is very high ; it can cleave molecules and induce ionization ( hence the name ), which cannot be achieved by mere heating. this is the reason for new beneficial effects, but at the same time also for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may also use gamma - rays from nuclear decay.
there is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and a canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is the recent report on laser cooling of liquid may contradict the law of energy conservation. thermal expansion coefficient ( tec ) of the crystalline ceramic phase can be balanced with the positive tec of the glassy phase. at a certain point ( ~ 70 % crystalline ) the glass - ceramic has a net tec near zero. this type of glass - ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °c. = = processing steps = = the traditional ceramic process generally follows this sequence : milling → batching → mixing → forming → drying → firing → assembly. milling is the process by which materials are reduced from a large size to a smaller size. milling may involve breaking up cemented material ( in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break up or particle shearing ), compression ( which applies a force that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and breaks up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to - particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can molecular diffusion processes give rise to significant changes in the primary microstructural features.
this includes the gradual elimination of porosity, which is typically accompanied by a net shrinkage and overall densification of the component. thus, the pores in the object may close up, resulting in a denser product of significantly greater strength and fracture toughness. another major change in the body during the firing or sintering process will be the establishment of the polycrystalline nature of the solid. significant grain growth tends to occur during sintering, with this growth depending on temperature and duration of the sintering process. the growth of grains will result in some form of grain size distribution, which will have a significant impact on the ultimate physical properties of the material. in particular, abnormal grain growth in which certain grains grow very large in a matrix of finer grains will significantly alter the physical and mechanical properties of the obtained ceramic. in the sintered body, grain sizes are a product of the thermal processing parameters as well as the initial particle size, or possibly the sizes of aggregates or particle clusters which arise during the initial stages of processing. the ultimate microstructure ( and thus the physical properties ) of the final product will be limited by and subject to the form of the structural template or precursor which is created in the initial stages of chemical synthesis and physical forming. hence the importance of chemical powder and polymer processing as it pertains to the synthesis of industrial ceramics, glasses and glass - ceramics. there are numerous possible refinements of the sintering process. some of the most common involve pressing the green body to give the densification a head start and reduce the sintering time needed. sometimes organic binders such as polyvinyl alcohol are added to hold the green body together ; these burn out during the firing ( at 200 – 350 Β°c ). sometimes organic lubricants are added during pressing to increase densification. it is common to combine these, and add binders and lubricants to a powder, then press. ( the formulation of these organic chemical additives is an art in itself. this is particularly important in the manufacture of high performance ceramics such as those used by the billions for electronics, in capacitors, inductors, sensors, etc. ) a slurry can be used in place of a powder, and then cast into a desired shape, dried and then sintered. indeed, traditional pottery is done with this type of method, using a plastic mixture worked with the hands. shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 Β°c ( 2, 750 Β°f ) and protects the space shuttle ' s wing leading edges and nose cap. rcc is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. after curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured - pyrolized to convert the furfuryl alcohol to carbon. to provide oxidation resistance for reusability, the outer layers of the rcc are converted to silicon carbide. other examples can be seen in the " plastic " casings of television sets, cell - phones and so on. 
these plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene ( abs ) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. these additions may be termed reinforcing fibers, or dispersants, depending on their purpose. = = = polymers = = = polymers are chemical compounds made up of a large number of identical components linked together like chains. polymers are the raw materials ( the resins ) used to make what are commonly called plastics and rubber. plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride ( pvc ), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. rubbers include natural rubber, styrene - butadiene rubber, chloroprene, and butadiene rubber. plastics are generally classified as commodity, specialty and engineering plastics. polyvinyl chloride ( pvc ) is widely used, inexpensive, and annual production quantities are large. it lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. its fabrication and processing are simple and well - established. the results of hydrodynamic simulations of the virgo and perseus clusters suggest that thermal conduction is not responsible for the observed temperature and density profiles. as a result it seems that thermal conduction occurs at a much lower level than the spitzer value. comparing cavity enthalpies to the radiative losses within the cooling radius for seven clusters suggests that some clusters are probably heated by sporadic, but extremely powerful, agn outflows interspersed between more frequent but lower power outflows. passage of carbon dioxide as aluminum and glass. = = = ceramics and glasses = = = another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. many ceramics and glasses exhibit covalent or ionic - covalent bonding with sio2 ( silica ) as a fundamental building block. ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. hot pressing provides higher density material. chemical vapor deposition can place a film of a ceramic on another material. cermets are ceramic particles containing some metals. the wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties. 
ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. this process involves the strategic addition of second - phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. this approach enhances fracture toughness, paving the way for the creation of advanced, high - performance ceramics in various industries. = = = composites = = = another application of materials science in industry is making composite materials. these are structured materials composed of two or more macroscopic phases. applications range from structural elements such as steel - reinforced concrete, to the thermal insulating tiles, which play a key and integral role in nasa ' s space shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 Β°c ( 2, 750 Β°f ) and protects the space shuttle ' s wing leading edges and nose cap is also higher at high temperature, as shown by carnot ' s theorem. in a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. such engines are possible in laboratory settings, but mass - production is not feasible with current technology. work is being done in developing ceramic parts for gas turbine engines. currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic, but naturally occurring, bone mineral. ultimately these ceramic materials may be used as bone replacements or with the incorporation of protein collagens, synthetic bones. 
durable actinide - containing ceramic materials have many applications such as in nuclear fuels for burning excess pu and in chemically - inert sources of alpha irradiation for power supply of unmanned space vehicles or to produce electricity for microelectronic devices. both use and disposal of radioactive actinides require their immobilization in a durable host material. nuclear waste long - lived radionuclides such as actinides are immobilized using chemical Question: Which of the following properties of a substance is conserved during thermal expansion? A) mass B) volume C) shape D) distance between particles
A) mass
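The glass-ceramic passages in the context above state that the negative TEC of the crystalline phase can balance the positive TEC of the glassy phase, giving a net TEC near zero at roughly 70 % crystalline content. A minimal rule-of-mixtures sketch of that balance follows; the two coefficient values are illustrative assumptions chosen only to reproduce the quoted figure, not values taken from the context.
% rule-of-mixtures estimate of the net thermal expansion coefficient (TEC)
% for crystalline volume fraction f; both alpha values are assumed for illustration
\[ \alpha_{\mathrm{net}} = f\,\alpha_{\mathrm{crystal}} + (1 - f)\,\alpha_{\mathrm{glass}} \]
\[ \alpha_{\mathrm{net}} = 0 \;\Rightarrow\; f = \frac{\alpha_{\mathrm{glass}}}{\alpha_{\mathrm{glass}} - \alpha_{\mathrm{crystal}}}, \qquad \alpha_{\mathrm{glass}} = 7 \times 10^{-6}\,\mathrm{K^{-1}},\; \alpha_{\mathrm{crystal}} = -3 \times 10^{-6}\,\mathrm{K^{-1}} \;\Rightarrow\; f = \frac{7}{7 + 3} = 0.7 \]
With these assumed values the near-zero net expansion lands at about 70 % crystalline content, consistent with the figure quoted in the context.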
Context: stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed seeds. woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues : wood ( secondary xylem ) and bark ( secondary phloem and cork ). all gymnosperms and many angiosperms are woody plants. some plants reproduce sexually, some asexually, and some via both means. although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. furthermore, structures can be seen as processes, that is, process combinations. = = systematic botany = = systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. it involves, or is related to, biological classification, scientific taxonomy and phylogenetics. biological classification is the method by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. the dominant classification system is called linnaean taxonomy. it includes ranks and binomial nomenclature. the nomenclature of botanical organisms is codified in the international code of nomenclature for algae, fungi, and plants ( icn ) and administered by the international botanical congress. kingdom plantae belongs to domain eukaryota and is broken down recursively until each species is separately classified. the order is : river - beds ), but not for where there may be large obstructions in the ground. an open caisson that is used in soft grounds or high water tables, where open trench excavations are impractical, can also be used to install deep manholes, pump stations and reception / launch pits for microtunnelling, pipe jacking and other operations. a caisson is sunk by self - weight, concrete or water ballast placed on top, or by hydraulic jacks. the leading edge ( or cutting shoe ) of the caisson is sloped out at a sharp angle to aid sinking in a vertical manner ; it is usually made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. 
the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axes ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caisson ##lling, pipe jacking and other operations. a caisson is sunk by self - weight, concrete or water ballast placed on top, or by hydraulic jacks. the leading edge ( or cutting shoe ) of the caisson is sloped out at a sharp angle to aid sinking in a vertical manner ; it is usually made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axes ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth.
an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed seeds. woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues : wood ( secondary xylem ) and bark ( secondary phloem and cork ). all gymnosperms and many angiosperms are woody plants. some plants reproduce sexually, some asexually, and some via both means. although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. furthermore, structures can be seen as processes, that is, process combinations. = = systematic botany = = systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. it involves, or is related to, biological classification, scientific taxonomy and phylogenetics. biological classification is the method by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. the dominant classification system is called linnaean taxonomy. it includes ranks and binomi equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. 
a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers – civil works program river morphology and stream restoration references - wildland hydrology at the library of congress web archives ( archived 2002 - 08 - 13 ) ##ediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. 
this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. 
the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axes ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called muck ) from the edge of the workspace to a water - filled pit, connected by a tube ( called the muck tube ) to the surface. a crane at the surface removes the soil with a clamshell bucket. the water pressure in the tube balances the air pressure, with excess air escaping up the injuries of the inundations they have been designed to prevent, as the escape of floods from the raised river must occur sooner or later. inadequate planning controls which have permitted development on floodplains have been blamed for the flooding of domestic properties. channelization was done under the auspices or overall direction of engineers employed by the local authority or the national government. one of the most heavily channelized areas in the united states is west tennessee, where every major stream with one exception ( the hatchie river ) has been partially or completely channelized. channelization of a stream may be undertaken for several reasons. one is to make a stream more suitable for navigation or for navigation by larger vessels with deep draughts. another is to restrict water to a certain area of a stream ' s natural bottom lands so that the bulk of such lands can be made available for agriculture. a third reason is flood control, with the idea of giving a stream a sufficiently large and deep channel so that flooding beyond those limits will be minimal or nonexistent, at least on a routine basis. one major reason is to reduce natural erosion ; as a natural waterway curves back and forth, it usually deposits sand and gravel on the inside of the corners where the water flows slowly, and cuts sand, gravel, subsoil, and precious topsoil from the outside corners where it flows rapidly due to a change in direction. unlike sand and gravel, the topsoil that is eroded does not get deposited on the inside of the next corner of the river. it simply washes away.
= = loss of wetlands = = channelization has several predictable and negative effects. one of them is loss of wetlands. wetlands are an excellent habitat for multiple forms of wildlife, and additionally serve as a " filter " for much of the world ' s surface fresh water. another is the fact that channelized streams are almost invariably straightened. for example, the channelization of florida ' s kissimmee river has been cited as a cause contributing to the loss of wetlands. this straightening causes the streams to flow more rapidly, which can, in some instances, vastly increase soil erosion. it can also increase flooding downstream from the channelized area, as larger volumes of water traveling more rapidly than normal can reach choke points over a shorter period of time than they otherwise would, with a net effect of flood control in one area coming at the expense of aggravated flooding in another. in addition, studies have shown that stream channelization results in declines of river fish populations. : 3 - 1ff a for inland navigation in the lower portion of their course, as, for instance, the rhine, the danube and the mississippi. river engineering works are only required to prevent changes in the course of the stream, to regulate its depth, and especially to fix the low - water channel and concentrate the flow in it, so as to increase as far as practicable the navigable depth at the lowest stage of the water level. engineering works to increase the navigability of rivers can only be advantageously undertaken in large rivers with a moderate fall and a fair discharge at their lowest stage, for with a large fall the current presents a great impediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. 
a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is Question: Trees use narrow tubes to transport water upward. Which property of water allows the water to rise in these narrow tubes? A) high vapor pressure B) high boiling point C) cohesion of molecules D) net charge of molecules
C) cohesion of molecules
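The question above concerns capillary rise, which depends on the cohesion (surface tension) of water; the caisson and river-engineering passages in the context do not cover it, so the following is a short worked estimate using Jurin's law, with the tube radius assumed purely for illustration.
% Jurin's law for capillary rise; gamma, theta, rho and g are standard values for water,
% the radius r is an assumption chosen for illustration
\[ h = \frac{2\gamma\cos\theta}{\rho g r}, \qquad \gamma \approx 0.072\,\mathrm{N\,m^{-1}},\; \theta \approx 0,\; \rho = 1000\,\mathrm{kg\,m^{-3}},\; g = 9.81\,\mathrm{m\,s^{-2}},\; r = 10\,\mu\mathrm{m} \;\Rightarrow\; h \approx \frac{2 \times 0.072}{1000 \times 9.81 \times 10^{-5}} \approx 1.5\,\mathrm{m} \]
Narrower tubes give a proportionally greater rise; in real trees the cohesion - tension mechanism in the xylem, not capillarity alone, accounts for transport to greater heights.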
Context: the first observations of saturn ' s visible - wavelength aurora were made by the cassini camera. the aurora was observed between 2006 and 2013 in the northern and southern hemispheres. the color of the aurora changes from pink at a few hundred km above the horizon to purple at 1000 - 1500 km above the horizon. the spectrum observed in 9 filters spanning wavelengths from 250 nm to 1000 nm has a prominent h - alpha line and roughly agrees with laboratory simulated auroras. auroras in both hemispheres vary dramatically with longitude. auroras form bright arcs between 70 and 80 degree latitude north and between 65 and 80 degree latitude south, which sometimes spiral around the pole, and sometimes form double arcs. a large 10, 000 - km - scale longitudinal brightness structure persists for more than 100 hours. this structure rotates approximately together with saturn. on top of the large steady structure, the auroras brighten suddenly on the timescales of a few minutes. these brightenings repeat with a period of about 1 hour. smaller, 1000 - km - scale structures may move faster or lag behind saturn ' s rotation on timescales of tens of minutes. the persistence of nearly - corotating large bright longitudinal structure in the auroral oval seen in two movies spanning 8 and 11 rotations gives an estimate on the period of 10. 65 $ \ pm $ 0. 15 h for 2009 in the northern oval and 10. 8 $ \ pm $ 0. 1 h for 2012 in the southern oval. the 2009 north aurora period is close to the north branch of saturn kilometric radiation ( skr ) detected at that time. pigmentation, chloroplast structure and nutrient reserves. the algal division charophyta, sister to the green algal division chlorophyta, is considered to contain the ancestor of true plants. the charophyte class charophyceae and the land plant sub - kingdom embryophyta together form the monophyletic group or clade streptophytina. nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. they include mosses, liverworts and hornworts. pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free - living gametophytes evolved during the silurian period and diversified into several lineages during the late silurian and early devonian. representatives of the lycopods have survived to the present day. by the end of the devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " – their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. gymnosperms produce " naked seeds " not fully enclosed in an ovary ; modern representatives include conifers, cycads, ginkgo, and gnetales. angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. 
ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. = = plant physiology = = plant physiology encompasses all the internal chemical and physical activities of plants associated with life. chemicals obtained from the air, soil and water form ammonium hydrosulphide has long since been postulated to exist at least in certain layers of the giant planets. its radiation products may be the reason for the red colour seen on jupiter. several ammonium salts, the products of nh3 and an acid, have previously been detected at comet 67p / churyumov - gerasimenko. the acid h2s is the fifth most abundant molecule in the coma of 67p followed by nh3. in order to look for the salt nh4 + sh -, we analysed in situ measurements from the rosetta / rosina double focusing mass spectrometer during the rosetta mission. nh3 and h2s appear to be independent of each other when sublimating directly from the nucleus. however, we observe a strong correlation between the two species during dust impacts, clearly pointing to the salt. we find that nh4 + sh - is by far the most abundant salt, more abundant in the dust impacts than even water. we also find all previously detected ammonium salts and for the first time ammonium fluoride. the amount of ammonia and acids balance each other, confirming that ammonia is mostly in the form of salt embedded into dust grains. allotropes s2 and s3 are strongly enhanced in the impacts, while h2s2 and its fragment hs2 are not detected, which is most probably the result of radiolysis of nh4 + sh -. this makes a prestellar origin of the salt likely. our findings may explain the apparent depletion of nitrogen in comets and maybe help to solve the riddle of the missing sulphur in star forming regions. parts of australia have been privileged to see dazzling lights in the night sky as the aurora australis ( known as the southern lights ) puts on a show this year. aurorae are significant in australian indigenous astronomical traditions. aboriginal people associate aurorae with fire, death, blood, and omens, sharing many similarities with native american communities. . species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. a familiar example is peppermint, mentha Γ— piperita, a sterile hybrid between mentha aquatica and spearmint, mentha spicata. the many cultivated varieties of wheat are the result of multiple inter - and intra - specific crosses between wild species and their hybrids. angiosperms with monoecious flowers often have self - incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. this is one of several methods used by plants to promote outcrossing. in many land plants the male and female gametes are produced by separate individuals. these species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. charles darwin in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. 
" an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in electromagnetic induction. the transmission speed ranges from 2 mbit / s to 10 gbit / s. twisted pair cabling comes in two forms : unshielded twisted pair ( utp ) and shielded twisted - pair ( stp ). each form comes in several category ratings, designed for use in various scenarios. an optical fiber is a glass fiber. it carries pulses of light that represent data via lasers and optical amplifiers. some advantages of optical fibers over metal wires are very low transmission loss and immunity to electrical interference. using dense wave division multiplexing, optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which greatly increases the rate that data can be sent to up to trillions of bits per second. optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea communications cables to interconnect continents. there are two basic types of fiber optics, single - mode optical fiber ( smf ) and multi - mode optical fiber ( mmf ). single - mode fiber has the advantage of being able to sustain a coherent signal for dozens or even a hundred kilometers. multimode fiber is cheaper to terminate but is limited to a few hundred or even only a few dozens of meters, depending on the data rate and cable grade. = = = wireless = = = network connections can be established wirelessly using radio or other electromagnetic means of communication. terrestrial microwave – terrestrial microwave communication uses earth - based transmitters and receivers resembling satellite dishes. terrestrial microwaves are in the low gigahertz range, which limits all communications to line - of - sight. relay stations are spaced approximately 40 miles ( 64 km ) apart. communications satellites – satellites also communicate via microwave. the satellites are stationed in space, typically in geosynchronous orbit 35, 400 km ( 22, 000 mi ) above the equator. these earth - orbiting systems are capable of receiving and relaying voice, data, and tv signals. cellular networks use several radio communications technologies. the systems divide the region covered into multiple geographic areas. each area is served by a low - power transceiver. 
radio and spread spectrum technologies – wireless lans use a high - frequency radio technology similar to digital cellular. wireless lans use spread spectrum technology to enable communication between multiple devices in a limited area. ieee 802. 11 defines a common flavor of open - standards wireless radio - wave technology known as wi - fi. free - space optical communication uses visible or invisible light for communications. in most cases, line - of the curvature radiation is applied to the explain the circular polarization of frbs. significant circular polarization is reported in both apparently non - repeating and repeating frbs. curvature radiation can produce significant circular polarization at the wing of the radiation beam. in the curvature radiation scenario, in order to see significant circular polarization in frbs ( 1 ) more energetic bursts, ( 2 ) burst with electrons having higher lorentz factor, ( 3 ) a slowly rotating neutron star at the centre are required. different rotational period of the central neutron star may explain why some frbs have high circular polarization, while others don ' t. considering possible difference in refractive index for the parallel and perpendicular component of electric field, the position angle may change rapidly over the narrow pulse window of the radiation beam. the position angle swing in frbs may also be explained by this non - geometric origin, besides that of the rotating vector model.
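the curvature radiation abstract above quotes no formulas; for orientation, a standard textbook expression for the characteristic frequency of curvature radiation from an electron of lorentz factor gamma moving along a field line with curvature radius rho is sketched below (a general relation, not a result stated in the text).

```latex
% characteristic frequency of curvature radiation (standard relation):
\nu_{c} \simeq \frac{3c}{4\pi\rho}\,\gamma^{3}
% for an assumed curvature radius rho ~ 10^{7} cm, GHz emission requires gamma of order 10^{2},
% illustrating why the electrons' lorentz factor controls where the beam and its wings fall.
```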
most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid the status of the theory of color confinement is discussed. generation of direct current in zigzag carbon nanotubes due to harmonic mixing of two coherent electromagnetic waves is considered. the electromagnetic waves have commensurate frequencies of omega and two omega. the rectification of the waves at high frequencies is quite smooth, while at low frequencies there are some fluctuations. the nonohmicity observed in the i - v characteristics is attributed to the nonparabolicity of the electron energy band, which is very strong in carbon nanotubes because of the high stark component. it is observed that the current falls off faster at lower electric field than is the case in a superlattice. for omega tau equal to two, the external electric field strength emax for the observation of negative differential conductivity occurs around 1. 03 x 10 ^ 6 v / m, which is quite weak. it is interesting to note that the peak of the curve shifts to the left with increasing value of omega tau. Question: Which pair together could cause a rainbow? A) Fog and clouds B) Rain and snow C) Clouds and ice D) Sunshine and rain
D) Sunshine and rain
Context: the recent report on laser cooling of liquid may contradict the law of energy conservation. chemistry is the scientific study of the properties and behavior of matter. it is a physical science within the natural sciences that studies the chemical elements that make up matter and compounds made of atoms, molecules and ions : their composition, structure, properties, behavior and the changes they undergo during reactions with other substances. chemistry also addresses the nature of chemical bonds in chemical compounds. in the scope of its subject, chemistry occupies an intermediate position between physics and biology. it is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. for example, chemistry explains aspects of plant growth ( botany ), the formation of igneous rocks ( geology ), how atmospheric ozone is formed and how environmental pollutants are degraded ( ecology ), the properties of the soil on the moon ( cosmochemistry ), how medications work ( pharmacology ), and how to collect dna evidence at a crime scene ( forensics ). chemistry has existed under various names since ancient times. it has evolved, and now chemistry encompasses various areas of specialisation, or subdisciplines, that continue to increase in number and interrelate to create further interdisciplinary fields of study. the applications of various fields of chemistry are used frequently for economic purposes in the chemical industry. = = etymology = = the word chemistry comes from a modification during the renaissance of the word alchemy, which referred to an earlier set of practices that encompassed elements of chemistry, metallurgy, philosophy, astrology, astronomy, mysticism, and medicine. alchemy is often associated with the quest to turn lead or other base metals into gold, though alchemists were also interested in many of the questions of modern chemistry. the modern word alchemy in turn is derived from the arabic word al - kimia ( Ψ§Ω„ΩƒΫŒΩ…ΫŒΨ§Ψ‘ ). this may have egyptian origins since al - kimia is derived from the ancient greek χημια, which is in turn derived from the word kemet, which is the ancient name of egypt in the egyptian language. alternately, al - kimia may derive from χημΡια ' cast together '. = = modern principles = = the current model of atomic structure is the quantum mechanical model. traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. the interactions, reactions and transformations that ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole – dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. 
however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. for example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. the existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. different kinds of spectra are often used in chemical spectroscopy, e. g. ir, microwave, nmr, esr, etc. spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by analyzing their radiation spectra. the term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. = = = reaction = = = when a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. a chemical reaction is therefore a concept related to the " reaction " of a substance when it comes in close contact with another, whether as a mixture or a solution ; exposure to some form of energy, or both. it results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels β€” often laboratory glassware. chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. chemical reactions usually involve the making or breaking of chemical bonds in the muon storage rings the muons are subject to a very large radial acceleration. the equivalence principle implies a large gravity force. it has no effect on the muon lifetime. grasping an object is a matter of first moving a prehensile organ at some position in the world, and then managing the contact relationship between the prehensile organ and the object. once the contact relationship has been established and made stable, the object is part of the body and it can move in the world. as any action, the action of grasping is ontologically anchored in the physical space while the correlative movement originates in the space of the body. evolution has found amazing solutions that allow organisms to rapidly and efficiently manage the relationship between their body and the world. it is then natural that roboticists consider taking inspiration of these natural solutions, while contributing to better understand their origin. the scientific revolution. aristotle also contributed to theories of the elements and the cosmos. he believed that the celestial bodies ( such as the planets and the sun ) had something called an unmoved mover that put the celestial bodies in motion. aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. 
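returning to the point made earlier in this passage about electronic versus vibrational energy scales, a quick photon-energy comparison makes the difference concrete. the wavelengths below are typical illustrative choices, not values given in the text.

```python
# photon energy e = h * c / lambda for ultraviolet, visible and mid-infrared light.
# wavelengths are illustrative only; the text quotes no numbers.
H = 6.62607015e-34     # planck constant, J s
C = 2.99792458e8       # speed of light, m/s
EV = 1.602176634e-19   # joules per electronvolt

def photon_energy_ev(wavelength_m: float) -> float:
    """energy of a single photon of the given wavelength, in electronvolts."""
    return H * C / wavelength_m / EV

for label, lam in [("uv (250 nm)", 250e-9),
                   ("visible (550 nm)", 550e-9),
                   ("mid-ir (10 um)", 10e-6)]:
    print(f"{label:16s} -> {photon_energy_ev(lam):6.3f} eV")
# uv photons carry roughly 5 eV, enough to reach electronic levels, while 10 um infrared
# photons carry roughly 0.12 eV, comparable to vibrational spacings -- which is why heat
# moves between substances far more readily than electronic energy, as argued above.
```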
in addition, aristotle had many views on the elements. he believed that everything was derived of the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements – thus not allowing the elements making one who they are to return to their natural state. the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars frequently employed the principles developed in earlier greek thought : the application of mathematics and deliberate empirical research, in their scientific investigations. thus, clear unbroken lines of influence lead from ancient greek and hellenistic philosophers, to medieval muslim philosophers and scientists, to the european renaissance and enlightenment, to the secular sciences of the modern day. neither reason nor inquiry began with the ancient greeks, but the socratic method did, along with the idea of forms, give great advances in geometry, logic, and the natural sciences. according to benjamin farrington, former professor of classics at swansea university : " men were weighing for thousands of years before archimedes worked out the ##itive material by selective exposure to a radiation source such as light. a photosensitive material is a material that experiences a change in its physical properties when exposed to a radiation source. if a photosensitive material is selectively exposed to radiation ( e. g. by masking some of the radiation ) the pattern of the radiation on the material is transferred to the material exposed, as the properties of the exposed and unexposed regions differs. this exposed region can then be removed or treated providing a mask for the underlying substrate. photolithography is typically used with metal or other thin film deposition, wet and dry etching. sometimes, photolithography is used to create structure without any kind of post etching. one example is su8 based lens where su8 based square blocks are generated. then the photoresist is melted to form a semi - sphere which acts as a lens. electron beam lithography ( often abbreviated as e - beam lithography ) is the practice of scanning a beam of electrons in a patterned fashion across a surface covered with a film ( called the resist ), ( " exposing " the resist ) and of selectively removing either exposed or non - exposed regions of the resist ( " developing " ). 
the purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching. it was developed for manufacturing integrated circuits, and is also used for creating nanotechnology architectures. the primary advantage of electron beam lithography is that it is one of the ways to beat the diffraction limit of light and make features in the nanometer range. this form of maskless lithography has found wide usage in photomask - making used in photolithography, low - volume production of semiconductor components, and research & development. the key limitation of electron beam lithography is throughput, i. e., the very long time it takes to expose an entire silicon wafer or glass substrate. a long exposure time leaves the user vulnerable to beam drift or instability which may occur during the exposure. also, the turn - around time for reworking or re - design is lengthened unnecessarily if the pattern is not being changed the second time. it is known that focused - ion beam lithography has the capability of writing extremely fine lines ( less than 50 nm line and space has been achieved ) without proximity effect. however, because the writing field in ion - beam lit an important question of theoretical physics is whether sound is able to propagate in vacuums at all and if this is the case, then it must lead to the reinterpretation of one zero - restmass particle which corresponds to vacuum - sound waves. taking the electron - neutrino as the corresponding particle, its observed non - vanishing rest - energy may only appear for neutrino - propagation inside material media. the idea may also influence the physics of dense matter, restricting the maximum speed of sound, both in vacuums and in matter to the speed of light. current model of atomic structure is the quantum mechanical model. traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. the interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. such behaviors are studied in a chemistry laboratory. the chemistry laboratory stereotypically uses various forms of laboratory glassware. however glassware is not central to chemistry, and a great deal of experimental ( as well as applied / industrial ) chemistry is done without it. a chemical reaction is a transformation of some substances into one or more different substances. the basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. it can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. the number of atoms on the left and the right in the equation for a chemical transformation is equal. ( when the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay. ) the type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws. energy and entropy considerations are invariably important in almost all chemical studies. chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. 
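the throughput limitation of electron-beam lithography discussed above can be made concrete with a rough write-time estimate. every number in this sketch ( resist dose, beam current, wafer size ) is an assumed, order-of-magnitude value, not a figure from the text.

```python
# rough exposure-time estimate for direct-write electron-beam lithography:
#   time = (areal dose * exposed area) / beam current.
# all inputs are assumed, order-of-magnitude values, not figures from the text.
import math

DOSE_C_PER_CM2 = 100e-6    # ~100 uC/cm^2 resist sensitivity (assumed)
BEAM_CURRENT_A = 1e-9      # ~1 nA probe current (assumed)
WAFER_DIAMETER_CM = 30.0   # 300 mm wafer

area_cm2 = math.pi * (WAFER_DIAMETER_CM / 2.0) ** 2
exposure_s = DOSE_C_PER_CM2 * area_cm2 / BEAM_CURRENT_A

print(f"wafer area: {area_cm2:.0f} cm^2")
print(f"exposure time: {exposure_s:.2e} s (about {exposure_s / 86_400 / 365:.1f} years)")
# even before stage moves and settling overheads, exposing a full wafer at these settings
# takes on the order of years of beam time, which is why e-beam writing is reserved for
# photomasks, low-volume parts and research rather than volume production.
```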
they can be analyzed using the tools of chemical analysis, e. g. spectroscopy and chromatography. scientists engaged in chemical research are known as chemists. most chemists specialize in one or more sub - disciplines. several concepts are essential for the study of chemistry ; some of them are : = = = matter = = = in chemistry, matter is defined as anything that has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well – not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = = the atom is the basic unit of chemistry. it consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. the nucleus is made up of positively charged protons and uncharged neutrons ( together called nucleons ), while the electron cloud consists of negatively charged electrons which orbit the the project consists to determine, mathematically, the trajectory that will take an artificial satellite to fight against the air resistance. during our work, we had to consider that our satellite will crash to the surface of our planet. we started our study by understanding the system of forces that are acting between our satellite and the earth. in this work, we had to study the second law of newton by taking knowledge of the air friction, the speed of the satellite which helped us to find the equation that relates the trajectory of the satellite itself, its speed and the density of the air depending on the altitude. finally, we had to find a mathematic relation that links the density with the altitude and then we had to put it into our movement equation. in order to verify our model, we ' ll see what happens if we give a zero velocity to the satellite. Question: The property of matter that resists changes in motion is called A) inertia. B) friction. C) gravity. D) weight.
A) inertia.
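the satellite-trajectory project described above combines newton's second law with an atmospheric drag force and a density-altitude relation. the sketch below shows one common way to set such a model up numerically; the drag coefficient, cross-section, mass and the exponential-atmosphere parameters are assumptions made for illustration, not values from the text.

```python
# minimal numerical sketch of orbital decay under atmospheric drag (assumed parameters).
# newton's second law: m dv/dt = gravity + drag, with drag = -0.5 * rho(h) * cd * a * |v| * v,
# and an exponential model rho(h) = rho0 * exp(-h / h_scale) linking density to altitude.
import math

MU = 3.986004418e14               # earth's gravitational parameter, m^3 s^-2
R_EARTH = 6_371_000.0             # mean earth radius, m
RHO0, H_SCALE = 1.225, 8_500.0    # sea-level density (kg/m^3) and scale height (m) -- assumed
CD, AREA, MASS = 2.2, 1.0, 100.0  # drag coefficient, cross-section (m^2), mass (kg) -- assumed

def density(h_m: float) -> float:
    """very crude exponential atmosphere; adequate for a qualitative decay picture."""
    return RHO0 * math.exp(-h_m / H_SCALE)

# semi-implicit (symplectic) euler: update velocity first, then position, so the orbit
# does not gain energy artificially and the slow drag-driven decay remains visible.
h0 = 200_000.0                                 # initial altitude, m
x, y = R_EARTH + h0, 0.0
vx, vy = 0.0, math.sqrt(MU / (R_EARTH + h0))   # circular-orbit speed
t, dt = 0.0, 1.0
while math.hypot(x, y) > R_EARTH and t < 30 * 86_400:
    r = math.hypot(x, y)
    ax, ay = -MU * x / r**3, -MU * y / r**3    # central gravity
    v = math.hypot(vx, vy)
    f = 0.5 * density(r - R_EARTH) * CD * AREA * v / MASS
    ax, ay = ax - f * vx, ay - f * vy          # drag opposes the velocity
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt
    t += dt

if math.hypot(x, y) <= R_EARTH:
    print(f"re-entered after {t / 86_400:.1f} days")
else:
    print(f"altitude after {t / 86_400:.0f} days: {(math.hypot(x, y) - R_EARTH) / 1e3:.1f} km")
```

the last step described in the project text is exactly the one modelled here: substituting a density-altitude law into the equation of motion and integrating until the satellite reaches the surface.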
Context: ##ch which is stored in the chloroplast. starch is the characteristic energy store of most land plants and algae, while inulin, a polymer of fructose is used for the same purpose in the sunflower family asteraceae. some of the glucose is converted to sucrose ( common table sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. it is widely regarded as a marker for the start of land plant evolution during the ordovician period. the concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the ordovician and silurian periods. many monocots like maize and the pineapple and some dicots like the asteraceae have since independently evolved pathways like crassulacean acid metabolism and the c4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common c3 carbon fixation pathway. these biochemical strategies are unique to land plants. = = = medicine and materials = = = phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. some of these compounds are toxins such as the alkaloid coniine from hemlock. . these biochemical strategies are unique to land plants. = = = medicine and materials = = = phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. some of these compounds are toxins such as the alkaloid coniine from hemlock. others, such as the essential oils peppermint oil and lemon oil are useful for their aroma, as flavourings and spices ( e. g., capsaicin ), and in medicine as pharmaceuticals as in opium from opium poppies. many medicinal and recreational drugs, such as tetrahydrocannabinol ( active ingredient in cannabis ), caffeine, morphine and nicotine come directly from plants. others are simple derivatives of botanical natural products. for example, the pain killer aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. 
most alcoholic beverages come from fermentation of carbohydrate - rich plant products such as barley ( beer ), rice ( sake ) and grapes ( wine ). native americans have used various plants as ways of treating illness or disease for thousands of years. this knowledge native americans have on plants has been recorded by enthnobotanists and then in turn has been used by pharmaceutical companies as a way of drug discovery. plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock.
most alcoholic beverages come from fermentation of carbohy industrial applications. this branch of biotechnology is the most used for the industries of refining and combustion principally on the production of bio - oils with photosynthetic micro - algae. green biotechnology is biotechnology applied to agricultural processes. an example would be the selection and domestication of plants via micropropagation. another example is the designing of transgenic plants to grow under specific environments in the presence ( or absence ) of chemicals. one hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. an example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. it is commonly considered as the next phase of green revolution, which can be seen as a platform to eradicate world hunger by using technologies which enable the production of more fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of poll sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. 
plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. it is widely regarded as a marker for the start of land plant evolution during the ordovician period. the concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the ordovician and silurian periods. many monocots like maize and the pineapple and some dicots like the asteraceae have since independently evolved pathways like crassulacean acid metabolism and the c4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common c3 carbon fixation pathway. these biochemical strategies are unique to land plants. = = = medicine and materials = = = phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. some of these compounds are toxins such as the alkaloid coniine from hemlock. others, such as the essential oils peppermint oil and lemon oil are useful for their aroma, as flavourings and spices ( e. g., capsaicin ), and in medicine as pharmaceuticals as in opium from opium poppies. many medicinal and recreational drugs, such as tetrahydrocannabino and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. other organisms form mutually beneficial relationships with plants. for example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. 
= = = plants, climate and environmental change = = = plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. for example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. estimates of atmospheric co2 concentrations since the palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. ozone depletion can expose plants to higher levels of ultraviolet radiation - b ( uv - b ), resulting in lower growth rates. moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. = = genetics = = inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. gregor mendel discovered the genetic laws of inheritance by studying elongation and the control of flowering. abscisic acid ( aba ) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. it inhibits cell division, promotes seed maturation, and dormancy, and promotes stomatal closure. it was so named because it was originally thought to control abscission. ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. it is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon which is rapidly metabolised to produce ethylene, are used on industrial scale to promote ripening of cotton, pineapples and other climacteric crops. another class of phytohormones is the jasmonates, first isolated from the oil of jasminum grandiflorum which regulates wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. in addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. this can result in adaptive changes in a process known as photomorphogenesis. phytochromes are the photoreceptors in a plant that are sensitive to light. = = plant anatomy and morphology = = plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. all plants are multicellular eukaryotes, their dna stored in nuclei. the characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmos eat them. 
plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. this is what ecologists call the first trophic level. the modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity ' s ability to feed the world and provide food security for future generations. botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. ethnobotany is the study of the relationships between plants and people. when applied to the investigation of historical plant – people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. some of the earliest plant - people relationships arose between the indigenous people of canada in identifying edible plants from inedible plants. this relationship the indigenous people had with plants was recorded by ethnobotanists. = = plant biochemistry = = plant biochemistry is the study of the chemical processes used by plants. some of these processes are used in their primary metabolism like the photosynthetic calvin cycle and crassulacean acid metabolism. others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. plants and various other groups of photosynthetic eukaryotes collectively known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour cellular and molecular biology of cereals, grasses and monocots generally. model plants such as arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in c4 plants. the single celled green alga chlamydomonas reinhardtii, while not an embryophyte itself, contains a green - pigmented chloroplast related to that of land plants, making it useful for study. a red alga cyanidioschyzon merolae has also been used to study some basic chloroplast functions. spinach, peas, soybeans and a moss physcomitrella patens are commonly used to study plant cell biology. agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus - inducing ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. 
schell and van montagu ( 1977 ) hypothesised that the ti plasmid could be a natural vector for introducing the nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. today, genetic modification of the ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. epigenetic marks may be added or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. the algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. there are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. the algal division charophyta, sister to the green algal division chlorophyta, is considered to contain the ancestor of true plants. the charophyte class charophyceae and the land plant sub - kingdom embryophyta together form the monophyletic group or clade streptophytina. nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. they include mosses, liverworts and hornworts. pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free - living gametophytes evolved during the silurian period and diversified into several lineages during the late silurian and early devonian. representatives of the lycopods have survived to the present day. by the end of the devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " – their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. gymnosperms produce " naked seeds " not fully enclosed in an ovary ; modern representatives include conifers, cycads, ginkgo, and gnetales. 
angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. Question: Monarch butterflies use milkweed plants during all of their life stages. Milkweed plants grow in open areas, such as grasslands and wetlands. They also often grow between row crops. Given this information, which of these biotechnologies would pose the greatest threat to monarch butterflies? A) development of new antibiotics B) development of new herbicides C) development of disease-resistant crops D) development of insect-resistant crops
B) development of new herbicides
Context: oil umbrella ) ; for calculating the time of death ( allowing for weather and insect activity ) ; described how to wash and examine the dead body to ascertain the reason for death. at that time the book had described methods for distinguishing between suicide and faked suicide. he wrote the book on forensics stating that all wounds or dead bodies should be examined, not avoided. the book became the first form of literature to help determine the cause of death. in one of song ci ' s accounts ( washing away of wrongs ), the case of a person murdered with a sickle was solved by an investigator who instructed each suspect to bring his sickle to one location. ( he realized it was a sickle by testing various blades on an animal carcass and comparing the wounds. ) flies, attracted by the smell of blood, eventually gathered on a single sickle. in light of this, the owner of that sickle confessed to the murder. the book also described how to distinguish between a drowning ( water in the lungs ) and strangulation ( broken neck cartilage ), and described evidence from examining corpses to determine if a death was caused by murder, suicide or accident. methods from around the world involved saliva and examination of the mouth and tongue to determine innocence or guilt, as a precursor to the polygraph test. in ancient india, some suspects were made to fill their mouths with dried rice and spit it back out. similarly, in ancient china, those accused of a crime would have rice powder placed in their mouths. in ancient middle - eastern cultures, the accused were made to lick hot metal rods briefly. it is thought that these tests had some validity since a guilty person would produce less saliva and thus have a drier mouth ; the accused would be considered guilty if rice was sticking to their mouths in abundance or if their tongues were severely burned due to lack of shielding from saliva. = = education and training = = initial glance, forensic intelligence may appear as a nascent facet of forensic science facilitated by advancements in information technologies such as computers, databases, and data - flow management software. however, a more profound examination reveals that forensic intelligence represents a genuine and emerging inclination among forensic practitioners to actively participate in investigative and policing strategies. in doing so, it elucidates existing practices within scientific literature, advocating for a paradigm shift from the prevailing conception of forensic science as a conglomerate of disciplines merely aiding the criminal justice system. instead, it urges a perspective that views forensic science as a discipline studying the informative potential of wounds or dead bodies should be examined, not avoided. the book became the first form of literature to help determine the cause of death. in one of song ci ' s accounts ( washing away of wrongs ), the case of a person murdered with a sickle was solved by an investigator who instructed each suspect to bring his sickle to one location. ( he realized it was a sickle by testing various blades on an animal carcass and comparing the wounds. ) flies, attracted by the smell of blood, eventually gathered on a single sickle. in light of this, the owner of that sickle confessed to the murder. the book also described how to distinguish between a drowning ( water in the lungs ) and strangulation ( broken neck cartilage ), and described evidence from examining corpses to determine if a death was caused by murder, suicide or accident. 
however, a more profound examination reveals that forensic intelligence represents a genuine and emerging inclination among forensic practitioners to actively participate in investigative and policing strategies. in doing so, it elucidates existing practices within scientific literature, advocating for a paradigm shift from the prevailing conception of forensic science as a conglomerate of disciplines merely aiding the criminal justice system. instead, it urges a perspective that views forensic science as a discipline studying the informative potential of traces β€” remnants of criminal activity. embracing this transformative shift poses a significant challenge for education, necessitating a shift in learners ' mindset to accept concepts and methodologies in forensic intelligence. recent calls advocating for the integration of forensic scientists into the criminal justice system, as well as policing and intelligence missions, underscore the necessity for the establishment of educational and training initiatives in the field of forensic intelligence. this article contends that a discernible gap exists between the perceived and actual comprehension of forensic intelligence among law enforcement and forensic science managers, positing that this asymmetry can be rectified only through educational interventions. . the first major technologies were tied to survival, hunting, and food preparation. stone tools and weapons, fire, and clothing were technological developments of major importance during this period. human ancestors have been using stone and other tools since long before the emergence of homo sapiens approximately 300, 000 years ago. the earliest direct evidence of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. 
the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, ##drate - rich plant products such as barley ( beer ), rice ( sake ) and grapes ( wine ). native americans have used various plants as ways of treating illness or disease for thousands of years. this knowledge native americans have on plants has been recorded by enthnobotanists and then in turn has been used by pharmaceutical companies as a way of drug discovery. plants can synthesise coloured dyes and pigments such as the anthocyanins responsible for the red colour of red wine, yellow weld and blue woad used together to produce lincoln green, indoxyl, source of the blue dye indigo traditionally used to dye denim and the artist ' s pigments gamboge and rose madder. sugar, starch, cotton, linen, hemp, some types of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially important materials made from plant tissues or their secondary products. charcoal, a pure form of carbon made by pyrolysis of wood, has a long history as a metal - smelting fuel, as a filter material and adsorbent and as an artist ' s material and is one of the three ingredients of gunpowder. cellulose, the world ' s most abundant organic polymer, can be converted into energy, fuels, materials and chemical feedstock. products made from cellulose include rayon and cellophane, wallpaper paste, biobutanol and gun cotton. sugarcane, rapeseed and soy are some of the plants with a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil fuels, such as biodiesel. sweetgrass was used by native americans to ward off bugs like mosquitoes. these bug repelling properties of sweetgrass were later found by the american chemical society in the molecules phytol and coumarin. = = plant ecology = = plant ecology is the science of the functional relationships between plants and their habitats – the environments where they complete their life cycles. plant ecologists study the composition of local and regional floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. 
during the neolithic period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming, and were so effective as to remain in use when bronze and iron appeared.
these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence through the examination of bones. carpentry. the trade of the ship - wright. the trade of the wheel - wright. the trade of the wainwright : making wagons. ( the latin word for a two - wheeled wagon is carpentum, the maker of which was a carpenter. ) ( wright is the agent form of the word wrought, which itself is the original past passive participle of the word work, now superseded by the weak verb forms worker and worked respectively. ) blacksmithing and the various related smithing and metal - crafts. folk music played on acoustic instruments. mathematics ( particularly, pure mathematics ). organic farming and animal husbandry ( i. e., agriculture as practiced by all american farmers prior to world war ii ). milling in the sense of operating hand - constructed equipment with the intent either to grind grain or to reduce timber to lumber as practiced in a saw - mill. fulling, felting, drop spindle spinning, hand knitting, crochet, & similar textile preparation. the production of charcoal by the collier, for use in home heating, foundry operations, smelting, the various smithing trades, and for brushing one ' s teeth as in colonial america. glass - blowing. various subskills of food preservation : smoking, salting, pickling, drying. note : home canning is a counter - example of a low technology since some of the supplies needed to pursue this skill rely on a global trade network and an existing manufacturing infrastructure. the production of various alcoholic beverages : wine : poorly preserved fruit juice. beer : a way to preserve the calories of grain products from decay. whiskey : an improved ( distilled ) form of beer. flint - knapping. masonry as used in castles, cathedrals, and root cellars. = = = domestic or consumer = = = ( non - exhaustive ) list of low - tech in a westerner ' s everyday life : getting around by bike, and repairing it with second - hand materials ; using a cargo bike to carry loads ( rather than a gasoline vehicle ) ; drying clothes on a clothesline or on a drying rack ; washing clothes by hand, or in a human - powered washing machine ; cooling one ' s home with a fan or an air expander ( rather than electrical appliances such as air conditioners ) ; using a bell as a door bell ; a cellar, " desert fridge ", or icebox ( rather than a fridge or freezer ) ; long - distance travel by sailing boat ( rather than by plane ) ; a wicker bag or a tote bag ( rather than a plastic bag )
in cross - flow filtration, the fluid flow is tangential to the surface of the membrane ; retentate is removed from the same side further downstream, whereas the permeate flow is tracked on the other side. in dead - end filtration, the direction of the fluid flow is normal to the membrane surface. both flow geometries offer some advantages and disadvantages. generally, dead - end filtration is used for feasibility studies on a laboratory scale. the dead - end membranes are relatively easy to fabricate, which reduces the cost of the separation process. the dead - end membrane separation process is easy to implement and the process is usually cheaper than cross - flow membrane filtration. the dead - end filtration process is usually a batch - type process, where the filtering solution is loaded ( or slowly fed ) into the membrane device, which then allows passage of some particles subject to the driving force. the main disadvantage of dead - end filtration is the extensive membrane fouling and concentration polarization. the fouling is usually induced faster at higher driving forces.
membrane fouling and particle retention in a feed solution also build up concentration gradients and particle backflow ( concentration polarization ). the tangential flow devices are more cost - and labor - intensive, but they are less susceptible to fouling due to the sweeping effects and high shear rates of the passing flow. the most commonly used synthetic membrane devices ( modules ) are flat sheets / plates, spiral wounds, and hollow fibers. flat membranes used in filtration and separation processes can be enhanced with surface patterning, where microscopic structures are introduced to improve performance. these patterns increase surface area, optimize water flow, and reduce fouling, leading to higher permeability and longer membrane lifespan. research has shown that such modifications can significantly enhance efficiency in water purification, energy applications, and industrial separations. flat plates are usually constructed as circular thin flat membrane surfaces to be used in dead - end geometry modules. spiral wounds are constructed from similar flat membranes but in the form of a " pocket " containing two membrane sheets separated by a highly porous support plate. several such pockets are then wound around a tube to create a tangential flow geometry and to reduce membrane fouling. hollow fiber modules consist of an assembly of self - supporting fibers with dense skin separation layers, and a more open matrix helping to withstand pressure gradients and maintain structural integrity. the hollow fiber modules can contain up to 10, 000 fibers ranging from 200 to 2500 μm in diameter ; the main advantage of hollow fiber modules is the very large surface area within Question: In the past, Native American Indians buried dead fish along with corn seeds. This technique was used because the decomposing dead fish would A) provide nutrients for the growing corn plant B) eliminate the need for weeding around the corn plant C) release oxygen for use by the corn plant D) supply all the water needed by the corn plant
A) provide nutrients for the growing corn plant
Context: variation in total solar irradiance is thought to have little effect on the earth ' s surface temperature because of the thermal time constant - - the characteristic response time of the earth ' s global surface temperature to changes in forcing. this time constant is large enough to smooth annual variations but not necessarily variations having a longer period such as those due to solar inertial motion ; the magnitude of these surface temperature variations is estimated. the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. in addition, aristotle had many views on the elements. he believed that everything was derived of the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived of a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements – thus not allowing the elements making one who they are to return to their natural state. the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars frequently employed the principles developed in earlier greek thought : the application of mathematics and deliberate empirical research, in their scientific investigations. thus, clear unbroken lines of influence lead from ancient greek and hellenistic philosophers, to medieval muslim philosophers and scientists, to the european renaissance and enlightenment, to the secular sciences of the modern day. neither reason nor inquiry began with the ancient greeks, but the socratic method did, along with the idea of forms, give great advances in geometry, logic, and the natural sciences. according to benjamin farrington, former professor of classics at swansea university : " men were weighing for thousands of years before archimedes worked out the laws of equilibrium ; they must have had practical and intuitional knowledge of the principals involved. what archimedes did was to sort out the theoretical implications of this practical knowledge and present the resulting body of knowledge as a logically coherent system. " and again : " with astonishment we find ourselves on the threshold of modern science genesis and its own history of development, a body with complex and multiform processes taking place within it. the soil is considered as different from bedrock. 
the latter becomes soil under the influence of a series of soil - formation factors ( climate, vegetation, country, relief and age ). according to him, soil should be called the " daily " or outward horizons of rocks regardless of the type ; they are changed naturally by the common effect of water, air and various kinds of living and dead organisms. a 1914 encyclopedic definition : " the different forms of earth on the surface of the rocks, formed by the breaking down or weathering of rocks ". serves to illustrate the historic view of soil which persisted from the 19th century. dokuchaev ' s late 19th century soil concept developed in the 20th century to one of soil as earthy material that has been altered by living processes. a corollary concept is that soil without a living component is simply a part of earth ' s outer layer. further refinement of the soil concept is occurring in view of an appreciation of energy transport and transformation within soil. the term is popularly applied to the material on the surface of the earth ' s moon and mars, a usage acceptable within a portion of the scientific community. accurate to this modern understanding of soil is nikiforoff ' s 1959 definition of soil as the " excited skin of the sub aerial part of the earth ' s crust ". = = areas of practice = = academically, soil scientists tend to be drawn to one of five areas of specialization : microbiology, pedology, edaphology, physics, or chemistry. yet the work specifics are very much dictated by the challenges facing our civilization ' s desire to sustain the land that supports it, and the distinctions between the sub - disciplines of soil science often blur in the process. soil science professionals commonly stay current in soil chemistry, soil physics, soil microbiology, pedology, and applied soil science in related disciplines. one exciting effort drawing in soil scientists in the u. s. as of 2004 is the soil quality initiative. central to the soil quality initiative is developing indices of soil health and then monitoring them in a way that gives us long - term ( decade - to - decade ) feedback on our performance as stewards of the planet. the effort includes understanding the functions of soil microbiotic crusts and exploring the potential to sequester atmospheric carbon in soil organic matter. relating the concept of agriculture to soil quality, however, has not earth science or geoscience includes all fields of natural science related to the planet earth. this is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of earth ' s four spheres : the biosphere, hydrosphere / cryosphere, atmosphere, and geosphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. 
paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries, and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere are created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere return to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes.
= = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest
layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the earth consists of several distinct layers, often referred to as spheres : the lithosphere, the hydrosphere, the atmosphere, and the biosphere. this concept of spheres is a useful tool for understanding the earth ' s surface and its various processes ; these correspond to rocks, water, air and life. also included by some are the cryosphere ( corresponding to ice ) as a distinct portion of the hydrosphere and the pedosphere ( corresponding to soil ) as an active and intermixed sphere. the following fields of science are generally categorized within the earth sciences : geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields.
geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life.
the magnetic field, created by the internal motions of the core, produces the magnetosphere which protects earth ' s atmosphere from the solar wind. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere ) between the surface and the exosphere ( about 1000 km ). major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. Question: Earth is composed of layers of material with different properties. Which of the following is most likely to be in constant motion? A) core B) mantle C) oceanic crust D) continental crust
B) mantle
Context: energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) – including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photos single carbon atom can form four single covalent bonds such as in methane, two double covalent bonds such as in carbon dioxide ( co2 ), or a triple covalent bond such as in carbon monoxide ( co ). moreover, carbon can form very long chains of interconnecting carbon – carbon bonds such as octane or ring - like structures such as glucose. the simplest form of an organic molecule is the hydrocarbon, which is a large family of organic compounds that are composed of hydrogen atoms bonded to a chain of carbon atoms. a hydrocarbon backbone can be substituted by other elements such as oxygen ( o ), hydrogen ( h ), phosphorus ( p ), and sulfur ( s ), which can change the chemical behavior of that compound. groups of atoms that contain these elements ( o -, h -, p -, and s - ) and are bonded to a central carbon atom or skeleton are called functional groups. there are six prominent functional groups that can be found in organisms : amino group, carboxyl group, carbonyl group, hydroxyl group, phosphate group, and sulfhydryl group. 
in 1953, the miller – urey experiment showed that organic compounds could be synthesized abiotically within a closed system mimicking the conditions of early earth, thus suggesting that complex organic molecules could have arisen spontaneously in early earth ( see abiogenesis ). = = = macromolecules = = = macromolecules are large molecules made up of smaller subunits or monomers. monomers include sugars, amino acids, and nucleotides. carbohydrates include monomers and polymers of sugars. lipids are the only class of macromolecules that are not made up of polymers. they include steroids, phospholipids, and fats, largely nonpolar and hydrophobic ( water - repelling ) substances. proteins are the most diverse of the macromolecules. they include enzymes, transport proteins, large signaling molecules, antibodies, and structural proteins. the basic unit ( or monomer ) of a protein is an amino acid. twenty amino acids are used in proteins. nucleic acids are polymers of nucleotides. their function is to store, transmit, and express hereditary information. = = cells = = cell theory states that cells are the fundamental units of life, that all living things are composed of one or more cells, and that all cells arise from preexisting cells through cell division ##vary. ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. = = plant physiology = = plant physiology encompasses all the internal chemical and physical activities of plants associated with life. chemicals obtained from the air, soil and water form the basis of all plant metabolism. the energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. heterotrophs including all animals, all fungi, all completely parasitic plants, and non - photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. molecules are moved within plants by transport processes that operate at a variety of spatial scales. subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. minerals and water are transported from roots to other parts of the plant in the transpiration stream. diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. in vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes. = = = plant hormones = = = plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. 
tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of mimosa pudica, the insect traps of venus flytrap and bladderworts, and the pollinia of orchids. the hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded " it is hardly an ex ". substrate - level phosphorylation, which does not require oxygen. = = = photosynthesis = = = photosynthesis is a process used by plants and other organisms to convert light energy into chemical energy that can later be released to fuel the organism ' s metabolic activities via cellular respiration. this chemical energy is stored in carbohydrate molecules, such as sugars, which are synthesized from carbon dioxide and water. in most cases, oxygen is released as a waste product.
most plants, algae, and cyanobacteria perform photosynthesis, which is largely responsible for producing and maintaining the oxygen content of the earth ' s atmosphere, and supplies most of the energy necessary for life on earth. photosynthesis has four stages : light absorption, electron transport, atp synthesis, and carbon fixation. light absorption is the initial step of photosynthesis whereby light energy is absorbed by chlorophyll pigments attached to proteins in the thylakoid membranes. the absorbed light energy is used to remove electrons from a donor ( water ) to a primary electron acceptor, a quinone designated as q. in the second stage, electrons move from the quinone primary electron acceptor through a series of electron carriers until they reach a final electron acceptor, which is usually the oxidized form of nadp +, which is reduced to nadph, a process that takes place in a protein complex called photosystem i ( psi ). the transport of electrons is coupled to the movement of protons ( or hydrogen ) from the stroma to the thylakoid membrane, which forms a ph gradient across the membrane as hydrogen becomes more concentrated in the lumen than in the stroma. this is analogous to the proton - motive force generated across the inner mitochondrial membrane in aerobic respiration. during the third stage of photosynthesis, the movement of protons down their concentration gradients from the thylakoid lumen to the stroma through the atp synthase is coupled to the synthesis of atp by that same atp synthase. the nadph and atps generated by the light - dependent reactions in the second and third stages, respectively, provide the energy and electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β€” created by the internal motions of the core β€” produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. 
it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems. fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants.
cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the electrons to drive the synthesis of glucose by fixing atmospheric carbon dioxide into existing organic carbon compounds, such as ribulose bisphosphate ( rubp ) in a sequence of light - independent ( or dark ) reactions called the calvin cycle. = = = cell signaling = = = cell signaling ( or communication ) is the ability of cells to receive, process, and transmit signals with its environment and with itself. signals can be non - chemical such as light, electrical impulses, and heat, or chemical signals ( or ligands ) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside a cell. there are generally four types of chemical signals : autocrine, paracrine, juxtacrine, and hormones. in autocrine signaling, the ligand affects the same cell that releases it. tumor cells, for example, can reproduce uncontrollably because they release signals that initiate their own self - division. in paracrine signaling, the ligand diffuses to nearby cells and affects them. for example, brain cells called neurons release ligands called neurotransmitters that diffuse across a synaptic cleft to bind with a receptor on an adjacent cell such as another neuron or muscle cell. in juxtacrine signaling, there is direct contact between the signaling and responding cells. finally, hormones are ligands that travel through the circulatory systems of animals or vascular systems of plants to reach their target cells. once a ligand binds with a receptor, it can influence the behavior of another cell, depending on the type of receptor. for instance, neurotransmitters that bind with an inotropic receptor can alter the excitability of a target cell. other types of receptors include protein kinase receptors ( e. g., receptor for the hormone insulin ) and g protein - coupled receptors. activation of g protein - coupled receptors can initiate second messenger cascades. the process by which a chemical or physical signal is transmitted through a cell as a series of molecular events is called signal transduction. = = = cell cycle = = = the cell cycle is a series of events that take place in a cell that cause it to divide into two daughter cells. these events include the duplication of its dna and some of its organelles, and the subsequent partitioning of its cytoplasm into two daughter cells in a process called cell division. in eukaryotes ( i. e., animal, plant, fungal, and the transition of our energy system to renewable energies is necessary in order not to heat up the climate any further and to achieve climate neutrality. the use of wind energy plays an important role in this transition in germany. but how much wind energy can be used and what are the possible consequences for the atmosphere if more and more wind energy is used? shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 Β°c ( 2, 750 Β°f ) and protects the space shuttle ' s wing leading edges and nose cap. rcc is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. after curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured - pyrolized to convert the furfuryl alcohol to carbon. 
to provide oxidation resistance for reusability, the outer layers of the rcc are converted to silicon carbide. other examples can be seen in the " plastic " casings of television sets, cell - phones and so on. these plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene ( abs ) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. these additions may be termed reinforcing fibers, or dispersants, depending on their purpose. = = = polymers = = = polymers are chemical compounds made up of a large number of identical components linked together like chains. polymers are the raw materials ( the resins ) used to make what are commonly called plastics and rubber. plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride ( pvc ), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. rubbers include natural rubber, styrene - butadiene rubber, chloroprene, and butadiene rubber. plastics are generally classified as commodity, specialty and engineering plastics. polyvinyl chloride ( pvc ) is widely used, inexpensive, and annual production quantities are large. it lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. its fabrication and processing are simple and well - established. Question: Which mechanism helps carbon to cycle from the atmosphere to living organisms? A) tissue decomposition B) cellular respiration C) photosynthesis D) transpiration
C) photosynthesis
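the calvin - cycle passage at the start of this context block describes carbon fixation only in words. as a compact summary ( a standard textbook relation, not something stated in the passage itself ), the net reaction of oxygenic photosynthesis can be written as:

```latex
% net (overall) reaction of oxygenic photosynthesis:
% carbon dioxide and water, driven by light energy, yield glucose and oxygen.
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\text{light}}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

read left to right, this is the route by which carbon moves from the atmosphere into living organisms, which is why photosynthesis answers the question above ; read right to left it is, in effect, the net reaction of cellular respiration.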
Context: background : african swine fever is among the most devastating viral diseases of pigs. despite nearly a century of research, there is still no safe and effective vaccine available. the current situation is that either vaccines are safe but not effective, or they are effective but not safe. findings : the asf vaccine prepared using the inactivation method with propiolactone provided 98. 6 % protection within 100 days after three intranasal immunizations, spaced 7 days apart. conclusions : an inactivated vaccine made from complete african swine fever virus particles using propiolactone is safe and effective for controlling asf through mucosal immunity. covid - 19, also known as novel coronavirus disease, is a highly contagious disease that first surfaced in china in late 2019. sars - cov - 2 is a coronavirus that belongs to the vast family of coronaviruses that causes this disease. the sickness originally appeared in wuhan, china in december 2019 and quickly spread to over 213 nations, becoming a global pandemic. fever, dry cough, and tiredness are the most typical covid - 19 symptoms. aches, pains, and difficulty breathing are some of the other symptoms that patients may face. the majority of these symptoms are indicators of respiratory infections and lung abnormalities, which radiologists can identify. chest x - rays of covid - 19 patients seem similar, with patchy and hazy lungs rather than clear and healthy lungs. on x - rays, however, pneumonia and other chronic lung disorders can resemble covid - 19. trained radiologists must be able to distinguish between covid - 19 and an illness that is less contagious. our ai algorithm seeks to give doctors a quantitative estimate of the risk of deterioration. so that patients at high risk of deterioration can be triaged and treated efficiently. the method could be particularly useful in pandemic hotspots when screening upon admission is important for allocating limited resources like hospital beds. , lightning strikes, tornadoes, building fires, wildfires, and mass shootings disabling most of the system if not the entirety of it. geographic redundancy locations can be more than 621 miles ( 999 km ) continental, more than 62 miles apart and less than 93 miles ( 150 km ) apart, less than 62 miles apart, but not on the same campus, or different buildings that are more than 300 feet ( 91 m ) apart on the same campus. the following methods can reduce the risks of damage by a fire conflagration : large buildings at least 80 feet ( 24 m ) to 110 feet ( 34 m ) apart, but sometimes a minimum of 210 feet ( 64 m ) apart. 
: 9 high - rise buildings at least 82 feet ( 25 m ) apart : 12 open spaces clear of flammable vegetation within 200 feet ( 61 m ) on each side of objects different wings on the same building, in rooms that are separated by more than 300 feet ( 91 m ) different floors on the same wing of a building in rooms that are horizontally offset by a minimum of 70 feet ( 21 m ) with fire walls between the rooms that are on different floors two rooms separated by another room, leaving at least a 70 - foot gap between the two rooms there should be a minimum of two separated fire walls and on opposite sides of a corridor geographic redundancy is used by amazon web services ( aws ), google cloud platform ( gcp ), microsoft azure, netflix, dropbox, salesforce, linkedin, paypal, twitter, facebook, apple icloud, cisco meraki, and many others to provide geographic redundancy, high availability, fault tolerance and to ensure availability and reliability for their cloud services. as another example, to minimize risk of damage from severe windstorms or water damage, buildings can be located at least 2 miles ( 3. 2 km ) away from the shore, with an elevation of at least 5 feet ( 1. 5 m ) above sea level. for additional protection, they can be located at least 100 feet ( 30 m ) away from flood plain areas. = = functions of redundancy = = the two functions of redundancy are passive redundancy and active redundancy. both functions prevent performance decline from exceeding specification limits without human intervention using extra capacity. passive redundancy uses excess capacity to reduce the impact of component failures. one common form of passive redundancy is the extra strength of cabling and struts used in bridges. listing of diseases in the family that may impact the patient. a family tree is sometimes used. history of present illness ( hpi ) : the chronological order of events of symptoms and further clarification of each symptom. distinguishable from history of previous illness, often called past medical history ( pmh ). medical history comprises hpi and pmh. medications ( rx ) : what drugs the patient takes including prescribed, over - the - counter, and home remedies, as well as alternative and herbal medicines or remedies. allergies are also recorded. past medical history ( pmh / pmhx ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). 
four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, by physicians, physician assistants, nurse practitioners, or other health professionals who have first contact with a patient seeking medical treatment or care. these occur in physician offices, clinics, nursing homes, schools, home visits, and other places close to patients. about 90 % of medical visits can be treated by the primary care provider. these include treatment of acute and chronic illnesses, preventive care and health education for all ages and both sexes. secondary care medical services are provided by medical specialists in their offices or clinics or at local community hospitals for a patient referred by a primary care provider who first diagnosed or treated the patient. referrals are made for those patients who required the expertise or procedures performed by specialists. these include both ambulatory care and inpatient services, emergency departments, intensive care medicine, surgery services, physical therapy, labor and delivery, endoscopy units, diagnostic laboratory and medical imaging services, hospice centers, etc. some primary care providers may also take care of hospitalized patients and deliver babies in a secondary care setting. tertiary care medical services are provided by specialist hospitals or regional centers equipped with diagnostic and treatment facilities not generally available at local hospitals. these include trauma centers, burn treatment centers, advanced neonatology unit services, organ transplants, high - risk pregnancy, radiation oncology, etc. modern medical care also depends on information – still delivered in many health care settings on paper records, but increasingly nowadays by electronic means. in low - income countries, modern healthcare is often too expensive for the average person. international healthcare policy researchers have advocated that " user fees " be removed in these areas to ensure access, although even after removal, significant costs and barriers remain. separation of prescribing and dispensing is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug. in the western world there are centuries of tradition for separating pharmacists from physicians. in asian countries, it is traditional for physicians to also provide drugs. = = branches = = working together as an interdisciplinary team, many highly trained health professionals besides medical practitioners are involved in the delivery of modern health care. 
examples include : nurses, emergency medical technicians and paramedics, laboratory scientists, pharmacists, podiatrists, physiotherapists, respiratory therapists, speech therapists, occupational therapists, radiographers, dietitians, and bioengineers, medical physicists, surgeons, surgeon ' s assistant, surgical techno qualitative evidence suggests that heresy within the medieval catholic church had many of the characteristics of a scale - free network. from the perspective of the church, heresy can be seen as a virus. the virus persisted for long periods of time, breaking out again even when the church believed it to have been eradicated. a principal mechanism of heresy was through a small number of individuals with very large numbers of social contacts. initial attempts by the inquisition to suppress the virus by general persecution, or even mass slaughtering, of populations thought to harbour the " disease " failed. gradually, however, the inquisition learned about the nature of the social networks by which heresy both spread and persisted. eventually, a policy of targeting key individuals was implemented, which proved to be much more successful. often called physicians. these terms, internist or physician ( in the narrow sense, common outside north america ), generally exclude practitioners of gynecology and obstetrics, pathology, psychiatry, and especially surgery and its subspecialities. because their patients are often seriously ill or require complex investigations, internists do much of their work in hospitals. formerly, many internists were not subspecialized ; such general physicians would see any complex nonsurgical problem ; this style of practice has become much less common. in modern urban practice, most internists are subspecialists : that is, they generally limit their medical practice to problems of one organ system or to one particular area of medical knowledge. for example, gastroenterologists and nephrologists specialize respectively in diseases of the gut and the kidneys. in the commonwealth of nations and some other countries, specialist pediatricians and geriatricians are also described as specialist physicians ( or internists ) who have subspecialized by age of patient rather than by organ system. elsewhere, especially in north america, general pediatrics is often a form of primary care. there are many subspecialities ( or subdisciplines ) of internal medicine : training in internal medicine ( as opposed to surgical training ), varies considerably across the world : see the articles on medical education for more details. in north america, it requires at least three years of residency training after medical school, which can then be followed by a one - to three - year fellowship in the subspecialties listed above. in general, resident work hours in medicine are less than those in surgery, averaging about 60 hours per week in the us. this difference does not apply in the uk where all doctors are now required by law to work less than 48 hours per week on average. = = = = diagnostic specialties = = = = clinical laboratory sciences are the clinical diagnostic services that apply laboratory techniques to diagnosis and management of patients. in the united states, these services are supervised by a pathologist. 
the personnel that work in these medical laboratory departments are technically trained staff who do not hold medical degrees, but who usually hold an undergraduate medical technology degree, who actually perform the tests, assays, and procedures needed for providing the specific services. subspecialties include transfusion medicine, cellular pathology, clinical chemistry, hematology, clinical microbiology and clinical immunology. clinical neurophysiology is concerned with testing the physiology or function of the central and peripheral aspects of urinary tract infection ( utis ) is referred as one of the most common infection in medical sectors worldwide and antimicrobial resistance ( amr ) is also a global threat to human that is related with many diseases. as antibiotics used for the treatment of infectious diseases, the rate of resistance is increasing day by day. gram positive pathogens are commonly found in urine sample collected from different age groups of people, associated with uti. the study was conducted in a diagnostic center in dhaka, bangladesh with total 1308 urine samples from november 2021 to april 2022. gram positive pathogens were isolated and antimicrobial susceptibility tests were done. from total 121 samples of gram positive bacteria the highest prevalence rate of utis was found in age group of 21 - 30 year. mostly enterococcus spp. ( 33. 05 % ) staphylococcus aureus ( 27. 27 % ), streptococcus spp. ( 20. 66 % ), beta - hemolytic streptococci ( 19. 00 % ) were found as causative agents of uti compared to others. the majority of isolates have been detected as multi - drug resistant ( mdr ). the higher percentage of antibiotic resistance were found against azithromycin ( 75 % ), and cefixime ( 64. 46 % ). this research focused on the regular basis of surveillance for the gram - positive bacteria antibiotic susceptibility to increase awareness about the use of proper antibiotic thus minimize the drug resistance. multi - strain diseases are diseases that consist of several strains, or serotypes. the serotypes may interact by antibody - dependent enhancement ( ade ), in which infection with a single serotype is asymptomatic, but infection with a second serotype leads to serious illness accompanied by greater infectivity. it has been observed from serotype data of dengue hemorrhagic fever that outbreaks of the four serotypes occur asynchronously. both autonomous and seasonally driven outbreaks were studied in a model containing ade. for sufficiently small ade, the number of infectives of each serotype synchronizes, with outbreaks occurring in phase. when the ade increases past a threshold, the system becomes chaotic, and infectives of each serotype desynchronize. however, certain groupings of the primary and second ary infectives remain synchronized even in the chaotic regime. much of their work in hospitals. formerly, many internists were not subspecialized ; such general physicians would see any complex nonsurgical problem ; this style of practice has become much less common. in modern urban practice, most internists are subspecialists : that is, they generally limit their medical practice to problems of one organ system or to one particular area of medical knowledge. for example, gastroenterologists and nephrologists specialize respectively in diseases of the gut and the kidneys. 
in the commonwealth of nations and some other countries, specialist pediatricians and geriatricians are also described as specialist physicians ( or internists ) who have subspecialized by age of patient rather than by organ system. elsewhere, especially in north america, general pediatrics is often a form of primary care. there are many subspecialities ( or subdisciplines ) of internal medicine : training in internal medicine ( as opposed to surgical training ), varies considerably across the world : see the articles on medical education for more details. in north america, it requires at least three years of residency training after medical school, which can then be followed by a one - to three - year fellowship in the subspecialties listed above. in general, resident work hours in medicine are less than those in surgery, averaging about 60 hours per week in the us. this difference does not apply in the uk where all doctors are now required by law to work less than 48 hours per week on average. = = = = diagnostic specialties = = = = clinical laboratory sciences are the clinical diagnostic services that apply laboratory techniques to diagnosis and management of patients. in the united states, these services are supervised by a pathologist. the personnel that work in these medical laboratory departments are technically trained staff who do not hold medical degrees, but who usually hold an undergraduate medical technology degree, who actually perform the tests, assays, and procedures needed for providing the specific services. subspecialties include transfusion medicine, cellular pathology, clinical chemistry, hematology, clinical microbiology and clinical immunology. clinical neurophysiology is concerned with testing the physiology or function of the central and peripheral aspects of the nervous system. these kinds of tests can be divided into recordings of : ( 1 ) spontaneous or continuously running electrical activity, or ( 2 ) stimulus evoked responses. subspecialties include electroencephalography, electromyography, evoked potential, nerve conduction study and polysomnography. sometimes Question: A city has an outbreak of a disease that affects an unusually large portion of its population at the same time. Which term best describes the outbreak? A) pandemic B) plague C) epidemic D) infection
C) epidemic
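the multi - strain disease passage earlier in this context block describes a serotype model with antibody - dependent enhancement ( ade ) only verbally. the sketch below is a minimal, illustrative two - serotype sir - type system in which secondary infections transmit with an enhancement factor phi ; the equations, parameter values and names are assumptions made for demonstration, not the specific model analysed in the passage.

```python
# Minimal two-serotype SIR-type model with antibody-dependent enhancement (ADE).
# Illustrative toy model only: equations, parameter values, and names (beta, phi, ...)
# are assumptions for demonstration, not the model analysed in the passage.
import numpy as np

def simulate(phi=1.8, beta=400.0, sigma=100.0, mu=0.02, years=50, steps_per_year=2000):
    """Integrate the toy model with a simple Euler scheme; return per-serotype infectives."""
    dt = 1.0 / steps_per_year
    s = 0.1                      # fully susceptible fraction
    i1, i2 = 1e-4, 2e-4          # primary infections with serotype 1 / 2
    r1, r2 = 0.4, 0.4            # recovered from 1 (or 2), still susceptible to the other
    i12, i21 = 0.0, 0.0          # secondary infections: had 1 now 2, had 2 now 1
    out = []
    for _ in range(int(years * steps_per_year)):
        lam1 = beta * (i1 + phi * i21)   # force of infection, serotype 1 (ADE boosts secondaries)
        lam2 = beta * (i2 + phi * i12)
        ds   = mu - s * (lam1 + lam2) - mu * s
        di1  = s * lam1 - (sigma + mu) * i1
        di2  = s * lam2 - (sigma + mu) * i2
        dr1  = sigma * i1 - r1 * lam2 - mu * r1
        dr2  = sigma * i2 - r2 * lam1 - mu * r2
        di12 = r1 * lam2 - (sigma + mu) * i12
        di21 = r2 * lam1 - (sigma + mu) * i21
        s   += dt * ds
        i1  += dt * di1
        i2  += dt * di2
        r1  += dt * dr1
        r2  += dt * dr2
        i12 += dt * di12
        i21 += dt * di21
        out.append((i1 + i21, i2 + i12))   # total infectives per serotype
    return np.array(out)

if __name__ == "__main__":
    traj = simulate(phi=1.8)
    # crude synchrony check: correlation between the two serotypes' infective levels
    print("serotype correlation:", np.corrcoef(traj[:, 0], traj[:, 1])[0, 1])
```

in toy systems of this kind, a small phi lets the two serotypes oscillate roughly in phase, while a large phi tends to desynchronize them, which is the qualitative behaviour the passage reports for its model.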
Context: in the year 1598 philipp uffenbach published a printed diptych sundial, which is a forerunner of franz ritter ' s horizontal sundial. uffenbach ' s sundial contains, apart from the usual information on a sundial, the ascending signs of the zodiac, several of the brightest stars, an almucantar and, most important, the oldest gnomonic world map known so far. the sundial is constructed for a polar height ( latitude ) of 50 1 / 6 degrees, that of frankfurt / main, the town of his citizenship. oscillations of the sun have been used to understand its interior structure. the extension of similar studies to more distant stars has raised many difficulties despite the strong efforts of the international community over the past decades. the corot ( convection rotation and planetary transits ) satellite, launched in december 2006, has now measured oscillations and the stellar granulation signature in three main sequence stars that are noticeably hotter than the sun. the oscillation amplitudes are about 1. 5 times as large as those in the sun ; the stellar granulation is up to three times as high. the stellar amplitudes are about 25 % below the theoretic values, providing a measurement of the nonadiabaticity of the process ruling the oscillations in the outer layers of the stars. excess lightweight products of slow neutron capture in the photosphere, over the mass range of 25 to 207 amu, confirm the solar mass separation recorded by excess lightweight isotopes in the solar wind, over the mass range of 3 to 136 amu [ solar abundance of the elements, meteoritics, volume 18, 1983, pages 209 to 222 ]. both measurements show that major elements inside the sun are fe, o, ni, si and s, like those in rocky planets. observed solar neutrino fluxes are employed to constrain the interior composition of the sun. including the effects of neutrino flavor mixing, the results from homestake, sudbury, and gallium experiments constrain the mg, si, and fe abundances in the solar interior to be within a factor 0. 89 to 1. 34 of the surface values with 68 % confidence. if the o and / or ne abundances are increased in the interior to resolve helioseismic discrepancies with recent standard solar models, then the nominal interior mg, si, and fe abundances are constrained to a range of 0. 83 to 1. 24 relative to the surface. additional research is needed to determine whether the sun ' s interior is metal poor relative to its surface. the magnetic field of the sun is the underlying cause of the many diverse phenomena combined under the heading of solar activity. here we describe the magnetic field as it threads its way from the bottom of the convection zone, where it is built up by the solar dynamo, to the solar surface, where it manifests itself in the form of sunspots and faculae, and beyond into the outer solar atmosphere and, finally, into the heliosphere. on the way, it transports energy from the surface and the subsurface layers into the solar corona, where it heats the gas and accelerates the solar wind. much sunlight the plant receives each day. this can result in adaptive changes in a process known as photomorphogenesis. phytochromes are the photoreceptors in a plant that are sensitive to light. = = plant anatomy and morphology = = plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. all plants are multicellular eukaryotes, their dna stored in nuclei.
the characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. stolons and tubers are examples of shoots that can grow roots. roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. in the event that one of the systems is lost from the oil of jasminum grandiflorum which regulates wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. in addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. this can result in adaptive changes in a process known as photomorphogenesis. phytochromes are the photoreceptors in a plant that are sensitive to light. = = plant anatomy and morphology = = plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. all plants are multicellular eukaryotes, their dna stored in nuclei. the characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. 
the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable the group velocity of light has been measured at eight different wavelengths between 385 nm and 532 nm in the mediterranean sea at a depth of about 2. 2 km with the antares optical beacon systems. a parametrisation of the dependence of the refractive index on wavelength based on the salinity, pressure and temperature of the sea water at the antares site is in good agreement with these measurements. the location of a repeat plume detected at europa is found to be coincident with the strongest ionosphere detection made by galileo radio occultation in 1997. two planetary nebulae are shown to belong to the sagittarius dwarf galaxy, on the basis of their radial velocities. this is only the second dwarf spheroidal galaxy, after fornax, found to contain planetary nebulae. their existence confirms that this galaxy is at least as massive as the fornax dwarf spheroidal which has a single planetary nebula, and suggests a mass of a few times 10 * * 7 solar masses. the two planetary nebulae are located along the major axis of the galaxy, near the base of the tidal tail. there is a further candidate, situated at a very large distance along the direction of the tidal tail, for which no velocity measurement is available. the location of the planetary nebulae and globular clusters of the sagittarius dwarf galaxy suggests that a significant fraction of its mass is contained within the tidal tail. Question: In which structure is the Sun located? A) Milky Way Galaxy B) Andromeda Galaxy C) Cat's Eye Nebula D) Horseshoe Nebula
A) Milky Way Galaxy
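the antares passage in this context block connects a measured group velocity of light to a parametrisation of the refractive index versus wavelength, but does not state the relation between the two. for reference ( standard optics, not quoted in the passage ), the group velocity follows from the phase refractive index n ( Ξ» ) via the group index :

```latex
% group velocity in a dispersive medium, with lambda the vacuum wavelength
v_g = \frac{c}{n_g}, \qquad
n_g(\lambda) = n(\lambda) - \lambda\,\frac{\mathrm{d}n}{\mathrm{d}\lambda}
```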
Context: = = = = = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include ; soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. there are three most common types of environmental remediation. these include soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important considering water is an essential natural resource. depending on the source of water there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. is it almost similar to soil remediation except it is often more sophisticated as it involves additional contaminants. to reduce the contaminants it is likely to use physical, chemical, and biological processes that help with source control, but if these processes are executed correctly, there ' s a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with both industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed can encourage the infestation of insects and rodents, contributing to the spread of diseases. some of the most common types of solid waste management include ; landfills, vermicomposting, composting, recycling, and incineration. however, a major barrier for solid waste management practices is the high costs associated with recycling remediation include ; soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. there are three most common types of environmental remediation. these include soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important considering water is an essential natural resource. depending on the source of water there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. 
there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. it is broadly similar to soil remediation, except it is often more sophisticated as it involves additional contaminants. to reduce the contaminants, physical, chemical, and biological processes that help with source control are typically used, but if these processes are not executed correctly, there is a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with both industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed of, can encourage the infestation of insects and rodents, contributing to the spread of diseases. some of the most common types of solid waste management include : landfills, vermicomposting, composting, recycling, and incineration. however, a major barrier for solid waste management practices is the high costs associated with recycling and the risks of creating more pollution. = = = e - waste recycling = = = the recycling of electronic waste ( e - waste ) has seen significant technological advancements due to increasing environmental concerns and the growing volume of electronic product disposals. traditional e - waste recycling methods, which often involve manual disassemb. = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. mining may not be necessary if the ore body and physical environment are conducive to leaching. leaching dissolves minerals in an ore body and results in an enriched solution. the solution is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding the iron – carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion is a major concern.
cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications use less energy than conventional thermal separation processes such as distillation, sublimation or crystallization. the separation process is purely physical and both fractions ( permeate and retentate ) can be obtained as useful products. cold separation using membrane technology is widely used in the food technology, biotechnology and pharmaceutical industries. furthermore, using membranes enables separations to take place that would be impossible using thermal separation methods. for example, it is impossible to separate the constituents of azeotropic liquids or solutes which form isomorphic crystals by distillation or recrystallization but such separations can be achieved using membrane technology. depending on the type of membrane, the selective separation of certain individual substances or substance mixtures is possible. important technical applications include the production of drinking water by reverse osmosis. in waste water treatment, membrane technology is becoming increasingly important. ultra / microfiltration can be very effective in removing colloids and macromolecules from wastewater. this is needed if wastewater is discharged into sensitive waters especially those designated for contact water sports and recreation. about half of the market is in medical applications such as artificial kidneys to remove toxic substances by hemodialysis and as artificial lung for bubble - free supply of oxygen in the blood. the importance of membrane technology is growing in the field of environmental protection ( nano - mem - pro ippc database ). even in modern energy recovery techniques, membranes are increasingly used, for example in fuel cells and in osmotic power plants. = = mass transfer = = two basic models can be distinguished for mass transfer through the membrane : the solution - diffusion model and the hydrodynamic model. in real membranes, these two transport mechanisms certainly occur side by side, especially during ultra - filtration. = = = solution - diffusion model = = = in the solution - diffusion model, transport occurs only by diffusion. the component that needs to be transported must first be dissolved in the membrane. the general approach of the solution - diffusion model is to assume that the chemical potential of the feed and permeate fluids are in equilibrium with the adjacent membrane surfaces such that appropriate expressions for the chemical potential in the fluid and membrane phases can be equated at the solution - membrane interface. this principle is more important for dense membranes without natural pores such as those used for reverse osmosis and in fuel cells. 
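the solution - diffusion description above can be made concrete with the usual textbook flux expression. this standard form is added here for illustration and is not an equation given in the passage ; the symbols are defined in the comments.

```latex
% steady-state flux of component i through a dense membrane of thickness l
% (solution-diffusion model): D_i is the diffusion coefficient in the membrane,
% K_i the sorption (partition) coefficient, and the concentrations are taken
% at the feed- and permeate-side faces of the membrane.
J_i = \frac{D_i K_i}{l}\,\bigl(c_{i,\mathrm{feed}} - c_{i,\mathrm{permeate}}\bigr)
    = \frac{P_i}{l}\,\Delta c_i,
\qquad P_i \equiv D_i K_i
```

the product p_i = d_i k_i is the membrane permeability, which is why the performance of dense membranes is normally reported as a permeability rather than as a diffusivity alone.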
during the filtration process a boundary layer forms on the membrane. this concentration gradient is created by molecules which cannot pass through the membrane. the you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves ) psychiatric ( orientation, mental state, mood, evidence of abnormal perception or thought ). respiratory ( large airways and lungs ) skin vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation it is to likely focus on areas of interest highlighted in the medical history and may not include everything listed above. the treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. a follow - up may be advised. depending upon the health insurance plan and the managed care system, various forms of " utilization review ", such as prior authorization of tests, may place barriers on accessing expensive services. the medical decision - making ( mdm ) process includes the analysis and synthesis of all the above data to come up with a list of possible diagnoses ( the differential diagnoses ), ##nts from the air to reduce the potential adverse effects on humans and the environment. the process of air purification may be performed using methods such as mechanical filtration, ionization, activated carbon adsorption, photocatalytic oxidation, and ultraviolet light germicidal irradiation. = = = sewage treatment = = = = = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include ; soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. there are three most common types of environmental remediation. these include soil, water, and sediment remediation. 
soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important considering water is an essential natural resource. depending on the source of water there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. is it almost similar to soil remediation except it is often more sophisticated as it involves additional contaminants. to reduce the contaminants it is likely to use physical, chemical, and biological processes that help with source control, but if these processes are executed correctly, there ' s a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with both industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed can encourage the ( create a critical mass ) for detonation. it also is quite difficult to ensure that such a chain reaction consumes a significant fraction of the fuel before the device flies apart. the procurement of a nuclear fuel is also more difficult than it might seem, since sufficiently unstable substances for this process do not currently occur naturally on earth in suitable amounts. one isotope of uranium, namely uranium - 235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium - 238. the latter accounts for more than 99 % of the weight of natural uranium. therefore, some method of isotope separation based on the weight of three neutrons must be performed to enrich ( isolate ) uranium - 235. alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. ultimately, the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. 
nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. a radiological weapon has never been deployed. while considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. there have been over 2, 000 nuclear tests conducted since 1945. in 1963, all nuclear and many non - the most puzzling issue in the foundations of quantum mechanics is perhaps that of the status of the wave function of a system in a quantum universe. is the wave function objective or subjective? does it represent the physical state of the system or merely our information about the system? and if the former, does it provide a complete description of the system or only a partial description? we shall address these questions here mainly from a bohmian perspective, and shall argue that part of the difficulty in ascertaining the status of the wave function in quantum mechanics arises from the fact that there are two different sorts of wave functions involved. the most fundamental wave function is that of the universe. from it, together with the configuration of the universe, one can define the wave function of a subsystem. we argue that the fundamental wave function, the wave function of the universe, has a law - like character. defective body parts. inside the body, artificial heart valves are in common use with artificial hearts and lungs seeing less common use but under active technology development. other medical devices and aids that can be considered prosthetics include hearing aids, artificial eyes, palatal obturator, gastric bands, and dentures. prostheses are specifically not orthoses, although given certain circumstances a prosthesis might end up performing some or all of the same functionary benefits as an orthosis. prostheses are technically the complete finished item. for instance, a c - leg knee alone is not a prosthesis, but only a prosthetic component. the complete prosthesis would consist of the attachment system to the residual limb – usually a " socket ", and all the attachment hardware components all the way down to and including the terminal device. despite the technical difference, the terms are often used interchangeably. the terms " prosthetic " and " orthotic " are adjectives used to describe devices such as a prosthetic knee. the terms " prosthetics " and " orthotics " are used to describe the respective allied health fields. an occupational therapist ' s role in prosthetics include therapy, training and evaluations. prosthetic training includes orientation to prosthetics components and terminology, donning and doffing, wearing schedule, and how to care for residual limb and the prosthesis. 
= = = exoskeletons = = = a powered exoskeleton is a wearable mobile machine that is powered by a system of electric motors, pneumatics, levers, hydraulics, or a combination of technologies that allow for limb movement with increased strength and endurance. its design aims to provide back support, sense the user ' s motion, and send a signal to motors which manage the gears. the exoskeleton supports the shoulder, waist and thigh, and assists movement for lifting and holding heavy items, while lowering back stress. = = = adaptive seating and positioning = = = people with balance and motor function challenges often need specialized equipment to sit or stand safely and securely. this equipment is frequently specialized for specific settings such as in a classroom or nursing home. positioning is often important in seating arrangements to ensure that user ' s body pressure is distributed equally without inhibiting movement in a desired way. positioning devices have been developed to aid in allowing people to stand and bear weight on their legs without risk of a fall. generally, dead - end filtration is used for feasibility studies on a laboratory scale. the dead - end membranes are relatively easy to fabricate which reduces the cost of the separation process. the dead - end membrane separation process is easy to implement and the process is usually cheaper than cross - flow membrane filtration. the dead - end filtration process is usually a batch - type process, where the filtering solution is loaded ( or slowly fed ) into the membrane device, which then allows passage of some particles subject to the driving force. the main disadvantage of dead - end filtration is the extensive membrane fouling and concentration polarization. the fouling is usually induced faster at higher driving forces. membrane fouling and particle retention in a feed solution also builds up a concentration gradients and particle backflow ( concentration polarization ). the tangential flow devices are more cost and labor - intensive, but they are less susceptible to fouling due to the sweeping effects and high shear rates of the passing flow. the most commonly used synthetic membrane devices ( modules ) are flat sheets / plates, spiral wounds, and hollow fibers. flat membranes used in filtration and separation processes can be enhanced with surface patterning, where microscopic structures are introduced to improve performance. these patterns increase surface area, optimize water flow, and reduce fouling, leading to higher permeability and longer membrane lifespan. research has shown that such modifications can significantly enhance efficiency in water purification, energy applications, and industrial separations. flat plates are usually constructed as circular thin flat membrane surfaces to be used in dead - end geometry modules. spiral wounds are constructed from similar flat membranes but in the form of a " pocket " containing two membrane sheets separated by a highly porous support plate. several such pockets are then wound around a tube to create a tangential flow geometry and to reduce membrane fouling. hollow fiber modules consist of an assembly of self - supporting fibers with dense skin separation layers, and a more open matrix helping to withstand pressure gradients and maintain structural integrity. 
the hollow fiber modules can contain up to 10, 000 fibers ranging from 200 to 2500 ΞΌm in diameter ; the main advantage of hollow fiber modules is the very large surface area within an enclosed volume, increasing the efficiency of the separation process. the disc tube module uses a cross - flow geometry and consists of a pressure tube and hydraulic discs, which are held by a central tension rod, and membrane cushions that lie between two discs. = = membrane performance and governing equations = = the selection of synthetic membranes Question: Removing waste from the body is the primary function of which body system? A) excretory B) nervous C) circulatory D) skeletal
A) excretory
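the hollow - fibre passage in this context block states that such modules pack a very large membrane area into an enclosed volume but gives no numbers. the sketch below works through the geometry for an assumed module ; only the fibre count ( up to 10, 000 ) and the 200 – 2500 ΞΌm diameter range come from the text, while the fibre length, chosen diameter and shell size are illustrative assumptions.

```python
# Rough area estimate for a hollow-fibre membrane module.
# Assumed values: 10,000 fibres (upper bound from the text), 1 m active length,
# 1.0 mm outer diameter (within the 200-2500 um range quoted in the text),
# and a 0.2 m shell diameter chosen for illustration.
import math

n_fibres = 10_000          # fibre count (text: "up to 10,000 fibres")
d = 1.0e-3                 # fibre diameter in metres (assumed, inside quoted range)
length = 1.0               # active fibre length in metres (assumed)

membrane_area = n_fibres * math.pi * d * length           # lateral area of all fibres
shell_diameter = 0.2                                       # assumed module shell, metres
shell_volume = math.pi * (shell_diameter / 2) ** 2 * length

print(f"membrane area : {membrane_area:8.1f} m^2")         # ~31.4 m^2
print(f"module volume : {shell_volume:8.3f} m^3")          # ~0.031 m^3
print(f"area density  : {membrane_area / shell_volume:8.0f} m^2 per m^3")  # ~1000 m^2/m^3
```

with the packing fraction of about 25 % implied by these numbers, the area - to - volume ratio scales as 4 Γ— packing fraction / diameter, so thinner fibres push the figure well beyond the roughly 1000 m2 per m3 of this example, which is the point the passage makes about hollow - fibre geometry.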
Context: one of the greatest discoveries of modern times is that of the expanding universe, almost invariably attributed to hubble ( 1929 ). what is not widely known is that the original treatise by lemaitre ( 1927 ) contained a rich fusion of both theory and of observation. stiglers law of eponymy is yet again affirmed : no scientific discovery is named after its original discoverer ( merton, 1957 ). an appeal is made for a lemaitre telescope, to honour the discoverer of the expanding universe. while the modern stellar imf shows a rapid decline with increasing mass, theoretical investigations suggest that very massive stars ( > 100 solar masses ) may have been abundant in the early universe. other calculations also indicate that, lacking metals, these same stars reach their late evolutionary stages without appreciable mass loss. after central helium burning, they encounter the electron - positron pair instability, collapse, and burn oxygen and silicon explosively. if sufficient energy is released by the burning, these stars explode as brilliant supernovae with energies up to 100 times that of an ordinary core collapse supernova. they also eject up to 50 solar masses of radioactive ni56. stars less massive than 140 solar masses or more massive than 260 solar masses should collapse into black holes instead of exploding, thus bounding the pair - creation supernovae with regions of stellar mass that are nucleosynthetically sterile. pair - instability supernovae might be detectable in the near infrared out to redshifts of 20 or more and their ashes should leave a distinctive nucleosynthetic pattern. the universe is found to have undergone several phases in which the gravitational constant had different behaviors. during some epochs the energy density of the universe remained constant and the universe remained static. in the radiation dominated epoch the radiation field satisfies stefan ' s formula while the scale factor varies linearly with time. the model enhances the formation of the structure in the universe as observed today. observations of an ancient stellar stream provide the first evidence of a vanished population of extremely metal - poor stellar clusters. their remnants might reveal how the early assembly of the milky way proceeded. the origin of the arc - shaped stellar complexes in the lmc4 region is still unknown. these perfect arcs could not have been formed by o - stars and sne in their centers ; the strong arguments exist also against the possibility of their formation from infalling gas clouds. the origin from microquasars / grb jets is not excluded, because there is the strong concentration of x - ray binaries in the same region and the massive old cluster ngc 1978, probable site of formation of binaries with compact components, is there also. the last possibility is that the source of energy for formation of the stellar arcs and the lmc4 supershell might be the the giant jet from the nucleus of the milky way, which might be active a dozen myr ago. the union of space telescopes and interstellar spaceships guarantees that if extraterrestrial civilizations were common, someone would have come here long ago. two planetary nebulae are shown to belong to the sagittarius dwarf galaxy, on the basis of their radial velocities. this is only the second dwarf spheroidal galaxy, after fornax, found to contain planetary nebulae. 
their existence confirms that this galaxy is at least as massive as the fornax dwarf spheroidal which has a single planetary nebula, and suggests a mass of a few times 10 * * 7 solar masses. the two planetary nebulae are located along the major axis of the galaxy, near the base of the tidal tail. there is a further candidate, situated at a very large distance along the direction of the tidal tail, for which no velocity measurement is available. the location of the planetary nebulae and globular clusters of the sagittarius dwarf galaxy suggests that a significant fraction of its mass is contained within the tidal tail. intense research in the materials science community due to the unique properties that they exhibit. nanostructure deals with objects and structures that are in the 1 – 100 nm range. in many materials, atoms or molecules agglomerate to form objects at the nanoscale. this causes many interesting electrical, magnetic, optical, and mechanical properties. in describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale. nanotextured surfaces have one dimension on the nanoscale, i. e., only the thickness of the surface of an object is between 0. 1 and 100 nm. nanotubes have two dimensions on the nanoscale, i. e., the diameter of the tube is between 0. 1 and 100 nm ; its length could be much greater. finally, spherical nanoparticles have three dimensions on the nanoscale, i. e., the particle is between 0. 1 and 100 nm in each spatial dimension. the terms nanoparticles and ultrafine particles ( ufp ) often are used synonymously although ufp can reach into the micrometre range. the term ' nanostructure ' is often used, when referring to magnetic technology. nanoscale structure in biology is often called ultrastructure. = = = = microstructure = = = = microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25Γ— magnification. it deals with objects from 100 nm to a few cm. the microstructure of a material ( which can be broadly classified into metallic, polymeric, ceramic and composite ) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high / low temperature behavior, wear resistance, and so on. most of the traditional materials ( such as metals and ceramics ) are microstructured. the manufacture of a perfect crystal of a material is physically impossible. for example, any crystalline material will contain defects such as precipitates, grain boundaries ( hall – petch relationship ), vacancies, interstitial atoms or substitutional atoms. the microstructure of materials reveals these larger defects and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties. = = = = macrostructure = = = = macrostructure is the appearance of a material in the scale millimeters to meters, it is the structure of two types of stars are known to have strong, large scale magnetic fields : the main sequence ap stars and the magnetic white dwarfs. this suggest that the former might be the progenitors of the latter. in order to test this idea, i have carried out a search for large scale magnetic fields in stars with evolutionary states which are intermediate, i. e. in horizontal branch stars and in hot subdwarfs. 
there are a few different mechanisms that can cause white dwarf stars to vary in brightness, providing opportunities to probe the physics, structures, and formation of these compact stellar remnants. the observational characteristics of the three most common types of white dwarf variability are summarized : stellar pulsations, rotation, and ellipsoidal variations from tidal distortion in binary systems. stellar pulsations are emphasized as the most complex type of variability, which also has the greatest potential to reveal the conditions of white dwarf interiors. Question: Which discovery revealed that the universe contains many structures composed of millions of stars? A) novas B) galaxies C) black holes D) solar systems
B) galaxies
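The pair-instability passage in the context above gives concrete mass thresholds ( pair instability sets in above roughly 100 solar masses, while stars below 140 or above 260 solar masses collapse to black holes rather than exploding ). A minimal sketch, assuming a hypothetical helper name and sharp cutoffs that the source does not define, shows how those stated thresholds translate into a fate label:

```python
def pair_instability_fate(initial_mass_msun: float) -> str:
    """Classify the end state of a metal-free very massive star using only the
    mass thresholds quoted in the context above (the sharp cutoffs and this
    helper name are illustrative assumptions, not the source's own code)."""
    if initial_mass_msun < 100:
        return "outside the very massive regime discussed in the passage"
    if initial_mass_msun < 140:
        return "collapses to a black hole (below the pair-creation supernova window)"
    if initial_mass_msun <= 260:
        return "pair-instability supernova: explosive oxygen/silicon burning, up to ~50 solar masses of Ni-56 ejected"
    return "collapses to a black hole (above the pair-creation supernova window)"

# example: a 200 solar-mass metal-free star sits inside the 140-260 window
print(pair_instability_fate(200.0))
```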
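The same context also defines nanostructure, microstructure and macrostructure by approximate length ranges ( roughly 1 – 100 nm, 100 nm to a few cm, and millimetres to metres ). The following illustrative classifier, with an assumed function name and boundary handling, simply restates those ranges:

```python
def structural_scale(feature_size_nm: float) -> str:
    """Map a characteristic feature size (in nanometres) onto the structure
    categories described above; the boundary handling is an assumption, since
    the passage gives only approximate, partly overlapping ranges."""
    if feature_size_nm < 1:
        return "below the nanoscale ranges given in the passage"
    if feature_size_nm <= 100:
        return "nanostructure (roughly 1-100 nm)"
    if feature_size_nm <= 1e7:  # 100 nm up to a few cm, per the microstructure definition
        return "microstructure (revealed by a microscope above 25x magnification)"
    return "macrostructure (millimetres to metres)"

print(structural_scale(50))    # nanostructure
print(structural_scale(5e5))   # 0.5 mm, inside the microstructure range
```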
Context: stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed seeds. woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues : wood ( secondary xylem ) and bark ( secondary phloem and cork ). all gymnosperms and many angiosperms are woody plants. some plants reproduce sexually, some asexually, and some via both means. although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. furthermore, structures can be seen as processes, that is, process combinations. = = systematic botany = = systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. it involves, or is related to, biological classification, scientific taxonomy and phylogenetics. biological classification is the method by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. the dominant classification system is called linnaean taxonomy. it includes ranks and binomial nomenclature. the nomenclature of botanical organisms is codified in the international code of nomenclature for algae, fungi, and plants ( icn ) and administered by the international botanical congress. kingdom plantae belongs to domain eukaryota and is broken down recursively until each species is separately classified. the order is : unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed seeds. 
woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues : wood ( secondary xylem ) and bark ( secondary phloem and cork ). all gymnosperms and many angiosperms are woody plants. some plants reproduce sexually, some asexually, and some via both means. although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. furthermore, structures can be seen as processes, that is, process combinations. = = systematic botany = = systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. it involves, or is related to, biological classification, scientific taxonomy and phylogenetics. biological classification is the method by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. the dominant classification system is called linnaean taxonomy. it includes ranks and binomi pigmentation, chloroplast structure and nutrient reserves. the algal division charophyta, sister to the green algal division chlorophyta, is considered to contain the ancestor of true plants. the charophyte class charophyceae and the land plant sub - kingdom embryophyta together form the monophyletic group or clade streptophytina. nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. they include mosses, liverworts and hornworts. pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free - living gametophytes evolved during the silurian period and diversified into several lineages during the late silurian and early devonian. representatives of the lycopods have survived to the present day. by the end of the devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " – their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gymnosperms and angiosperms. gymnosperms produce " naked seeds " not fully enclosed in an ovary ; modern representatives include conifers, cycads, ginkgo, and gnetales. 
angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. = = plant physiology = = plant physiology encompasses all the internal chemical and physical activities of plants associated with life. chemicals obtained from the air, soil and water form from the oil of jasminum grandiflorum which regulates wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. in addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. this can result in adaptive changes in a process known as photomorphogenesis. phytochromes are the photoreceptors in a plant that are sensitive to light. = = plant anatomy and morphology = = plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. all plants are multicellular eukaryotes, their dna stored in nuclei. the characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable , the other can often regrow it. in fact it is possible to grow an entire plant from a single leaf, as is the case with plants in streptocarpus sect. saintpaulia, or even a single cell – which can dedifferentiate into a callus ( a mass of unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. 
leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed seeds. woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues : wood ( secondary xylem ) and bark ( secondary phloem and cork ). all gymnosperms and many angiosperms are woody plants. some plants reproduce sexually, some asexually, and some via both means. although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. furthermore, structures can be seen as processes, that is, process combinations. = = systematic botany = = systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. it involves, or is related to, biological classification, scientific taxonomy and phylogenetics. biological classification is the method by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent – grouping organisms hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. stolons and tubers are examples of shoots that can grow roots. roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. in the event that one of the systems is lost, the other can often regrow it. in fact it is possible to grow an entire plant from a single leaf, as is the case with plants in streptocarpus sect. 
saintpaulia, or even a single cell – which can dedifferentiate into a callus ( a mass of unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ) ##nosperms and angiosperms. gymnosperms produce " naked seeds " not fully enclosed in an ovary ; modern representatives include conifers, cycads, ginkgo, and gnetales. angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. = = plant physiology = = plant physiology encompasses all the internal chemical and physical activities of plants associated with life. chemicals obtained from the air, soil and water form the basis of all plant metabolism. 
the energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. photoautotrophs, including all green plants, algae and cyanobacteria gather energy directly from sunlight by photosynthesis. heterotrophs including all animals, all fungi, all completely parasitic plants, and non - photosynthetic bacteria take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. molecules are moved within plants by transport processes that operate at a variety of spatial scales. subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. minerals and water are transported from roots to other parts of the plant in the transpiration stream. diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. in vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes. = = = plant hormones = = = plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of mimosa pudica, the insect traps of , dendrology is the study of woody plants. many divisions of biology have botanical subfields. these are commonly denoted by prefixing the word plant ( e. g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics ), or prefixing or substituting the prefix phyto - ( e. g. phytochemistry, phytogeography ). the study of fossil plants is called palaeobotany. other fields are denoted by adding or substituting the word botany ( e. g. systematic botany ). phytosociology is a subfield of plant ecology that classifies and studies communities of plants. the intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. different parts of plants also give rise to their own subfields, including xylology, carpology ( or fructology ), and palynology, these being the study of wood, fruit and pollen / spores respectively. botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. 
plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both with one allele inducing a change on the other. = = plant evolution = = the chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria, ( commonly but incorrectly known as " blue - green algae " ) and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. the algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. there are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. the algal division charophyta, sister to the green algal division chlorophyta, is considered to contain the ancestor of true plants. the charophyte class charophyceae and the land plant sub - kingdom embryophyta together form the monophyletic group or clade streptophytina. nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. they include mosses, liverworts and hornworts. pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free - living gametophytes evolved during the silurian period and diversified into several lineages during the late silurian and early devonian. representatives of the lycopods have survived to the present day. by the end of the devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " – their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gym Question: If a tall tree falls over in a crowded forest, which resource becomes available to the surrounding plants? A) air B) soil C) water D) sunlight
D) sunlight
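The botany context above notes that Linnaean classification "is broken down recursively until each species is separately classified". A toy sketch, using the standard major Linnaean ranks rather than anything defined in this document, makes that recursive nesting concrete:

```python
# the standard major Linnaean ranks, nested from domain down to species,
# mirroring the "broken down recursively" description in the context above
LINNAEAN_RANKS = ("domain", "kingdom", "phylum", "class", "order",
                  "family", "genus", "species")

def classify(names, ranks=LINNAEAN_RANKS):
    """Pair each rank with a name, one nested level per rank (illustrative only)."""
    if not names or not ranks:
        return {}
    return {ranks[0]: names[0], "subdivision": classify(names[1:], ranks[1:])}

# example placement of the dog rose, using ordinary published ranks
print(classify(["eukaryota", "plantae", "tracheophyta", "magnoliopsida",
                "rosales", "rosaceae", "rosa", "rosa canina"]))
```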
Context: plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere eventually returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface – giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics.
areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere eventually returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface – giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust.
beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere eventually returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface – giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field – created by the internal motions of the core – produces the magnetosphere which protects earth ' are the cryosphere ( corresponding to ice ) as a distinct portion of the hydrosphere and the pedosphere ( corresponding to soil ) as an active and intermixed sphere.
the following fields of science are generally categorized within the earth sciences : geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). the earth can be described as consisting of several distinct layers, often referred to as spheres : the lithosphere, the hydrosphere, the atmosphere, and the biosphere. this concept of spheres is a useful tool for understanding the earth ' s surface and its various processes. these correspond to rocks, water, air and life. also included by some are the cryosphere ( corresponding to ice ) as a distinct portion of the hydrosphere and the pedosphere ( corresponding to soil ) as an active and intermixed sphere. the following fields of science are generally categorized within the earth sciences : geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans.
physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere eventually returns to the convecting mantle.
volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface – giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere ). earth science or geoscience includes all fields of natural science related to the planet earth. this is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of earth ' s four spheres : the biosphere, hydrosphere / cryosphere, atmosphere, and geosphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes.
geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and cools and solidifies. through subduction, oceanic crust and lithosphere eventually returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface – giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays.
the magnetic field – created by the internal motions of the core – produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make Question: What best describes the mantle of Earth? A) a thin layer that is located on the surface B) a solid layer made of iron and nickel C) the largest layer between the crust and outer core D) the smallest layer that is made up of molten rock
C) the largest layer between the crust and outer core
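The earth-science context above distinguishes divergent, convergent and transform boundaries by how plates move relative to one another. A small hedged mapping, whose key strings are assumptions made only for this illustration, restates that distinction:

```python
def boundary_type(relative_motion: str) -> str:
    """Return the plate-boundary class for a given relative plate motion,
    following the descriptions in the context above (the key strings are
    assumptions made only for this illustration)."""
    mapping = {
        "moving apart": "divergent boundary: new crust is created",
        "moving together": "convergent boundary: crust is brought back into the earth by subduction",
        "sliding past": "transform (conservative) boundary: no lithospheric material created or destroyed",
    }
    return mapping.get(relative_motion, "unknown relative motion")

print(boundary_type("sliding past"))
```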
Context: cell or tissue growth in vitro. a physiological environment can consist of many different parameters such as temperature, pressure, oxygen or carbon dioxide concentration, or osmolality of fluid environment, and it can extend to all kinds of biological, chemical or mechanical stimuli. therefore, there are systems that may include the application of forces such as electromagnetic forces, mechanical pressures, or fluid pressures to the tissue. these systems can be two - or three - dimensional setups. bioreactors can be used in both academic and industry applications. general - use and application - specific bioreactors are also commercially available, which may provide static chemical stimulation or a combination of chemical and mechanical stimulation. cell proliferation and differentiation are largely influenced by mechanical and biochemical cues in the surrounding extracellular matrix environment. bioreactors are typically developed to replicate the specific physiological environment of the tissue being grown ( e. g., flex and fluid shearing for heart tissue growth ). this can allow specialized cell lines to thrive in cultures replicating their native environments, but it also makes bioreactors attractive tools for culturing stem cells. a successful stem - cell - based bioreactor is effective at expanding stem cells with uniform properties and / or promoting controlled, reproducible differentiation into selected mature cell types. there are a variety of bioreactors designed for 3d cell cultures. there are small plastic cylindrical chambers, as well as glass chambers, with regulated internal humidity and moisture specifically engineered for the purpose of growing cells in three dimensions. the bioreactor uses bioactive synthetic materials such as polyethylene terephthalate membranes to surround the spheroid cells in an environment that maintains high levels of nutrients. they are easy to open and close, so that cell spheroids can be removed for testing, yet the chamber is able to maintain 100 % humidity throughout. this humidity is important to achieve maximum cell growth and function. the bioreactor chamber is part of a larger device that rotates to ensure equal cell growth in each direction across three dimensions. quinxell technologies now under quintech life sciences from singapore has developed a bioreactor known as the tisxell biaxial bioreactor which is specially designed for the purpose of tissue engineering. it is the first bioreactor in the world to have a spherical glass chamber with biaxial rotation ; specifically to mimic the rotation of the fetus in the womb ; which provides a conducive environment for the growth of tissues. multiple forms of mechanical stimulation have also been combined into a single their mechanical properties. = = tissue culture = = in many cases, creation of functional tissues and biological structures in vitro requires extensive culturing to promote survival, growth and inducement of functionality. in general, the basic requirements of cells must be maintained in culture, which include oxygen, ph, humidity, temperature, nutrients and osmotic pressure maintenance. tissue engineered cultures also present additional problems in maintaining culture conditions. in standard cell culture, diffusion is often the sole means of nutrient and metabolite transport. 
however, as a culture becomes larger and more complex, such as the case with engineered organs and whole tissues, other mechanisms must be employed to maintain the culture, such as the creation of capillary networks within the tissue. another issue with tissue culture is introducing the proper factors or stimuli required to induce functionality. in many cases, simple maintenance culture is not sufficient. growth factors, hormones, specific metabolites or nutrients, and chemical and physical stimuli are sometimes required. for example, certain cells respond to changes in oxygen tension as part of their normal development, such as chondrocytes, which must adapt to low oxygen conditions or hypoxia during skeletal development. others, such as endothelial cells, respond to shear stress from fluid flow, which is encountered in blood vessels. mechanical stimuli, such as pressure pulses, seem to be beneficial to all kinds of cardiovascular tissue such as heart valves, blood vessels or pericardium. in this article i explain in detail a method for making small amounts of liquid oxygen in the classroom if there is no access to a cylinder of compressed oxygen gas. i also discuss two methods for identifying the fact that it is liquid oxygen as opposed to liquid nitrogen. we have combined measurements of the kinematics, morphology, and oxygen abundance of the ionized gas in \ izw18, one of the most metal - poor galaxies known, to examine the star formation history and chemical mixing processes. an alternative explanation of 1 / f - noise in manganites is suggested and discussed while co - culturing epithelial and adipocyte cells. the hystem kit is another 3 - d platform containing ecm components and hyaluronic acid that has been used for cancer research. additionally, hydrogel constituents can be chemically modified to assist in crosslinking and enhance their mechanical properties.
gymnosperms and angiosperms. gymnosperms produce " naked seeds " not fully enclosed in an ovary ; modern representatives include conifers, cycads, ginkgo, and gnetales. angiosperms produce seeds enclosed in a structure such as a carpel or an ovary. ongoing research on the molecular phylogenetics of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. = = plant physiology = = plant physiology encompasses all the internal chemical and physical activities of plants associated with life. chemicals obtained from the air, soil and water form the basis of all plant metabolism. the energy of sunlight, captured by oxygenic photosynthesis and released by cellular respiration, is the basis of almost all life. photoautotrophs, including all green plants, algae and cyanobacteria, gather energy directly from sunlight by photosynthesis. heterotrophs, including all animals, all fungi, all completely parasitic plants, and non - photosynthetic bacteria, take in organic molecules produced by photoautotrophs and respire them or use them in the construction of cells and tissues. respiration is the oxidation of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially the opposite of photosynthesis. molecules are moved within plants by transport processes that operate at a variety of spatial scales. subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. minerals and water are transported from roots to other parts of the plant in the transpiration stream. diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. in vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem, and plant hormones are transported by a variety of processes. = = = plant hormones = = = plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of mimosa pudica, the insect traps of Question: When Javier exercises, his muscle cells need more oxygen. Which would help Javier's muscle cells receive more oxygen? A) decreasing his respiration rate B) increasing his perspiration rate C) increasing the rate his heart beats D) decreasing the rate of his blood flow
C) increasing the rate his heart beats
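the tissue culture passage above notes that, in standard culture, diffusion is often the sole means of nutrient and metabolite transport, and that larger engineered constructs need capillary networks ; the question above likewise turns on oxygen delivery. as a rough illustrative sketch that is not part of the source text, and using assumed textbook - style values, a steady - state balance between diffusion and zeroth - order oxygen consumption in a slab supplied from one face gives a penetration depth
\[ D \frac{d^{2}c}{dx^{2}} = q , \qquad c(0) = c_{0} \;\Rightarrow\; L = \sqrt{\frac{2 D c_{0}}{q}} . \]
with illustrative values \( D \approx 2 \times 10^{-9}\,\mathrm{m^{2}\,s^{-1}} \), \( c_{0} \approx 0.2\,\mathrm{mol\,m^{-3}} \) and \( q \approx 10^{-2}\,\mathrm{mol\,m^{-3}\,s^{-1}} \), this gives \( L \approx 3 \times 10^{-4}\,\mathrm{m} \), i.e. a few hundred micrometres, which is why static, diffusion - fed constructs remain thin and thicker tissues require perfusion or capillary networks.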
Context: , subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non - mendelian. chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. = = = molecular genetics = = = a considerable amount of new knowledge about plant function comes from . throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β€” one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. 
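as an illustrative aside that is not part of the source text, the two brewing steps described above can be summarised by their textbook overall stoichiometry, written here at the glucose level for simplicity ( malt hydrolysis in practice yields mostly maltose ) :
\[ (\mathrm{C_{6}H_{10}O_{5}})_{n} + n\,\mathrm{H_{2}O} \;\xrightarrow{\text{malt amylases}}\; n\,\mathrm{C_{6}H_{12}O_{6}} \]
\[ \mathrm{C_{6}H_{12}O_{6}} \;\xrightarrow{\text{yeast}}\; 2\,\mathrm{C_{2}H_{5}OH} + 2\,\mathrm{CO_{2}} \]
the first reaction is the enzymatic conversion of starch into sugar and the second is the fermentation of that sugar into ethanol and carbon dioxide.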
the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods.
after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive often injurious, at least with the plants on which i experimented.
" an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like molecular diffusion processes give rise to significant changes in the primary microstructural features. this includes the gradual elimination of porosity, which is typically accompanied by a net shrinkage and overall densification of the component. thus, the pores in the object may close up, resulting in a denser product of significantly greater strength and fracture toughness. another major change in the body during the firing or sintering process will be the establishment of the polycrystalline nature of the solid. significant grain growth tends to occur during sintering, with this growth depending on temperature and duration of the sintering process. the growth of grains will result in some form of grain size distribution, which will have a significant impact on the ultimate physical properties of the material. in particular, abnormal grain growth in which certain grains grow very large in a matrix of finer grains will significantly alter the physical and mechanical properties of the obtained ceramic. 
in the sintered body, grain sizes are a product of the thermal processing parameters as well as the initial particle size, or possibly the sizes of aggregates or particle clusters which arise during the initial stages of processing. the ultimate microstructure ( and thus the physical properties ) of the final product will be limited by and subject to the form of the structural template or precursor which is created in the initial stages of chemical synthesis and physical forming. hence the importance of chemical powder and polymer processing as it pertains to the synthesis of industrial ceramics, glasses and glass - ceramics. there are numerous possible refinements of the sintering process. some of the most common involve pressing the green body to give the densification a head start and reduce the sintering time needed. sometimes organic binders such as polyvinyl alcohol are added to hold the green body together ; these burn out during the firing ( at 200 – 350 Β°c ). sometimes organic lubricants are added during pressing to increase densification. it is common to combine these, and add binders and lubricants to a powder, then press. ( the formulation of these organic chemical additives is an art in itself. this is particularly important in the manufacture of high performance ceramics such as those used by the billions for electronics, in capacitors, inductors, sensors, etc. ) a slurry can be used in place of a powder, and then cast into a desired shape, dried and then sintered. indeed, traditional pottery is done with this type of method, using a plastic mixture worked with the hands. various forms that are characteristic of its life cycle. there are four key processes that underlie development : determination, differentiation, morphogenesis, and growth. determination sets the developmental fate of a cell, which becomes more restrictive during development. differentiation is the process by which specialized cells arise from less specialized cells such as stem cells. stem cells are undifferentiated or partially differentiated cells that can differentiate into various types of cells and proliferate indefinitely to produce more of the same stem cell. cellular differentiation dramatically changes a cell ' s size, shape, membrane potential, metabolic activity, and responsiveness to signals, which are largely due to highly controlled modifications in gene expression and epigenetics. with a few exceptions, cellular differentiation almost never involves a change in the dna sequence itself. thus, different cells can have very different physical characteristics despite having the same genome. morphogenesis, or the development of body form, is the result of spatial differences in gene expression. a small fraction of the genes in an organism ' s genome called the developmental - genetic toolkit control the development of that organism. these toolkit genes are highly conserved among phyla, meaning that they are ancient and very similar in widely separated groups of animals. differences in deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. among the most important toolkit genes are the hox genes. hox genes determine where repeating parts, such as the many vertebrae of snakes, will grow in a developing embryo or larva. = = evolution = = = = = evolutionary processes = = = evolution is a central organizing concept in biology. it is the change in heritable characteristics of populations over successive generations. 
in artificial selection, animals were selectively bred for specific traits. given that traits are inherited, populations contain a varied mix of traits, and reproduction is able to increase any population, darwin argued that in the natural world, it was nature that played the role of humans in selecting for specific traits. darwin inferred that individuals who possessed heritable traits better adapted to their environments are more likely to survive and produce more offspring than other individuals. he further inferred that this would lead to the accumulation of favorable traits over successive generations, thereby increasing the match between the organisms and their environment. = = = speciation = = = a species is a group of organisms that mate with one another and speciation is the process by which one lineage splits into two lineages as a result of having evolved independently from each other ( or underlined when italics are not available ). the evolutionary relationships and heredity of a group of organisms is called its phylogeny. phylogenetic studies attempt to discover phylogenies. the basic approach is to use similarities based on shared inheritance to determine relationships. as an example, species of pereskia are trees or bushes with prominent leaves. they do not obviously resemble a typical leafless cactus such as an echinocactus. however, both pereskia and echinocactus have spines produced from areoles ( highly specialised pad - like structures ) suggesting that the two genera are indeed related. judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. the cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups ( homoplasies ) or those left over from ancestors ( plesiomorphies ) – and derived characters, which have been passed down from innovations in a shared ancestor ( apomorphies ). only derived characters, such as the spine - producing areoles of cacti, provide evidence for descent from a common ancestor. the results of cladistic analyses are expressed as cladograms : tree - like diagrams showing the pattern of evolutionary branching and descent. from the 1990s onwards, the predominant approach to constructing phylogenies for living plants has been molecular phylogenetics, which uses molecular characters, particularly dna sequences, rather than morphological characters like the presence or absence of spines and areoles. the difference is that the genetic code itself is used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to. clive stace describes this as having " direct access to the genetic basis of evolution. " as a simple example, prior to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than animals. genetic evidence suggests that the true evolutionary relationship of multicelled organisms is as shown in the cladogram below – fungi are more closely related to animals than to plants. 
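the passage above describes molecular phylogenetics as using dna sequences directly to judge evolutionary relationships. the short python sketch below is purely illustrative and is not the method used by any group cited here ; the sequences and taxon names are made up. it computes pairwise p - distances ( the fraction of aligned sites that differ ) between toy sequences, which is the kind of distance matrix a distance - based tree - building method such as neighbor joining would start from.

from itertools import combinations

# toy aligned sequences ( hypothetical, for illustration only )
seqs = {
    "fungus": "ATGGCTAAGT",
    "animal": "ATGGCTAAGC",
    "plant": "ATGACTGAGT",
}

def p_distance(a: str, b: str) -> float:
    # proportion of aligned sites at which the two sequences differ
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

# pairwise distance matrix over all pairs of taxa
dist = {
    frozenset(pair): p_distance(seqs[pair[0]], seqs[pair[1]])
    for pair in combinations(seqs, 2)
}

for pair, d in sorted(dist.items(), key=lambda kv: kv[1]):
    print(sorted(pair), round(d, 2))

# the smallest distance here is between the toy "fungus" and "animal" sequences,
# mirroring the cladogram result quoted above that fungi are closer to animals
# than to plants ; a real analysis would use aligned genes and a proper
# tree - building algorithm rather than this toy example.

in this toy data the fungus and animal sequences differ at the fewest sites, so a distance - based method would join them first.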
in 1998, the angiosperm phylogeny group published a phylogeny for flowering plants based on an analysis of the broad definition of " utilizing a biotechnological system to make products ". indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. agriculture has been theorized to have become the dominant way of producing food since the neolithic revolution. through early biotechnology, the earliest farmers selected and bred the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests.
before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley – to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer ( univ. calif. at san francisco ) and stanley n. cohen ( stanford ) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. the commercial viability of a biotechnology industry was significantly expanded on june 16, 1980, when the united states Question: Which process has most likely occurred when new traits appear in a species? A) selective breeding B) genetic mutation C) crossbreeding D) cloning
B) genetic mutation
Context: by physicians, physician assistants, nurse practitioners, or other health professionals who have first contact with a patient seeking medical treatment or care. these occur in physician offices, clinics, nursing homes, schools, home visits, and other places close to patients. about 90 % of medical visits can be treated by the primary care provider. these include treatment of acute and chronic illnesses, preventive care and health education for all ages and both sexes. secondary care medical services are provided by medical specialists in their offices or clinics or at local community hospitals for a patient referred by a primary care provider who first diagnosed or treated the patient. referrals are made for those patients who required the expertise or procedures performed by specialists. these include both ambulatory care and inpatient services, emergency departments, intensive care medicine, surgery services, physical therapy, labor and delivery, endoscopy units, diagnostic laboratory and medical imaging services, hospice centers, etc. some primary care providers may also take care of hospitalized patients and deliver babies in a secondary care setting. tertiary care medical services are provided by specialist hospitals or regional centers equipped with diagnostic and treatment facilities not generally available at local hospitals. these include trauma centers, burn treatment centers, advanced neonatology unit services, organ transplants, high - risk pregnancy, radiation oncology, etc. modern medical care also depends on information – still delivered in many health care settings on paper records, but increasingly nowadays by electronic means. in low - income countries, modern healthcare is often too expensive for the average person. international healthcare policy researchers have advocated that " user fees " be removed in these areas to ensure access, although even after removal, significant costs and barriers remain. separation of prescribing and dispensing is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug. in the western world there are centuries of tradition for separating pharmacists from physicians. in asian countries, it is traditional for physicians to also provide drugs. = = branches = = working together as an interdisciplinary team, many highly trained health professionals besides medical practitioners are involved in the delivery of modern health care. examples include : nurses, emergency medical technicians and paramedics, laboratory scientists, pharmacists, podiatrists, physiotherapists, respiratory therapists, speech therapists, occupational therapists, radiographers, dietitians, and bioengineers, medical physicists, surgeons, surgeon ' s assistant, surgical techno and peripheral blood. they concluded from the results that immuno - cytochemical staining of bone marrow and peripheral blood is a sensitive and simple way to detect and quantify breast cancer cells. one of the main reasons for metastatic relapse in patients with solid tumours is the early dissemination of malignant cells. the use of monoclonal antibodies ( mabs ) specific for cytokeratins can identify disseminated individual epithelial tumor cells in the bone marrow. one study reports on having developed an immuno - cytochemical procedure for simultaneous labeling of cytokeratin component no. 18 ( ck18 ) and prostate specific antigen ( psa ). 
this would help in the further characterization of disseminated individual epithelial tumor cells in patients with prostate cancer. the twelve control aspirates from patients with benign prostatic hyperplasia showed negative staining, which further supports the specificity of ck18 in detecting epithelial tumour cells in bone marrow. in most cases of malignant disease complicated by effusion, neoplastic cells can be easily recognized. however, in some cases, malignant cells are not so easily seen or their presence is too doubtful to call it a positive report. the use of immuno - cytochemical techniques increases diagnostic accuracy in these cases. ghosh, mason and spriggs analysed 53 samples of pleural or peritoneal fluid from 41 patients with malignant disease. conventional cytological examination had not revealed any neoplastic cells. three monoclonal antibodies ( anti - cea, ca 1 and hmfg - 2 ) were used to search for malignant cells. immunocytochemical labelling was performed on unstained smears, which had been stored at - 20 Β°c up to 18 months. twelve of the forty - one cases in which immuno - cytochemical staining was performed, revealed malignant cells. the result represented an increase in diagnostic accuracy of approximately 20 %. the study concluded that in patients with suspected malignant disease, immuno - cytochemical labeling should be used routinely in the examination of cytologically negative samples and has important implications with respect to patient management. another application of immuno - cytochemical staining is for the detection of two antigens in the same smear. double staining with light chain antibodies and with t and b cell markers can indicate the neoplastic origin of a lymph , social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves ) psychiatric ( orientation, mental state, mood, evidence of abnormal perception or thought ). respiratory ( large airways and lungs ) skin vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation it is to likely focus on areas of interest highlighted in the medical history and may not include everything listed above. the treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. 
a follow - up may be advised. depending upon the health insurance plan and the managed care system, various forms of " utilization review ", such as prior authorization of tests, may place barriers on accessing expensive services. the medical decision - making ( mdm ) process includes the analysis and synthesis of all the above data to come up with a list of possible diagnoses ( the differential diagnoses ), along with an idea of what needs to be done to obtain a definitive diagnosis that would explain the patient ' s problem. on subsequent visits, the process may be repeated in an abbreviated manner to obtain any new history, symptoms, physical findings, lab or imaging results, or specialist consultations. = = institutions = = contemporary interventions lacked sufficient evidence to support either benefit or harm. in modern clinical practice, physicians and physician assistants personally assess patients to diagnose, prognose, treat, and prevent disease using clinical judgment. the doctor - patient relationship typically begins with an interaction with an examination of the patient ' s medical history and medical record, followed by a medical interview and a physical examination. basic diagnostic medical devices ( e. g., stethoscope, tongue depressor ) are typically used. after examining for signs and interviewing for symptoms, the doctor may order medical tests ( e. g., blood tests ), take a biopsy, or prescribe pharmaceutical drugs or other therapies. differential diagnosis methods help to rule out conditions based on the information provided. during the encounter, properly informing the patient of all relevant facts is an important part of the relationship and the development of trust. the medical encounter is then documented in the medical record, which is a legal document in many jurisdictions. follow - ups may be shorter but follow the same general procedure, and specialists follow a similar process. the diagnosis and treatment may take only a few minutes or a few weeks, depending on the complexity of the issue. the components of the medical interview and encounter are : chief complaint ( cc ) : the reason for the current medical visit. these are the symptoms. they are in the patient ' s own words and are recorded along with the duration of each one. also called chief concern or presenting complaint. current activity : occupation, hobbies, what the patient actually does. family history ( fh ) : listing of diseases in the family that may impact the patient. a family tree is sometimes used. history of present illness ( hpi ) : the chronological order of events of symptoms and further clarification of each symptom. distinguishable from history of previous illness, often called past medical history ( pmh ). medical history comprises hpi and pmh. medications ( rx ) : what drugs the patient takes including prescribed, over - the - counter, and home remedies, as well as alternative and herbal medicines or remedies. allergies are also recorded. past medical history ( pmh / pmhx ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). 
social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ).
these tests are performed by techs without a medical degree, but the interpretation of these tests is done by a medical professional. diagnostic radiology is concerned with imaging of the body, e. g. by x - rays, x - ray computed tomography, ultrasonography, and nuclear magnetic resonance tomography. interventional radiologists can access areas in the body under imaging for an intervention or diagnostic sampling. nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances ( radiopharmaceuticals ) to the body, which can then be imaged outside the body by a gamma camera or a pet scanner. each radiopharmaceutical consists of two parts : a tracer that is specific for the function under study ( e. g., neurotransmitter pathway, metabolic pathway, blood flow, or other ), and a radionuclide ( usually either a gamma - emitter or a positron emitter ). there is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the pet / ct scanner. pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic, physiologic changes produced by them. as a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence - based medicine. many modern molecular tests such as flow cytometry, polymerase chain reaction ( pcr ), immunohistochemistry, cytogenetics, gene rearrangements studies and fluorescent in situ hybridization ( fish ) fall within the territory of pathology. = = = = other major specialties = = = = the following are some major medical specialties that do not directly fit into any of the above - mentioned groups : anesthesiology ( also known as anaesthetics ) : concerned with the perioperative management of the surgical patient. the anesthesiologist ' s role during surgery is to prevent derangement in the vital organs ' ( i. e. brain, heart, kidneys ) functions and postoperative pain. outside of the operating room, the anesthesiology physician also serves the same function in the labor and delivery ward, and some are specialized in critical medicine.
emergency medicine is concerned with the diagnosis and treatment of acute or life - threatening conditions, including trauma, surgical, medical, pediatric, and psychiatric emergencies. family medicine, family practice, general practice or primary care is, in many countries, the first port - of - call for patients with non - emergency medical problems. family physicians often provide services across a broad range of settings including office based practices, emergency department coverage, inpatient care, and nursing home care.
the medical decision - making ( mdm ) process includes the analysis and synthesis of all the above data to come up with a list of possible diagnoses ( the differential diagnoses ), along with an idea of what needs to be done to obtain a definitive diagnosis that would explain the patient ' s problem. on subsequent visits, the process may be repeated in an abbreviated manner to obtain any new history, symptoms, physical findings, lab or imaging results, or specialist consultations. = = institutions = = contemporary medicine is, in general, conducted within health care systems. legal, credentialing, and financing frameworks are established by individual governments, augmented on occasion by international organizations, such as churches. the characteristics of any given health care system have a significant impact on the way medical care is provided. Question: A patient visits the doctor for a checkup and is diagnosed with skin cancer. Which of the following is the most likely cause of this disease? A) poor eating and sleeping habits B) exposure to ultraviolet rays C) a defect in the person's immune system D) working with plants and animals
B) exposure to ultraviolet rays
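The interview components listed in the context above ( chief complaint, history of present illness, past medical history, medications, allergies, family history, social history, review of systems ) describe a simple structured record. The sketch below is an illustration only of one way such a record could be organized in code; the class and field names are hypothetical and do not come from the source.

```python
# illustrative sketch only: one possible way to hold the interview components
# described above. all class and field names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class MedicalHistory:
    chief_complaint: str                                              # cc: reason for the visit, in the patient's own words
    history_of_present_illness: str                                   # hpi: chronological account of current symptoms
    past_medical_history: List[str] = field(default_factory=list)     # pmh: prior problems, operations, vaccinations
    medications: List[str] = field(default_factory=list)              # rx: prescribed, over-the-counter, herbal remedies
    allergies: List[str] = field(default_factory=list)
    family_history: List[str] = field(default_factory=list)           # fh: diseases in the family
    social_history: Dict[str, str] = field(default_factory=dict)      # sh: residence, occupation, habits
    review_of_systems: Dict[str, str] = field(default_factory=dict)   # ros: organ-system screening questions


# example usage with made-up data
encounter = MedicalHistory(
    chief_complaint="itchy mole on left forearm, 3 months",
    history_of_present_illness="lesion has grown and darkened; no bleeding",
    social_history={"occupation": "gardener", "tobacco": "none"},
)
print(encounter.chief_complaint)
```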
Context: electromagnetic soliton - particle with both quasi - static and quick - oscillating wave parts is considered. its mass, spin, charge, and magnetic moment appear naturally when the interaction with distant solitons is considered. the substantiation of dirac equation for the wave part of the interacting soliton - particle is given. this is a comment on phys. rev. lett. 98, 180403 ( 2007 ) [ arxiv : 0704. 2162 ]. an important question of theoretical physics is whether sound is able to propagate in vacuums at all and if this is the case, then it must lead to the reinterpretation of one zero - restmass particle which corresponds to vacuum - sound waves. taking the electron - neutrino as the corresponding particle, its observed non - vanishing rest - energy may only appear for neutrino - propagation inside material media. the idea may also influence the physics of dense matter, restricting the maximum speed of sound, both in vacuums and in matter to the speed of light. relativistically covariant equation of motion for real dust particle under the action of electromagnetic radiation is derived. the particle is neutral in charge. equation of motion is expressed in terms of particle ' s optical properties, standardly used in optics for stationary particles. energy is no doubt an intuitive concept. following a previous analysis on the nature of elementary particles and associated elementary quantum fields, the peculiar status and role of energy is scrutinised further at elementary and larger scales. energy physical characterisation shows that it is a primordial component of reality highlighting the quantum fields natural tendencies to interact, the elementary particles natural tendency to constitute complex bodies and every material thing natural tendency to actualise and be active. energy therefore is a primordial notion in need of a proper assessment. we calculate the transmission coefficient for electrons passing through the helically shaped potential barrier, which can be, for example, produced by dna molecules. we have obtained explicit formulas representing the functions e ( z ) appearing in the theory of the " sonine spaces " associated by de branges with the fourier transform. aristotle explained the motion of celestial bodies through a higher power such as god. aristotle did not have the technological advancements that would have explained the motion of celestial bodies. in addition, aristotle had many views on the elements. he believed that everything was derived from the elements earth, water, air, fire, and lastly the aether. the aether was a celestial element, and therefore made up the matter of the celestial bodies. the elements of earth, water, air and fire were derived from a combination of two of the characteristics of hot, wet, cold, and dry, and all had their inevitable place and motion. the motion of these elements begins with earth being the closest to " the earth, " then water, air, fire, and finally aether. in addition to the makeup of all things, aristotle came up with theories as to why things did not return to their natural motion. he understood that water sits above earth, air above water, and fire above air in their natural state. he explained that although all elements must return to their natural state, the human body and other living things have a constraint on the elements – thus not allowing the elements making one who they are to return to their natural state.
the important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy ; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes ; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research. in the hellenistic age scholars frequently employed the principles developed in earlier greek thought : the application of mathematics and deliberate empirical research, in their scientific investigations. thus, clear unbroken lines of influence lead from ancient greek and hellenistic philosophers, to medieval muslim philosophers and scientists, to the european renaissance and enlightenment, to the secular sciences of the modern day. neither reason nor inquiry began with the ancient greeks, but the socratic method did, along with the idea of forms, give great advances in geometry, logic, and the natural sciences. according to benjamin farrington, former professor of classics at swansea university : " men were weighing for thousands of years before archimedes worked out the laws of equilibrium ; they must have had practical and intuitional knowledge of the principals involved. what archimedes did was to sort out the theoretical implications of this practical knowledge and present the resulting body of knowledge as a logically coherent system. " and again : " with astonishment we find ourselves on the threshold of modern science ##odynamic and mechanical descriptions of physical properties. = = = = nanostructure = = = = materials, which atoms and molecules form constituents in the nanoscale ( i. e., they form nanostructures ) are called nanomaterials. nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit. nanostructure deals with objects and structures that are in the 1 – 100 nm range. in many materials, atoms or molecules agglomerate to form objects at the nanoscale. this causes many interesting electrical, magnetic, optical, and mechanical properties. in describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale. nanotextured surfaces have one dimension on the nanoscale, i. e., only the thickness of the surface of an object is between 0. 1 and 100 nm. nanotubes have two dimensions on the nanoscale, i. e., the diameter of the tube is between 0. 1 and 100 nm ; its length could be much greater. finally, spherical nanoparticles have three dimensions on the nanoscale, i. e., the particle is between 0. 1 and 100 nm in each spatial dimension. the terms nanoparticles and ultrafine particles ( ufp ) often are used synonymously although ufp can reach into the micrometre range. the term ' nanostructure ' is often used, when referring to magnetic technology. nanoscale structure in biology is often called ultrastructure. = = = = microstructure = = = = microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25Γ— magnification. it deals with objects from 100 nm to a few cm. the microstructure of a material ( which can be broadly classified into metallic, polymeric, ceramic and composite ) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high / low temperature behavior, wear resistance, and so on. 
most of the traditional materials ( such as metals and ceramics ) are microstructured. the manufacture of a perfect crystal of a material is physically impossible. for example, any crystalline material will contain defects such as precipitates, grain boundaries ( hall - petch relationship ), vacancies, interstitial atoms or substitutional atoms. the development and interaction of starting vortices initiated by dielectric barrier discharge ( dbd ) plasma actuators in quiescent air are illustrated in the attached fluid dynamics videos. these include a series of smoke flow visualisations, showing the starting vortices moving parallel or normal to the wall at several different actuator configurations. Question: Which of the following forms of energy can travel by vibrating particles of air? A) electrical B) light C) magnetic D) sound
D) sound
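The nanostructure passage in the context above classifies objects by how many of their dimensions fall in the nanoscale band ( roughly 0.1 - 100 nm ): one for nanotextured surfaces, two for nanotubes, three for nanoparticles. The snippet below is a minimal sketch of that classification rule; the function and variable names are hypothetical, not taken from the source.

```python
# illustrative sketch, not from the source: counts how many of an object's three
# characteristic lengths fall in the nanoscale band described above (0.1 - 100 nm)
# and names the corresponding class of nanostructure.

NANO_MIN_NM = 0.1
NANO_MAX_NM = 100.0


def classify_nanostructure(dims_nm):
    """dims_nm: three characteristic lengths in nanometres (x, y, z)."""
    nanoscale_dims = sum(NANO_MIN_NM <= d <= NANO_MAX_NM for d in dims_nm)
    labels = {
        0: "not a nanostructure",
        1: "nanotextured surface (one nanoscale dimension, e.g. a thin coating)",
        2: "nanotube-like (two nanoscale dimensions, length may be much greater)",
        3: "nanoparticle (all three dimensions on the nanoscale)",
    }
    return labels[nanoscale_dims]


# example: a tube 5 nm across but 2 micrometres long
print(classify_nanostructure((5.0, 5.0, 2000.0)))   # nanotube-like
print(classify_nanostructure((50.0, 50.0, 50.0)))   # nanoparticle
```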
Context: joints. = = = metal alloys = = = the alloys of iron ( steel, stainless steel, cast iron, tool steel, alloy steels ) make up the largest proportion of metals today both by quantity and commercial value. iron alloyed with various proportions of carbon gives low, mid and high carbon steels. an iron - carbon alloy is only considered steel if the carbon level is between 0. 01 % and 2. 00 % by weight. for steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. heat treatment processes such as quenching and tempering can significantly change these properties, however. in contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. cast iron is defined as an iron – carbon alloy with more than 2. 00 %, but less than 6. 67 % carbon. stainless steel is defined as a regular steel alloy with greater than 10 % by weight alloying content of chromium. nickel and molybdenum are typically also added in stainless steels. other significant metallic alloys are those of aluminium, titanium, copper and magnesium. copper alloys have been known for a long time ( since the bronze age ), while the alloys of the other three metals have been relatively recently developed. due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. the alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. these materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications. = = = semiconductors = = = a semiconductor is a material that has a resistivity between a conductor and insulator. modern day electronics run on semiconductors, and the industry had an estimated us $ 530 billion market in 2021. its electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. semiconductor materials are used to build diodes, transistors, light - emitting diodes ( leds ), and analog and digital electric circuits, among their many uses. semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. semiconductor devices are manufactured both as single discrete devices and as integrated circuits ( ics ), which consist of a number β€” from a is also higher at high temperature, as shown by carnot ' s theorem. in a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. such engines are possible in laboratory settings, but mass - production is not feasible with current technology. work is being done in developing ceramic parts for gas turbine engines. currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. 
turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic, but naturally occurring, bone mineral. ultimately these ceramic materials may be used as bone replacements or with the incorporation of protein collagens, synthetic bones.
durable actinide - containing ceramic materials have many applications such as in nuclear fuels for burning excess pu and in chemically - inert sources of alpha irradiation for power supply of unmanned space vehicles or to produce electricity for microelectronic devices. both use and disposal of radioactive actinides require their immobilization in a durable host material. nuclear waste long - lived radionuclides such as actinides are immobilized using chemically - durable crystalline materials based on polycrystalline ceramics and large single crystals. alumina ceramics are widely utilized in the chemical industry due to their excellent chemical stability and high resistance to corrosion. it is used as acid - resistant pump impellers and pump bodies, ensuring long - lasting performance in transferring aggressive fluids. they are also used in acid - carrying pipe linings to prevent contamination and maintain fluid purity, which is crucial in industries like pharmaceuticals and food processing. valves made from alumina ceramics demonstrate exceptional durability and resistance to chemical attack, making them reliable for controlling the flow of corrosive liquids. = casting, also called the lost wax process, die casting, centrifugal casting, both vertical and horizontal, and continuous castings. each of these forms has advantages for certain metals and applications considering factors like magnetism and corrosion. forging – a red - hot billet is hammered into shape. rolling – a billet is passed through successively narrower rollers to create a sheet. extrusion – a hot and malleable metal is forced under pressure through a die, which shapes it before it cools. machining – lathes, milling machines and drills cut the cold metal to shape. sintering – a powdered metal is heated in a non - oxidizing environment after being compressed into a die. fabrication – sheets of metal are cut with guillotines or gas cutters and bent and welded into structural shape. laser cladding – metallic powder is blown through a movable laser beam ( e. g. mounted on a nc 5 - axis machine ). the resulting melted metal reaches a substrate to form a melt pool. by moving the laser head, it is possible to stack the tracks and build up a three - dimensional piece. 3d printing – sintering or melting amorphous powder metal in a 3d space to make any object to shape. cold - working processes, in which the product ' s shape is altered by rolling, fabrication or other processes, while the product is cold, can increase the strength of the product by a process called work hardening. work hardening creates microscopic defects in the metal, which resist further changes of shape. = = = heat treatment = = = metals can be heat - treated to alter the properties of strength, ductility, toughness, hardness and resistance to corrosion. common heat treatment processes include annealing, precipitation strengthening, quenching, and tempering : annealing process softens the metal by heating it and then allowing it to cool very slowly, which gets rid of stresses in the metal and makes the grain structure large and soft - edged so that, when the metal is hit or stressed it dents or perhaps bends, rather than breaking ; it is also easier to sand, grind, or cut annealed metal. quenching is the process of cooling metal very quickly after heating, thus " freezing " the metal ' s molecules in the very hard martensite form, which makes the metal harder. 
tempering relieves stresses in the metal that were caused by the hardening process ; tempering makes the metal less hard while making it better able to sustain the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not be clearly ascertained and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. this includes the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. mining may not be necessary, if the ore body and physical environment are conducive to leaching. leaching dissolves minerals in an ore body and results in an enriched solution. the solution in 1738. the spinning jenny, invented in 1764, was a machine that used multiple spinning wheels ; however, it produced low quality thread. the water frame patented by richard arkwright in 1767, produced a better quality thread than the spinning jenny. the spinning mule, patented in 1779 by samuel crompton, produced a high quality thread. the power loom was invented by edmund cartwright in 1787. in the mid - 1750s, the steam engine was applied to the water power - constrained iron, copper and lead industries for powering blast bellows. these industries were located near the mines, some of which were using steam engines for mine pumping. steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. steam powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. 
( lime rich slag was not free - flowing at the previously used temperatures. ) with a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. coal and coke were cheaper and more abundant fuel. as a result, iron production rose significantly during the last decades of the 18th century. coal converted to coke fueled higher temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century, notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress building block. ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. hot pressing provides higher density material. chemical vapor deposition can place a film of a ceramic on another material. cermets are ceramic particles containing some metals. the wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties. ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. this process involves the strategic addition of second - phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. this approach enhances fracture toughness, paving the way for the creation of advanced, high - performance ceramics in various industries. 
= = = composites = = = another application of materials science in industry is making composite materials. these are structured materials composed of two or more macroscopic phases. applications range from structural elements such as steel - reinforced concrete, to the thermal insulating tiles, which play a key and integral role in nasa ' s space shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re - entry into the earth ' s atmosphere. one example is reinforced carbon - carbon ( rcc ), the light gray material, which withstands re - entry temperatures up to 1, 510 Β°c ( 2, 750 Β°f ) and protects the space shuttle ' s wing leading edges and nose cap. rcc is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. after curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron – carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales is a commonly used practice that helps better understand the metal ' s elasticity and plasticity for different applications and production processes. in a saltwater environment, most ferrous metals and some non - ferrous alloys corrode quickly. 
metallurgy is distinct from the craft of metalworking. metalworking relies on metallurgy in a similar manner to how medicine relies on medical science for technical advancement. a specialist practitioner of metallurgy is known as a metallurgist. the science of metallurgy is further subdivided into two broad categories : chemical metallurgy and physical metallurgy. chemical metallurgy is chiefly concerned with the reduction and oxidation of metals, and the chemical performance of metals. subjects of study in chemical metallurgy include mineral processing, the extraction of metals, thermodynamics, electrochemistry, and chemical degradation ( corrosion ). in contrast, physical metallurgy focuses on the mechanical properties of metals, the physical properties of metals, and the physical performance of metals. topics studied in physical metallurgy include crystallography, material characterization, mechanical metallurgy, phase transformations, and failure mechanisms. historically, metallurgy has predominantly focused on the production of metals. metal production begins with the processing of ores to extract the metal, and includes the mixture of metals to make alloys. metal alloys are often a blend of at least two different metallic elements. however, non - metallic elements are often added to alloys in order to achieve properties suitable for an application. the study of metal production is subdivided into ferrous metallurgy ( also known as black metallurgy ) and non - ferrous metallurgy, also known as colored metallurgy. ferrous metallurgy involves processes and alloys based on iron, while non - ferrous metallurgy involves processes and alloys based on other metals. the production of ferrous metals accounts for 95 % of world metal production. modern metallurgists work in both emerging and traditional areas as part of an interdisciplinary team alongside material scientists and other engineers. some traditional areas include mineral processing, metal production, heat treatment, failure analysis, and the joining of metals ( including welding, brazing, and soldering ). emerging areas for metallurgists include nanotechnology, superconductors, composites, biomedical materials, electronic materials ( semiconductors ) and surface engineering. = = etymology and pronunciation = = metallurgy derives from the ancient greek μεταλλουργός, metallourgos, " worker in metal ", from μέταλλον, metallon, " mine, metal " + ἔργον, ergon. metals exposed to cold or cryogenic conditions may undergo a ductile to brittle transition and lose their toughness, becoming more brittle and prone to cracking. metals under continual cyclic loading can suffer from metal fatigue. metals under constant stress at elevated temperatures can creep. = = = metalworking processes = = = casting – molten metal is poured into a shaped mold. variants of casting include sand casting, investment Question: A chef uses a metal spoon to stir noodles cooking in a pan. After five minutes, she notices that the thermal energy from the pan has made the spoon A) cold. B) hot. C) wet. D) dry.
B) hot.
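The context above quotes explicit composition thresholds for iron - carbon alloys: steel between 0.01 and 2.00 wt% carbon, cast iron above 2.00 and below 6.67 wt% carbon, and stainless steel as a steel with more than 10 wt% chromium. The snippet below is a minimal sketch applying exactly those quoted thresholds; the function name and example compositions are hypothetical.

```python
# illustrative sketch, not from the source: applies the composition thresholds
# quoted in the context above (steel: 0.01-2.00 wt% C; cast iron: >2.00 and
# <6.67 wt% C; stainless steel: a steel with >10 wt% Cr).

def classify_iron_alloy(carbon_wt_pct, chromium_wt_pct=0.0):
    if 0.01 <= carbon_wt_pct <= 2.00:
        if chromium_wt_pct > 10.0:
            return "stainless steel"
        return "carbon steel"
    if 2.00 < carbon_wt_pct < 6.67:
        return "cast iron"
    return "outside the steel / cast-iron ranges quoted above"


# examples with made-up compositions
print(classify_iron_alloy(0.4))                        # carbon steel
print(classify_iron_alloy(0.08, chromium_wt_pct=18))   # stainless steel
print(classify_iron_alloy(3.2))                        # cast iron
```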
Context: time - dependent distribution of the global extinction of megafauna is compared with the growth of human population. there is no correlation between the two processes. furthermore, the size of human population and its growth rate were far too small to have any significant impact on the environment and on the life of megafauna. the prevalence of sexual reproduction ( " sex " ) in eukaryotes is an enigma of evolutionary biology. sex increases genetic variation only tells its long - term superiority in essence. the accumulation of harmful mutations causes an immediate and ubiquitous pressure for organisms. contrary to the common sense, our theoretical model suggests that reproductive rate can influence initiatively the accumulation of harmful mutations. the interaction of reproductive rate and the integrated harm of mutations causes a critical reproductive rate r *. a population will become irreversibly extinct once the reproductive rate reduces to lower than r *. a sexual population has a r * lower than 1 and an asexual population has a r * higher than 1. the mean reproductive rate of a population reached to the carrying capacity has to reduce to 1. that explains the widespread sex as well as the persistence of facultative and asexual organisms. computer simulations support significantly our conclusion. ##ructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. 
the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models ( e. g., trunks of trees, boulders and accumulations of gravel ) from a river bed furnishes a simple and efficient means of increasing the discharging capacity of its channel. such removals will consequently lower the height of floods upstream. every impediment to the flow, in proportion to its extent, raises the level of the river above it so as to produce the additional artificial fall necessary to convey the flow through the restricted channel, thereby reducing the total available fall. reducing the length of the channel by substituting straight cuts for a winding course is the only way in which the effective fall can be increased. this involves some loss of capacity in the channel as a whole, and in the case of a large river with a considerable flow it is difficult to maintain a straight cut owing to the tendency of the current to erode the banks and form again a sinuous channel. even if the cut is preserved by protecting the banks, it is liable to produce changes shoals and raise the flood - level in the channel just below its termination. nevertheless, where the available fall is exceptionally small, as in land originally reclaimed from the sea, such as the english fenlands, and where, in consequence, the drainage is in a great measure artificial, straight channels have been formed for the rivers. because of the perceived value in protecting these fertile, low - lying lands from inundation, additional straight channels have also been provided for the discharge of rainfall, known as drains in the fens. even extensive modification of the course of a river combined with an enlargement of its channel often produces only a limited reduction in flood damage. consequently, such floodworks are only commensurate with the expenditure involved where significant assets ( such as a town ) are under threat. additionally, even when successful, such floodworks may simply move the problem further downstream and threaten some other town. recent floodworks in europe have included restoration of natural floodplains and winding courses, so that floodwater is held back and released more slowly. human intervention sometimes inadvertently modifies the course or characteristics of a river, for example by introducing obstructions such as mining refuse, sluice gates for mills, fish - traps, unduly wide piers for bridges and solid weirs. by impeding flow these measures can raise the flood - level upstream. regulations for the management of rivers may include stringent prohibitions with regard to pollution, requirements for enlarging sluice - ways and the compulsory raising of their gates for the passage of floods i compare the burst detection sensitivity of cgro ' s batse, swift ' s bat, the glast burst monitor ( gbm ) and exist as a function of a burst ' s spectrum and duration. a detector ' s overall burst sensitivity depends on its energy sensitivity and set of accumulations times delta t ; these two factors shape the detected burst population. for example, relative to batse, the bat ' s softer energy band decreases the detection rate of short, hard bursts, while the bat ' s longer accumulation times increase the detection rate of long, soft bursts. 
consequently, swift is detecting long, low fluence bursts ( 2 - 3x fainter than batse ). the less of it people would be prepared to buy ( other things unchanged ). as the price of a commodity falls, consumers move toward it from relatively more expensive goods ( the substitution effect ). in addition, purchasing power from the price decline increases ability to buy ( the income effect ). other factors can change demand ; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. all determinants are predominantly taken as constant factors of demand and supply. supply is the relation between the price of a good and the quantity available for sale at that price. it may be represented as a table or graph relating price and quantity supplied. producers, for example business firms, are hypothesised to be profit maximisers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. supply is typically represented as a function relating price and quantity, if other factors are unchanged. that is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. the higher price makes it profitable to increase production. just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. the " law of supply " states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factors inputs of production are all taken to be constant for a specific time period of evaluation of supply. market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. at a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. this is posited to bid the price up. at a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. this pushes the price down. the model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilise at the price that makes quantity supplied equal to quantity demanded. similarly, demand - and - supply theory predicts a new price - quantity combination from a shift in demand ( as to the figure ), or in supply. = = = firms = = = people frequently do not trade directly on markets. instead, on the supply side, they may work while the modern stellar imf shows a rapid decline with increasing mass, theoretical investigations suggest that very massive stars ( > 100 solar masses ) may have been abundant in the early universe. other calculations also indicate that, lacking metals, these same stars reach their late evolutionary stages without appreciable mass loss. after central helium burning, they encounter the electron - positron pair instability, collapse, and burn oxygen and silicon explosively. if sufficient energy is released by the burning, these stars explode as brilliant supernovae with energies up to 100 times that of an ordinary core collapse supernova. they also eject up to 50 solar masses of radioactive ni56. 
stars less massive than 140 solar masses or more massive than 260 solar masses should collapse into black holes instead of exploding, thus bounding the pair - creation supernovae with regions of stellar mass that are nucleosynthetically sterile. pair - instability supernovae might be detectable in the near infrared out to redshifts of 20 or more and their ashes should leave a distinctive nucleosynthetic pattern. the model of neutrino mass matrix with minimal texture is now tightly constrained by experiment so that it can yield a prediction for the phase of cp violation. this phase is predicted to lie in the range $\delta_{CP} = 0.77\pi - 1.24\pi$. if a neutrino oscillation experiment were to find the cp violation phase outside this range, it would mean that the minimal - texture neutrino mass matrix, whose elements are all real, fails and the neutrino mass matrix must be complex, i. e., the phase must be present that is responsible for leptogenesis. in this talk, i will explain how to reduce the spectral index to $n_s = 0.96$ for supernatural inflation. i will also show the constraint on the reheating temperature from big bang nucleosynthesis for both thermal and non - thermal gravitino production. the threshold pump power for modelocking decreased by 18 % when the temperature was increased from 25 to 100 degrees c, where a swcnt / pdms coated tapered fiber was used as the saturable absorber in a fiber laser. further, the pump power at which multi - pulse operation began decreased by 24 %, and the pump power range over which fundamental modelocking could be maintained decreased by 59 % over the same temperature range. this decrease in stability is attributed to the large thermo - optic coefficient of the pdms polymer, which results in a 40 % reduction of the overlap between the evanescent field and swcnt coating of the taper fiber over a temperature range of 75 degrees c. Question: Which will most likely cause a decrease in predator populations? A) an increase in prey populations B) a decrease in prey populations C) a decrease in decomposers D) an increase in producers
B) a decrease in prey populations
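The supply - and - demand passage in the context above describes market equilibrium as the intersection of the two curves, with a shortage below the equilibrium price and a surplus above it. The snippet below is a minimal sketch of that textbook model assuming simple linear curves; the parameter values are made up for the example and are not from the source.

```python
# illustrative sketch, not from the source: equilibrium of linear demand and
# supply curves. demand: q_d = a - b*p ; supply: q_s = c + d*p (b, d > 0).

def equilibrium(a, b, c, d):
    """equilibrium is where q_d == q_s, i.e. p* = (a - c) / (b + d)."""
    p_star = (a - c) / (b + d)
    q_star = a - b * p_star
    return p_star, q_star


# made-up parameters
p_star, q_star = equilibrium(a=100.0, b=2.0, c=10.0, d=1.0)
print(f"equilibrium price {p_star:.2f}, quantity {q_star:.2f}")

# a price below p* gives a shortage (quantity demanded exceeds quantity
# supplied), a price above p* gives a surplus, matching the adjustment story
# described in the context above.
low_p = p_star - 5
print("shortage at low price:", (100 - 2 * low_p) - (10 + 1 * low_p))
```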
Context: oil umbrella ) ; for calculating the time of death ( allowing for weather and insect activity ) ; described how to wash and examine the dead body to ascertain the reason for death. at that time the book had described methods for distinguishing between suicide and faked suicide. he wrote the book on forensics stating that all wounds or dead bodies should be examined, not avoided. the book became the first form of literature to help determine the cause of death. in one of song ci ' s accounts ( washing away of wrongs ), the case of a person murdered with a sickle was solved by an investigator who instructed each suspect to bring his sickle to one location. ( he realized it was a sickle by testing various blades on an animal carcass and comparing the wounds. ) flies, attracted by the smell of blood, eventually gathered on a single sickle. in light of this, the owner of that sickle confessed to the murder. the book also described how to distinguish between a drowning ( water in the lungs ) and strangulation ( broken neck cartilage ), and described evidence from examining corpses to determine if a death was caused by murder, suicide or accident. methods from around the world involved saliva and examination of the mouth and tongue to determine innocence or guilt, as a precursor to the polygraph test. in ancient india, some suspects were made to fill their mouths with dried rice and spit it back out. similarly, in ancient china, those accused of a crime would have rice powder placed in their mouths. in ancient middle - eastern cultures, the accused were made to lick hot metal rods briefly. it is thought that these tests had some validity since a guilty person would produce less saliva and thus have a drier mouth ; the accused would be considered guilty if rice was sticking to their mouths in abundance or if their tongues were severely burned due to lack of shielding from saliva. = = education and training = = initial glance, forensic intelligence may appear as a nascent facet of forensic science facilitated by advancements in information technologies such as computers, databases, and data - flow management software. however, a more profound examination reveals that forensic intelligence represents a genuine and emerging inclination among forensic practitioners to actively participate in investigative and policing strategies. in doing so, it elucidates existing practices within scientific literature, advocating for a paradigm shift from the prevailing conception of forensic science as a conglomerate of disciplines merely aiding the criminal justice system. instead, it urges a perspective that views forensic science as a discipline studying the informative potential of of several methods used by plants to promote outcrossing. in many land plants the male and female gametes are produced by separate individuals. these species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. charles darwin in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. 
this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent are continuous lines used to depict edges directly visible from a particular angle. hidden – are short - dashed lines that may be used to represent edges that are not directly visible. center – are alternately long - and short - dashed lines that may be used to represent the axes of circular features. cutting plane – are thin, medium - dashed lines, or thick alternately long - and double short - dashed that may be used to define sections for section views. section – are thin lines in a pattern ( pattern determined by the material being " cut " or " sectioned " ) used to indicate surfaces in section views resulting from " cutting ". section lines are commonly referred to as " cross - hatching ". phantom – ( not shown ) are alternately long - and double short - dashed thin lines used to represent a feature or component that is not part of the specified part or assembly. e. g. billet ends that may be used for testing, or the machined product that is the focus of a tooling drawing. lines can also be classified by a letter classification in which each line is given a letter. type a lines show the outline of the feature of an object. they are the thickest lines on a drawing and done with a pencil softer than hb. type b lines are dimension lines and are used for dimensioning, projecting, extending, or leaders. a harder pencil should be used, such as a 2h pencil. type c lines are used for breaks when the whole object is not shown. these are freehand drawn and only for short breaks. 2h pencil type d lines are similar to type c, except these are zigzagged and only for longer breaks. 2h pencil type e lines indicate hidden outlines of internal features of an object. these are dotted lines. 2h pencil type f lines are type e lines, except these are used for drawings in electrotechnology. 
2h pencil type g lines are used for centre lines. these are dotted lines, but a long line of 10 – 20 mm, then a 1 mm gap, then a small line of 2 mm. 2h pencil type h lines are the same as type g, except that every second long line is thicker. these indicate the cutting plane of an object. 2h pencil type k lines indicate the alternate positions of an object and the line taken by that object. these are drawn with a long line of 10 – 20 mm, then a small gap, then a small line of 2 mm, then a gap, then another small line. 2h. one might ask why it is important to know the mechanism of fracture in leaves when mother nature is doing her job perfectly. i could list the following reasons to address that question : ( a ) leaves are natural composite structures ; during millions of years of evolution they have adapted themselves to their surrounding environment and their design is optimized, so one can apply the knowledge gained from studying the fracture mechanism of leaves to the development of new composite materials ; ( b ) other soft tissues like skin and blood vessels have a similar structure at some scales and may possess the same fracture mechanism. the gained knowledge can also be applied to these materials ; ( c ) the global need for food is skyrocketing. there are a few countries, including the united states, that have all the resources ( i. e. water, soil, sunlight, and manpower ) to play a major role in the future world food - supplying market. if we can increase the output of our farms and forests, by means of protecting them against herbivores [ beck 1965 ], pathogens [ campbell et al. 1980 ], and other physical damage, our share of the future market will be higher. it will also strengthen our national food security because we will not be dependent on food imports. we do not yet know how much of our farms ' and forests ' output can be saved if we can genetically design tougher materials, but the whole idea is worth studying. process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of fermenting corn starch using clostridium acetobutylicum to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium.
his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley – to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer ( univ. calif. at san francisco ) and stanley n. cohen ( stanford ) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. the commercial viability of a biotechnology industry was significantly expanded on june 16, 1980, when the united states supreme court ruled that a genetically modified microorganism could be patented in the case of diamond v. chakrabarty. indian - born ananda chakrabarty, working for general electric, had modified a bacterium ( of the genus pseudomonas ) capable of breaking down crude oil, which he proposed to . species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. a familiar example is peppermint, mentha × piperita, a sterile hybrid between mentha aquatica and spearmint, mentha spicata. the many cultivated varieties of wheat are the result of multiple inter - and intra - specific crosses between wild species and their hybrids. angiosperms with monoecious flowers often have self - incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. this is one of several methods used by plants to promote outcrossing. in many land plants the male and female gametes are produced by separate individuals. these species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. charles darwin in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent.
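The inbreeding-depression argument above can be made concrete with a minimal single-locus sketch. The allele frequency and the one-locus model below are illustrative assumptions, not figures from the passage: self-fertilising a carrier of a deleterious recessive allele exposes that allele in a quarter of the offspring, while outcrossing to a random mate exposes it only rarely.

```python
# Illustrative single-locus model of inbreeding depression (assumed numbers,
# not taken from the passage): a parent carries one deleterious recessive allele (Aa).

q = 0.01  # assumed population frequency of the recessive allele 'a'

# Selfing (Aa x Aa): offspring are homozygous aa, and thus affected, with probability 1/4.
p_affected_selfing = 0.5 * 0.5

# Outcrossing (Aa x random mate): the carrier passes 'a' with probability 1/2,
# a random mate passes 'a' with probability q (Hardy-Weinberg assumption).
p_affected_outcross = 0.5 * q

print(f"affected offspring if selfed:     {p_affected_selfing:.3f}")   # 0.250
print(f"affected offspring if outcrossed: {p_affected_outcross:.4f}")  # 0.005
```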
most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in eat them. plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. this is what ecologists call the first trophic level. the modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity ' s ability to feed the world and provide food security for future generations. botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. ethnobotany is the study of the relationships between plants and people. when applied to the investigation of historical plant – people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. some of the earliest plant - people relationships arose between the indigenous people of canada in identifying edible plants from inedible plants. this relationship the indigenous people had with plants was recorded by ethnobotanists. = = plant biochemistry = = plant biochemistry is the study of the chemical processes used by plants. some of these processes are used in their primary metabolism like the photosynthetic calvin cycle and crassulacean acid metabolism. others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. plants and various other groups of photosynthetic eukaryotes collectively known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour inter - and intra - specific crosses between wild species and their hybrids. angiosperms with monoecious flowers often have self - incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. this is one of several methods used by plants to promote outcrossing. in many land plants the male and female gametes are produced by separate individuals. these species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. 
charles darwin in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid cortisol, corticosterone and aldosterone activate full - length glucocorticoid receptor ( gr ) from elephant shark, a cartilaginous fish belonging to the oldest group of jawed vertebrates. activation by aldosterone a mineralocorticoid, indicates partial divergence of elephant shark gr from the mr. progesterone activates elephant shark mr, but not elephant shark gr. progesterone inhibits steroid binding to elephant shark gr, but not to human gr. deletion of the n - terminal domain ( ntd ) from elephant shark gr ( truncated gr ) reduced the response to corticosteroids, while truncated and full - length elephant shark mr had similar responses to corticosteroids. chimeras of elephant shark gr ntd fused to mr dbd + lbd had increased activation by corticosteroids and progesterone compared to full - length elephant shark mr. elephant shark mr ntd fused to gr dbd + lbd had similar activation as full - length elephant shark mr, indicating that activation of human gr by the ntd evolved early in gr divergence from the mr. i discuss some compelling suggestions about particles which could be the dark matter in the universe, with special attention to experimental searches for them. Question: A robin catches and eats a cricket. Which statement best describes the roles of each animal? A) The robin is the prey and the cricket is the predator. B) The robin is the predator and the cricket is the prey. C) The robin is the consumer and the cricket is the producer. D) The robin is the producer and the cricket is the consumer.
B) The robin is the predator and the cricket is the prey.
Context: irradiation is the process of exposing food to ionizing radiation in order to destroy microorganisms, bacteria, viruses, or insects that might be present in the food. the radiation sources used include radioisotope gamma ray sources, x - ray generators and electron accelerators. further applications include sprout inhibition, delay of ripening, increase of juice yield, and improvement of re - hydration. irradiation is a more general term of deliberate exposure of materials to radiation to achieve a technical goal ( in this context ' ionizing radiation ' is implied ). as such it is also used on non - food items, such as medical hardware, plastics, tubes for gas - pipelines, hoses for floor - heating, shrink - foils for food packaging, automobile parts, wires and cables ( isolation ), tires, and even gemstones. compared to the amount of food irradiated, the volume of those every - day applications is huge but not noticed by the consumer. the genuine effect of processing food by ionizing radiation relates to damages to the dna, the basic genetic information for life. microorganisms can no longer proliferate and continue their malignant or pathogenic activities. spoilage causing micro - organisms cannot continue their activities. insects do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry, likewise. the amount of energy imparted for effective food irradiation is low compared to cooking the same ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 °c ( 4. 5 °f ). the specialty of processing food by ionizing radiation is the fact that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. the dynamic impedance of a sphere oscillating in an elastic medium is considered. oestreicher ' s formula for the impedance of a sphere bonded to the surrounding medium can be expressed simply in terms of three lumped impedances associated with the displaced mass and the longitudinal and transverse waves. if the surface of the sphere slips while the normal velocity remains continuous, the impedance formula is modified by adjusting the definition of the transverse impedance to include the interfacial impedance. eat them. plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. this is what ecologists call the first trophic level.
the modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity ' s ability to feed the world and provide food security for future generations. botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. ethnobotany is the study of the relationships between plants and people. when applied to the investigation of historical plant – people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. some of the earliest plant - people relationships arose between the indigenous people of canada in identifying edible plants from inedible plants. this relationship the indigenous people had with plants was recorded by ethnobotanists. = = plant biochemistry = = plant biochemistry is the study of the chemical processes used by plants. some of these processes are used in their primary metabolism like the photosynthetic calvin cycle and crassulacean acid metabolism. others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. plants and various other groups of photosynthetic eukaryotes collectively known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour as medical hardware, plastics, tubes for gas - pipelines, hoses for floor - heating, shrink - foils for food packaging, automobile parts, wires and cables ( isolation ), tires, and even gemstones. compared to the amount of food irradiated, the volume of those every - day applications is huge but not noticed by the consumer. the genuine effect of processing food by ionizing radiation relates to damages to the dna, the basic genetic information for life. microorganisms can no longer proliferate and continue their malignant or pathogenic activities. spoilage causing micro - organisms cannot continue their activities. insects do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry, likewise. the amount of energy imparted for effective food irradiation is low compared to cooking the same ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 °c ( 4. 5 °f ). the specialty of processing food by ionizing radiation is the fact that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating.
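The roughly 2.5 °C warming quoted above for a 10 kGy dose can be checked with a one-line energy balance, treating the food as water, as the passage does. The specific heat of water is a standard constant, not a value taken from the text; this is only a back-of-envelope sketch.

```python
# Back-of-envelope check of the warming produced by a 10 kGy irradiation dose,
# treating the food as water (1 gray = 1 joule absorbed per kilogram).

dose_gy = 10_000        # 10 kGy expressed in gray (J/kg)
c_water = 4186          # specific heat capacity of water in J/(kg*K), standard value

delta_t = dose_gy / c_water
print(f"temperature rise: about {delta_t:.1f} K")  # ~2.4 K, consistent with the ~2.5 degC quoted
```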
this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation participates as a consumer, resource, or both in consumer – resource interactions, which form the core of food chains or food webs. there are different trophic levels within any food web, with the lowest level being the primary producers ( or autotrophs ) such as plants and algae that convert energy and inorganic material into organic compounds, which can then be used by the rest of the community. at the next level are the heterotrophs, which are the species that obtain energy by breaking apart organic compounds from other organisms. heterotrophs that consume plants are primary consumers ( or herbivores ) whereas heterotrophs that consume herbivores are secondary consumers ( or carnivores ). and those that eat secondary consumers are tertiary consumers and so on. omnivorous heterotrophs are able to consume at multiple levels. finally, there are decomposers that feed on the waste products or dead bodies of organisms. on average, the total amount of energy incorporated into the biomass of a trophic level per unit of time is about one - tenth of the energy of the trophic level that it consumes. waste and dead material used by decomposers as well as heat lost from metabolism make up the other ninety percent of energy that is not consumed by the next trophic level. = = = biosphere = = = in the global ecosystem or biosphere, matter exists as different interacting compartments, which can be biotic or abiotic as well as accessible or inaccessible, depending on their forms and locations. for example, matter from terrestrial autotrophs are both biotic and accessible to other organisms whereas the matter in rocks and minerals are abiotic and inaccessible. a biogeochemical cycle is a pathway by which specific elements of matter are turned over or moved through the biotic ( biosphere ) and the abiotic ( lithosphere, atmosphere, and hydrosphere ) compartments of earth. there are biogeochemical cycles for nitrogen, carbon, and water. = = = conservation = = = conservation biology is the study of the conservation of earth ' s biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates of extinction and the erosion of biotic interactions. 
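The ten-percent figure for energy transfer between trophic levels, stated earlier in this passage, can be illustrated with a short loop. The starting energy value is an arbitrary assumption chosen only to make the attrition across levels concrete.

```python
# Ten-percent rule: each trophic level retains roughly one tenth of the energy
# of the level it consumes; the remainder is lost as waste, dead material and heat.

energy = 100_000.0  # assumed energy fixed by primary producers (arbitrary units)
levels = ["producers", "primary consumers", "secondary consumers", "tertiary consumers"]

for level in levels:
    print(f"{level:20s} {energy:>10.1f}")
    energy *= 0.10  # ~90% is not passed on to the next level
```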
it is concerned with factors that influence the maintenance, loss, and restoration of biodiversity and the science of sustaining evolutionary processes that engender genetic, population, species, and ecosystem diversity. the concern stems from estimates suggesting that up to 50 % of all species on the planet outer satellites of the planets have distant, eccentric orbits that can be highly inclined or even retrograde relative to the equatorial planes of their planets. these irregular orbits cannot have formed by circumplanetary accretion and are likely products of early capture from heliocentric orbit. the irregular satellites may be the only small bodies remaining which are still relatively near their formation locations within the giant planet region. the study of the irregular satellites provides a unique window on processes operating in the young solar system and allows us to probe possible planet formation mechanisms and the composition of the solar nebula between the rocky objects in the main asteroid belt and the very volatile rich objects in the kuiper belt. the gas and ice giant planets all appear to have very similar irregular satellite systems irrespective of their mass or formation timescales and mechanisms. water ice has been detected on some of the outer satellites of saturn and neptune whereas none has been observed on jupiter ' s outer satellites. of imaging techniques vary in their temporal ( time - based ) and spatial ( location - based ) resolution. brain imaging is often used in cognitive neuroscience. single - photon emission computed tomography and positron emission tomography. spect and pet use radioactive isotopes, which are injected into the subject ' s bloodstream and taken up by the brain. by observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. pet has similar spatial resolution to fmri, but it has extremely poor temporal resolution. electroencephalography. eeg measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. this technique has an extremely high temporal resolution, but a relatively poor spatial resolution. functional magnetic resonance imaging. fmri measures the relative amount of oxygenated blood flowing to different parts of the brain. more oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. this allows us to localize particular functions within different brain regions. fmri has moderate spatial and temporal resolution. optical imaging. this technique uses infrared transmitters and receivers to measure the amount of light reflectance by blood near different areas of the brain. since oxygenated and deoxygenated blood reflects light by different amounts, we can study which areas are more active ( i. e., those that have more oxygenated blood ). optical imaging has moderate temporal resolution, but poor spatial resolution. it also has the advantage that it is extremely safe and can be used to study infants ' brains. magnetoencephalography. meg measures magnetic fields resulting from cortical activity. it is similar to eeg, except that it has improved spatial resolution since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in eeg is. meg uses squid sensors to detect tiny magnetic fields. 
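The trade-offs listed above for the different imaging methods can be collected into a small lookup table. The qualitative labels simply restate the passage; they are not quantitative specifications.

```python
# Qualitative summary of the brain-imaging methods described above
# (labels restate the text; they are not quantitative figures).
imaging_methods = {
    "PET/SPECT": {"spatial": "moderate (similar to fMRI)", "temporal": "very poor"},
    "EEG":       {"spatial": "relatively poor",            "temporal": "very high"},
    "fMRI":      {"spatial": "moderate",                   "temporal": "moderate"},
    "optical":   {"spatial": "poor",                       "temporal": "moderate"},
    "MEG":       {"spatial": "better than EEG",            "temporal": "high (like EEG)"},
}

for method, res in imaging_methods.items():
    print(f"{method:10s} spatial: {res['spatial']:27s} temporal: {res['temporal']}")
```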
= = = computational modeling = = = computational models require a mathematically and logically formal representation of a problem. computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. computational modeling can help us understand the functional organization of a particular cognitive phenomenon. approaches to cognitive modeling can be categorized as : ( 1 ) symbolic, on abstract mental functions of an intelligent mind by means of symbols ; ( 2 ) subsymbolic, on the neural and associa the cross section of elastic electron - proton scattering taking place in an electron gas is calculated within the closed time path method. it is found to be the sum of two terms, one being the expression in the vacuum except that it involves dressing due to the electron gas. the other term is due to the scattering particles - electron gas entanglement. this term dominates the usual one when the exchange energy is in the vicinity of the fermi energy. furthermore it makes the trajectories of the colliding particles more consistent and the collision more irreversible, rendering the scattering more classical in this regime. a watershed ( called a " divide " in north america ) over which rainfall flows down towards the river traversing the lowest part of the valley, whereas the rain falling on the far slope of the watershed flows away to another river draining an adjacent basin. river basins vary in extent according to the configuration of the country, ranging from the insignificant drainage areas of streams rising on high ground near the coast and flowing straight down into the sea, up to immense tracts of continents, where rivers rising on the slopes of mountain ranges far inland have to traverse vast stretches of valleys and plains before reaching the ocean. the size of the largest river basin of any country depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. 
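One standard way to make the claim above concrete, that of two rivers with the same fall the larger one flows faster, is Manning's open-channel formula, v = (1/n) R^(2/3) S^(1/2), in which the hydraulic radius R (flow area divided by wetted perimeter) grows with channel size. The formula is not named in the passage, and the channel dimensions and roughness below are invented purely for illustration.

```python
# Manning's formula for open-channel flow, v = (1/n) * R**(2/3) * sqrt(S),
# applied to two rectangular channels with the same slope S and roughness n.
# The larger channel has the larger hydraulic radius R = area / wetted perimeter,
# hence the quicker flow, as the passage states. All numbers are assumed.

def mean_velocity(width_m: float, depth_m: float, slope: float, n: float) -> float:
    area = width_m * depth_m
    wetted_perimeter = width_m + 2 * depth_m
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n) * hydraulic_radius ** (2 / 3) * slope ** 0.5

slope = 0.0005  # same fall per unit length for both rivers (assumed)
n = 0.03        # assumed roughness coefficient

print(f"small river: {mean_velocity(10, 1, slope, n):.2f} m/s")    # ~0.7 m/s
print(f"large river: {mean_velocity(200, 8, slope, n):.2f} m/s")   # ~2.8 m/s
```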
in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern quantum mechanics is interpreted by the adjacent vacuum that behaves as a virtual particle to be absorbed and emitted by its matter. as described in the vacuum universe model, the adjacent vacuum is derived from the pre - inflationary universe in which the pre - adjacent vacuum is absorbed by the pre - matter. this absorbed pre - adjacent vacuum is emitted to become the added space for the inflation in the inflationary universe whose space - time is separated from the pre - inflationary universe. this added space is the adjacent vacuum. the absorption of the adjacent vacuum as the added space results in the adjacent zero space ( no space ), quantum mechanics is the interaction between matter and the three different types of vacuum : the adjacent vacuum, the adjacent zero space, and the empty space. the absorption of the adjacent vacuum results in the empty space superimposed with the adjacent zero space, confining the matter in the form of particle. when the absorbed vacuum is emitted, the adjacent vacuum can be anywhere instantly in the empty space superimposed with the adjacent zero space where any point can be the starting point ( zero point ) of space - time. consequently, the matter that expands into the adjacent vacuum has the probability to be anywhere instantly in the form of wavefunction. in the vacuum universe model, the universe not only gains its existence from the vacuum but also fattens itself with the vacuum. during the inflation, the adjacent vacuum also generates the periodic table of elementary particles to account for all elementary particles and their masses in a good agreement with the observed values. Question: Hookworms live inside the intestines of dogs. As the dog eats, the hookworms consume partially digested food. As a result of this nutrient diversion, the dog can become malnourished and weakened. Which best describes the relationship between the hookworms and the dog? A) a parasitic relationship B) a mutualistic relationship C) a predator-prey relationship D) a producer-consumer relationship
A) a parasitic relationship
Context: and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. other organisms form mutually beneficial relationships with plants. for example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. = = = plants, climate and environmental change = = = plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. for example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. estimates of atmospheric co2 concentrations since the palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. ozone depletion can expose plants to higher levels of ultraviolet radiation - b ( uv - b ), resulting in lower growth rates. moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. = = genetics = = inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. gregor mendel discovered the genetic laws of inheritance by studying eat them. plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. this is what ecologists call the first trophic level. the modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. 
botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity ' s ability to feed the world and provide food security for future generations. botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. ethnobotany is the study of the relationships between plants and people. when applied to the investigation of historical plant – people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. some of the earliest plant - people relationships arose between the indigenous people of canada in identifying edible plants from inedible plants. this relationship the indigenous people had with plants was recorded by ethnobotanists. = = plant biochemistry = = plant biochemistry is the study of the chemical processes used by plants. some of these processes are used in their primary metabolism like the photosynthetic calvin cycle and crassulacean acid metabolism. others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. plants and various other groups of photosynthetic eukaryotes collectively known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour to be separated conceptually from geology and crop production and treated as a whole. as a founding father of soil science, fallou has primacy in time. fallou was working on the origins of soil before dokuchaev was born ; however dokuchaev ' s work was more extensive and is considered to be the more significant to modern soil theory than fallou ' s. previously, soil had been considered a product of chemical transformations of rocks, a dead substrate from which plants derive nutritious elements. soil and bedrock were in fact equated. dokuchaev considers the soil as a natural body having its own genesis and its own history of development, a body with complex and multiform processes taking place within it. the soil is considered as different from bedrock. the latter becomes soil under the influence of a series of soil - formation factors ( climate, vegetation, country, relief and age ). according to him, soil should be called the " daily " or outward horizons of rocks regardless of the type ; they are changed naturally by the common effect of water, air and various kinds of living and dead organisms. a 1914 encyclopedic definition : " the different forms of earth on the surface of the rocks, formed by the breaking down or weathering of rocks ". serves to illustrate the historic view of soil which persisted from the 19th century. dokuchaev ' s late 19th century soil concept developed in the 20th century to one of soil as earthy material that has been altered by living processes. a corollary concept is that soil without a living component is simply a part of earth ' s outer layer. 
further refinement of the soil concept is occurring in view of an appreciation of energy transport and transformation within soil. the term is popularly applied to the material on the surface of the earth ' s moon and mars, a usage acceptable within a portion of the scientific community. accurate to this modern understanding of soil is nikiforoff ' s 1959 definition of soil as the " excited skin of the sub aerial part of the earth ' s crust ". = = areas of practice = = academically, soil scientists tend to be drawn to one of five areas of specialization : microbiology, pedology, edaphology, physics, or chemistry. yet the work specifics are very much dictated by the challenges facing our civilization ' s desire to sustain the land that supports it, and the distinctions between the sub - disciplines of soil science often blur in the process. soil science professionals commonly stay current current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers – civil works program river morphology and stream restoration references soil erosion. 
plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) – including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. the new photosynthetic plants ( along with their algal relatives ) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the be the more significant to modern soil theory than fallou ' s. previously, soil had been considered a product of chemical transformations of rocks, a dead substrate from which plants derive nutritious elements. soil and bedrock were in fact equated. dokuchaev considers the soil as a natural body having its own genesis and its own history of development, a body with complex and multiform processes taking place within it. the soil is considered as different from bedrock. the latter becomes soil under the influence of a series of soil - formation factors ( climate, vegetation, country, relief and age ). according to him, soil should be called the " daily " or outward horizons of rocks regardless of the type ; they are changed naturally by the common effect of water, air and various kinds of living and dead organisms. a 1914 encyclopedic definition : " the different forms of earth on the surface of the rocks, formed by the breaking down or weathering of rocks ". serves to illustrate the historic view of soil which persisted from the 19th century. 
dokuchaev ' s late 19th century soil concept developed in the 20th century to one of soil as earthy material that has been altered by living processes. a corollary concept is that soil without a living component is simply a part of earth ' s outer layer. further refinement of the soil concept is occurring in view of an appreciation of energy transport and transformation within soil. the term is popularly applied to the material on the surface of the earth ' s moon and mars, a usage acceptable within a portion of the scientific community. accurate to this modern understanding of soil is nikiforoff ' s 1959 definition of soil as the " excited skin of the sub aerial part of the earth ' s crust ". = = areas of practice = = academically, soil scientists tend to be drawn to one of five areas of specialization : microbiology, pedology, edaphology, physics, or chemistry. yet the work specifics are very much dictated by the challenges facing our civilization ' s desire to sustain the land that supports it, and the distinctions between the sub - disciplines of soil science often blur in the process. soil science professionals commonly stay current in soil chemistry, soil physics, soil microbiology, pedology, and applied soil science in related disciplines. one exciting effort drawing in soil scientists in the u. s. as of 2004 is the soil quality initiative. central to the soil quality initiative is developing indices of soil health and then monitoring them in a way the injuries of the inundations they have been designed to prevent, as the escape of floods from the raised river must occur sooner or later. inadequate planning controls which have permitted development on floodplains have been blamed for the flooding of domestic properties. channelization was done under the auspices or overall direction of engineers employed by the local authority or the national government. one of the most heavily channelized areas in the united states is west tennessee, where every major stream with one exception ( the hatchie river ) has been partially or completely channelized. channelization of a stream may be undertaken for several reasons. one is to make a stream more suitable for navigation or for navigation by larger vessels with deep draughts. another is to restrict water to a certain area of a stream ' s natural bottom lands so that the bulk of such lands can be made available for agriculture. a third reason is flood control, with the idea of giving a stream a sufficiently large and deep channel so that flooding beyond those limits will be minimal or nonexistent, at least on a routine basis. one major reason is to reduce natural erosion ; as a natural waterway curves back and forth, it usually deposits sand and gravel on the inside of the corners where the water flows slowly, and cuts sand, gravel, subsoil, and precious topsoil from the outside corners where it flows rapidly due to a change in direction. unlike sand and gravel, the topsoil that is eroded does not get deposited on the inside of the next corner of the river. it simply washes away. = = loss of wetlands = = channelization has several predictable and negative effects. one of them is loss of wetlands. wetlands are an excellent habitat for multiple forms of wildlife, and additionally serve as a " filter " for much of the world ' s surface fresh water. another is the fact that channelized streams are almost invariably straightened. 
for example, the channelization of florida ' s kissimmee river has been cited as a cause contributing to the loss of wetlands. this straightening causes the streams to flow more rapidly, which can, in some instances, vastly increase soil erosion. it can also increase flooding downstream from the channelized area, as larger volumes of water traveling more rapidly than normal can reach choke points over a shorter period of time than they otherwise would, with a net effect of flood control in one area coming at the expense of aggravated flooding in another. in addition, studies have shown that stream channelization results in declines of river fish populations. : 3 - 1ff a the wide development of inter connectivity of cellular networks with the internet network has made them to be vulnerable. this exposure of the cellular networks to internet has increased threats to customer end equipment as well as the carrier infrastructure. equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers – civil works program river morphology and stream restoration references - wildland hydrology at the library of congress web archives ( archived 2002 - 08 - 13 ) = = = = = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. 
the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include : soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. the three most common types of environmental remediation are soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant, the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important, as water is an essential natural resource. depending on the source of water there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. it is similar to soil remediation, except that it is often more sophisticated because it involves additional contaminants. to reduce the contaminants, physical, chemical, and biological processes that help with source control are typically used, but if these processes are not executed correctly, there is a risk of the contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with industrial, institutional, commercial, and residential activities. hazardous solid waste, when improperly disposed of, can encourage the infestation of insects and rodents, contributing to the spread of diseases. some of the most common types of solid waste management include : landfills, vermicomposting, composting, recycling, and incineration. however, a major barrier to solid waste management practices is the high costs associated with recycling Question: The prairie grass ecosystem once had a deep layer of topsoil which was protected by the grasses that covered it. Removal of these grasses for farmland is causing the soil to be eroded mainly by A) wind and rain. B) animal movement. C) crops grown in the soil. D) increased temperatures.
A) wind and rain.
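As an aside on the channelization passages above, which note that an enlarged, straightened channel carries water faster and can aggravate erosion and flooding downstream: a hedged illustration of that velocity effect is sketched below using the standard Manning equation for open-channel flow. The equation itself is a well-known engineering formula that the passage does not cite, and the roughness coefficients, hydraulic radius, and slope values are made-up examples rather than data from the text.

```python
# Hedged sketch: Manning's equation, v = (1/n) * R**(2/3) * sqrt(S) (SI units),
# relates mean flow velocity to channel roughness n, hydraulic radius R and
# bed slope S. It is used here only to illustrate why a smoother, deeper
# engineered channel conveys water faster than a rough natural stream.

def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity in m/s for roughness n, hydraulic radius R (m),
    and dimensionless bed slope S."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Assumed example values (not from the passage): same slope, but the
# channelized reach is smoother (lower n) and deeper (larger R).
natural = manning_velocity(n=0.040, hydraulic_radius_m=1.0, slope=0.001)
channelized = manning_velocity(n=0.025, hydraulic_radius_m=2.0, slope=0.001)
print(f"natural ~{natural:.2f} m/s, channelized ~{channelized:.2f} m/s")
```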
Context: parts of australia have been privileged to see dazzling lights in the night sky as the aurora australis ( known as the southern lights ) puts on a show this year. aurorae are significant in australian indigenous astronomical traditions. aboriginal people associate aurorae with fire, death, blood, and omens, sharing many similarities with native american communities. in the year 1598 philipp uffenbach published a printed diptych sundial, which is a forerunner of franz ritters horizantal sundial. uffenbach ' s sundial contains apart from the usual information on a sundial ascending signs of the zodiac, several brigthest stars, an almucantar and most important the oldest gnomonic world map known so far. the sundial is constructed for the polar height of 50 1 / 6 degrees, the height of frankfurt / main the town of his citizenship. oscillations of the sun have been used to understand its interior structure. the extension of similar studies to more distant stars has raised many difficulties despite the strong efforts of the international community over the past decades. the corot ( convection rotation and planetary transits ) satellite, launched in december 2006, has now measured oscillations and the stellar granulation signature in three main sequence stars that are noticeably hotter than the sun. the oscillation amplitudes are about 1. 5 times as large as those in the sun ; the stellar granulation is up to three times as high. the stellar amplitudes are about 25 % below the theoretic values, providing a measurement of the nonadiabaticity of the process ruling the oscillations in the outer layers of the stars. variation in total solar irradiance is thought to have little effect on the earth ' s surface temperature because of the thermal time constant - - the characteristic response time of the earth ' s global surface temperature to changes in forcing. this time constant is large enough to smooth annual variations but not necessarily variations having a longer period such as those due to solar inertial motion ; the magnitude of these surface temperature variations is estimated. weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. 
another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding seawards, so that in the po river in italy, for instance, pebbles and gravel are found for about 140 miles below turin, sand along the next 100 miles, and silt and mud in the last 110 miles ( 176 km ). = = channelization = = the removal of obstructions, natural or artificial becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter.
three major planets, venus, earth, and mercury formed out of the solar nebula. a fourth planetesimal, theia, also formed near earth where it collided in a giant impact, rebounding as the planet mars. during this impact earth lost $\approx 4\%$ of its crust and mantle that is now found on mars and the moon. at the antipode of the giant impact, $\approx 60\%$ of earth ' s crust, atmosphere, and a large amount of mantle were ejected into space forming the moon.
the lost crust never reformed and became the earth ' s ocean basins. the theia impact site corresponds to indian ocean gravitational anomaly on earth and the hellas basin on mars. the dynamics of the giant impact are consistent with the rotational rates and axial tilts of both earth and mars. the giant impact removed sufficient co $ _ 2 $ from earth ' s atmosphere to avoid a runaway greenhouse effect, initiated plate tectonics, and gave life time to form near geothermal vents at the continental margins. mercury formed near venus where on a close approach it was slingshot into the sun ' s convective zone losing 94 \ % of its mass, much of which remains there today. black carbon, from co $ _ 2 $ decomposed by the intense heat, is still found on the surface of mercury. arriving at 616 km / s, mercury dramatically altered the sun ' s rotational energy, explaining both its anomalously slow rotation rate and axial tilt. these results are quantitatively supported by mass balances, the current locations of the terrestrial planets, and the orientations of their major orbital axes. the large scale pattern in the arrival directions of extragalactic cosmic rays that reach the earth is different from that of the flux arriving to the halo of the galaxy as a result of the propagation through the galactic magnetic field. two different effects are relevant in this process : deflections of trajectories and ( de ) acceleration by the electric field component due to the galactic rotation. the deflection of the cosmic ray trajectories makes the flux intensity arriving to the halo from some direction to appear reaching the earth from another direction. this applies to any intrinsic anisotropy in the extragalactic distribution or, even in the absence of intrinsic anisotropies, to the dipolar compton - getting anisotropy induced when the observer is moving with respect to the cosmic rays rest frame. for an observer moving with the solar system, cosmic rays traveling through far away regions of the galaxy also experience an electric force coming from the relative motion ( due to the rotation of the galaxy ) of the local system in which the field can be considered as being purely magnetic. this produces small changes in the particles momentum that can originate large scale anisotropies even for an isotropic extragalactic flux. inter - and intra - specific crosses between wild species and their hybrids. angiosperms with monoecious flowers often have self - incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. this is one of several methods used by plants to promote outcrossing. in many land plants the male and female gametes are produced by separate individuals. these species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. charles darwin in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. 
once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid Question: When the Northern Hemisphere is tilted toward the Sun, what season is occurring in Australia? A) fall B) winter C) spring D) summer
B) winter
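The river-engineering text above repeats that the power of a current to transport material varies with its velocity. A minimal sketch of that claim follows, using the classical sixth-power rule of thumb from fluvial geomorphology (an approximation the passage itself does not state): the weight of the largest particle a stream can move grows roughly as the sixth power of velocity. The low-water and flood velocities are assumed example numbers.

```python
# Hedged sketch of the "sixth-power" rule of thumb: the weight of the largest
# particle a current can move scales roughly as velocity**6, which is why a
# river in flood can shift boulders that its low-water flow cannot budge.

def relative_competence(v_flood_ms, v_low_ms):
    """Approximate ratio of the largest transportable particle weight at
    flood stage versus low stage, under the sixth-power approximation."""
    return (v_flood_ms / v_low_ms) ** 6

# Assumed velocities: 0.5 m/s at the low summer stage, 2.5 m/s in flood.
print(relative_competence(2.5, 0.5))  # 5**6 = 15625, i.e. vastly heavier debris
```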
Context: ##physical processes which take place in human beings as they make sense of information received through the visual system. the subject of the image. when developing an imaging system, designers must consider the observables associated with the subjects which will be imaged. these observables generally take the form of emitted or reflected energy, such as electromagnetic energy or mechanical energy. the capture device. once the observables associated with the subject are characterized, designers can then identify and integrate the technologies needed to capture those observables. for example, in the case of consumer digital cameras, those technologies include optics for collecting energy in the visible portion of the electromagnetic spectrum, and electronic detectors for converting the electromagnetic energy into an electronic signal. the processor. for all digital imaging systems, the electronic signals produced by the capture device must be manipulated by an algorithm which formats the signals so they can be displayed as an image. in practice, there are often multiple processors involved in the creation of a digital image. the display. the display takes the electronic signals which have been manipulated by the processor and renders them on some visual medium. examples include paper ( for printed, or " hard copy " images ), television, computer monitor, or projector. note that some imaging scientists will include additional " links " in their description of the imaging chain. for example, some will include the " source " of the energy which " illuminates " or interacts with the subject of the image. others will include storage and / or transmission systems. = = subfields = = subfields within imaging science include : image processing, computer vision, 3d computer graphics, animations, atmospheric optics, astronomical imaging, biological imaging, digital image restoration, digital imaging, color science, digital photography, holography, magnetic resonance imaging, medical imaging, microdensitometry, optics, photography, remote sensing, radar imaging, radiometry, silver halide, ultrasound imaging, photoacoustic imaging, thermal imaging, visual perception, and various printing technologies. = = methodologies = = acoustic imaging coherent imaging uses an active coherent illumination source, such as in radar, synthetic aperture radar ( sar ), medical ultrasound and optical coherence tomography ; non - coherent imaging systems include fluorescent microscopes, optical microscopes, and telescopes. chemical imaging, the simultaneous measurement of spectra and pictures digital imaging, creating digital images, generally by scanning or through digital photography disk image, a file which contains the exact content of a data storage medium document imaging, replicating documents commonly generation of direct current in zigzag carbon nanotubes due to harmonic mixing of two coherent electromagnetic waves is being considered. the electromagnetic waves have commensurate frequencies of omega and two omega. the rectification of the waves at high frequencies is quite smooth whiles at low frequencies there are some fluctuations. the nonohmicity observed in the i - vcharacteristics is attributed to the nonparabolicity of the electron energy band which is very strong in carbon nanotubes because of high stark component. it is observed that the current falls off faster at lower electric field than the case in superlattice. for omega tau equal to two? 
the external electric field strength emax for the observation of negative differential conductivity occurs around 1. 03x10e6 v / m which is quite weak. it is interesting to note that the peak of the curve shifts to the left with increasing value of omega tau? the connection between the quantum frequency of radiation by the transition of the electron from orbit n to orbit k and frequencies of circling of electron in these orbits for the atom of hydrogen is determined. stations located in places like light poles or building roofs. in the past, 4g networking had to rely on large cell towers in order to transmit signals over large distances. with the introduction of 5g networking, it is imperative that small cell stations are used because the mm wave spectrum, which is the specific type of band used in 5g services, strictly travels over short distances. if the distances between cell stations were longer, signals may suffer from interference from inclimate weather, or other objects such as houses, buildings, trees, and much more. in 5g networking, there are 3 main kinds of 5g : low - band, mid - band, and high - band. low - band frequencies operate below 2 ghz, mid - band frequencies operate between 2 – 10 ghz, and high - band frequencies operate between 20 and 100 ghz. verizon have seen outrageous numbers on their high - band 5g service, which they deem " ultraband ", which hit speeds of over 3 gbit / s. the main advantage of 5g networks is that the data transmission rate is much higher than the previous cellular network, up to 10 gbit / s, which is faster than the current wired internet and 100 times faster than the previous 4g lte cellular network. another advantage is lower network latency ( faster response time ), less than 1 millisecond, and 4g is 30 - 70 milliseconds. the peak rate needs to reach the gbit / s standard to meet the high data volume of high - definition video, virtual reality and so on. the air interface delay level needs to be around 1ms, which meets real - time applications such as autonomous driving and telemedicine. large network capacity, providing the connection capacity of 100 billion devices to meet iot communication. the spectrum efficiency is 10 times higher than lte. with continuous wide area coverage and high mobility, the user experience rate reaches 100 mbit / s. the flow density and the number of connections are greatly increased. since 5g is a relatively new type of service, only phones which are newly released or are upcoming can support 5g service. some of these phones include the iphone 12 / 13 ; select samsung devices such as the s21 series, note series, flip / fold series, a series ; google pixel 4a / 5 ; and a few more devices from other manufacturers. the first ever 5g smartphone, the samsung galaxy s20, was released by samsung in march 2020. following the release of samsung ' s s the relations among the components of the exit momenta of ultrarelativistic electrons scattered on a strong electromagnetic wave of a low ( optical ) frequency and linear polarization are established using the exact solutions to the equations of motion with radiation reaction included ( the landau - lifshitz equation ). it is found that the momentum components of the electrons traversed the electromagnetic wave depend weakly on the initial values of the momenta. these electrons are mostly scattered at the small angles to the direction of propagation of the electromagnetic wave. 
the maximum lorentz factor of the electrons crossed the electromagnetic wave is proportional to the work done by the electromagnetic field and is independent of the initial momenta. the momentum component parallel to the electric field strength vector of the electromagnetic wave is determined only by the diameter of the laser beam measured in the units of the classical electron radius. as for the reflected electrons, they for the most part lose the energy, but remain relativistic. there is a reflection law for these electrons that relates the incident and the reflection angles and is independent of any parameters. radio is the technology of communicating using radio waves. radio waves are electromagnetic waves of frequency between 3 hertz ( hz ) and 300 gigahertz ( ghz ). they are generated by an electronic device called a transmitter connected to an antenna which radiates the waves. they can be received by other antennas connected to a radio receiver ; this is the fundamental principle of radio communication. in addition to communication, radio is used for radar, radio navigation, remote control, remote sensing, and other applications. in radio communication, used in radio and television broadcasting, cell phones, two - way radios, wireless networking, and satellite communication, among numerous other uses, radio waves are used to carry information across space from a transmitter to a receiver, by modulating the radio signal ( impressing an information signal on the radio wave by varying some aspect of the wave ) in the transmitter. in radar, used to locate and track objects like aircraft, ships, spacecraft and missiles, a beam of radio waves emitted by a radar transmitter reflects off the target object, and the reflected waves reveal the object ' s location to a receiver that is typically colocated with the transmitter. in radio navigation systems such as gps and vor, a mobile navigation instrument receives radio signals from multiple navigational radio beacons whose position is known, and by precisely measuring the arrival time of the radio waves the receiver can calculate its position on earth. in wireless radio remote control devices like drones, garage door openers, and keyless entry systems, radio signals transmitted from a controller device control the actions of a remote device. the existence of radio waves was first proven by german physicist heinrich hertz on 11 november 1886. in the mid - 1890s, building on techniques physicists were using to study electromagnetic waves, italian physicist guglielmo marconi developed the first apparatus for long - distance radio communication, sending a wireless morse code message to a recipient over a kilometer away in 1895, and the first transatlantic signal on 12 december 1901. the first commercial radio broadcast was transmitted on 2 november 1920, when the live returns of the harding - cox presidential election were broadcast by westinghouse electric and manufacturing company in pittsburgh, under the call sign kdka. the emission of radio waves is regulated by law, coordinated by the international telecommunication union ( itu ), which allocates frequency bands in the radio spectrum for various uses. = = etymology = = the word radio is derived from the latin word radius, meaning " spoke of a wheel, beam of light, ray. " it was first the curvature radiation is applied to the explain the circular polarization of frbs. significant circular polarization is reported in both apparently non - repeating and repeating frbs. 
curvature radiation can produce significant circular polarization at the wing of the radiation beam. in the curvature radiation scenario, in order to see significant circular polarization in frbs ( 1 ) more energetic bursts, ( 2 ) burst with electrons having higher lorentz factor, ( 3 ) a slowly rotating neutron star at the centre are required. different rotational period of the central neutron star may explain why some frbs have high circular polarization, while others don ' t. considering possible difference in refractive index for the parallel and perpendicular component of electric field, the position angle may change rapidly over the narrow pulse window of the radiation beam. the position angle swing in frbs may also be explained by this non - geometric origin, besides that of the rotating vector model. an orthotropic metamaterial is composed of elements arrayed periodically in space. the element includes two cuboid structures. the first structure is the basic structure of the element, and the second structure is the transformation of the first structure of the element. the first structure of the element is a cuboid structure composed of 24 bars connected by 8 nodes, and the second structure of the element is a cuboid structure composed of 36 bars connected by 14 nodes. this metamaterial has 6 independent elastic constants, so there is a large degree of freedom in material design. using a simple universal design method, a metamaterial with tailored elastic constants can be designed. therefore, it has great application value in the fields of mechanical metamaterials, elastic wave metamaterials, acoustic metamaterials, and seismic metamaterials, and has also laid the foundation for realizing the dream of controlling elastic waves, acoustic waves and vibrations. harding - cox presidential election. = = technology = = radio waves are radiated by electric charges undergoing acceleration. they are generated artificially by time - varying electric currents, consisting of electrons flowing back and forth in a metal conductor called an antenna. as they travel farther from the transmitting antenna, radio waves spread out so their signal strength ( intensity in watts per square meter ) decreases ( see inverse - square law ), so radio transmissions can only be received within a limited range of the transmitter, the distance depending on the transmitter power, the antenna radiation pattern, receiver sensitivity, background noise level, and presence of obstructions between transmitter and receiver. an omnidirectional antenna transmits or receives radio waves in all directions, while a directional antenna transmits radio waves in a beam in a particular direction, or receives waves from only one direction. radio waves travel at the speed of light in vacuum and at slightly lower velocity in air. the other types of electromagnetic waves besides radio waves, infrared, visible light, ultraviolet, x - rays and gamma rays, can also carry information and be used for communication. the wide use of radio waves for telecommunication is mainly due to their desirable propagation properties stemming from their longer wavelength. radio waves have the ability to pass through the atmosphere in any weather, foliage, and at longer wavelengths through most building materials. by diffraction, longer wavelengths can bend around obstructions, and unlike other electromagnetic waves they tend to be scattered rather than absorbed by objects larger than their wavelength. 
= = radio communication = = in radio communication systems, information is carried across space using radio waves. at the sending end, the information to be sent is converted by some type of transducer to a time - varying electrical signal called the modulation signal. the modulation signal may be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal consisting of a sequence of bits representing binary data from a computer. the modulation signal is applied to a radio transmitter. in the transmitter, an electronic oscillator generates an alternating current oscillating at a radio frequency, called the carrier wave because it serves to generate the radio waves that carry the information through the air. the modulation signal is used to modulate the carrier, varying some aspect of the carrier wave, impressing the information in the modulation signal onto the carrier. different radio systems use different modulation methods : amplitude modulation ( am ) – in an am transmitter, the amplitude ( strength ) of the radio carrier wave is varied by the modulation radio waves. the radio waves carry the information to the receiver location. at the receiver, the radio wave induces a tiny oscillating voltage in the receiving antenna – a weaker replica of the current in the transmitting antenna. this voltage is applied to the radio receiver, which amplifies the weak radio signal so it is stronger, then demodulates it, extracting the original modulation signal from the modulated carrier wave. the modulation signal is converted by a transducer back to a human - usable form : an audio signal is converted to sound waves by a loudspeaker or earphones, a video signal is converted to images by a display, while a digital signal is applied to a computer or microprocessor, which interacts with human users. the radio waves from many transmitters pass through the air simultaneously without interfering with each other because each transmitter ' s radio waves oscillate at a different frequency, measured in hertz ( hz ), kilohertz ( khz ), megahertz ( mhz ) or gigahertz ( ghz ). the receiving antenna typically picks up the radio signals of many transmitters. the receiver uses tuned circuits to select the radio signal desired out of all the signals picked up by the antenna and reject the others. a tuned circuit acts like a resonator, similar to a tuning fork. it has a natural resonant frequency at which it oscillates. the resonant frequency of the receiver ' s tuned circuit is adjusted by the user to the frequency of the desired radio station ; this is called tuning. the oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. radio signals at other frequencies are blocked by the tuned circuit and not passed on. = = = bandwidth = = = a modulated radio wave, carrying an information signal, occupies a range of frequencies. the information in a radio signal is usually concentrated in narrow frequency bands called sidebands ( sb ) just above and below the carrier frequency. the width in hertz of the frequency range that the radio signal occupies, the highest frequency minus the lowest frequency, is called its bandwidth ( bw ). 
for any given signal - to - noise ratio, a given bandwidth can carry the same amount of information regardless of where in the radio frequency spectrum it is located ; bandwidth is a measure of information - carrying capacity. the bandwidth required by a radio transmission depends on the data rate of Question: Light waves are arranged in the electromagnetic spectrum by A) wavelength and brightness. B) speed and color. C) brightness and color. D) wavelength and frequency.
D) wavelength and frequency.
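The radio passages above order electromagnetic waves by frequency, and the answer points out that the spectrum is arranged by wavelength and frequency; the two are tied together by lambda = c / f. The short sketch below converts the 5G band frequencies quoted in the context (low band below 2 GHz, mid band 2 to 10 GHz, high band 20 to 100 GHz) into free-space wavelengths, which also shows why the high band is described as millimetre wave. Only the band frequencies come from the passage; the rest is unit conversion.

```python
# Free-space wavelength from frequency: lambda = c / f.

C = 299_792_458.0  # speed of light in m/s

def wavelength_m(frequency_hz):
    """Free-space wavelength in metres for a given frequency in hertz."""
    return C / frequency_hz

for name, f_hz in [("low-band edge, 2 GHz", 2e9),
                   ("mid-band edge, 10 GHz", 10e9),
                   ("high-band edge, 100 GHz", 100e9)]:
    print(f"{name}: {wavelength_m(f_hz) * 1000:.1f} mm")
```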
Context: scientists look through telescopes, study images on electronic screens, record meter readings, and so on. generally, on a basic level, they can agree on what they see, e. g., the thermometer shows 37. 9 degrees c. but, if these scientists have different ideas about the theories that have been developed to explain these basic observations, they may disagree about what they are observing. for example, before albert einstein ' s general theory of relativity, observers would have likely interpreted an image of the einstein cross as five different objects in space. in light of that theory, however, astronomers will tell you that there are actually only two objects, one in the center and four different images of a second object around the sides. alternatively, if other scientists suspect that something is wrong with the telescope and only one object is actually being observed, they are operating under yet another theory. observations that cannot be separated from theoretical interpretation are said to be theory - laden. all observation involves both perception and cognition. that is, one does not make an observation passively, but rather is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. therefore, observations are affected by one ' s underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. in this sense, it can be argued that all observation is theory - laden. = = = the purpose of science = = = should science aim to determine ultimate truth, or are there questions that science cannot answer? scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. conversely, scientific anti - realists argue that science does not aim ( or at least does not succeed ) at truth, especially truth about unobservables like electrons or other universes. instrumentalists argue that scientific theories should only be evaluated on whether they are useful. in their view, whether theories are true or not is beside the point, because the purpose of science is to make predictions and enable effective technology. realists often point to the success of recent scientific theories as evidence for the truth ( or near truth ) of current theories. antirealists point to either the many false theories in the history of science, epistemic morals, the success of false modeling assumptions, or widely termed postmodern criticisms of objectivity as evidence against scientific realism. antirealists attempt to explain the success of scientific theories without reference to truth. some antirealists claim that scientific designates the relationship between two or more variables. conceptual definition : description of a concept by relating it to other concepts. operational definition : details in regards to defining the variables and how they will be measured / assessed in the study. gathering of data : consists of identifying a population and selecting samples, gathering information from or about these samples by using specific research instruments. the instruments used for data collection must be valid and reliable. analysis of data : involves breaking down the individual pieces of data to draw conclusions about it. data interpretation : this can be represented through tables, figures, and pictures, and then described in words. 
test, revising of hypothesis conclusion, reiteration if necessary a common misconception is that a hypothesis will be proven ( see, rather, null hypothesis ). generally, a hypothesis is used to make predictions that can be tested by observing the outcome of an experiment. if the outcome is inconsistent with the hypothesis, then the hypothesis is rejected ( see falsifiability ). however, if the outcome is consistent with the hypothesis, the experiment is said to support the hypothesis. this careful language is used because researchers recognize that alternative hypotheses may also be consistent with the observations. in this sense, a hypothesis can never be proven, but rather only supported by surviving rounds of scientific testing and, eventually, becoming widely thought of as true. a useful hypothesis allows prediction and within the accuracy of observation of the time, the prediction will be verified. as the accuracy of observation improves with time, the hypothesis may no longer provide an accurate prediction. in this case, a new hypothesis will arise to challenge the old, and to the extent that the new hypothesis makes more accurate predictions than the old, the new will supplant it. researchers can also use a null hypothesis, which states no relationship or difference between the independent or dependent variables. = = = research in the humanities = = = research in the humanities involves different methods such as for example hermeneutics and semiotics. humanities scholars usually do not search for the ultimate correct answer to a question, but instead, explore the issues and details that surround it. context is always important, and context can be social, historical, political, cultural, or ethnic. an example of research in the humanities is historical research, which is embodied in historical method. historians use primary sources and other evidence to systematically investigate a topic, and then to write histories in the form of accounts of the past. other studies aim to merely examine the occurrence of behaviours in societies and communities options ( e. g., voting behavior, choice of a punishment for another participant ). reaction time. the time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. for example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing. psychophysical responses. psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. they typically involve making judgments of some physical property, e. g. the loudness of a sound. correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. some examples include : sameness judgments for colors, tones, textures, etc. threshold differences for colors, tones, textures, etc. eye tracking. this methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. the fixation point of the eyes is linked to an individual ' s focus of attention. thus, by monitoring eye movements, we can study what information is being processed at a given time. eye tracking allows us to study cognitive processes on extremely short time scales. 
eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed. = = = brain imaging = = = brain imaging involves analyzing activity within the brain while performing various tasks. this allows us to link behavior and brain function to help understand how information is processed. different types of imaging techniques vary in their temporal ( time - based ) and spatial ( location - based ) resolution. brain imaging is often used in cognitive neuroscience. single - photon emission computed tomography and positron emission tomography. spect and pet use radioactive isotopes, which are injected into the subject ' s bloodstream and taken up by the brain. by observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. pet has similar spatial resolution to fmri, but it has extremely poor temporal resolution. electroencephalography. eeg measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. this technique has an extremely high temporal resolution, but a relatively poor spatial resolution. functional magnetic resonance imaging. fmri measures the relative amount of oxygenated blood flowing to different parts of the brain. more oxygen behavioral responses to different stimuli, one can understand something about how those stimuli are processed. lewandowski & strohmetz ( 2009 ) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice. behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present ( e. g., litter in a parking lot or readings on an electric meter ). behavioral observations involve the direct witnessing of the actor engaging in the behavior ( e. g., watching how close a person sits next to another person ). behavioral choices are when a person selects between two or more options ( e. g., voting behavior, choice of a punishment for another participant ).
other studies aim to merely examine the occurrence of behaviours in societies and communities, without particularly looking for reasons or motivations to explain these. these studies may be qualitative or quantitative, and can use a variety of approaches, such as queer theory or feminist theory. = = = artistic research = = = artistic research, also seen as ' practice - based research ', can take form when the theory outright...
lakatos sought to reconcile the rationalism of popperian falsificationism with what seemed to be its own refutation by history ". many philosophers have tried to solve the problem of demarcation in the following terms : a statement constitutes knowledge if sufficiently many people believe it sufficiently strongly. but the history of thought shows us that many people were totally committed to absurd beliefs. if the strengths of beliefs were a hallmark of knowledge, we should have to rank some tales about demons, angels, devils, and of heaven and hell as knowledge. scientists, on the other hand, are very sceptical even of their best theories. newton ' s is the most powerful theory science has yet produced, but newton himself never believed that bodies attract each other at a distance. so no degree of commitment to beliefs makes them knowledge. indeed, the hallmark of scientific behaviour is a certain scepticism even towards one ' s most cherished theories. blind commitment to a theory is not an intellectual virtue : it is an intellectual crime. thus a statement may be pseudoscientific even if it is eminently ' plausible ' and everybody believes in it, and it may be scientifically valuable even if it is unbelievable and nobody believes in it. a theory may even be of supreme scientific value even if no one understands it, let alone believes in it. the boundary between science and pseudoscience is disputed and difficult to determine analytically, even after more than a century of study by philosophers of science and scientists, and despite some basic agreements on the fundamentals of the scientific method. the concept of pseudoscience rests on an understanding that the scientific method has been misrepresented or misapplied with respect to a given theory, but many philosophers of science maintain that different kinds of methods are held as appropriate across different fields and different eras of human history. according to lakatos, the typical descriptive unit of great scientific achievements is not an isolated hypothesis but " a powerful problem - solving machinery, which, with the help of sophisticated mathematical techniques, digests anomalies and even turns them into positive evidence ". to popper, pseudoscience uses induction to generate theories, and only performs experiments to seek to verify them. to popper, falsifiability is what determines the scientific status of a theory. taking a historical approach, kuhn observed that scientists did not follow popper ' s rule, and might ignore falsifying data, unless overwhelming. to kuhn, puzzle - solving within a prediction and observational evidence for the mass of a dark matter particle are presented.. invited contribution to annalen der physik ( expert opinion ). of beliefs. an observation of a transit of venus requires a huge range of auxiliary beliefs, such as those that describe the optics of telescopes, the mechanics of the telescope mount, and an understanding of celestial mechanics. if the prediction fails and a transit is not observed, that is likely to occasion an adjustment in the system, a change in some auxiliary assumption, rather than a rejection of the theoretical system. according to the duhem – quine thesis, after pierre duhem and w. v. quine, it is impossible to test a theory in isolation. one must always add auxiliary hypotheses in order to make testable predictions. for example, to test newton ' s law of gravitation in the solar system, one needs information about the masses and positions of the sun and all the planets. 
famously, the failure to predict the orbit of uranus in the 19th century led not to the rejection of newton ' s law but rather to the rejection of the hypothesis that the solar system comprises only seven planets. the investigations that followed led to the discovery of an eighth planet, neptune. if a test fails, something is wrong. but there is a problem in figuring out what that something is : a missing planet, badly calibrated test equipment, an unsuspected curvature of space, or something else. one consequence of the duhem – quine thesis is that one can make any theory compatible with any empirical observation by the addition of a sufficient number of suitable ad hoc hypotheses. karl popper accepted this thesis, leading him to reject naive falsification. instead, he favored a " survival of the fittest " view in which the most falsifiable scientific theories are to be preferred. = = = anything goes methodology = = = paul feyerabend ( 1924 – 1994 ) argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception - free methodological rules governing the progress of science. he argued that " the only principle that does not inhibit progress is : anything goes ". feyerabend said that science started as a liberating movement, but that over time it had become increasingly dogmatic and rigid and had some oppressive features, and thus had become increasingly an ideology. because of this, he said it was impossible to come up with an unambiguous way to distinguish science from religion, magic, or mythology. he saw the exclusive dominance of science as a means of directing society as here are a few random thoughts on the interpretations of the quantum double slit experiment, the mach zehnder experiment, the delayed - choice experiment and the measurement problem. Question: In order to distinguish fact from opinion, conclusions in experiments should be A) recorded on a computer. B) presented in bar graphs. C) based on verifiable data. D) organized in a table.
C) based on verifiable data.
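The cognitive-psychology passage above notes that if reaction times in a search task grow in proportion to the number of display elements, the search is likely serial rather than parallel. A minimal sketch of how that inference is usually drawn: fit a straight line to reaction time against set size and read off the slope in milliseconds per item. The reaction-time data below are hypothetical and only illustrate the shape of the argument.

```python
# Fit an ordinary least-squares line to reaction time vs. set size; a clearly
# positive slope (extra milliseconds per added item) suggests serial search,
# while a near-zero slope is consistent with parallel processing.

def slope_ms_per_item(set_sizes, rts_ms):
    """Least-squares slope of reaction time (ms) against display set size."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(rts_ms) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, rts_ms))
    den = sum((x - mean_x) ** 2 for x in set_sizes)
    return num / den

sizes = [2, 4, 8, 16]                  # hypothetical display sizes
serial_like = [430, 480, 590, 810]     # RT climbs with set size
parallel_like = [420, 425, 430, 428]   # RT stays roughly flat

print(slope_ms_per_item(sizes, serial_like))    # ~27 ms per item: serial-like
print(slope_ms_per_item(sizes, parallel_like))  # ~0.5 ms per item: parallel-like
```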
Context: geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather.
atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s on the basis of laboratory simulation a mechanism is established for the formation of the upper mantle convection spiral plumes from a hot point in the presence of a roll - type large - scale convective flow. the observed plume has horizontal sections near the upper limit, which may lead to the formation of chains of volcanic islands. the origins of the series of european cosmic - ray symposia are briefly described. the first meeting in the series, on hadronic interactions and extensive air showers, held in lodz, poland in 1968, was attended by the author : some memories are recounted. the aim of this note is to prove the analogue of poincar \ ' e duality in the chiral hodge cohomology. we reply to the comment arxiv : quant - ph / 0702060 on our letter arxiv : quant - ph / 0603120 [ phys. rev. lett. 96, 100402 ( 2006 ) ] we develop a structure theory for nilpotent symplectic alternating algebras. cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β€” created by the internal motions of the core β€” produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. 
applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make a solitary millisecond pulsar, if near the mass limit, and undergoing a phase transition, either first or second order, provided the transition is to a substantially more compressible phase, will emit a blatantly obvious signal - - - spontaneous spin - up. normally a pulsar spins down by angular momentum loss to radiation. the signal is trivial to detect and is estimated to be ` ` on ' ' for 1 / 50 of the spin - down era of millisecond pulsars. presently about 25 solitary millisecond pulsars are known. the phenomenon is analogous to ` ` backbending ' ' observed in high spin nuclei in the 1970 ' s. Question: Pumice is formed when lava from a volcano cools. Which rock type is pumice? A) Gaseous rock B) Igneous rock C) Sedimentary rock D) Metamorphic rock
B) Igneous rock
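as an aside on the atmospheric composition quoted in the context above, the stated figures ( 78. 0 % nitrogen, 20. 9 % oxygen, 0. 92 % argon ) leave only a small remainder for co2, water vapor and the other trace gases. a minimal python sketch of that arithmetic, using only the percentages given in the text:

    # percentages by volume stated in the context above
    major_gases = {"nitrogen": 78.0, "oxygen": 20.9, "argon": 0.92}

    accounted = sum(major_gases.values())
    trace = 100.0 - accounted  # remainder left for co2, water vapor and other trace gases

    print(f"major gases account for {accounted:.2f} %")
    print(f"remaining {trace:.2f} % is co2, water vapor and other trace gases")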
Context: variation in total solar irradiance is thought to have little effect on the earth ' s surface temperature because of the thermal time constant - - the characteristic response time of the earth ' s global surface temperature to changes in forcing. this time constant is large enough to smooth annual variations but not necessarily variations having a longer period such as those due to solar inertial motion ; the magnitude of these surface temperature variations is estimated. have evolved from the earliest emergence of life to present day. earth formed about 4. 5 billion years ago and all life on earth, both living and extinct, descended from a last universal common ancestor that lived about 3. 5 billion years ago. geologists have developed a geologic time scale that divides the history of the earth into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became all christian authors held that the earth was round. athenagoras, an eastern christian writing around the year 175 ad, said that the earth was spherical. methodius ( c. 290 ad ), an eastern christian writing against " the theory of the chaldeans and the egyptians " said : " let us first lay bare... the theory of the chaldeans and the egyptians. they say that the circumference of the universe is likened to the turnings of a well - rounded globe, the earth being a central point. they say that since its outline is spherical,... the earth should be the center of the universe, around which the heaven is whirling. 
" arnobius, another eastern christian writing sometime around 305 ad, described the round earth : " in the first place, indeed, the world itself is neither right nor left. it has neither upper nor lower regions, nor front nor back. for whatever is round and bounded on every side by the circumference of a solid sphere, has no beginning or end... " other advocates of a round earth included eusebius, hilary of poitiers, irenaeus, hippolytus of rome, firmicus maternus, ambrose, jerome, prudentius, favonius eulogius, and others. the only exceptions to this consensus up until the mid - fourth century were theophilus of antioch and lactantius, both of whom held anti - hellenistic views and associated the round - earth view with pagan cosmology. lactantius, a western christian writer and advisor to the first christian roman emperor, constantine, writing sometime between 304 and 313 ad, ridiculed the notion of antipodes and the philosophers who fancied that " the universe is round like a ball. they also thought that heaven revolves in accordance with the motion of the heavenly bodies.... for that reason, they constructed brass globes, as though after the figure of the universe. " the influential theologian and philosopher saint augustine, one of the four great church fathers of the western church, similarly objected to the " fable " of antipodes : but as to the fable that there are antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours that is on no ground credible. and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture ##sphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. 
beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β€” created by the internal motions of the core β€” produces the magnetosphere which protects earth ' outer satellites of the planets have distant, eccentric orbits that can be highly inclined or even retrograde relative to the equatorial planes of their planets. 
these irregular orbits cannot have formed by circumplanetary accretion and are likely products of early capture from heliocentric orbit. the irregular satellites may be the only small bodies remaining which are still relatively near their formation locations within the giant planet region. the study of the irregular satellites provides a unique window on processes operating in the young solar system and allows us to probe possible planet formation mechanisms and the composition of the solar nebula between the rocky objects in the main asteroid belt and the very volatile rich objects in the kuiper belt. the gas and ice giant planets all appear to have very similar irregular satellite systems irrespective of their mass or formation timescales and mechanisms. water ice has been detected on some of the outer satellites of saturn and neptune whereas none has been observed on jupiter ' s outer satellites. earth science or geoscience includes all fields of natural science related to the planet earth. this is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of earth ' s four spheres : the biosphere, hydrosphere / cryosphere, atmosphere, and geosphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. 
areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off .... for that reason, they constructed brass globes, as though after the figure of the universe. " the influential theologian and philosopher saint augustine, one of the four great church fathers of the western church, similarly objected to the " fable " of antipodes : but as to the fable that there are antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours that is on no ground credible. and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture, on the ground that the earth is suspended within the concavity of the sky, and that it has as much room on the one side of it as on the other : hence they say that the part that is beneath must also be inhabited. 
but they do not remark that, although it be supposed or scientifically demonstrated that the world is of a round and spherical form, yet it does not follow that the other side of the earth is bare of water ; nor even, though it be bare, does it immediately follow that it is peopled. for scripture, which proves the truth of its historical statements by the accomplishment of its prophecies, gives no false information ; and it is too absurd to say, that some men might have taken ship and traversed the whole wide ocean, and crossed from this side of the world to the other, and that thus even the inhabitants of that distant region are descended from that one first man. some historians do not view augustine ' s scriptural commentaries as endorsing any particular cosmological model, endorsing instead the view that augustine shared the common view of his contemporaries that the earth is spherical, in line with his endorsement of science in de genesi ad litteram. c. p. e. nothaft, responding to writers like leo ferrari who described augustine as endorsing a flat earth, says that "... other recent writers on the subject treat augustine ' s acceptance of the earth ' s spherical shape as a well - established fact ". while it always remained a minority view, from the mid - fourth to the seventh centuries ad, the flat - earth view experienced a revival, around the time when diodorus of tarsus founded the exegetical school known as the school of antioch, which sought to counter what he saw as the pagan cosmology of the greeks with a return to the traditional cosmology. the writings ##hosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. 
hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere Question: The principle of uniformitarianism states that most of the landscape of Earth was formed slowly and over a long period of time. Which occurrence of Earth is least supported by this principle? A) soil development B) volcanic eruption C) plate movement D) fossil formation
B) volcanic eruption
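the thermal time constant invoked at the start of the context above behaves like a first - order low - pass filter on radiative forcing: variations much shorter than the time constant are strongly damped, while slower variations pass through largely unattenuated. a minimal python sketch of that standard attenuation factor ( the time constant and the forcing periods below are assumed, illustrative values, not figures from the text ):

    import math

    def attenuation(period_years, time_constant_years):
        # amplitude ratio of a first-order system driven sinusoidally at the given period
        omega_tau = 2.0 * math.pi * time_constant_years / period_years
        return 1.0 / math.sqrt(1.0 + omega_tau ** 2)

    tau = 5.0  # assumed illustrative time constant in years, not a value from the text
    for period in (1.0, 11.0, 100.0):  # annual cycle, a solar-cycle-like period, a longer period
        print(f"forcing period {period:6.1f} yr -> amplitude retained {attenuation(period, tau):.2f}")

with these assumed numbers the annual cycle is almost entirely smoothed out while the century - scale variation is barely attenuated, which is the qualitative point made in the context.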
Context: best - known and controversial applications of genetic engineering is the creation and use of genetically modified crops or genetically modified livestock to produce genetically modified food. crops have been developed to increase production, increase tolerance to abiotic stresses, alter the composition of the food, or to produce novel products. the first crops to be released commercially on a large scale provided protection from insect pests or tolerance to herbicides. fungal and virus resistant crops have also been developed or are in development. this makes the insect and weed management of crops easier and can indirectly increase crop yield. gm crops that directly improve yield by accelerating growth or making the plant more hardy ( by improving salt, cold or drought tolerance ) are also under development. in 2016 salmon have been genetically modified with growth hormones to reach normal adult size much faster. gmos have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. the amflora potato produces a more industrially useful blend of starches. soybeans and canola have been genetically modified to produce more healthy oils. the first commercialised gm food was a tomato that had delayed ripening, increasing its shelf life. plants and animals have been engineered to produce materials they do not normally make. pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves ; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the fda approved a drug produced in goat milk. = = = other applications = = = genetic engineering has potential applications in conservation and natural area management. gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. with the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice. genetic engineering is also being used to create microbial art. some bacteria have been genetically engineered to create black and white photographs. novelty items such as lavender - colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering. = = regulation = = the regulation of genetic engineering , subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. 
this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non - mendelian. chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. = = = molecular genetics = = = a considerable amount of new knowledge about plant function comes from often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. 
both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like on a large scale provided protection from insect pests or tolerance to herbicides. fungal and virus resistant crops have also been developed or are in development. this makes the insect and weed management of crops easier and can indirectly increase crop yield. gm crops that directly improve yield by accelerating growth or making the plant more hardy ( by improving salt, cold or drought tolerance ) are also under development. in 2016 salmon have been genetically modified with growth hormones to reach normal adult size much faster. gmos have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. the amflora potato produces a more industrially useful blend of starches. soybeans and canola have been genetically modified to produce more healthy oils. the first commercialised gm food was a tomato that had delayed ripening, increasing its shelf life. plants and animals have been engineered to produce materials they do not normally make. pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves ; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the fda approved a drug produced in goat milk. = = = other applications = = = genetic engineering has potential applications in conservation and natural area management. gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. with the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice. genetic engineering is also being used to create microbial art. some bacteria have been genetically engineered to create black and white photographs. novelty items such as lavender - colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering. = = regulation = = the regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of gmos. the development of a regulatory framework began in 1975, at asilomar, california. 
the asilomar meeting recommended a set of voluntary guidelines regarding the use of recombinant technology. as the technology improved tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non - mendelian. chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. = = = molecular genetics = = = a considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the thale cress, arabidopsis thaliana, a weedy species in the mustard family ( brassicaceae ). the genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of dna, forming one of the inherited traits such as shape in pisum sativum ( peas ). what mendel learned from studying plants has had far - reaching benefits outside of botany. similarly, " jumping genes " were discovered by barbara mcclintock while she was studying maize. nevertheless, there are some distinctive genetic differences between plants and other organisms. species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. a familiar example is peppermint, mentha Γ— piperita, a sterile hybrid between mentha aquatica and spearmint, mentha spicata. the many cultivated varieties of wheat are the result of multiple inter - and intra - specific crosses between wild species and their hybrids. angiosperms with monoecious flowers often have self - incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. 
this is one of several methods used by plants to promote outcrossing. in many land plants the male and female gametes are produced by separate individuals. these species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. charles darwin in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one new crop traits as well as a far greater control over a food ' s genetic structure than previously afforded by methods such as selective breeding and mutation breeding. commercial sale of genetically modified foods began in 1994, when calgene first marketed its flavr savr delayed ripening tomato. to date most genetic modification of foods have primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. these have been engineered for resistance to pathogens and herbicides and better nutrient profiles. gm livestock have also been experimentally developed ; in november 2013 none were available on the market, but in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. 
crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in india and other countries. = = = industrial = = = industrial biotechnology ( known mainly in europe as white biotechnology ) is the application of biotechnology for industrial purposes, including industrial fermentation. it includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non - mendelian. chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. = = = molecular genetics = = = a considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the thale cress, arabidopsis thaliana, a weedy species in the mustard family ( brassicaceae ). the genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of dna, forming one of the smallest genomes among flowering plants. arabidopsis was the first plant to have its genome sequenced, in 2000. the sequencing of some other relatively small genomes, of rice ( oryza sativa ) and brachypodium distachyon, has made them important model species for understanding the genetics, of several methods used by plants to promote outcrossing. in many land plants the male and female gametes are produced by separate individuals. 
these species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. charles darwin in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent . species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. a familiar example is peppermint, mentha Γ— piperita, a sterile hybrid between mentha aquatica and spearmint, mentha spicata. the many cultivated varieties of wheat are the result of multiple inter - and intra - specific crosses between wild species and their hybrids. angiosperms with monoecious flowers often have self - incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. this is one of several methods used by plants to promote outcrossing. in many land plants the male and female gametes are produced by separate individuals. these species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. 
charles darwin in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in Question: Selective breeding has resulted in plants that are resistant to pests and produce a higher yield of fruits and vegetables. Which of these is the most likely disadvantage that can result from this process? A) decreased genetic diversity B) habitat destruction C) overpopulation D) increased erosion
A) decreased genetic diversity
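the inbreeding - depression argument in the context above rests on a standard population - genetics result: under strict self - fertilisation the expected heterozygosity of a lineage is halved each generation, so previously masked recessive mutations are progressively exposed in homozygous form. a small python sketch of that halving ( the starting heterozygosity is an arbitrary illustrative value, not taken from the text ):

    def heterozygosity_after_selfing(h0, generations):
        # expected heterozygosity is halved in each generation of strict self-fertilisation
        return h0 * 0.5 ** generations

    h0 = 0.5  # arbitrary starting heterozygosity, purely for illustration
    for t in range(6):
        print(f"generation {t}: expected heterozygosity {heterozygosity_after_selfing(h0, t):.4f}")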
Context: armed with an astrolabe and kepler ' s laws one can arrive at accurate estimates of the orbits of planets. scientists look through telescopes, study images on electronic screens, record meter readings, and so on. generally, on a basic level, they can agree on what they see, e. g., the thermometer shows 37. 9 degrees c. but, if these scientists have different ideas about the theories that have been developed to explain these basic observations, they may disagree about what they are observing. for example, before albert einstein ' s general theory of relativity, observers would have likely interpreted an image of the einstein cross as five different objects in space. in light of that theory, however, astronomers will tell you that there are actually only two objects, one in the center and four different images of a second object around the sides. alternatively, if other scientists suspect that something is wrong with the telescope and only one object is actually being observed, they are operating under yet another theory. observations that cannot be separated from theoretical interpretation are said to be theory - laden. all observation involves both perception and cognition. that is, one does not make an observation passively, but rather is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. therefore, observations are affected by one ' s underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. in this sense, it can be argued that all observation is theory - laden. = = = the purpose of science = = = should science aim to determine ultimate truth, or are there questions that science cannot answer? scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. conversely, scientific anti - realists argue that science does not aim ( or at least does not succeed ) at truth, especially truth about unobservables like electrons or other universes. instrumentalists argue that scientific theories should only be evaluated on whether they are useful. in their view, whether theories are true or not is beside the point, because the purpose of science is to make predictions and enable effective technology. realists often point to the success of recent scientific theories as evidence for the truth ( or near truth ) of current theories. antirealists point to either the many false theories in the history of science, epistemic morals, the success of false modeling assumptions, or widely termed postmodern criticisms of objectivity as evidence against scientific realism. antirealists attempt to explain the success of scientific theories without reference to truth. some antirealists claim that scientific several thoughts are presented on the long ongoing difficulties both students and academics face related to calculus 101. some of these thoughts may have a more general interest. the hun tian theory ), or as being without substance while the heavenly bodies float freely ( the hsuan yeh theory ), the earth was at all times flat, although perhaps bulging up slightly. the model of an egg was often used by chinese astronomers such as zhang heng ( 78 – 139 ad ) to describe the heavens as spherical : the heavens are like a hen ' s egg and as round as a crossbow bullet ; the earth is like the yolk of the egg, and lies in the centre. 
this analogy with a curved egg led some modern historians, notably joseph needham, to conjecture that chinese astronomers were, after all, aware of the earth ' s sphericity. the egg reference, however, was rather meant to clarify the relative position of the flat earth to the heavens : in a passage of zhang heng ' s cosmogony not translated by needham, zhang himself says : " heaven takes its body from the yang, so it is round and in motion. earth takes its body from the yin, so it is flat and quiescent ". the point of the egg analogy is simply to stress that the earth is completely enclosed by heaven, rather than merely covered from above as the kai tian describes. chinese astronomers, many of them brilliant men by any standards, continued to think in flat - earth terms until the seventeenth century ; this surprising fact might be the starting - point for a re - examination of the apparent facility with which the idea of a spherical earth found acceptance in fifth - century bc greece. further examples cited by needham supposed to demonstrate dissenting voices from the ancient chinese consensus actually refer without exception to the earth being square, not to it being flat. accordingly, the 13th - century scholar li ye, who argued that the movements of the round heaven would be hindered by a square earth, did not advocate a spherical earth, but rather that its edge should be rounded off so as to be circular. however, needham disagrees, affirming that li ye believed the earth to be spherical, similar in shape to the heavens but much smaller. this was preconceived by the 4th - century scholar yu xi, who argued for the infinity of outer space surrounding the earth and that the latter could be either square or round, in accordance to the shape of the heavens. when chinese geographers of the 17th century, influenced by european cartography and astronomy, showed the earth as a sphere that could be circumnavigated by sailing around the globe, they to investigate the affinity of acetylated wood for organic liquids, yezo spruce wood specimens were acetylated with acetic anhydride, and their swelling in various liquids were compared to those of untreated specimens. the acetylated wood was rapidly and remarkably swollen in aprotic organic liquids such as benzene and toluene in which the untreated wood was swollen only slightly and / or very slowly. on the other hand, the swelling of wood in water, ethylene glycol and alcohols remained unchanged or decreased by the acetylation. consequently the maximum volume of wood swollen in organic liquids was always larger than that in water. the effect of acetylation on the maximum swollen volume of wood was greater in liquids having smaller solubility parameters. the easier penetration of aprotic organic liquids into the acetylated wood was considered to be due to the scission of hydrogen bonds among the amorphous wood constituents by the substitution of hydroxyl groups with hydrophobic acetyl groups. classes according to pore size : the form and shape of the membrane pores are highly dependent on the manufacturing process and are often difficult to specify. therefore, for characterization, test filtrations are carried out and the pore diameter refers to the diameter of the smallest particles which could not pass through the membrane. the rejection can be determined in various ways and provides an indirect measurement of the pore size. 
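A minimal sketch of the rejection measurement just described: the observed rejection of a test solute is usually computed as R = 1 - c_permeate / c_feed, and a nominal cut-off can then be read off as the smallest probe retained above some threshold (90 % is a common convention, assumed here; the dextran data below are hypothetical):

```python
def rejection(c_feed: float, c_permeate: float) -> float:
    """Observed rejection coefficient R = 1 - c_permeate / c_feed."""
    return 1.0 - c_permeate / c_feed

# Hypothetical dextran test data: molecular weight (kDa) -> (feed, permeate) concentration.
test_data = {
    4:   (1.00, 0.70),
    10:  (1.00, 0.40),
    40:  (1.00, 0.12),
    100: (1.00, 0.03),
}

for mw, (cf, cp) in sorted(test_data.items()):
    print(f"{mw:>4} kDa dextran: R = {rejection(cf, cp):.2f}")

# Nominal cut-off: smallest probe retained at >= 90 % (a common convention, assumed here).
cutoff = min(mw for mw, (cf, cp) in test_data.items() if rejection(cf, cp) >= 0.9)
print(f"nominal cut-off ~ {cutoff} kDa")
```

Real cut-off curves are measured over many probe sizes; this only illustrates the arithmetic behind the methods the passage goes on to list.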
one possibility is the filtration of macromolecules ( often dextran, polyethylene glycol or albumin ), another is measurement of the cut - off by gel permeation chromatography. these methods are used mainly to measure membranes for ultrafiltration applications. another testing method is the filtration of particles with defined size and their measurement with a particle sizer or by laser induced breakdown spectroscopy ( libs ). a vivid characterization is to measure the rejection of dextran blue or other colored molecules. the retention of bacteriophage and bacteria, the so - called " bacteria challenge test ", can also provide information about the pore size. to determine the pore diameter, physical methods such as porosimeter ( mercury, liquid - liquid porosimeter and bubble point test ) are also used, but a certain form of the pores ( such as cylindrical or concatenated spherical holes ) is assumed. such methods are used for membranes whose pore geometry does not match the ideal, and we get " nominal " pore diameter, which characterizes the membrane, but does not necessarily reflect its actual filtration behavior and selectivity. the selectivity is highly dependent on the separation process, the composition of the membrane and its electrochemical properties in addition to the pore size. with high selectivity, isotopes can be enriched ( uranium enrichment ) in nuclear engineering or industrial gases like nitrogen can be recovered ( gas separation ). ideally, even racemics can be enriched with a suitable membrane. when choosing membranes selectivity has priority over a high permeability, as low flows can easily be offset by increasing the filter surface with a modular structure. in gas phase filtration different deposition mechanisms are operative, so that particles having sizes below the pore size of the membrane can be retained as well. = = membrane classification = = bio - membrane is classified in two categories, synthetic membrane and natural membrane. synthetic membranes further classified in organic and inorganic membranes. organic membrane sub classified polymeric membranes and inorganic membrane sub classified ceramic polymers. = = synthesis of biomass membrane of imaging techniques vary in their temporal ( time - based ) and spatial ( location - based ) resolution. brain imaging is often used in cognitive neuroscience. single - photon emission computed tomography and positron emission tomography. spect and pet use radioactive isotopes, which are injected into the subject ' s bloodstream and taken up by the brain. by observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. pet has similar spatial resolution to fmri, but it has extremely poor temporal resolution. electroencephalography. eeg measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. this technique has an extremely high temporal resolution, but a relatively poor spatial resolution. functional magnetic resonance imaging. fmri measures the relative amount of oxygenated blood flowing to different parts of the brain. more oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. this allows us to localize particular functions within different brain regions. fmri has moderate spatial and temporal resolution. optical imaging. 
this technique uses infrared transmitters and receivers to measure the amount of light reflectance by blood near different areas of the brain. since oxygenated and deoxygenated blood reflects light by different amounts, we can study which areas are more active ( i. e., those that have more oxygenated blood ). optical imaging has moderate temporal resolution, but poor spatial resolution. it also has the advantage that it is extremely safe and can be used to study infants ' brains. magnetoencephalography. meg measures magnetic fields resulting from cortical activity. it is similar to eeg, except that it has improved spatial resolution since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in eeg is. meg uses squid sensors to detect tiny magnetic fields. = = = computational modeling = = = computational models require a mathematically and logically formal representation of a problem. computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. computational modeling can help us understand the functional organization of a particular cognitive phenomenon. approaches to cognitive modeling can be categorized as : ( 1 ) symbolic, on abstract mental functions of an intelligent mind by means of symbols ; ( 2 ) subsymbolic, on the neural and associa higher education and advanced scientific research lead to social, economic, and political development of any country. all developed societies like the current 2022 g7 countries : canada, france, germany, italy, japan, the uk, and the us have all not only heavily invested in higher education but also in advanced scientific research in their respective countries. similarly, for african countries to develop socially, economically, and politically, they must follow suit by massively investing in higher education and local scientific research. all christian authors held that the earth was round. athenagoras, an eastern christian writing around the year 175 ad, said that the earth was spherical. methodius ( c. 290 ad ), an eastern christian writing against " the theory of the chaldeans and the egyptians " said : " let us first lay bare... the theory of the chaldeans and the egyptians. they say that the circumference of the universe is likened to the turnings of a well - rounded globe, the earth being a central point. they say that since its outline is spherical,... the earth should be the center of the universe, around which the heaven is whirling. " arnobius, another eastern christian writing sometime around 305 ad, described the round earth : " in the first place, indeed, the world itself is neither right nor left. it has neither upper nor lower regions, nor front nor back. for whatever is round and bounded on every side by the circumference of a solid sphere, has no beginning or end... " other advocates of a round earth included eusebius, hilary of poitiers, irenaeus, hippolytus of rome, firmicus maternus, ambrose, jerome, prudentius, favonius eulogius, and others. the only exceptions to this consensus up until the mid - fourth century were theophilus of antioch and lactantius, both of whom held anti - hellenistic views and associated the round - earth view with pagan cosmology. 
lactantius, a western christian writer and advisor to the first christian roman emperor, constantine, writing sometime between 304 and 313 ad, ridiculed the notion of antipodes and the philosophers who fancied that " the universe is round like a ball. they also thought that heaven revolves in accordance with the motion of the heavenly bodies.... for that reason, they constructed brass globes, as though after the figure of the universe. " the influential theologian and philosopher saint augustine, one of the four great church fathers of the western church, similarly objected to the " fable " of antipodes : but as to the fable that there are antipodes, that is to say, men on the opposite side of the earth, where the sun rises when it sets to us, men who walk with their feet opposite ours that is on no ground credible. and, indeed, it is not affirmed that this has been learned by historical knowledge, but by scientific conjecture considered the father of modern neuroscience. from new zealand and australia came maurice wilkins, howard florey, and frank macfarlane burnet. others that did significant work include william williams keen, william coley, james d. watson ( united states ) ; salvador luria ( italy ) ; alexandre yersin ( switzerland ) ; kitasato shibasaburo ( japan ) ; jean - martin charcot, claude bernard, paul broca ( france ) ; adolfo lutz ( brazil ) ; nikolai korotkov ( russia ) ; sir william osler ( canada ) ; and harvey cushing ( united states ). as science and technology developed, medicine became more reliant upon medications. throughout history and in europe right until the late 18th century, not only plant products were used as medicine, but also animal ( including human ) body parts and fluids. pharmacology developed in part from herbalism and some drugs are still derived from plants ( atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc. ). vaccines were discovered by edward jenner and louis pasteur. the first antibiotic was arsphenamine ( salvarsan ) discovered by paul ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. the first major class of antibiotics was the sulfa drugs, derived by german chemists originally from azo dyes. pharmacology has become increasingly sophisticated ; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side - effects. genomics and knowledge of human genetics and human evolution is having increasingly significant influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology, evolution, and genetics are influencing medical technology, practice and decision - making. evidence - based medicine is a contemporary movement to establish the most effective algorithms of practice ( ways of doing things ) through the use of systematic reviews and meta - analysis. the movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. the cochrane collaboration leads this movement. a 2001 review of 160 cochrane systematic reviews revealed that, according to two readers, 21. 
3 % of the reviews concluded insufficient evidence, 20 % concluded evidence of no effect, Question: A student measured the volume of water in a pan. The student boiled the water for thirty minutes and then measured the volume of the water again. The volume of water most likely A) decreased B) increased C) remained the same
A) decreased
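The boiling-water question is easy to quantify. Assuming, purely for illustration, a 1 kW heat input sustained for the thirty minutes of boiling (with the water already at 100 °C and all heat going into vaporisation) and taking the latent heat of vaporisation of water as about 2.26 MJ/kg:

```python
# Rough estimate of water evaporated while boiling (idealised: all heat drives vaporisation).
power_w = 1000.0               # assumed heater power, W (hypothetical)
time_s = 30 * 60               # thirty minutes
latent_heat_j_per_kg = 2.26e6  # latent heat of vaporisation of water, ~2.26 MJ/kg

energy_j = power_w * time_s
mass_evaporated_kg = energy_j / latent_heat_j_per_kg
print(f"energy supplied: {energy_j / 1e6:.2f} MJ")
print(f"water evaporated: ~{mass_evaporated_kg:.2f} kg (~{mass_evaporated_kg:.2f} L)")
```

Roughly 0.8 kg, about 0.8 L, of water leaves as steam under these assumptions, so the measured volume decreases, consistent with answer A.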
Context: generation times. corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in c4 plants. the single celled green alga chlamydomonas reinhardtii, while not an embryophyte itself, contains a green - pigmented chloroplast related to that of land plants, making it useful for study. a red alga cyanidioschyzon merolae has also been used to study some basic chloroplast functions. spinach, peas, soybeans and a moss physcomitrella patens are commonly used to study plant cell biology. agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus - inducing ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. schell and van montagu ( 1977 ) hypothesised that the ti plasmid could be a natural vector for introducing the nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. today, genetic modification of the ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. epigenetic marks may be added or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell ' s life. some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. during morphogenesis, totipotent stem cells become the various the broad definition of " utilizing a biotechnological system to make products ". indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. agriculture has been theorized to have become the dominant way of producing food since the neolithic revolution. through early biotechnology, the earliest farmers selected and bred the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β€” one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. 
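A short worked example of the sugar-to-alcohol conversion described here: alcoholic fermentation follows C6H12O6 -> 2 C2H5OH + 2 CO2, which fixes the theoretical yield at about 0.51 g of ethanol per gram of glucose:

```python
# Stoichiometry of alcoholic fermentation: C6H12O6 -> 2 C2H5OH + 2 CO2
M_GLUCOSE = 180.16  # g/mol
M_ETHANOL = 46.07   # g/mol
M_CO2 = 44.01       # g/mol

def theoretical_ethanol_yield(glucose_g: float) -> float:
    """Grams of ethanol from complete fermentation of the given glucose mass."""
    mol_glucose = glucose_g / M_GLUCOSE
    return 2 * mol_glucose * M_ETHANOL

co2_per_100g = 2 * (100 / M_GLUCOSE) * M_CO2
print(f"per 100 g glucose: {theoretical_ethanol_yield(100):.1f} g ethanol, "
      f"{co2_per_100g:.1f} g CO2")
```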
in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united industrial applications. this branch of biotechnology is the most used for the industries of refining and combustion principally on the production of bio - oils with photosynthetic micro - algae. green biotechnology is biotechnology applied to agricultural processes. an example would be the selection and domestication of plants via micropropagation. another example is the designing of transgenic plants to grow under specific environments in the presence ( or absence ) of chemicals. one hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. an example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. it is commonly considered as the next phase of green revolution, which can be seen as a platform to eradicate world hunger by using technologies which enable the production of more fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. 
another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of poll on a large scale provided protection from insect pests or tolerance to herbicides. fungal and virus resistant crops have also been developed or are in development. this makes the insect and weed management of crops easier and can indirectly increase crop yield. gm crops that directly improve yield by accelerating growth or making the plant more hardy ( by improving salt, cold or drought tolerance ) are also under development. in 2016 salmon have been genetically modified with growth hormones to reach normal adult size much faster. gmos have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. the amflora potato produces a more industrially useful blend of starches. soybeans and canola have been genetically modified to produce more healthy oils. the first commercialised gm food was a tomato that had delayed ripening, increasing its shelf life. plants and animals have been engineered to produce materials they do not normally make. pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves ; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the fda approved a drug produced in goat milk. = = = other applications = = = genetic engineering has potential applications in conservation and natural area management. gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. with the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice. genetic engineering is also being used to create microbial art. some bacteria have been genetically engineered to create black and white photographs. novelty items such as lavender - colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering. = = regulation = = the regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of gmos. 
the development of a regulatory framework began in 1975, at asilomar, california. the asilomar meeting recommended a set of voluntary guidelines regarding the use of recombinant technology. as the technology improved the designing of transgenic plants to grow under specific environments in the presence ( or absence ) of chemicals. one hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. an example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. it is commonly considered as the next phase of green revolution, which can be seen as a platform to eradicate world hunger by using technologies which enable the production of more fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of pollutants. brown biotechnology is related to the management of arid lands and deserts. one application is the creation of enhanced seeds that resist extreme environmental conditions of arid regions, which is related to the innovation, creation of agriculture techniques and management of resources. violet biotechnology is related to law, ethical and philosophical issues around biotechnology. micro kilometers ( 4, 200, 000 to 395, 400, 000 acres ). 10 % of the world ' s crop lands were planted with gm crops in 2010. 
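The land-area figures quoted around here mix acres and hectares; a quick conversion (1 acre is about 0.4047 hectares) is enough to check that the quoted pairs are consistent:

```python
HECTARES_PER_ACRE = 0.404686  # 1 acre ~ 0.4047 hectares

for acres in (4_200_000, 395_400_000, 395_000_000):
    ha = acres * HECTARES_PER_ACRE
    print(f"{acres:>11,} acres ~ {ha / 1e6:6.1f} million hectares")
```

The last line reproduces the 395-million-acre, 160-million-hectare pairing quoted in the next part of the passage.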
as of 2011, 11 different transgenic crops were grown commercially on 395 million acres ( 160 million hectares ) in 29 countries such as the us, brazil, argentina, india, canada, china, paraguay, pakistan, south africa, uruguay, bolivia, australia, philippines, myanmar, burkina faso, mexico and spain. genetically modified foods are foods produced from organisms that have had specific changes introduced into their dna with the methods of genetic engineering. these techniques have allowed for the introduction of new crop traits as well as a far greater control over a food ' s genetic structure than previously afforded by methods such as selective breeding and mutation breeding. commercial sale of genetically modified foods began in 1994, when calgene first marketed its flavr savr delayed ripening tomato. to date most genetic modification of foods have primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. these have been engineered for resistance to pathogens and herbicides and better nutrient profiles. gm livestock have also been experimentally developed ; in november 2013 none were available on the market, but in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. 
the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid cellular and molecular biology of cereals, grasses and monocots generally. model plants such as arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in c4 plants. the single celled green alga chlamydomonas reinhardtii, while not an embryophyte itself, contains a green - pigmented chloroplast related to that of land plants, making it useful for study. a red alga cyanidioschyzon merolae has also been used to study some basic chloroplast functions. spinach, peas, soybeans and a moss physcomitrella patens are commonly used to study plant cell biology. agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus - inducing ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. schell and van montagu ( 1977 ) hypothesised that the ti plasmid could be a natural vector for introducing the nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. today, genetic modification of the ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. 
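Returning to the polyploidy discussion earlier in this passage: the chromosome bookkeeping behind why a new polyploid is reproductively isolated from its diploid parent can be sketched in a few lines (the monoploid number used below is illustrative, not taken from the text):

```python
# Sketch of the chromosome arithmetic behind polyploid reproductive isolation.
# Gametes normally carry half the somatic chromosome number; a cross between
# parents of different ploidy yields offspring of odd ploidy whose chromosomes
# cannot pair evenly at meiosis.

def offspring_chromosomes(parent_a: int, parent_b: int) -> int:
    """Somatic chromosome number of offspring from two parents (each gamete = half)."""
    return parent_a // 2 + parent_b // 2

base = 7                    # illustrative monoploid number (x)
diploid = 2 * base          # 2x = 14
tetraploid = 4 * base       # 4x = 28 (e.g. after chromosome doubling)

hybrid = offspring_chromosomes(diploid, tetraploid)  # 7 + 14 = 21
print(f"diploid x tetraploid -> {hybrid} chromosomes ({hybrid // base}x), "
      "odd ploidy, usually sterile")
```

An odd ploidy cannot pair its chromosomes evenly at meiosis, which is why such hybrids are typically sterile even though the tetraploid parent itself may reproduce normally.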
epigenetic marks may be added or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of best - known and controversial applications of genetic engineering is the creation and use of genetically modified crops or genetically modified livestock to produce genetically modified food. crops have been developed to increase production, increase tolerance to abiotic stresses, alter the composition of the food, or to produce novel products. the first crops to be released commercially on a large scale provided protection from insect pests or tolerance to herbicides. fungal and virus resistant crops have also been developed or are in development. this makes the insect and weed management of crops easier and can indirectly increase crop yield. gm crops that directly improve yield by accelerating growth or making the plant more hardy ( by improving salt, cold or drought tolerance ) are also under development. in 2016 salmon have been genetically modified with growth hormones to reach normal adult size much faster. gmos have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. the amflora potato produces a more industrially useful blend of starches. soybeans and canola have been genetically modified to produce more healthy oils. the first commercialised gm food was a tomato that had delayed ripening, increasing its shelf life. plants and animals have been engineered to produce materials they do not normally make. pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves ; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the fda approved a drug produced in goat milk. = = = other applications = = = genetic engineering has potential applications in conservation and natural area management. gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. with the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice. genetic engineering is also being used to create microbial art. some bacteria have been genetically engineered to create black and white photographs. novelty items such as lavender - colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering. = = regulation = = the regulation of genetic engineering in 2015 the fda approved the first gm salmon for commercial production and consumption. 
there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in india and other countries. = = = industrial = = = industrial biotechnology ( known mainly in europe as white biotechnology ) is the application of biotechnology for industrial purposes, including industrial fermentation. it includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels. in the current decades, significant progress has been done in creating genetically modified organisms ( gmos ) that enhance the diversity of applications and economical viability of industrial biotechnology. by using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse gas emissions and moving away from a petrochemical - based economy. synthetic biology is considered one of the essential cornerstones in industrial biotechnology due to its financial and sustainable contribution to the manufacturing sector. jointly biotechnology and synthetic biology play a crucial role in generating cost - effective products with nature - friendly features by using bio - based Question: Lori owns a house next to the lake. She uses lots of fertilizer to keep her lawn green. Which impact could fertilizing her lawn have on the lake? A) an increase in the algae population B) an increase in the fish population C) an increase in the mosquito population D) an increase in the lake's depth
A) an increase in the algae population
Context: based on 1 / 10 and 1 / 100 weight percentages of the carbon and other alloying elements they contain. thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. solid materials are generally grouped into three basic classifications : ceramics, metals, and polymers. this broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. an item that is often made from each of these materials types is the beverage container. the material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. ceramic ( glass ) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. metal ( aluminum alloy ) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. however, the cans are opaque, expensive to produce, and are easily dented and punctured. polymers ( polyethylene plastic ) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. = = = ceramics and glasses = = = another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. many ceramics and glasses exhibit covalent or ionic - covalent bonding with sio2 ( silica ) as a fundamental building block. ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. hot pressing provides higher density material. chemical vapor deposition can place a film of a ceramic on another casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. for example, steels are classified based on 1 / 10 and 1 / 100 weight percentages of the carbon and other alloying elements they contain. thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. solid materials are generally grouped into three basic classifications : ceramics, metals, and polymers. this broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. 
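The remark about classifying steels "based on 1/10 and 1/100 weight percentages" of carbon and other alloying elements is how four-digit plain-carbon grade numbers are read. Assuming the common SAE-AISI scheme as the example (the passage does not name a scheme), the last two digits of a 10xx grade give the carbon content in hundredths of a weight percent:

```python
def plain_carbon_content(grade: str) -> float:
    """Carbon wt% implied by a 10xx plain-carbon steel grade, e.g. '1045' -> 0.45."""
    if len(grade) != 4 or not grade.startswith("10"):
        raise ValueError("sketch only handles 10xx plain-carbon grades")
    return int(grade[2:]) / 100.0

for g in ("1010", "1020", "1045", "1095"):
    print(f"grade {g}: ~{plain_carbon_content(g):.2f} wt% carbon")
```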
an item that is often made from each of these materials types is the beverage container. the material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. ceramic ( glass ) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. metal ( aluminum alloy ) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. however, the cans are opaque, expensive to produce, and are easily dented and punctured. polymers ( polyethylene plastic ) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. = = = ceramics and glasses = = = another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. many ceramics and glasses exhibit covalent or ionic - covalent bonding with sio2 ( silica ) as a fundamental building block. ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and iron - carbon alloy is only considered steel if the carbon level is between 0. 01 % and 2. 00 % by weight. for steels, the hardness and tensile strength of the steel is related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. heat treatment processes such as quenching and tempering can significantly change these properties, however. in contrast, certain metal alloys exhibit unique properties where their size and density remain unchanged across a range of temperatures. cast iron is defined as an iron – carbon alloy with more than 2. 00 %, but less than 6. 67 % carbon. stainless steel is defined as a regular steel alloy with greater than 10 % by weight alloying content of chromium. nickel and molybdenum are typically also added in stainless steels. other significant metallic alloys are those of aluminium, titanium, copper and magnesium. copper alloys have been known for a long time ( since the bronze age ), while the alloys of the other three metals have been relatively recently developed. due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. the alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. these materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications. = = = semiconductors = = = a semiconductor is a material that has a resistivity between a conductor and insulator. 
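The carbon and chromium thresholds stated in this passage (steel between 0.01 % and 2.00 % carbon, cast iron between 2.00 % and 6.67 % carbon, stainless steel with more than 10 % chromium) translate directly into a small classifier. This is only a sketch of the passage's own definitions, not a metallurgical standard:

```python
def classify_ferrous_alloy(carbon_wt_pct: float, chromium_wt_pct: float = 0.0) -> str:
    """Classify an iron alloy using the thresholds given in the passage."""
    if chromium_wt_pct > 10.0 and 0.01 <= carbon_wt_pct <= 2.00:
        return "stainless steel"
    if 0.01 <= carbon_wt_pct <= 2.00:
        return "steel"
    if 2.00 < carbon_wt_pct < 6.67:
        return "cast iron"
    return "outside the steel / cast-iron ranges given in the passage"

print(classify_ferrous_alloy(0.45))        # plain carbon steel
print(classify_ferrous_alloy(0.08, 18.0))  # stainless steel
print(classify_ferrous_alloy(3.5))         # cast iron
```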
modern day electronics run on semiconductors, and the industry had an estimated us $ 530 billion market in 2021. its electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. semiconductor materials are used to build diodes, transistors, light - emitting diodes ( leds ), and analog and digital electric circuits, among their many uses. semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. semiconductor devices are manufactured both as single discrete devices and as integrated circuits ( ics ), which consist of a number β€” from a few to millions β€” of devices manufactured and interconnected on a single semiconductor substrate. of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. gallium arsenide ( which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures prehistory. 
the oldest gold treasure in the world, dating from 4, 600 bc to 4, 200 bc, was discovered at the site. the gold piece dating from 4, 500 bc, found in 2019 in durankulak, near varna is another important example. other signs of early metals are found from the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not be clearly ascertained and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. this includes the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or soft interactions are not easily disentangled from hard ones. in an operational definition of soft and hard processes one finds that at presently analyzed scales there is an interplay of soft and hard processes. as the scale increases, so does the amount of hard processes. so far, nothing is as soft nor as hard as we would like. is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron – carbon alloy system, which includes steels and cast irons. plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. 
iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not be clearly ascertained and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. this includes the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. 
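The feed, concentrate and tailings bookkeeping described in this extraction passage is usually summarised with the standard two-product formula (a name not used in the text): given assay grades f, c and t for the three streams, the fraction of feed mass reporting to concentrate is (f - t) / (c - t) and the metal recovery is c(f - t) / (f(c - t)). The grades below are hypothetical:

```python
def two_product_balance(f: float, c: float, t: float) -> tuple[float, float]:
    """Mass yield to concentrate and metal recovery from feed/concentrate/tailings grades."""
    yield_to_conc = (f - t) / (c - t)
    recovery = c * (f - t) / (f * (c - t))
    return yield_to_conc, recovery

# Hypothetical copper circuit: 1.2 % Cu feed, 25 % Cu concentrate, 0.1 % Cu tailings.
y, r = two_product_balance(f=1.2, c=25.0, t=0.1)
print(f"mass to concentrate: {y:.1%}, copper recovery: {r:.1%}")
```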
in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. mining may not be necessary, if the ore body and physical environment are conducive to leaching. leaching dissolves minerals in an ore body and results in an enriched solution. the solution ##ning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and the valuable metals into individual constituents. = = metal and its alloys = = much effort has been placed on understanding iron – carbon alloy system, which includes steels and cast irons. 
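The extraction passage above divides ore processing into three streams: feed, concentrate, and tailings. Below is a minimal sketch of the standard two-product mass balance that follows from that description; the tonnage and grade figures are invented for illustration and are not taken from the source.

```python
# Two-product mass balance for the feed / concentrate / tailings streams
# described above. All tonnages and grades below are hypothetical examples.

def two_product_balance(feed_tonnes, feed_grade, conc_grade, tail_grade):
    """Split a feed stream into concentrate and tailings by metal balance.

    Grades are mass fractions of the valuable metal (e.g. 0.02 = 2 %).
    Returns (concentrate_tonnes, tailings_tonnes, recovery_fraction).
    """
    # Total mass balance:  F = C + T
    # Metal mass balance:  F*f = C*c + T*t   =>   C = F*(f - t)/(c - t)
    conc_tonnes = feed_tonnes * (feed_grade - tail_grade) / (conc_grade - tail_grade)
    tail_tonnes = feed_tonnes - conc_tonnes
    recovery = (conc_tonnes * conc_grade) / (feed_tonnes * feed_grade)
    return conc_tonnes, tail_tonnes, recovery

# Hypothetical copper ore: 1000 t of feed at 2 % Cu, upgraded to a 25 % Cu
# concentrate, with tailings running 0.2 % Cu.
c, t, r = two_product_balance(1000, 0.02, 0.25, 0.002)
print(f"concentrate: {c:.1f} t, tailings: {t:.1f} t, recovery: {r:.1%}")
```

With these placeholder numbers the balance gives roughly 73 t of concentrate and a metal recovery near 91 %, which is the kind of figure an extractive metallurgist uses to judge how well the valuable metal was separated from waste.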
plain carbon steels ( those that contain essentially only carbon as an alloying element ) are used in low - cost, high - strength applications, where neither weight nor corrosion are a major concern. cast irons, including ductile iron, are also part of the iron - carbon system. iron - manganese - chromium alloys ( hadfield - type steels ) are also used in non - magnetic applications such as directional drilling. other engineering metals include aluminium, chromium, copper, magnesium, nickel, titanium, zinc, and silicon. these metals are most often used as alloys with the noted exception of silicon, which is not a metal. other forms include : stainless steel, particularly austenitic stainless steels, galvanized steel, nickel alloys, titanium alloys, or occasionally copper alloys are used, where resistance to corrosion is important. aluminium alloys and magnesium alloys are commonly used, when a lightweight strong part is required such as in automotive and aerospace applications. copper - nickel alloys ( such as monel ) are used in highly corrosive environments and for non - magnetic applications. nickel - based superalloys like inconel are used in high - temperature applications such as gas turbines, turbochargers, pressure vessels, and heat exchangers. for extremely high temperatures, single crystal alloys are used to minimize creep. in modern electronics, high purity single crystal silicon is essential for metal - oxide - silicon transistors ( mos ) and integrated circuits. = = production = = in production engineering, metallurgy is concerned with the production of metallic components for use in consumer or engineering products. this involves production of alloys, shaping, heat treatment and surface treatment of product. the task of the metallurgist is to achieve balance between material properties, such as cost, weight, strength, toughness, hardness, corrosion, fatigue resistance and performance in temperature extremes. to achieve this goal, the operating environment must be carefully considered. determining the hardness of the metal using the rockwell, vickers, and brinell hardness scales is a commonly used practice that helps better understand the metal ' s elasticity and plasticity for different applications and production processes. in a saltwater environment, most ferrous metals and some non - ferrous alloys corrode quickly. metals exposed to cold or cryogenic conditions may undergo a ductile to brittle Question: To compare the hardness of different minerals, it would be BEST to find A) the color of the minerals. B) which minerals scratch other minerals. C) which minerals reflect light most strongly. D) the samples that feel smoothest to the touch.
B) which minerals scratch other minerals.
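The question above turns on relative hardness: a harder mineral scratches a softer one, which is the idea behind Mohs-style ranking. The sketch below shows how pairwise scratch results can be turned into a softest-to-hardest ordering; the mineral names and test outcomes are made-up examples, not data from the source.

```python
# Order minerals by relative hardness from pairwise scratch tests.
# A mineral that scratches another is taken to be the harder of the pair.
# The minerals and outcomes below are hypothetical examples.
from collections import defaultdict
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# (scratcher, scratched) pairs observed in hypothetical tests
scratch_results = [
    ("quartz", "feldspar"),
    ("feldspar", "calcite"),
    ("calcite", "gypsum"),
    ("quartz", "calcite"),
]

# Build a graph mapping each mineral to the minerals it scratches; a
# topological order of that graph lists minerals from softest to hardest.
graph = defaultdict(set)
for harder, softer in scratch_results:
    graph[harder].add(softer)

order = list(TopologicalSorter(graph).static_order())
print("softest to hardest:", order)
```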
Context: , lightning strikes, tornadoes, building fires, wildfires, and mass shootings disabling most of the system if not the entirety of it. geographic redundancy locations can be more than 621 miles ( 999 km ) continental, more than 62 miles apart and less than 93 miles ( 150 km ) apart, less than 62 miles apart, but not on the same campus, or different buildings that are more than 300 feet ( 91 m ) apart on the same campus. the following methods can reduce the risks of damage by a fire conflagration : large buildings at least 80 feet ( 24 m ) to 110 feet ( 34 m ) apart, but sometimes a minimum of 210 feet ( 64 m ) apart. : 9 high - rise buildings at least 82 feet ( 25 m ) apart : 12 open spaces clear of flammable vegetation within 200 feet ( 61 m ) on each side of objects different wings on the same building, in rooms that are separated by more than 300 feet ( 91 m ) different floors on the same wing of a building in rooms that are horizontally offset by a minimum of 70 feet ( 21 m ) with fire walls between the rooms that are on different floors two rooms separated by another room, leaving at least a 70 - foot gap between the two rooms there should be a minimum of two separated fire walls and on opposite sides of a corridor geographic redundancy is used by amazon web services ( aws ), google cloud platform ( gcp ), microsoft azure, netflix, dropbox, salesforce, linkedin, paypal, twitter, facebook, apple icloud, cisco meraki, and many others to provide geographic redundancy, high availability, fault tolerance and to ensure availability and reliability for their cloud services. as another example, to minimize risk of damage from severe windstorms or water damage, buildings can be located at least 2 miles ( 3. 2 km ) away from the shore, with an elevation of at least 5 feet ( 1. 5 m ) above sea level. for additional protection, they can be located at least 100 feet ( 30 m ) away from flood plain areas. = = functions of redundancy = = the two functions of redundancy are passive redundancy and active redundancy. both functions prevent performance decline from exceeding specification limits without human intervention using extra capacity. passive redundancy uses excess capacity to reduce the impact of component failures. one common form of passive redundancy is the extra strength of cabling and struts used in bridges. in a voltaic cell, positive ( negative ) ions flow from the low ( high ) potential electrode to the high ( low ) potential electrode, driven by an ` electromotive force ' which points in opposite direction and overcomes the electric force. similarly in a superconductor charge flows in direction opposite to that dictated by the faraday electric field as the magnetic field is expelled in the meissner effect. the puzzle is the same in both cases : what drives electric charges against electromagnetic forces? i propose that the answer is also the same in both cases : kinetic energy lowering, or ` quantum pressure '. = = = = = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include ; soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. there are three most common types of environmental remediation. these include soil, water, and sediment remediation. 
soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important considering water is an essential natural resource. depending on the source of water there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. is it almost similar to soil remediation except it is often more sophisticated as it involves additional contaminants. to reduce the contaminants it is likely to use physical, chemical, and biological processes that help with source control, but if these processes are executed correctly, there ' s a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with both industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed can encourage the infestation of insects and rodents, contributing to the spread of diseases. some of the most common types of solid waste management include ; landfills, vermicomposting, composting, recycling, and incineration. however, a major barrier for solid waste management practices is the high costs associated with recycling the transition of our energy system to renewable energies is necessary in order not to heat up the climate any further and to achieve climate neutrality. the use of wind energy plays an important role in this transition in germany. but how much wind energy can be used and what are the possible consequences for the atmosphere if more and more wind energy is used? onset of electro - chemical corrosion. similar problems are encountered in coastal and offshore structures. = = = anti - fouling = = = anti - fouling is the process of eliminating obstructive organisms from essential components of seawater systems. depending on the nature and location of marine growth, this process is performed in a number of different ways : marine organisms may grow and attach to the surfaces of the outboard suction inlets used to obtain water for cooling systems. electro - chlorination involves running high electrical current through sea water, altering the water ' s chemical composition to create sodium hypochlorite, purging any bio - matter. an electrolytic method of anti - fouling involves running electrical current through two anodes ( scardino, 2009 ). these anodes typically consist of copper and aluminum ( or alternatively, iron ). the first metal, copper anode, releases its ion into the water, creating an environment that is too toxic for bio - matter. the second metal, aluminum, coats the inside of the pipes to prevent corrosion. 
other forms of marine growth such as mussels and algae may attach themselves to the bottom of a ship ' s hull. this growth interferes with the smoothness and uniformity of the ship ' s hull, causing the ship to have a less hydrodynamic shape that causes it to be slower and less fuel - efficient. marine growth on the hull can be remedied by using special paint that prevents the growth of such organisms. = = = pollution control = = = = = = = sulfur emission = = = = the burning of marine fuels releases harmful pollutants into the atmosphere. ships burn marine diesel in addition to heavy fuel oil. heavy fuel oil, being the heaviest of refined oils, releases sulfur dioxide when burned. sulfur dioxide emissions have the potential to raise atmospheric and ocean acidity causing harm to marine life. however, heavy fuel oil may only be burned in international waters due to the pollution created. it is commercially advantageous due to the cost effectiveness compared to other marine fuels. it is prospected that heavy fuel oil will be phased out of commercial use by the year 2020 ( smith, 2018 ). = = = = oil and water discharge = = = = water, oil, and other substances collect at the bottom of the ship in what is known as the bilge. bilge water is pumped overboard, but must pass a pollution threshold test of 15 ppm ( parts per million ) of oil to be discharged. water is tested world made wide use of hydropower, along with early uses of tidal power, wind power, fossil fuels such as petroleum, and large factory complexes ( tiraz in arabic ). a variety of industrial mills were employed in the islamic world, including fulling mills, gristmills, hullers, sawmills, ship mills, stamp mills, steel mills, and tide mills. by the 11th century, every province throughout the islamic world had these industrial mills in operation. muslim engineers also employed water turbines and gears in mills and water - raising machines, and pioneered the use of dams as a source of water power, used to provide additional power to watermills and water - raising machines. many of these technologies were transferred to medieval europe. wind - powered machines used to grind grain and pump water, the windmill and wind pump, first appeared in what are now iran, afghanistan and pakistan by the 9th century. they were used to grind grains and draw up water, and used in the gristmilling and sugarcane industries. sugar mills first appeared in the medieval islamic world. they were first driven by watermills, and then windmills from the 9th and 10th centuries in what are today afghanistan, pakistan and iran. crops such as almonds and citrus fruit were brought to europe through al - andalus, and sugar cultivation was gradually adopted across europe. arab merchants dominated trade in the indian ocean until the arrival of the portuguese in the 16th century. the muslim world adopted papermaking from china. the earliest paper mills appeared in abbasid - era baghdad during 794 – 795. the knowledge of gunpowder was also transmitted from china via predominantly islamic countries, where formulas for pure potassium nitrate were developed. the spinning wheel was invented in the islamic world by the early 11th century. it was later widely adopted in europe, where it was adapted into the spinning jenny, a key device during the industrial revolution. the crankshaft was invented by al - jazari in 1206, and is central to modern machinery such as the steam engine, internal combustion engine and automatic controls. 
the camshaft was also first described by al - jazari in 1206. early programmable machines were also invented in the muslim world. the first music sequencer, a programmable musical instrument, was an automated flute player invented by the banu musa brothers, described in their book of ingenious devices, in the 9th century. in 1206, al - jazari invented programmable automata / robots. he described four automaton musicians, including two ##nts from the air to reduce the potential adverse effects on humans and the environment. the process of air purification may be performed using methods such as mechanical filtration, ionization, activated carbon adsorption, photocatalytic oxidation, and ultraviolet light germicidal irradiation. = = = sewage treatment = = = = = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include ; soil contamination, hazardous waste, groundwater contamination, oil, gas and chemical spills. there are three most common types of environmental remediation. these include soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem. some examples of this are heavy metals, pesticides, and radioactive materials. depending on the contaminant the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important considering water is an essential natural resource. depending on the source of water there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to consistently increase to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. is it almost similar to soil remediation except it is often more sophisticated as it involves additional contaminants. to reduce the contaminants it is likely to use physical, chemical, and biological processes that help with source control, but if these processes are executed correctly, there ' s a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste that is undertaken by the government or the ruling bodies of a city / town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with both industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed can encourage the the influence of a neutrinoless electron to positron conversion on a cooling of strongly magnetized iron white dwarfs is studied. new non - perturbatives excitations in the massless thirring and schwinger models are discussed. a review of mhd dynamos and turbulence. Question: A house is built in a desert, where there is no electricity and very little wind. Which action could lead to operating the electrical appliances in the house that would cause the least amount of environmental pollution? 
A) constructing a small hydroelectric plant B) placing solar panels on the roof of the house C) using gasoline generators D) burning coal or wood
B) placing solar panels on the roof of the house
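The earlier redundancy passage lists separation bands for geographically redundant sites (for example, more than 62 but less than 93 miles apart, or more than 621 miles for continental redundancy). The rough sketch below checks which band two hypothetical site coordinates fall into using the standard haversine great-circle formula; the coordinates and the band labels are assumptions for illustration only.

```python
# Classify the separation of two hypothetical data-center sites against the
# distance bands mentioned in the passage (all distances in miles).
import math

EARTH_RADIUS_MI = 3958.8  # mean Earth radius

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in decimal degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_MI * math.asin(math.sqrt(a))

def separation_band(miles):
    """Band labels are illustrative names for the ranges quoted in the text."""
    if miles > 621:
        return "continental (more than 621 miles)"
    if 62 < miles < 93:
        return "regional (more than 62 and less than 93 miles)"
    if miles < 62:
        return "local (less than 62 miles; should not share a campus)"
    return "between the named bands (93 to 621 miles)"

# Hypothetical coordinates for two sites roughly one degree of latitude apart.
site_a = (39.0, -77.0)
site_b = (40.0, -77.0)
d = haversine_miles(*site_a, *site_b)
print(f"{d:.1f} miles apart -> {separation_band(d)}")
```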
Context: ##morphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. 
petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light three separate questions of relevance to major league baseball are investigated from a physics perspective. first, can a baseball be hit farther with a corked bat? second, is there evidence that the baseball is more lively today than in earlier years? third, can storing baseballs in a temperature - or humidity - controlled environment significantly affect home run production? each of these questions is subjected to a physics analysis, including an experiment, an interpretation of the data, and a definitive answer. the answers to the three questions are no, no, and yes. lectures given at the summer school on algebraic groups, goettingen, june 27 - july 15 2005 , and gerhard lenski have declared technological progress to be the primary factor driving the development of human civilization. morgan ' s concept of three major stages of social evolution ( savagery, barbarism, and civilization ) can be divided by technological milestones, such as fire. white argued the measure by which to judge the evolution of culture is energy. for white, " the primary function of culture " is to " harness and control energy. " white differentiates between five stages of human development : in the first, people use the energy of their own muscles. in the second, they use the energy of domesticated animals. in the third, they use the energy of plants ( agricultural revolution ). in the fourth, they learn to use the energy of natural resources : coal, oil, gas. in the fifth, they harness nuclear energy. white introduced the formula p = e / t, where p is the development index, e is a measure of energy consumed, and t is the measure of the efficiency of technical factors using the energy. 
in his own words, " culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased ". nikolai kardashev extrapolated his theory, creating the kardashev scale, which categorizes the energy use of advanced civilizations. lenski ' s approach focuses on information. the more information and knowledge ( especially allowing the shaping of natural environment ) a given society has, the more advanced it is. he identifies four stages of human development, based on advances in the history of communication. in the first stage, information is passed by genes. in the second, when humans gain sentience, they can learn and pass information through experience. in the third, the humans start using signs and develop logic. in the fourth, they can create symbols, develop language and writing. advancements in communications technology translate into advancements in the economic system and political system, distribution of wealth, social inequality and other spheres of social life. he also differentiates societies based on their level of technology, communication, and economy : hunter - gatherer, simple agricultural, advanced agricultural, industrial, special ( such as fishing societies ). in economics, productivity is a measure of technological progress. productivity increases when fewer inputs ( classically labor and capital but some measures include energy and materials ) are used in the production of a unit of output. another indicator of technological progress is the development of new products and services, s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). 
" sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere ) between the surface and the exosphere ( about 1000 km ). major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. = = = earth science breakup = = = = = see also = = = = references = = = = = sources = = = = = pre - socratic thinkers as materialists and anti - religionists. aristotle, however, a student of plato who lived from 384 to 322 bc, paid closer attention to the natural world in his philosophy. in his history of animals, he described the inner workings of 110 species, including the stingray, catfish and bee. he investigated chick embryos by breaking open eggs and observing them at various stages of development. aristotle ' s works were influential through the 16th century, and he is considered to be the father of biology for his pioneering work in that science. he also presented philosophies about physics, nature, and astronomy using inductive reasoning in his works physics and meteorology. while aristotle considered natural philosophy more seriously than his predecessors, he approached it as a theoretical branch of science. still, inspired by his work, ancient roman philosophers of the early 1st century ad, including lucretius, seneca and pliny the elder, wrote treatises that dealt with the rules of the natural world in varying degrees of depth. many ancient roman neoplatonists of the 3rd to the 6th centuries also adapted aristotle ' s teachings on the physical world to a philosophy that emphasized spiritualism. early medieval philosophers including macrobius, calcidius and martianus capella also examined the physical world, largely from a cosmological and cosmographical perspective, putting forth theories on the arrangement of celestial bodies and the heavens, which were posited as being composed of aether. aristotle ' s works on natural philosophy continued to be translated and studied amid the rise of the byzantine empire and abbasid caliphate. in the byzantine empire, john philoponus, an alexandrian aristotelian commentator and christian theologian, was the first to question aristotle ' s physics teaching. unlike aristotle, who based his physics on verbal argument, philoponus instead relied on observation and argued for observation rather than resorting to a verbal argument. he introduced the theory of impetus. john philoponus ' criticism of aristotelian principles of physics served as inspiration for galileo galilei during the scientific revolution. a revival in mathematics and science took place during the time of the abbasid caliphate from the 9th century onward, when muslim scholars expanded upon greek and indian natural philosophy. the words alcohol, algebra and zenith all have arabic roots. = = = medieval natural philosophy ( 1100 – 1600 ) = = = aristotle ' s works and other greek natural philosophy did not reach the west until about the middle of the 12th century, when works were translated from greek and two possible interpretations of frw cosmologies ( perfect fluid or dissipative fluid ) are considered as consecutive phases of the system. 
necessary conditions are found, for the transition from perfect fluid to dissipative regime to occur, bringing out the conspicuous role played by a particular state of the system ( the ' ' critical point ' ' ). notes of the lectures delivered in les houches during the summer school on complex systems ( july 2006 ). , they use the energy of plants ( agricultural revolution ). in the fourth, they learn to use the energy of natural resources : coal, oil, gas. in the fifth, they harness nuclear energy. white introduced the formula p = e / t, where p is the development index, e is a measure of energy consumed, and t is the measure of the efficiency of technical factors using the energy. in his own words, " culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased ". nikolai kardashev extrapolated his theory, creating the kardashev scale, which categorizes the energy use of advanced civilizations. lenski ' s approach focuses on information. the more information and knowledge ( especially allowing the shaping of natural environment ) a given society has, the more advanced it is. he identifies four stages of human development, based on advances in the history of communication. in the first stage, information is passed by genes. in the second, when humans gain sentience, they can learn and pass information through experience. in the third, the humans start using signs and develop logic. in the fourth, they can create symbols, develop language and writing. advancements in communications technology translate into advancements in the economic system and political system, distribution of wealth, social inequality and other spheres of social life. he also differentiates societies based on their level of technology, communication, and economy : hunter - gatherer, simple agricultural, advanced agricultural, industrial, special ( such as fishing societies ). in economics, productivity is a measure of technological progress. productivity increases when fewer inputs ( classically labor and capital but some measures include energy and materials ) are used in the production of a unit of output. another indicator of technological progress is the development of new products and services, which is necessary to offset unemployment that would otherwise result as labor inputs are reduced. in developed countries productivity growth has been slowing since the late 1970s ; however, productivity growth was higher in some economic sectors, such as manufacturing. for example, employment in manufacturing in the united states declined from over 30 % in the 1940s to just over 10 % 70 years later. similar changes occurred in other developed countries. this stage is referred to as post - industrial. in the late 1970s sociologists and anthropologists like alvin toffler ( author of future shock ), daniel bell and john naisbitt have approached the theories of post - industrial societies, Question: Students are learning about the natural resources in Maryland. One group of students researches information about renewable natural resources in the state. The other group researches information about nonrenewable natural resources in the state. The resources the students investigate include plants, animals, soil, minerals, water, coal, and oil. Which nonrenewable natural resource heats homes? A) sunlight B) aluminum C) natural gas D) ocean waves
C) natural gas
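The technology-and-society passage above quotes Leslie White's development index, p = e / t, with e the energy harnessed per capita per year and t a measure of the efficiency of the technical means that put it to work. The small sketch below evaluates the index for a few invented societies; because White's law is often written as a product of energy and efficiency, and the quoted passage says culture evolves as either factor increases, the sketch prints both forms. All figures are placeholders, not measurements from the source.

```python
# Leslie White's development index as quoted in the passage: p = e / t,
# with e the energy harnessed per capita per year and t the efficiency of
# the technical means that put the energy to work. Other statements of
# White's law use the product e * t, so both are shown for comparison.
# The example figures below are invented placeholders.

def development_index(energy_per_capita, efficiency, as_product=False):
    """Evaluate the index either as quoted (e / t) or as a product (e * t)."""
    if as_product:
        return energy_per_capita * efficiency
    return energy_per_capita / efficiency

societies = {
    "muscle power only": (2_000, 0.05),   # hypothetical energy units and efficiency
    "draft animals":     (6_000, 0.10),
    "fossil fuels":      (60_000, 0.30),
}

for name, (e, t) in societies.items():
    print(f"{name:20s} e/t = {development_index(e, t):10.1f}"
          f"   e*t = {development_index(e, t, as_product=True):10.1f}")
```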
Context: is the scientific study of inheritance. mendelian inheritance, specifically, is the process by which genes and traits are passed on from parents to offspring. it has several principles. the first is that genetic characteristics, alleles, are discrete and have alternate forms ( e. g., purple vs. white or tall vs. dwarf ), each inherited from one of two parents. based on the law of dominance and uniformity, which states that some alleles are dominant while others are recessive ; an organism with at least one dominant allele will display the phenotype of that dominant allele. during gamete formation, the alleles for each gene segregate, so that each gamete carries only one allele for each gene. heterozygotic individuals produce gametes with an equal frequency of two alleles. finally, the law of independent assortment, states that genes of different traits can segregate independently during the formation of gametes, i. e., genes are unlinked. an exception to this rule would include traits that are sex - linked. test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype. a punnett square can be used to predict the results of a test cross. the chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by thomas morgans ' s experiments with fruit flies, which established the sex linkage between eye color and sex in these insects. = = = genes and dna = = = a gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid ( dna ) that carries genetic information that controls form or function of an organism. dna is composed of two polynucleotide chains that coil around each other to form a double helix. it is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. the set of chromosomes in a cell is collectively known as its genome. in eukaryotes, dna is mainly in the cell nucleus. in prokaryotes, the dna is held within the nucleoid. the genetic information is held within genes, and the complete assemblage in an organism is called its genotype. dna replication is a semiconservative process whereby each strand serves as a template for a new strand of dna. mutations are heritable changes in dna. they can arise spontaneously as a result of replication errors that were not corrected by proofreading or can ##tes, i. e., genes are unlinked. an exception to this rule would include traits that are sex - linked. test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype. a punnett square can be used to predict the results of a test cross. the chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by thomas morgans ' s experiments with fruit flies, which established the sex linkage between eye color and sex in these insects. = = = genes and dna = = = a gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid ( dna ) that carries genetic information that controls form or function of an organism. dna is composed of two polynucleotide chains that coil around each other to form a double helix. it is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. the set of chromosomes in a cell is collectively known as its genome. in eukaryotes, dna is mainly in the cell nucleus. in prokaryotes, the dna is held within the nucleoid. 
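The passage above notes that a Punnett square can predict the results of a test cross against a homozygous recessive tester. The minimal sketch below works this through for a single gene with generic alleles A and a; the one-gene setup and the allele names are assumptions for illustration.

```python
# Punnett square for a one-gene test cross, as described in the passage.
# A dominant-phenotype parent (AA or Aa) is crossed with a homozygous
# recessive tester (aa); the offspring ratios reveal the unknown genotype.
from collections import Counter
from itertools import product

def punnett_square(parent1, parent2):
    """Return offspring genotype counts for two diploid parents (e.g. 'Aa')."""
    offspring = Counter()
    for g1, g2 in product(parent1, parent2):
        genotype = "".join(sorted((g1, g2)))  # normalise 'aA' -> 'Aa'
        offspring[genotype] += 1
    return offspring

def phenotype_ratio(counts, dominant="A"):
    """Split genotype counts into dominant vs. recessive phenotypes."""
    dom = sum(n for g, n in counts.items() if dominant in g)
    rec = sum(counts.values()) - dom
    return dom, rec

for unknown in ("AA", "Aa"):
    counts = punnett_square(unknown, "aa")
    dom, rec = phenotype_ratio(counts)
    print(f"{unknown} x aa -> {dict(counts)}  dominant:recessive = {dom}:{rec}")
```

Running this shows the classic result: an AA parent gives all dominant-phenotype offspring, while an Aa parent gives a 1:1 dominant-to-recessive ratio, which is how the test cross distinguishes the two genotypes.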
the genetic information is held within genes, and the complete assemblage in an organism is called its genotype. dna replication is a semiconservative process whereby each strand serves as a template for a new strand of dna. mutations are heritable changes in dna. they can arise spontaneously as a result of replication errors that were not corrected by proofreading or can be induced by an environmental mutagen such as a chemical ( e. g., nitrous acid, benzopyrene ) or radiation ( e. g., x - ray, gamma ray, ultraviolet radiation, particles emitted by unstable isotopes ). mutations can lead to phenotypic effects such as loss - of - function, gain - of - function, and conditional mutations. some mutations are beneficial, as they are a source of genetic variation for evolution. others are harmful if they were to result in a loss of function of genes needed for survival. = = = gene expression = = = gene expression is the molecular process by which a genotype encoded in dna gives rise to an observable phenotype in the proteins of an organism ' s body. this process is summarized by the central dogma of molecular biology, which was formulated by francis crick in 1958. according to the central dogma, genetic information flows from dna , tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. 
typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive for natural scientists, with the creation of transgenic organisms one of the most important tools for analysis of gene function. genes and other genetic information from a wide range of organisms can be inserted into bacteria for storage and modification, creating genetically modified bacteria in the process. bacteria are cheap, easy to grow, clonal, multiply quickly, relatively easy to transform and can be stored at - 80 Β°c almost indefinitely. once a gene is isolated it can be stored inside the bacteria providing an unlimited supply for research. organisms are genetically engineered to discover the functions of certain genes. this could be the effect on the phenotype of the organism, where the gene is expressed or what other genes it interacts with. these experiments generally involve loss of function, gain of function, tracking and expression. loss of function experiments, such as in a gene knockout experiment, in which an organism is engineered to lack the activity of one or more genes. in a simple knockout a copy of the desired gene has been altered to make it non - functional. embryonic stem cells incorporate the altered gene, which replaces the already present functional copy. these stem cells are injected into blastocysts, which are implanted into surrogate mothers. this allows the experimenter to analyse the defects caused by this mutation and thereby determine the role of particular genes. it is used especially frequently in developmental biology. when this is done by creating a library of genes with point mutations at every position in the area of interest, or even every position in the whole gene, this is called " scanning mutagenesis ". the simplest method, and the first to be used, is " alanine scanning ", where every position in turn is mutated to the unreactive amino acid alanine. gain of function experiments, the logical counterpart of knockouts. these are sometimes performed in conjunction with knockout experiments to more finely establish the function of the desired gene. the process is much the same as that in knockout engineering, except that the construct is designed to increase the function of the gene, usually by providing extra copies of the gene or inducing synthesis of the protein more frequently. gain of function is used to tell whether or not a protein is sufficient for a function, but does not always mean it is required, especially when dealing with genetic or functional redundancy. tracking experiments, which seek to gain information about the localisation and interaction of the desired protein. one way to do this is to replace the wild - type gene with a ' fusion ' gene, which is a juxtaposition best - known and controversial applications of genetic engineering is the creation and use of genetically modified crops or genetically modified livestock to produce genetically modified food. crops have been developed to increase production, increase tolerance to abiotic stresses, alter the composition of the food, or to produce novel products. the first crops to be released commercially on a large scale provided protection from insect pests or tolerance to herbicides. fungal and virus resistant crops have also been developed or are in development. 
this makes the insect and weed management of crops easier and can indirectly increase crop yield. gm crops that directly improve yield by accelerating growth or making the plant more hardy ( by improving salt, cold or drought tolerance ) are also under development. in 2016 salmon have been genetically modified with growth hormones to reach normal adult size much faster. gmos have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. the amflora potato produces a more industrially useful blend of starches. soybeans and canola have been genetically modified to produce more healthy oils. the first commercialised gm food was a tomato that had delayed ripening, increasing its shelf life. plants and animals have been engineered to produce materials they do not normally make. pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves ; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the fda approved a drug produced in goat milk. = = = other applications = = = genetic engineering has potential applications in conservation and natural area management. gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. with the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice. genetic engineering is also being used to create microbial art. some bacteria have been genetically engineered to create black and white photographs. novelty items such as lavender - colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering. = = regulation = = the regulation of genetic engineering . microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. 
during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of genetic engineering, also called genetic modification or genetic manipulation, is the modification and manipulation of an organism ' s genes using technology. it is a set of technologies used to change the genetic makeup of cells, including the transfer of genes within and across species boundaries to produce improved or novel organisms. new dna is obtained by either isolating and copying the genetic material of interest using recombinant dna methods or by artificially synthesising the dna. a construct is usually created and used to insert this dna into the host organism. the first recombinant dna molecule was made by paul berg in 1972 by combining dna from the monkey virus sv40 with the lambda virus. as well as inserting genes, the process can be used to remove, or " knock out ", genes. the new dna can be inserted randomly, or targeted to a specific part of the genome. an organism that is generated through genetic engineering is considered to be genetically modified ( gm ) and the resulting entity is a genetically modified organism ( gmo ). the first gmo was a bacterium generated by herbert boyer and stanley cohen in 1973. rudolf jaenisch created the first gm animal when he inserted foreign dna into a mouse in 1974. the first company to focus on genetic engineering, genentech, was founded in 1976 and started the production of human proteins. genetically engineered human insulin was produced in 1978 and insulin - producing bacteria were commercialised in 1982. genetically modified food has been sold since 1994, with the release of the flavr savr tomato. the flavr savr was engineered to have a longer shelf life, but most current gm crops are modified to increase resistance to insects and herbicides. glofish, the first gmo designed as a pet, was sold in the united states in december 2003. in 2016 salmon modified with a growth hormone were sold. genetic engineering has been applied in numerous fields including research, medicine, industrial biotechnology and agriculture. in research, gmos are used to study gene function and expression through loss of function, gain of function, tracking and expression experiments. 
by knocking out genes responsible for certain conditions it is possible to create animal model organisms of human diseases. as well as producing hormones, vaccines and other drugs, genetic engineering has the potential to cure genetic diseases through gene therapy. chinese hamster ovary ( cho ) cells are used in industrial genetic engineering. additionally mrna vaccines are made through genetic engineering to prevent infections by viruses such as covid - 19. the same techniques that are used to produce drugs can also have industrial applications such the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β€” one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley – to form genetic engineering takes the gene directly from one organism and delivers it to the other. this is much faster, can be used to insert any genes from any organism ( even ones from different domains ) and prevents other undesirable genes from also being added. 
genetic engineering could potentially fix severe genetic disorders in humans by replacing the defective gene with a functioning one. it is an important tool in research that allows the function of specific genes to be studied. drugs, vaccines and other products have been harvested from organisms engineered to produce them. crops have been developed that aid food security by increasing yield, nutritional value and tolerance to environmental stresses. the dna can be introduced directly into the host organism or into a cell that is then fused or hybridised with the host. this relies on recombinant nucleic acid techniques to form new combinations of heritable genetic material followed by the incorporation of that material either indirectly through a vector system or directly through micro - injection, macro - injection or micro - encapsulation. genetic engineering does not normally include traditional breeding, in vitro fertilisation, induction of polyploidy, mutagenesis and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process. however, some broad definitions of genetic engineering include selective breeding. cloning and stem cell research, although not considered genetic engineering, are closely related and genetic engineering can be used within them. synthetic biology is an emerging discipline that takes genetic engineering a step further by introducing artificially synthesised material into an organism. plants, animals or microorganisms that have been changed through genetic engineering are termed genetically modified organisms or gmos. if genetic material from another species is added to the host, the resulting organism is called transgenic. if genetic material from the same species or a species that can naturally breed with the host is used the resulting organism is called cisgenic. if genetic engineering is used to remove genetic material from the target organism the resulting organism is termed a knockout organism. in europe genetic modification is synonymous with genetic engineering while within the united states of america and canada genetic modification can also be used to refer to more conventional breeding methods. = = history = = humans have altered the genomes of species for thousands of years through selective breeding, or artificial selection : 1 : 1 as contrasted with natural selection. more recently, mutation breeding has used exposure to chemicals or radiation to produce a high frequency of random mutations, for selective breeding purposes. genetic engineering as the direct manipulation of dna by humans outside breeding and studies of the molecular genetics of model plants such as the thale cress, arabidopsis thaliana, a weedy species in the mustard family ( brassicaceae ). the genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of dna, forming one of the smallest genomes among flowering plants. arabidopsis was the first plant to have its genome sequenced, in 2000. the sequencing of some other relatively small genomes, of rice ( oryza sativa ) and brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally. model plants such as arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. 
ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in c4 plants. the single celled green alga chlamydomonas reinhardtii, while not an embryophyte itself, contains a green - pigmented chloroplast related to that of land plants, making it useful for study. a red alga cyanidioschyzon merolae has also been used to study some basic chloroplast functions. spinach, peas, soybeans and a moss physcomitrella patens are commonly used to study plant cell biology. agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus - inducing ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. schell and van montagu ( 1977 ) hypothesised that the ti plasmid could be a natural vector for introducing the nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. today, genetic modification of the ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example Question: Gregor Mendel was the first to show that organisms had traits that are passed on from parents to the next generation. In order for the scientific community to accept Mendel's discovery, others had to A) invent the microscope. B) read through his journals. C) duplicate the results of his experiments. D) invest money in his scientific investigations.
C) duplicate the results of his experiments.
Context: the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not be clearly ascertained and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. this includes the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. mining may not be necessary, if the ore body and physical environment are conducive to leaching. leaching dissolves minerals in an ore body and results in an enriched solution. the solution ; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. 
the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans also had a complex system of aqueducts, which were used to transport water across long distances. the first roman aqueduct was built in 312 bce. the eleventh and final ancient roman aqueduct was built in 226 ce. put together, the roman aqueducts extended over 450 km, but less than 70 km of this was above ground near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. this includes the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or mostly waste. concentrating the particles of value in a form supporting separation enables the desired metal to be removed from waste products. mining may not be necessary, if the ore body and physical environment are conducive to leaching. leaching dissolves minerals in an ore body and results in an enriched solution. the solution is collected and processed to extract valuable metals. ore bodies often contain more than one valuable metal. 
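the extraction passage above names the three streams an extractive metallurgist tracks : feed, concentrate and tailings. a standard piece of mineral - processing bookkeeping ( not given in the passage itself ) is the two - product recovery formula. writing f, c and t for the metal grades ( mass fractions ) of feed, concentrate and tailings, and F, C, T for the corresponding mass flows, conservation of total mass and of contained metal gives

$$F = C + T, \qquad F f = C c + T t, \qquad \text{recovery} = \frac{C c}{F f} = \frac{c\,( f - t )}{f\,( c - t )}$$

for example, a feed grading 2 % copper, a concentrate at 25 % and tailings at 0.2 % imply a recovery of about 0.25 × 0.018 / ( 0.02 × 0.248 ) ≈ 91 % of the copper reporting to the concentrate ( the grade figures here are assumed purely for illustration ).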
tailings of a previous process may be used as a feed in another process to extract a secondary product from the original ore. additionally, a concentrate may contain more than one valuable metal. that concentrate would then be processed to separate the valuable metals into individual constituents. the paleolithic, or " old stone age ", spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were crude, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes, emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools made from organic materials such as wood, bone, and antler. al - kimia is derived from the ancient greek χημια, which is in turn derived from the word kemet, which is the ancient name of egypt in the egyptian language. alternately, al - kimia may derive from χημεία ' cast together '. = = modern principles = = the current model of atomic structure is the quantum mechanical model. traditional chemistry starts with the study of elementary particles, atoms, molecules, substances, metals, crystals and other aggregates of matter. matter can be studied in solid, liquid, gas and plasma states, in isolation or in combination. the interactions, reactions and transformations that are studied in chemistry are usually the result of interactions between atoms, leading to rearrangements of the chemical bonds which hold atoms together. such behaviors are studied in a chemistry laboratory.
the chemistry laboratory stereotypically uses various forms of laboratory glassware. however glassware is not central to chemistry, and a great deal of experimental ( as well as applied / industrial ) chemistry is done without it. a chemical reaction is a transformation of some substances into one or more different substances. the basis of such a chemical transformation is the rearrangement of electrons in the chemical bonds between atoms. it can be symbolically depicted through a chemical equation, which usually involves atoms as subjects. the number of atoms on the left and the right in the equation for a chemical transformation is equal. ( when the number of atoms on either side is unequal, the transformation is referred to as a nuclear reaction or radioactive decay. ) the type of chemical reactions a substance may undergo and the energy changes that may accompany it are constrained by certain basic rules, known as chemical laws. energy and entropy considerations are invariably important in almost all chemical studies. chemical substances are classified in terms of their structure, phase, as well as their chemical compositions. they can be analyzed using the tools of chemical analysis, e. g. spectroscopy and chromatography. scientists engaged in chemical research are known as chemists. most chemists specialize in one or more sub - disciplines. several concepts are essential for the study of chemistry ; some of them are : = = = matter = = = in chemistry, matter is defined as anything that has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well, although not all particles have rest mass ; the photon, for example, does not. matter can be a pure chemical substance or a mixture of substances. the earliest evidence of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were crude, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes, emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely.
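the paragraph above on chemical equations states that the number of atoms of each element must be the same on both sides. a simple worked example ( the combustion of methane, chosen purely as an illustration ) makes the bookkeeping explicit :

$$\mathrm{CH_4 + 2\,O_2 \;\longrightarrow\; CO_2 + 2\,H_2O}$$

each side carries one carbon, four hydrogen and four oxygen atoms, so the equation is balanced ; an imbalance would indicate either a bookkeeping error or, as the passage notes, a nuclear rather than a chemical transformation.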
the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop time estimates range from 5, 500 to 3, 000 bce with most experts putting it closer to 4, 000 bce. the oldest artifacts with drawings depicting wheeled carts date from about 3, 500 bce. more recently, the oldest - known wooden wheel in the world as of 2024 was found in the ljubljana marsh of slovenia ; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos. several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans was used before copper smelting was known. copper smelting is believed to have originated when the technology of pottery kilns allowed sufficiently high temperatures. the concentration of various elements such as arsenic increase with depth in copper ore deposits and smelting of these ores yields arsenical bronze, which can be sufficiently work hardened to be suitable for making tools. 
bronze is an alloy of copper with tin ; the latter being found in relatively few deposits globally caused a long time to elapse before true tin bronze became widespread. ( see : tin sources and trade in ancient times ) bronze was a major advancement over stone as a material for making tools, both because of its mechanical properties like strength and ductility and because it could be cast in molds to make intricately shaped objects. bronze significantly advanced shipbuilding technology with better tools and bronze nails. bronze nails replaced the old method of attaching boards of the hull with cord woven through drilled holes. better ships enabled long - distance trade and the advance of civilization. this technological trend apparently began in the fertile crescent and spread outward over time. these developments were not, and still are not, universal. the three - age system does not accurately describe the technology history of groups outside of eurasia, and does not apply at all in the case of some isolated populations, such as the spinifex people, the sentinelese, and various amazonian tribes, which still make use of stone age technology, and have not developed agricultural or metal technology. these villages preserve traditional customs in the face of global modernity, exhibiting a remarkable resistance to the rapid advancement of technology. = = = = iron age = = = = before iron smelting was developed the only iron was obtained from meteorites and is usually identified by having nickel content. meteoric iron was rare and valuable, but was sometimes used to make tools and other implements, such as fish hooks. the iron age involved the adoption of iron smelting technology. it generally replaced bronze and made it possible to produce tools which were stronger, lighter and cheaper to make than bronze equivalents. the raw materials to make iron, such as ore and limestone, are far more abundant than copper and especially tin ores. consequently, iron was produced in many areas. it was not possible to mass manufacture steel or pure iron because of the high temperatures required. furnaces could reach melting temperature but the crucibles and molds needed for melting and casting had not been developed. steel could be produced by forging bloomery iron to reduce the carbon content in a . currently, even blades made of advanced metal alloys used in the engines ' hot section require cooling and careful limiting of operating temperatures. turbine engines made with ceramics could operate more efficiently, giving aircraft greater range and payload for a set amount of fuel. recently, there have been advances in ceramics which include bio - ceramics, such as dental implants and synthetic bones. hydroxyapatite, the natural mineral component of bone, has been made synthetically from a number of biological and chemical sources and can be formed into ceramic materials. orthopedic implants made from these materials bond readily to bone and other tissues in the body without rejection or inflammatory reactions. because of this, they are of great interest for gene delivery and tissue engineering scaffolds. most hydroxyapatite ceramics are very porous and lack mechanical strength and are used to coat metal orthopedic devices to aid in forming a bond to bone or as bone fillers. they are also used as fillers for orthopedic plastic screws to aid in reducing the inflammation and increase absorption of these plastic materials. 
work is being done to make strong, fully dense nano crystalline hydroxyapatite ceramic materials for orthopedic weight bearing devices, replacing foreign metal and plastic orthopedic materials with a synthetic, but naturally occurring, bone mineral. ultimately these ceramic materials may be used as bone replacements or, with the incorporation of protein collagens, as synthetic bones. durable actinide - containing ceramic materials have many applications such as in nuclear fuels for burning excess pu and in chemically - inert sources of alpha irradiation for power supply of unmanned space vehicles or to produce electricity for microelectronic devices. both use and disposal of radioactive actinides require their immobilization in a durable host material. in nuclear waste, long - lived radionuclides such as actinides are immobilized using chemically - durable crystalline materials based on polycrystalline ceramics and large single crystals. alumina ceramics are widely utilized in the chemical industry due to their excellent chemical stability and high resistance to corrosion. they are used as acid - resistant pump impellers and pump bodies, ensuring long - lasting performance in transferring aggressive fluids. they are also used in acid - carrying pipe linings to prevent contamination and maintain fluid purity, which is crucial in industries like pharmaceuticals and food processing. valves made from alumina ceramics demonstrate exceptional durability and resistance to chemical attack, making them reliable for controlling the flow of corrosive liquids. the site of plocnik, in present - day serbia, has produced a smelted copper axe dating from 5, 500 bc, belonging to the vinca culture. the balkans and adjacent carpathian region were the location of major chalcolithic cultures including vinca, varna, karanovo, gumelnita and hamangia, which are often grouped together under the name of ' old europe '. with the carpatho - balkan region described as the ' earliest metallurgical province in eurasia ', its scale and technical quality of metal production in the 6th – 5th millennia bc totally overshadowed that of any other contemporary production centre. the earliest documented use of lead ( possibly native or smelted ) in the near east dates from the 6th millennium bc and is from the late neolithic settlements of yarim tepe and arpachiyah in iraq. the artifacts suggest that lead smelting may have predated copper smelting. metallurgy of lead has also been found in the balkans during the same period. copper smelting is documented at sites in anatolia and at the site of tal - i iblis in southeastern iran from c. 5000 bc. copper smelting is first documented in the delta region of northern egypt in c. 4000 bc, associated with the maadi culture. this represents the earliest evidence for smelting in africa. the varna necropolis, bulgaria, is a burial site located in the western industrial zone of varna, approximately 4 km from the city centre, internationally considered one of the key archaeological sites in world prehistory. the oldest gold treasure in the world, dating from 4, 600 bc to 4, 200 bc, was discovered at the site. the gold piece dating from 4, 500 bc, found in 2019 in durankulak, near varna, is another important example. other signs of early metals are found from the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not been clearly ascertained and new discoveries are both continuous and ongoing.
in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin Question: Alicia has lots of old bicycle parts. She wants to build something new with the parts. What is the first thing Alicia should do? A) plan the new item B) construct the new item C) try out the new item D) evaluate the new item
A) plan the new item
Context: the operating room, the anesthesiology physician also serves the same function in the labor and delivery ward, and some are specialized in critical medicine. emergency medicine is concerned with the diagnosis and treatment of acute or life - threatening conditions, including trauma, surgical, medical, pediatric, and psychiatric emergencies. family medicine, family practice, general practice or primary care is, in many countries, the first port - of - call for patients with non - emergency medical problems. family physicians often provide services across a broad range of settings including office based practices, emergency department coverage, inpatient care, and nursing home care. medical genetics is concerned with the diagnosis and management of hereditary disorders. neurology is concerned with diseases of the nervous system. in the uk, neurology is a subspecialty of general medicine. obstetrics and gynecology ( often abbreviated as ob / gyn ( american english ) or obs & gynae ( british english ) ) are concerned respectively with childbirth and the female reproductive and associated organs. reproductive medicine and fertility medicine are generally practiced by gynecological specialists. pediatrics ( ae ) or paediatrics ( be ) is devoted to the care of infants, children, and adolescents. like internal medicine, there are many pediatric subspecialties for specific age ranges, organ systems, disease classes, and sites of care delivery. pharmaceutical medicine is the medical scientific discipline concerned with the discovery, development, evaluation, registration, monitoring and medical aspects of marketing of medicines for the benefit of patients and public health. physical medicine and rehabilitation ( or physiatry ) is concerned with functional improvement after injury, illness, or congenital disorders. podiatric medicine is the study of, diagnosis, and medical and surgical treatment of disorders of the foot, ankle, lower limb, hip and lower back. preventive medicine is the branch of medicine concerned with preventing disease. community health or public health is an aspect of health services concerned with threats to the overall health of a community based on population health analysis. psychiatry is the branch of medicine concerned with the bio - psycho - social study of the etiology, diagnosis, treatment and prevention of cognitive, perceptual, emotional and behavioral disorders. related fields include psychotherapy and clinical psychology. = = = interdisciplinary fields = = = some interdisciplinary sub - specialties of medicine include : addiction medicine deals with the treatment of addiction. aerospace medicine deals with medical problems related to flying and space travel. biomedical engineering is a field dealing with the application of engineering principles to medical practice known as anaesthetics ) : concerned with the perioperative management of the surgical patient. the anesthesiologist ' s role during surgery is to prevent derangement in the vital organs ' ( i. e. brain, heart, kidneys ) functions and postoperative pain. outside of the operating room, the anesthesiology physician also serves the same function in the labor and delivery ward, and some are specialized in critical medicine. emergency medicine is concerned with the diagnosis and treatment of acute or life - threatening conditions, including trauma, surgical, medical, pediatric, and psychiatric emergencies. 
family medicine, family practice, general practice or primary care is, in many countries, the first port - of - call for patients with non - emergency medical problems. family physicians often provide services across a broad range of settings including office based practices, emergency department coverage, inpatient care, and nursing home care. medical genetics is concerned with the diagnosis and management of hereditary disorders. neurology is concerned with diseases of the nervous system. in the uk, neurology is a subspecialty of general medicine. obstetrics and gynecology ( often abbreviated as ob / gyn ( american english ) or obs & gynae ( british english ) ) are concerned respectively with childbirth and the female reproductive and associated organs. reproductive medicine and fertility medicine are generally practiced by gynecological specialists. pediatrics ( ae ) or paediatrics ( be ) is devoted to the care of infants, children, and adolescents. like internal medicine, there are many pediatric subspecialties for specific age ranges, organ systems, disease classes, and sites of care delivery. pharmaceutical medicine is the medical scientific discipline concerned with the discovery, development, evaluation, registration, monitoring and medical aspects of marketing of medicines for the benefit of patients and public health. physical medicine and rehabilitation ( or physiatry ) is concerned with functional improvement after injury, illness, or congenital disorders. podiatric medicine is the study of, diagnosis, and medical and surgical treatment of disorders of the foot, ankle, lower limb, hip and lower back. preventive medicine is the branch of medicine concerned with preventing disease. community health or public health is an aspect of health services concerned with threats to the overall health of a community based on population health analysis. psychiatry is the branch of medicine concerned with the bio - psycho - social study of the etiology, diagnosis, treatment and prevention of cognitive, perceptual, emotional and behavioral disorders. the tests, assays, and procedures needed for providing the specific services. subspecialties include transfusion medicine, cellular pathology, clinical chemistry, hematology, clinical microbiology and clinical immunology. clinical neurophysiology is concerned with testing the physiology or function of the central and peripheral aspects of the nervous system. these kinds of tests can be divided into recordings of : ( 1 ) spontaneous or continuously running electrical activity, or ( 2 ) stimulus evoked responses. subspecialties include electroencephalography, electromyography, evoked potential, nerve conduction study and polysomnography. sometimes these tests are performed by techs without a medical degree, but the interpretation of these tests is done by a medical professional. diagnostic radiology is concerned with imaging of the body, e. g. by x - rays, x - ray computed tomography, ultrasonography, and nuclear magnetic resonance tomography. interventional radiologists can access areas in the body under imaging for an intervention or diagnostic sampling. nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances ( radiopharmaceuticals ) to the body, which can then be imaged outside the body by a gamma camera or a pet scanner. each radiopharmaceutical consists of two parts : a tracer that is specific for the function under study ( e. 
g., neurotransmitter pathway, metabolic pathway, blood flow, or other ), and a radionuclide ( usually either a gamma - emitter or a positron emitter ). there is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the pet / ct scanner. pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic, physiologic changes produced by them. as a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence - based medicine. many modern molecular tests such as flow cytometry, polymerase chain reaction ( pcr ), immunohistochemistry, cytogenetics, gene rearrangements studies and fluorescent in situ hybridization ( fish ) fall within the territory of pathology. = = = = other major specialties = = = = the following are some major medical specialties that do not directly fit into any of the above - mentioned groups : anesthesiology ( also medicine are : basic sciences of medicine ; this is what every physician is educated in, and some return to in biomedical research. interdisciplinary fields, where different medical specialties are mixed to function in certain occasions. medical specialties = = = basic sciences = = = anatomy is the study of the physical structure of organisms. in contrast to macroscopic or gross anatomy, cytology and histology are concerned with microscopic structures. biochemistry is the study of the chemistry taking place in living organisms, especially the structure and function of their chemical components. biomechanics is the study of the structure and function of biological systems by means of the methods of mechanics. biophysics is an interdisciplinary science that uses the methods of physics and physical chemistry to study biological systems. biostatistics is the application of statistics to biological fields in the broadest sense. a knowledge of biostatistics is essential in the planning, evaluation, and interpretation of medical research. it is also fundamental to epidemiology and evidence - based medicine. cytology is the microscopic study of individual cells. embryology is the study of the early development of organisms. endocrinology is the study of hormones and their effect throughout the body of animals. epidemiology is the study of the demographics of disease processes, and includes, but is not limited to, the study of epidemics. genetics is the study of genes, and their role in biological inheritance. gynecology is the study of female reproductive system. histology is the study of the structures of biological tissues by light microscopy, electron microscopy and immunohistochemistry. immunology is the study of the immune system, which includes the innate and adaptive immune system in humans, for example. lifestyle medicine is the study of the chronic conditions, and how to prevent, treat and reverse them. medical physics is the study of the applications of physics principles in medicine. microbiology is the study of microorganisms, including protozoa, bacteria, fungi, and viruses. molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material. neuroscience includes those disciplines of science that are related to the study of the nervous system. a main focus of neuroscience is the biology and physiology of the human brain and spinal cord. 
some related clinical specialties include neurology, neurosurgery and psychiatry. nutrition science ( theoretical focus ) and dietetics ( practical focus ) is the study of the relationship of food and drink to health and disease, especially , characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the west. little evidence survives of how ancient indian cultures around the indus river understood nature, but some of their perspectives may be reflected in the vedas, a set of sacred hindu texts. they reveal a conception of the universe as ever - expanding and constantly being recycled and reformed. surgeons in the ayurvedic tradition saw health and illness as a combination of three humors : wind, bile and phlegm. a healthy life resulted from a balance among these humors. in ayurvedic thought, the body consisted of five elements : earth, water, fire, wind, and space. ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy. pre - socratic philosophers in ancient greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 bc. however, an element of magic and mythology remained. natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. thales of miletus, an early philosopher who lived from 625 to 546 bc, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. in the 5th century bc, leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. pythagoras applied greek innovations in mathematics to astronomy and suggested that the earth was spherical. = = = aristotelian natural philosophy ( 400 bc – 1100 ad ) = = = later socratic and platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world ; plato criticized pre - socratic thinkers as materialists and anti - religionists. aristotle, however, a student of plato who lived from 384 to 322 bc, paid closer attention to the natural world in his philosophy. in his history of animals, he described the inner workings of 110 species, including the stingray, catfish and bee. he investigated chick embryos by breaking open eggs and observing them at various stages of development. aristotle ' s works were influential through the 16th century, and he is considered to be the father of biology for his pioneering work in that science. he also presented philosophies about physics, nature, and astronomy using the nervous system. these kinds of tests can be divided into recordings of : ( 1 ) spontaneous or continuously running electrical activity, or ( 2 ) stimulus evoked responses. subspecialties include electroencephalography, electromyography, evoked potential, nerve conduction study and polysomnography. sometimes these tests are performed by techs without a medical degree, but the interpretation of these tests is done by a medical professional. diagnostic radiology is concerned with imaging of the body, e. g. by x - rays, x - ray computed tomography, ultrasonography, and nuclear magnetic resonance tomography. interventional radiologists can access areas in the body under imaging for an intervention or diagnostic sampling. 
nuclear medicine is concerned with studying human organ systems by administering radiolabelled substances ( radiopharmaceuticals ) to the body, which can then be imaged outside the body by a gamma camera or a pet scanner. each radiopharmaceutical consists of two parts : a tracer that is specific for the function under study ( e. g., neurotransmitter pathway, metabolic pathway, blood flow, or other ), and a radionuclide ( usually either a gamma - emitter or a positron emitter ). there is a degree of overlap between nuclear medicine and radiology, as evidenced by the emergence of combined devices such as the pet / ct scanner. pathology as a medical specialty is the branch of medicine that deals with the study of diseases and the morphologic, physiologic changes produced by them. as a diagnostic specialty, pathology can be considered the basis of modern scientific medical knowledge and plays a large role in evidence - based medicine. many modern molecular tests such as flow cytometry, polymerase chain reaction ( pcr ), immunohistochemistry, cytogenetics, gene rearrangements studies and fluorescent in situ hybridization ( fish ) fall within the territory of pathology. = = = = other major specialties = = = = the following are some major medical specialties that do not directly fit into any of the above - mentioned groups : anesthesiology ( also known as anaesthetics ) : concerned with the perioperative management of the surgical patient. the anesthesiologist ' s role during surgery is to prevent derangement in the vital organs ' ( i. e. brain, heart, kidneys ) functions and postoperative pain. outside of ##ry. immunology is the study of the immune system, which includes the innate and adaptive immune system in humans, for example. lifestyle medicine is the study of the chronic conditions, and how to prevent, treat and reverse them. medical physics is the study of the applications of physics principles in medicine. microbiology is the study of microorganisms, including protozoa, bacteria, fungi, and viruses. molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material. neuroscience includes those disciplines of science that are related to the study of the nervous system. a main focus of neuroscience is the biology and physiology of the human brain and spinal cord. some related clinical specialties include neurology, neurosurgery and psychiatry. nutrition science ( theoretical focus ) and dietetics ( practical focus ) is the study of the relationship of food and drink to health and disease, especially in determining an optimal diet. medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases. pathology as a science is the study of disease – the causes, course, progression and resolution thereof. pharmacology is the study of drugs and their actions. photobiology is the study of the interactions between non - ionizing radiation and living organisms. physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms. radiobiology is the study of the interactions between ionizing radiation and living organisms. toxicology is the study of hazardous effects of drugs and poisons. = = = specialties = = = in the broadest meaning of " medicine ", there are many different specialties. 
in the uk, most specialities have their own body or college, which has its own entrance examination. these are collectively known as the royal colleges, although not all currently use the term " royal ". the development of a speciality is often driven by new technology ( such as the development of effective anaesthetics ) or ways of working ( such as emergency departments ) ; the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination. within medical circles, specialities usually fit into one of two broad categories : " medicine " and " surgery ". " medicine " refers to the practice of non - operative medicine, and most of its subspecialties require preliminary training in internal medicine. in the uk cross - fertilization that takes place among the various fields. psychology differs from biology and neuroscience in that it is primarily concerned with the interaction of mental processes and behaviour, and of the overall processes of a system, and not simply the biological or neural processes themselves, though the subfield of neuropsychology combines the study of the actual neural processes with the study of the mental effects they have subjectively produced. many people associate psychology with clinical psychology, which focuses on assessment and treatment of problems in living and psychopathology. in reality, psychology has myriad specialties including social psychology, developmental psychology, cognitive psychology, educational psychology, industrial - organizational psychology, mathematical psychology, neuropsychology, and quantitative analysis of behaviour. psychology is a very broad science that is rarely tackled as a whole, major block. although some subfields encompass a natural science base and a social science application, others can be clearly distinguished as having little to do with the social sciences or having a lot to do with the social sciences. for example, biological psychology is considered a natural science with a social scientific application ( as is clinical medicine ), social and occupational psychology are, generally speaking, purely social sciences, whereas neuropsychology is a natural science that lacks application out of the scientific tradition entirely. in british universities, emphasis on what tenet of psychology a student has studied and / or concentrated is communicated through the degree conferred : bpsy indicates a balance between natural and social sciences, bsc indicates a strong ( or entire ) scientific concentration, whereas a ba underlines a majority of social science credits. this is not always necessarily the case however, and in many uk institutions students studying the bpsy, bsc, and ba follow the same curriculum as outlined by the british psychological society and have the same options of specialism open to them regardless of whether they choose a balance, a heavy science basis, or heavy social science basis to their degree. if they applied to read the ba. for example, but specialized in heavily science - based modules, then they will still generally be awarded the ba. = = = sociology = = = sociology is the systematic study of society, individuals ' relationship to their societies, the consequences of difference, and other aspects of human social action. the meaning of the word comes from the suffix - logy, which means " study of ", derived from ancient greek, and the stem soci -, which is from the latin word socius, meaning " companion ", or society in general. 
auguste comte ( 1798 – 1857 ) coined and taken up by the brain. by observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. pet has similar spatial resolution to fmri, but it has extremely poor temporal resolution. electroencephalography. eeg measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. this technique has an extremely high temporal resolution, but a relatively poor spatial resolution. functional magnetic resonance imaging. fmri measures the relative amount of oxygenated blood flowing to different parts of the brain. more oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. this allows us to localize particular functions within different brain regions. fmri has moderate spatial and temporal resolution. optical imaging. this technique uses infrared transmitters and receivers to measure the amount of light reflectance by blood near different areas of the brain. since oxygenated and deoxygenated blood reflects light by different amounts, we can study which areas are more active ( i. e., those that have more oxygenated blood ). optical imaging has moderate temporal resolution, but poor spatial resolution. it also has the advantage that it is extremely safe and can be used to study infants ' brains. magnetoencephalography. meg measures magnetic fields resulting from cortical activity. it is similar to eeg, except that it has improved spatial resolution since the magnetic fields it measures are not as blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in eeg is. meg uses squid sensors to detect tiny magnetic fields. = = = computational modeling = = = computational models require a mathematically and logically formal representation of a problem. computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. computational modeling can help us understand the functional organization of a particular cognitive phenomenon. approaches to cognitive modeling can be categorized as : ( 1 ) symbolic, on abstract mental functions of an intelligent mind by means of symbols ; ( 2 ) subsymbolic, on the neural and associative properties of the human brain ; and ( 3 ) across the symbolic – subsymbolic border, including hybrid. symbolic modeling evolved from the computer science paradigms using the technologies of knowledge - based systems, as well as a philosophical perspective ( e. g. " good old - fashioned artificial intelligence " ( gofa often called physicians. these terms, internist or physician ( in the narrow sense, common outside north america ), generally exclude practitioners of gynecology and obstetrics, pathology, psychiatry, and especially surgery and its subspecialities. because their patients are often seriously ill or require complex investigations, internists do much of their work in hospitals. formerly, many internists were not subspecialized ; such general physicians would see any complex nonsurgical problem ; this style of practice has become much less common. in modern urban practice, most internists are subspecialists : that is, they generally limit their medical practice to problems of one organ system or to one particular area of medical knowledge. 
for example, gastroenterologists and nephrologists specialize respectively in diseases of the gut and the kidneys. in the commonwealth of nations and some other countries, specialist pediatricians and geriatricians are also described as specialist physicians ( or internists ) who have subspecialized by age of patient rather than by organ system. elsewhere, especially in north america, general pediatrics is often a form of primary care. there are many subspecialities ( or subdisciplines ) of internal medicine : training in internal medicine ( as opposed to surgical training ), varies considerably across the world : see the articles on medical education for more details. in north america, it requires at least three years of residency training after medical school, which can then be followed by a one - to three - year fellowship in the subspecialties listed above. in general, resident work hours in medicine are less than those in surgery, averaging about 60 hours per week in the us. this difference does not apply in the uk where all doctors are now required by law to work less than 48 hours per week on average. = = = = diagnostic specialties = = = = clinical laboratory sciences are the clinical diagnostic services that apply laboratory techniques to diagnosis and management of patients. in the united states, these services are supervised by a pathologist. the personnel that work in these medical laboratory departments are technically trained staff who do not hold medical degrees, but who usually hold an undergraduate medical technology degree, who actually perform the tests, assays, and procedures needed for providing the specific services. subspecialties include transfusion medicine, cellular pathology, clinical chemistry, hematology, clinical microbiology and clinical immunology. clinical neurophysiology is concerned with testing the physiology or function of the central and peripheral aspects of Question: Which is a major organ of the nervous system? A) brain B) stomach C) lung D) bone
A) brain
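As a rough illustration of the symbolic / subsymbolic distinction drawn in the cognitive-modeling passage above, here is a minimal sketch; the toy task, function names, and parameters are ours, not from the source. The same trivial decision is written once as an explicit rule over symbols and once as a perceptron-like unit that learns weights from examples.

```python
# Minimal sketch (not from the source text) contrasting the two modeling styles
# described above: a symbolic rule and a subsymbolic (perceptron-like) unit,
# both solving the same toy task of deciding logical AND.

def symbolic_and(a: int, b: int) -> int:
    # Symbolic approach: an explicit rule over discrete symbols.
    return 1 if (a == 1 and b == 1) else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # Subsymbolic approach: a single weighted unit adjusts continuous weights
    # from examples instead of following a hand-written rule.
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (a, b), target in samples:
            out = 1 if (w1 * a + w2 * b + bias) > 0 else 0
            err = target - out
            w1 += lr * err * a
            w2 += lr * err * b
            bias += lr * err
    return w1, w2, bias

data = [((a, b), symbolic_and(a, b)) for a in (0, 1) for b in (0, 1)]
w1, w2, bias = train_perceptron(data)
for (a, b), target in data:
    learned = 1 if (w1 * a + w2 * b + bias) > 0 else 0
    assert learned == target == symbolic_and(a, b)
```

A hybrid model, in the sense used above, would combine both layers, for example using learned units to ground the symbols that the rules manipulate.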
Context: affect static bodies dynamics, the study of how forces affect moving bodies. dynamics includes kinematics ( about movement, velocity, and acceleration ) and kinetics ( about forces and resulting accelerations ). mechanics of materials, the study of how different materials deform under various types of stress fluid mechanics, the study of how fluids react to forces kinematics, the study of the motion of bodies ( objects ) and systems ( groups of objects ), while ignoring the forces that cause the motion. kinematics is often used in the design and analysis of mechanisms. continuum mechanics, a method of applying mechanics that assumes that objects are continuous ( rather than discrete ) mechanical engineers typically use mechanics in the design or analysis phases of engineering. if the engineering project were the design of a vehicle, statics might be employed to design the frame of the vehicle, in order to evaluate where the stresses will be most intense. dynamics might be used when designing the car ' s engine, to evaluate the forces in the pistons and cams as the engine cycles. mechanics of materials might be used to choose appropriate materials for the frame and engine. fluid mechanics might be used to design a ventilation system for the vehicle ( see hvac ), or to design the intake system for the engine. = = = mechatronics and robotics = = = mechatronics is a combination of mechanics and electronics. it is an interdisciplinary branch of mechanical engineering, electrical engineering and software engineering that is concerned with integrating electrical and mechanical engineering to create hybrid automation systems. in this way, machines can be automated through the use of electric motors, servo - mechanisms, and other electrical systems in conjunction with special software. a common example of a mechatronics system is a cd - rom drive. mechanical systems open and close the drive, spin the cd and move the laser, while an optical system reads the data on the cd and converts it to bits. integrated software controls the process and communicates the contents of the cd to the computer. robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. these robots may be of any shape and size, but all are preprogrammed and interact physically with the world. to create a robot, an engineer typically employs kinematics ( to determine the robot ' s range of motion ) and mechanics ( to determine the stresses within the robot ). robots are used extensively in industrial automation engineering. they allow businesses to save money on labor, a measurable and testable value of a vehicle ' s ability to perform in various conditions. performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate ( e. g. standing start 1 / 4 mile elapsed time, 0 – 60 mph, etc. ), its top speed, how short and quickly a car can come to a complete stop from a set speed ( e. g. 70 - 0 mph ), how much g - force a car can generate without losing grip, recorded lap - times, cornering speed, brake fade, etc. performance can also reflect the amount of control in inclement weather ( snow, ice, rain ). shift quality : shift quality is the driver ' s perception of the vehicle to an automatic transmission shift event. this is influenced by the powertrain ( internal combustion engine, transmission ), and the vehicle ( driveline, suspension, engine and powertrain mounts, etc. 
) shift feel is both a tactile ( felt ) and audible ( heard ) response of the vehicle. shift quality is experienced as various events : transmission shifts are felt as an upshift at acceleration ( 1 – 2 ), or a downshift maneuver in passing ( 4 – 2 ). shift engagements of the vehicle are also evaluated, as in park to reverse, etc. durability / corrosion engineering : durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. tests include mileage accumulation, severe driving conditions, and corrosive salt baths. drivability : drivability is the vehicle ' s response to general driving conditions. cold starts and stalls, rpm dips, idle response, launch hesitations and stumbles, and performance levels all contribute to the overall drivability of any given vehicle. cost : the cost of a vehicle program is typically split into the effect on the variable cost of the vehicle, and the up - front tooling and fixed costs associated with developing the vehicle. there are also costs associated with warranty reductions and marketing. program timing : to some extent programs are timed with respect to the market, and also to the production - schedules of assembly plants. any new part in the design must support the development and manufacturing schedule of the model. design for manufacturability ( dfm ) : dfm refers to designing vehicular components in such a way that they are not only feasible to manufacture, but also such that they are cost - efficient to produce while resulting in acceptable systems are responsible for operational controls such as the throttle, brake and steering controls ; as well as many comfort - and - convenience systems such as the hvac, infotainment, and lighting systems. it would not be possible for automobiles to meet modern safety and fuel - economy requirements without electronic controls. performance : performance is a measurable and testable value of a vehicle ' s ability to perform in various conditions. performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate ( e. g. standing start 1 / 4 mile elapsed time, 0 – 60 mph, etc. ), its top speed, how short and quickly a car can come to a complete stop from a set speed ( e. g. 70 - 0 mph ), how much g - force a car can generate without losing grip, recorded lap - times, cornering speed, brake fade, etc. performance can also reflect the amount of control in inclement weather ( snow, ice, rain ). shift quality : shift quality is the driver ' s perception of the vehicle to an automatic transmission shift event. this is influenced by the powertrain ( internal combustion engine, transmission ), and the vehicle ( driveline, suspension, engine and powertrain mounts, etc. ) shift feel is both a tactile ( felt ) and audible ( heard ) response of the vehicle. shift quality is experienced as various events : transmission shifts are felt as an upshift at acceleration ( 1 – 2 ), or a downshift maneuver in passing ( 4 – 2 ). shift engagements of the vehicle are also evaluated, as in park to reverse, etc. durability / corrosion engineering : durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. tests include mileage accumulation, severe driving conditions, and corrosive salt baths. drivability : drivability is the vehicle ' s response to general driving conditions. 
cold starts and stalls, rpm dips, idle response, launch hesitations and stumbles, and performance levels all contribute to the overall drivability of any given vehicle. cost : the cost of a vehicle program is typically split into the effect on the variable cost of the vehicle, and the up - front tooling and fixed costs associated with developing the vehicle. there are also costs associated with warranty reductions and marketing. program timing : to some extent programs are timed with respect to the market, and also to the production - schedules of assembly plants. any new forces and their effect upon matter. typically, engineering mechanics is used to analyze and predict the acceleration and deformation ( both elastic and plastic ) of objects under known forces ( also called loads ) or stresses. subdisciplines of mechanics include statics, the study of non - moving bodies under known loads, how forces affect static bodies dynamics, the study of how forces affect moving bodies. dynamics includes kinematics ( about movement, velocity, and acceleration ) and kinetics ( about forces and resulting accelerations ). mechanics of materials, the study of how different materials deform under various types of stress fluid mechanics, the study of how fluids react to forces kinematics, the study of the motion of bodies ( objects ) and systems ( groups of objects ), while ignoring the forces that cause the motion. kinematics is often used in the design and analysis of mechanisms. continuum mechanics, a method of applying mechanics that assumes that objects are continuous ( rather than discrete ) mechanical engineers typically use mechanics in the design or analysis phases of engineering. if the engineering project were the design of a vehicle, statics might be employed to design the frame of the vehicle, in order to evaluate where the stresses will be most intense. dynamics might be used when designing the car ' s engine, to evaluate the forces in the pistons and cams as the engine cycles. mechanics of materials might be used to choose appropriate materials for the frame and engine. fluid mechanics might be used to design a ventilation system for the vehicle ( see hvac ), or to design the intake system for the engine. = = = mechatronics and robotics = = = mechatronics is a combination of mechanics and electronics. it is an interdisciplinary branch of mechanical engineering, electrical engineering and software engineering that is concerned with integrating electrical and mechanical engineering to create hybrid automation systems. in this way, machines can be automated through the use of electric motors, servo - mechanisms, and other electrical systems in conjunction with special software. a common example of a mechatronics system is a cd - rom drive. mechanical systems open and close the drive, spin the cd and move the laser, while an optical system reads the data on the cd and converts it to bits. integrated software controls the process and communicates the contents of the cd to the computer. robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. these robots may be of any shape and size, but all are earth. each satellite has an onboard atomic clock and transmits a continuous radio signal containing a precise time signal as well as its current position. two frequencies are used, 1. 2276 and 1. 57542 ghz. 
since the velocity of radio waves is virtually constant, the delay of the radio signal from a satellite is proportional to the distance of the receiver from the satellite. by receiving the signals from at least four satellites a gps receiver can calculate its position on earth by comparing the arrival time of the radio signals. since each satellite ' s position is known precisely at any given time, from the delay the position of the receiver can be calculated by a microprocessor in the receiver. the position can be displayed as latitude and longitude, or as a marker on an electronic map. gps receivers are incorporated in almost all cellphones and in vehicles such as automobiles, aircraft, and ships, and are used to guide drones, missiles, cruise missiles, and even artillery shells to their target, and handheld gps receivers are produced for hikers and the military. radio beacon – a fixed location terrestrial radio transmitter which transmits a continuous radio signal used by aircraft and ships for navigation. the locations of beacons are plotted on navigational maps used by aircraft and ships. vhf omnidirectional range ( vor ) – a worldwide aircraft radio navigation system consisting of fixed ground radio beacons transmitting between 108. 00 and 117. 95 mhz in the very high frequency ( vhf ) band. an automated navigational instrument on the aircraft displays a bearing to a nearby vor transmitter. a vor beacon transmits two signals simultaneously on different frequencies. a directional antenna transmits a beam of radio waves that rotates like a lighthouse at a fixed rate, 30 times per second. when the directional beam is facing north, an omnidirectional antenna transmits a pulse. by measuring the difference in phase of these two signals, an aircraft can determine its bearing ( or " radial " ) from the station accurately. by taking a bearing on two vor beacons an aircraft can determine its position ( called a " fix " ) to an accuracy of about 90 metres ( 300 ft ). most vor beacons also have a distance measuring capability, called distance measuring equipment ( dme ) ; these are called vor / dme ' s. the aircraft transmits a radio signal to the vor / dme beacon and a transponder transmits a return signal. from the propagation delay between the transmitted and received signal the aircraft can calculate several thoughts are presented on the long ongoing difficulties both students and academics face related to calculus 101. some of these thoughts may have a more general interest. beam reveals the object ' s location. since radio waves travel at a constant speed close to the speed of light, by measuring the brief time delay between the outgoing pulse and the received " echo ", the range to the target can be calculated. the targets are often displayed graphically on a map display called a radar screen. doppler radar can measure a moving object ' s velocity, by measuring the change in frequency of the return radio waves due to the doppler effect. radar sets mainly use high frequencies in the microwave bands, because these frequencies create strong reflections from objects the size of vehicles and can be focused into narrow beams with compact antennas. parabolic ( dish ) antennas are widely used. in most radars the transmitting antenna also serves as the receiving antenna ; this is called a monostatic radar. a radar which uses separate transmitting and receiving antennas is called a bistatic radar. airport surveillance radar – in aviation, radar is the main tool of air traffic control. 
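The position-fixing idea just described (signal delays from at least four satellites, compared against their known positions) can be sketched as a small least-squares solve. The satellite coordinates, receiver location, and clock offset below are invented purely for illustration, and the method shown is the standard pseudorange linearization, not necessarily the exact algorithm any particular receiver uses.

```python
# Illustrative sketch (numbers made up) of the GPS idea described above: each
# satellite's signal delay gives a "pseudorange" (true range plus a common
# receiver clock error), and four or more pseudoranges pin down the receiver
# position and that clock error.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=20):
    """Least-squares fix for receiver position (x, y, z) and clock bias (metres)."""
    x = np.zeros(4)  # crude initial guess: Earth's centre, zero clock bias
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        predicted = ranges + x[3]
        residual = pseudoranges - predicted
        # Jacobian: unit vectors from satellites toward the receiver, plus a
        # column of ones for the clock-bias term.
        J = np.hstack([-(sat_pos - x[:3]) / ranges[:, None],
                       np.ones((len(ranges), 1))])
        dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x += dx
    return x

# Four satellites at made-up positions roughly 20,000 km up.
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
true_receiver = np.array([1_917e3, 6_029e3, 1_470e3])   # near Earth's surface
clock_bias_m = C * 85e-6                                # assumed 85 us clock offset
pr = np.linalg.norm(sats - true_receiver, axis=1) + clock_bias_m

fix = solve_position(sats, pr)
print("estimated position (m):", fix[:3])
print("estimated clock bias (m):", fix[3])
```

A real receiver also corrects for satellite clock error, ionospheric and tropospheric delay, and relativistic effects; the sketch keeps only the geometric core described in the passage.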
a rotating dish antenna sweeps a vertical fan - shaped beam of microwaves around the airspace and the radar set shows the location of aircraft as " blips " of light on a display called a radar screen. airport radar operates at 2. 7 – 2. 9 ghz in the microwave s band. in large airports the radar image is displayed on multiple screens in an operations room called the tracon ( terminal radar approach control ), where air traffic controllers direct the aircraft by radio to maintain safe aircraft separation. secondary surveillance radar – aircraft carry radar transponders, transceivers which when triggered by the incoming radar signal transmit a return microwave signal. this causes the aircraft to show up more strongly on the radar screen. the radar which triggers the transponder and receives the return beam, usually mounted on top of the primary radar dish, is called the secondary surveillance radar. since radar cannot measure an aircraft ' s altitude with any accuracy, the transponder also transmits back the aircraft ' s altitude measured by its altimeter, and an id number identifying the aircraft, which is displayed on the radar screen. electronic countermeasures ( ecm ) – military defensive electronic systems designed to degrade enemy radar effectiveness, or deceive it with false information, to prevent enemies from locating local forces. it often consists of powerful microwave transmitters that can mimic enemy radar signals to create false target indications on the enemy radar screens. marine radar – an s or x band radar on ships used to detect nearby ships and obstructions like bridges. a rotating antenna sweeps a vertical , its top speed, how short and quickly a car can come to a complete stop from a set speed ( e. g. 70 - 0 mph ), how much g - force a car can generate without losing grip, recorded lap - times, cornering speed, brake fade, etc. performance can also reflect the amount of control in inclement weather ( snow, ice, rain ). shift quality : shift quality is the driver ' s perception of the vehicle to an automatic transmission shift event. this is influenced by the powertrain ( internal combustion engine, transmission ), and the vehicle ( driveline, suspension, engine and powertrain mounts, etc. ) shift feel is both a tactile ( felt ) and audible ( heard ) response of the vehicle. shift quality is experienced as various events : transmission shifts are felt as an upshift at acceleration ( 1 – 2 ), or a downshift maneuver in passing ( 4 – 2 ). shift engagements of the vehicle are also evaluated, as in park to reverse, etc. durability / corrosion engineering : durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. tests include mileage accumulation, severe driving conditions, and corrosive salt baths. drivability : drivability is the vehicle ' s response to general driving conditions. cold starts and stalls, rpm dips, idle response, launch hesitations and stumbles, and performance levels all contribute to the overall drivability of any given vehicle. cost : the cost of a vehicle program is typically split into the effect on the variable cost of the vehicle, and the up - front tooling and fixed costs associated with developing the vehicle. there are also costs associated with warranty reductions and marketing. program timing : to some extent programs are timed with respect to the market, and also to the production - schedules of assembly plants. 
any new part in the design must support the development and manufacturing schedule of the model. design for manufacturability ( dfm ) : dfm refers to designing vehicular components in such a way that they are not only feasible to manufacture, but also such that they are cost - efficient to produce while resulting in acceptable quality that meets design specifications and engineering tolerances. this requires coordination between the design engineers and the assembly / manufacturing teams. quality management : quality control is an important factor within the production process, as high quality is needed to meet customer requirements and to avoid expensive recall campaigns. the complexity of components involved in the production process requires a comparison of the sensitivities of methods which allow us to determine the coordinates of a moving hot body is made. and bad nvh qualities. the nvh engineer works to either eliminate bad nvh or change the " bad nvh " to good ( i. e., exhaust tones ). vehicle electronics : automotive electronics is an increasingly important aspect of automotive engineering. modern vehicles employ dozens of electronic systems. these systems are responsible for operational controls such as the throttle, brake and steering controls ; as well as many comfort - and - convenience systems such as the hvac, infotainment, and lighting systems. it would not be possible for automobiles to meet modern safety and fuel - economy requirements without electronic controls. performance : performance is a measurable and testable value of a vehicle ' s ability to perform in various conditions. performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate ( e. g. standing start 1 / 4 mile elapsed time, 0 – 60 mph, etc. ), its top speed, how short and quickly a car can come to a complete stop from a set speed ( e. g. 70 - 0 mph ), how much g - force a car can generate without losing grip, recorded lap - times, cornering speed, brake fade, etc. performance can also reflect the amount of control in inclement weather ( snow, ice, rain ). shift quality : shift quality is the driver ' s perception of the vehicle to an automatic transmission shift event. this is influenced by the powertrain ( internal combustion engine, transmission ), and the vehicle ( driveline, suspension, engine and powertrain mounts, etc. ) shift feel is both a tactile ( felt ) and audible ( heard ) response of the vehicle. shift quality is experienced as various events : transmission shifts are felt as an upshift at acceleration ( 1 – 2 ), or a downshift maneuver in passing ( 4 – 2 ). shift engagements of the vehicle are also evaluated, as in park to reverse, etc. durability / corrosion engineering : durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. tests include mileage accumulation, severe driving conditions, and corrosive salt baths. drivability : drivability is the vehicle ' s response to general driving conditions. cold starts and stalls, rpm dips, idle response, launch hesitations and stumbles, and performance levels all contribute to the overall drivability of any given vehicle. cost : the cost of a vehicle program is typically split into the effect Question: Some students were investigating the relationship between position, time, and speed. The students marked the initial position of a toy car. The students set the car into motion and marked the position of the car each second. 
Which of these are most appropriate for recording and analyzing the students' data? A) a table and a bar graph B) a table and a line graph C) a pie chart and a bar graph D) a pie chart and a line graph
B) a table and a line graph
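A minimal sketch of the recording-and-analysis choice in this answer, with made-up measurements: the per-second readings go into a table, and a line graph of position against time shows the motion, with its slope giving the car's speed.

```python
# Illustrative sketch (data invented) of the answer above: position recorded each
# second goes naturally into a table, and a line graph shows how position changes
# with time, so its slope gives the car's speed.
import matplotlib.pyplot as plt

times = [0, 1, 2, 3, 4, 5]                   # seconds
positions = [0.0, 0.4, 0.9, 1.5, 2.2, 3.0]   # metres (made-up measurements)

# The "table": one row per reading.
print(f"{'time (s)':>8} | {'position (m)':>12}")
for t, x in zip(times, positions):
    print(f"{t:>8} | {x:>12.1f}")

# Average speed over each interval = change in position / change in time.
speeds = [(x2 - x1) / (t2 - t1)
          for (t1, x1), (t2, x2) in zip(zip(times, positions),
                                        zip(times[1:], positions[1:]))]
print("average speed over each interval (m/s):", speeds)

# The "line graph": time on the horizontal axis, position on the vertical axis.
plt.plot(times, positions, marker="o")
plt.xlabel("time (s)")
plt.ylabel("position (m)")
plt.title("Toy car position vs. time")
plt.show()
```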
Context: the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) – including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the higher concentrations of atmospheric nitrous oxide ( n2o ) are expected to slightly warm earth ' s surface because of increases in radiative forcing. radiative forcing is the difference in the net upward thermal radiation flux from the earth through a transparent atmosphere and radiation through an otherwise identical atmosphere with greenhouse gases. radiative forcing, normally measured in w / m ^ 2, depends on latitude, longitude and altitude, but it is often quoted for the tropopause, about 11 km of altitude for temperate latitudes, or for the top of the atmosphere at around 90 km. for current concentrations of greenhouse gases, the radiative forcing per added n2o molecule is about 230 times larger than the forcing per added carbon dioxide ( co2 ) molecule. this is due to the heavy saturation of the absorption band of the relatively abundant greenhouse gas, co2, compared to the much smaller saturation of the absorption bands of the trace greenhouse gas n2o. but the rate of increase of co2 molecules, about 2. 
5 ppm / year ( ppm = part per million by mole ), is about 3000 times larger than the rate of increase of n2o molecules, which has held steady at around 0. 00085 ppm / year since 1985. so, the contribution of nitrous oxide to the annual increase in forcing is 230 / 3000 or about 1 / 13 that of co2. if the main greenhouse gases, co2, ch4 and n2o have contributed about 0. 1 c / decade of the warming observed over the past few decades, this would correspond to about 0. 00064 k per year or 0. 064 k per century of warming from n2o. proposals to place harsh restrictions on nitrous oxide emissions because of warming fears are not justified by these facts. restrictions would cause serious harm ; for example, by jeopardizing world food supplies. enough to rise to the surface – giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field – created by the internal motions of the core – produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the while the modern stellar imf shows a rapid decline with increasing mass, theoretical investigations suggest that very massive stars ( > 100 solar masses ) may have been abundant in the early universe.
other calculations also indicate that, lacking metals, these same stars reach their late evolutionary stages without appreciable mass loss. after central helium burning, they encounter the electron - positron pair instability, collapse, and burn oxygen and silicon explosively. if sufficient energy is released by the burning, these stars explode as brilliant supernovae with energies up to 100 times that of an ordinary core collapse supernova. they also eject up to 50 solar masses of radioactive ni56. stars less massive than 140 solar masses or more massive than 260 solar masses should collapse into black holes instead of exploding, thus bounding the pair - creation supernovae with regions of stellar mass that are nucleosynthetically sterile. pair - instability supernovae might be detectable in the near infrared out to redshifts of 20 or more and their ashes should leave a distinctive nucleosynthetic pattern. the transition of our energy system to renewable energies is necessary in order not to heat up the climate any further and to achieve climate neutrality. the use of wind energy plays an important role in this transition in germany. but how much wind energy can be used and what are the possible consequences for the atmosphere if more and more wind energy is used? to investigate the affinity of acetylated wood for organic liquids, yezo spruce wood specimens were acetylated with acetic anhydride, and their swelling in various liquids were compared to those of untreated specimens. the acetylated wood was rapidly and remarkably swollen in aprotic organic liquids such as benzene and toluene in which the untreated wood was swollen only slightly and / or very slowly. on the other hand, the swelling of wood in water, ethylene glycol and alcohols remained unchanged or decreased by the acetylation. consequently the maximum volume of wood swollen in organic liquids was always larger than that in water. the effect of acetylation on the maximum swollen volume of wood was greater in liquids having smaller solubility parameters. the easier penetration of aprotic organic liquids into the acetylated wood was considered to be due to the scission of hydrogen bonds among the amorphous wood constituents by the substitution of hydroxyl groups with hydrophobic acetyl groups. horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. 
historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ) , dendrology is the study of woody plants. many divisions of biology have botanical subfields. these are commonly denoted by prefixing the word plant ( e. g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics ), or prefixing or substituting the prefix phyto - ( e. g. phytochemistry, phytogeography ). the study of fossil plants is called palaeobotany. other fields are denoted by adding or substituting the word botany ( e. g. systematic botany ). phytosociology is a subfield of plant ecology that classifies and studies communities of plants. the intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. different parts of plants also give rise to their own subfields, including xylology, carpology ( or fructology ), and palynology, these being the study of wood, fruit and pollen / spores respectively. botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. 
plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both eat them. plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. this is what ecologists call the first trophic level. the modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity ' s ability to feed the world and provide food security for future generations. botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. ethnobotany is the study of the relationships between plants and people. when applied to the investigation of historical plant – people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. some of the earliest plant - people relationships arose between the indigenous people of canada in identifying edible plants from inedible plants. this relationship the indigenous people had with plants was recorded by ethnobotanists. = = plant biochemistry = = plant biochemistry is the study of the chemical processes used by plants. some of these processes are used in their primary metabolism like the photosynthetic calvin cycle and crassulacean acid metabolism. others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. plants and various other groups of photosynthetic eukaryotes collectively known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour major stellar - wind emission features in the spectrum of eta car have recently decreased by factors of order 2 relative to the continuum. this is unprecedented in the modern observational record. the simplest, but unproven, explanation is a rapid decrease in the wind density. Question: If the number of trees significantly decreases, the atmosphere's level of which gas might significantly increase? A) nitrogen B) carbon dioxide C) carbon monoxide D) hydrogen
B) carbon dioxide
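The nitrous oxide arithmetic quoted in the context above can be checked in a few lines. The inputs are the figures given there (a 230-fold forcing per added molecule, 2.5 ppm/yr CO2 growth, 0.00085 ppm/yr N2O growth, and an assumed 0.1 C/decade from the main greenhouse gases); nothing new is assumed.

```python
# Arithmetic check of the nitrous oxide (N2O) argument in the context above.
# All inputs are the figures quoted there; nothing new is derived.
forcing_ratio_per_molecule = 230        # added N2O molecule vs added CO2 molecule
co2_growth = 2.5                        # ppm per year
n2o_growth = 0.00085                    # ppm per year

growth_ratio = co2_growth / n2o_growth                      # ~2900, the passage's "about 3000"
n2o_vs_co2_forcing_increase = forcing_ratio_per_molecule / growth_ratio

print(f"CO2 molecules are increasing ~{growth_ratio:.0f}x faster than N2O")
print(f"N2O adds ~{n2o_vs_co2_forcing_increase:.3f} (about 1/13) as much new forcing per year as CO2")

# Scaling the passage's assumed 0.1 C/decade from the main greenhouse gases by a
# share of this size gives N2O warming of order 0.0006-0.0008 K per year, i.e. a
# few hundredths of a degree per century, consistent with the ~0.064 K quoted.
warming_per_year_all = 0.1 / 10
print(f"order-of-magnitude N2O warming: ~{warming_per_year_all * n2o_vs_co2_forcing_increase:.5f} K/yr")
```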
Context: two planetary nebulae are shown to belong to the sagittarius dwarf galaxy, on the basis of their radial velocities. this is only the second dwarf spheroidal galaxy, after fornax, found to contain planetary nebulae. their existence confirms that this galaxy is at least as massive as the fornax dwarf spheroidal which has a single planetary nebula, and suggests a mass of a few times 10 * * 7 solar masses. the two planetary nebulae are located along the major axis of the galaxy, near the base of the tidal tail. there is a further candidate, situated at a very large distance along the direction of the tidal tail, for which no velocity measurement is available. the location of the planetary nebulae and globular clusters of the sagittarius dwarf galaxy suggests that a significant fraction of its mass is contained within the tidal tail. while the modern stellar imf shows a rapid decline with increasing mass, theoretical investigations suggest that very massive stars ( > 100 solar masses ) may have been abundant in the early universe. other calculations also indicate that, lacking metals, these same stars reach their late evolutionary stages without appreciable mass loss. after central helium burning, they encounter the electron - positron pair instability, collapse, and burn oxygen and silicon explosively. if sufficient energy is released by the burning, these stars explode as brilliant supernovae with energies up to 100 times that of an ordinary core collapse supernova. they also eject up to 50 solar masses of radioactive ni56. stars less massive than 140 solar masses or more massive than 260 solar masses should collapse into black holes instead of exploding, thus bounding the pair - creation supernovae with regions of stellar mass that are nucleosynthetically sterile. pair - instability supernovae might be detectable in the near infrared out to redshifts of 20 or more and their ashes should leave a distinctive nucleosynthetic pattern. i will discuss the presence of massive star clusters in starburst galaxies with an emphasis on low mass galaxies outside the local group. i will show that such galaxies, with respect to their mass and luminosity, may be very rich in young luminous clusters. there are a few different mechanisms that can cause white dwarf stars to vary in brightness, providing opportunities to probe the physics, structures, and formation of these compact stellar remnants. the observational characteristics of the three most common types of white dwarf variability are summarized : stellar pulsations, rotation, and ellipsoidal variations from tidal distortion in binary systems. stellar pulsations are emphasized as the most complex type of variability, which also has the greatest potential to reveal the conditions of white dwarf interiors. v735 sgr was known as an enigmatic star with rapid brightness variations. long - term ogle photometry, brightness measurements in infrared bands, and recently obtained moderate resolution spectrum from the 6. 5 - m magellan telescope show that this star is an active young stellar object of herbig ae / be type. oscillations of the sun have been used to understand its interior structure. the extension of similar studies to more distant stars has raised many difficulties despite the strong efforts of the international community over the past decades. 
the corot ( convection rotation and planetary transits ) satellite, launched in december 2006, has now measured oscillations and the stellar granulation signature in three main sequence stars that are noticeably hotter than the sun. the oscillation amplitudes are about 1. 5 times as large as those in the sun ; the stellar granulation is up to three times as high. the stellar amplitudes are about 25 % below the theoretical values, providing a measurement of the nonadiabaticity of the process ruling the oscillations in the outer layers of the stars. armed with an astrolabe and kepler ' s laws one can arrive at accurate estimates of the orbits of planets. recent surveys have revealed a lack of close - in planets around evolved stars more massive than 1. 2 msun. such planets are common around solar - mass stars. we have calculated the orbital evolution of planets around stars with a range of initial masses, and have shown how planetary orbits are affected by the evolution of the stars all the way to the tip of the red giant branch ( rgb ). we find that tidal interaction can lead to the engulfment of close - in planets by evolved stars. the engulfment is more efficient for more - massive planets and less - massive stars. these results may explain the observed semi - major axis distribution of planets around evolved stars with masses larger than 1. 5 msun. our results also suggest that massive planets may form more efficiently around intermediate - mass stars. a 4mj planet with a 15. 8 - day orbital period has been detected from very precise radial velocity measurements with the coralie echelle spectrograph. a second remote and more massive companion has also been detected. all the planetary companions so far detected in orbit closer than 0. 08 au have a parent star with a statistically higher metal content compared to the metallicity distribution of other stars with planets. different processes occurring during their formation may provide a possible explanation for this observation. in a diagram of metallicity ( ~z ) vs. luminosity ( m_b ), the different types of nearby ( z < 0. 05 ) starburst galaxies seem to follow the same linear relationship as the normal spiral and irregular galaxies. however, for comparable luminosities the more massive starburst nucleus galaxies ( sbngs ) show a slight metallic deficiency as compared to the giant spiral galaxies. furthermore, the sbngs do not seem to follow the same relationship between ~z and hubble type ( t ) as the normal galaxies. the early - type sbngs are metal poor as compared to normal galaxies. this may suggest that the chemical evolution of a majority of the nearby starburst galaxies is not completely over and that the present burst represents an important phase of this process. Question: Which of the following is the best estimate of the number of stars in a typical galaxy? A) tens B) hundreds C) thousands D) billions
D) billions
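The context above notes that Kepler's laws give planetary orbits, and it quotes a 4 MJ planet with a 15.8-day period. A short worked example, under the assumption (ours, not the source's) of a roughly solar-mass host star, shows how the orbit's semi-major axis follows from the period.

```python
# Worked example of the Kepler's-law point made in the context above: given an
# orbital period, the semi-major axis follows from a^3 = G * M * P^2 / (4 * pi^2).
# The 15.8-day period is quoted there; the roughly solar-mass host star is an
# assumption made only to show the calculation.
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
AU = 1.496e11            # m

def semi_major_axis(period_days, stellar_mass=M_SUN):
    period_s = period_days * 86400.0
    a_cubed = G * stellar_mass * period_s**2 / (4 * math.pi**2)
    return a_cubed ** (1.0 / 3.0)

a = semi_major_axis(15.8)
print(f"a is roughly {a / AU:.3f} au for a 15.8-day orbit around a solar-mass star")
# Comes out near 0.12 au, i.e. well inside Mercury's orbit.
```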
Context: stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed seeds. woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues : wood ( secondary xylem ) and bark ( secondary phloem and cork ). all gymnosperms and many angiosperms are woody plants. some plants reproduce sexually, some asexually, and some via both means. although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. furthermore, structures can be seen as processes, that is, process combinations. = = systematic botany = = systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. it involves, or is related to, biological classification, scientific taxonomy and phylogenetics. biological classification is the method by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. the dominant classification system is called linnaean taxonomy. it includes ranks and binomial nomenclature. the nomenclature of botanical organisms is codified in the international code of nomenclature for algae, fungi, and plants ( icn ) and administered by the international botanical congress. kingdom plantae belongs to domain eukaryota and is broken down recursively until each species is separately classified. the order is : or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. 
the root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. stolons and tubers are examples of shoots that can grow roots. roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. in the event that one of the systems is lost, the other can often regrow it. in fact it is possible to grow an entire plant from a single leaf, as is the case with plants in streptocarpus sect. saintpaulia, or even a single cell – which can dedifferentiate into a callus ( a mass of unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosyn hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. stolons and tubers are examples of shoots that can grow roots. roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. in the event that one of the systems is lost, the other can often regrow it. in fact it is possible to grow an entire plant from a single leaf, as is the case with plants in streptocarpus sect. saintpaulia, or even a single cell – which can dedifferentiate into a callus ( a mass of unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. 
roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. ##ses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. stolons and tubers are examples of shoots that can grow roots. roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. in the event that one of the systems is lost, the other can often regrow it. in fact it is possible to grow an entire plant from a single leaf, as is the case with plants in streptocarpus sect. saintpaulia, or even a single cell – which can dedifferentiate into a callus ( a mass of unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed sugar ) for export to the rest of the plant. unlike in animals ( which lack chloroplasts ), plants and their eukaryote relatives have delegated many biochemical roles to their chloroplasts, including synthesising all their fatty acids, and most amino acids. the fatty acids that chloroplasts make are used for many things, such as providing material to build cell membranes out of and making the polymer cutin which is found in the plant cuticle that protects land plants from drying out. plants synthesise a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. 
sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. it is widely regarded as a marker for the start of land plant evolution during the ordovician period. the concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the ordovician and silurian periods. many monocots like maize and the pineapple and some dicots like the asteraceae have since independently evolved pathways like crassulacean acid metabolism and the c4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common c3 carbon fixation pathway. these biochemical strategies are unique to land plants. = = = medicine and materials = = = phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. some of these compounds are toxins such as the alkaloid coniine from hemlock. others, such as the essential oils peppermint oil and lemon oil are useful for their aroma, as flavourings and spices ( e. g., capsaicin ), and in medicine as pharmaceuticals as in opium from opium poppies. many medicinal and recreational drugs, such as tetrahydrocannabino elongation and the control of flowering. abscisic acid ( aba ) occurs in all land plants except liverworts, and is synthesised from carotenoids in the chloroplasts and other plastids. it inhibits cell division, promotes seed maturation, and dormancy, and promotes stomatal closure. it was so named because it was originally thought to control abscission. ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. it is now known to be the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator ethephon which is rapidly metabolised to produce ethylene, are used on industrial scale to promote ripening of cotton, pineapples and other climacteric crops. another class of phytohormones is the jasmonates, first isolated from the oil of jasminum grandiflorum which regulates wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. in addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. this can result in adaptive changes in a process known as photomorphogenesis. phytochromes are the photoreceptors in a plant that are sensitive to light. = = plant anatomy and morphology = = plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. all plants are multicellular eukaryotes, their dna stored in nuclei. the characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). 
uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmos eat them. plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. this is what ecologists call the first trophic level. the modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity ' s ability to feed the world and provide food security for future generations. botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. ethnobotany is the study of the relationships between plants and people. when applied to the investigation of historical plant – people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. some of the earliest plant - people relationships arose between the indigenous people of canada in identifying edible plants from inedible plants. this relationship the indigenous people had with plants was recorded by ethnobotanists. = = plant biochemistry = = plant biochemistry is the study of the chemical processes used by plants. some of these processes are used in their primary metabolism like the photosynthetic calvin cycle and crassulacean acid metabolism. others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. plants and various other groups of photosynthetic eukaryotes collectively known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour pectin and xyloglucan from which the land plant cell wall is constructed. vascular land plants make lignin, a polymer used to strengthen the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through them under water stress. lignin is also used in other cell types like sclerenchyma fibres that provide structural support for a plant and is a major constituent of wood. sporopollenin is a chemically resistant polymer found in the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores and the pollen of seed plants in the fossil record. it is widely regarded as a marker for the start of land plant evolution during the ordovician period. 
the concentration of carbon dioxide in the atmosphere today is much lower than it was when plants emerged onto land during the ordovician and silurian periods. many monocots like maize and the pineapple and some dicots like the asteraceae have since independently evolved pathways like crassulacean acid metabolism and the c4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration in the more common c3 carbon fixation pathway. these biochemical strategies are unique to land plants. = = = medicine and materials = = = phytochemistry is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary metabolism. some of these compounds are toxins such as the alkaloid coniine from hemlock. others, such as the essential oils peppermint oil and lemon oil are useful for their aroma, as flavourings and spices ( e. g., capsaicin ), and in medicine as pharmaceuticals as in opium from opium poppies. many medicinal and recreational drugs, such as tetrahydrocannabinol ( active ingredient in cannabis ), caffeine, morphine and nicotine come directly from plants. others are simple derivatives of botanical natural products. for example, the pain killer aspirin is the acetyl ester of salicylic acid, originally isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical modification of morphine obtained from the opium poppy. popular stimulants come from plants, such as caffeine from coffee, tea and chocolate, and nicotine from tobacco. most alcoholic beverages come from fermentation of carbohy unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed seeds. woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues : wood ( secondary xylem ) and bark ( secondary phloem and cork ). all gymnosperms and many angiosperms are woody plants. some plants reproduce sexually, some asexually, and some via both means. although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. furthermore, structures can be seen as processes, that is, process combinations. = = systematic botany = = systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. it involves, or is related to, biological classification, scientific taxonomy and phylogenetics. biological classification is the method by which botanists group organisms into categories such as genera or species. 
biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. the dominant classification system is called linnaean taxonomy. it includes ranks and binomi the best - suited crops ( e. g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β€” one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley – to form Question: A company plants trees on a bare hillside. Which of these is the BEST reason for planting the trees? A) The trees provide oxygen for the soil. B) The trees prevent soil from washing away. 
C) The trees make shade for animals living there. D) The trees cause animals to move to other places.
B) The trees prevent soil from washing away.
Context: skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were crude, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes, emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence through the examination of bones. the oldest stone tools yet found, at lomekwi, turkana, date from 3. 3 million years ago. stone tools diversified through the pleistocene period, which ended ~ 12, 000 years ago. the earliest evidence of warfare between two groups is recorded at the site of nataruk in turkana, kenya, where human skeletons with major traumatic injuries to the head, neck, ribs, knees and hands, including an embedded obsidian bladelet on a skull, are evidence of inter - group conflict between groups of nomadic hunter - gatherers 10, 000 years ago. humans entered the bronze age as they learned to smelt copper into an alloy with tin to make weapons. in asia, where copper - tin ores are rare, this development was delayed until trading in bronze began in the third millennium bce. in the middle east and southern european regions, the bronze age follows the neolithic period, but in other parts of the world, the copper age is a transition from neolithic to the bronze age.
although the iron age generally follows the bronze age, in some areas the iron age intrudes directly on the neolithic from outside the region, with the exception of sub - saharan africa where it was developed independently. the first large - scale use of iron weapons began in asia minor around the 14th century bce and in central europe around the 11th century bce followed by the middle east ( about 1000 bce ) and india and china. the assyrians are credited with the introduction of horse cavalry in warfare and the extensive use of iron weapons by 1100 bce. assyrians were also the first to use iron - tipped arrows. = = = post - classical technology = = = the wujing zongyao ( essentials of the military arts ), written by zeng gongliang, ding du, and others at the order of emperor renzong around 1043 during the song dynasty illustrate the eras focus on advancing intellectual issues and military technology due to the significance of warfare between the song and the liao, jin, and yuan to their north. the book covers topics of military strategy, training, and the production and employment of advanced weaponry. advances in military technology aided the song dynasty in its defense against hostile neighbors to the north. the flamethrower found its origins in byzantine - era greece, employing greek fire ( a chemically complex, highly flammable petrol fluid ) in a device with a siphon hose by the 7th century. : 77 the earliest reference to greek fire in china was made in 917, written by wu renchen in his spring and autumn annals of the ten kingdoms. : 80 in 91 prehistory. the oldest gold treasure in the world, dating from 4, 600 bc to 4, 200 bc, was discovered at the site. the gold piece dating from 4, 500 bc, found in 2019 in durankulak, near varna is another important example. other signs of early metals are found from the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not be clearly ascertained and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. this includes the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. 
in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electrolytically. extractive metallurgists are interested in three primary streams : feed, concentrate ( metal oxide / sulphide ) and tailings ( waste ). after mining, large pieces of the ore feed are broken through crushing or grinding in order to obtain particles small enough, where each particle is either mostly valuable or of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. 
ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria ( in the archaebacteria kingdom ), a term that has fallen out of use. archaeal cells have unique properties separating them from the other two domains, bacteria and eukaryota. archaea . the first major technologies were tied to survival, hunting, and food preparation. stone tools and weapons, fire, and clothing were technological developments of major importance during this period. human ancestors have been using stone and other tools since long before the emergence of homo sapiens approximately 300, 000 years ago. the earliest direct evidence of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. 
the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, have evolved from the earliest emergence of life to present day. earth formed about 4. 5 billion years ago and all life on earth, both living and extinct, descended from a last universal common ancestor that lived about 3. 5 billion years ago. geologists have developed a geologic time scale that divides the history of the earth into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became c. 4000 bc, associated with the maadi culture. this represents the earliest evidence for smelting in africa. 
the varna necropolis, bulgaria, is a burial site located in the western industrial zone of varna, approximately 4 km from the city centre, internationally considered one of the key archaeological sites in world prehistory. the oldest gold treasure in the world, dating from 4, 600 bc to 4, 200 bc, was discovered at the site. the gold piece dating from 4, 500 bc, found in 2019 in durankulak, near varna is another important example. other signs of early metals are found from the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not be clearly ascertained and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. this includes the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electroly ##thic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. 
the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures ##sphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. 
areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as Question: Index fossils help scientists estimate the age of a rock because index fossil species only existed for a relatively short time. What happened to the species that are now used as index fossils? A) They became extinct. B) They changed their diets. C) They hid in marine sediments. D) They migrated to new environments.
A) They became extinct.
Context: angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example in the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 ' s aerodynamic properties. it is inherently unstable, and cannot be flown without a fly - by - wire control system. similarly, coating the cockpit canopy with a thin film transparent conductor ( vapor - deposited gold or indium tin oxide ) helps to reduce the aircraft ' s radar profile, because radar waves would normally enter the cockpit the hun tian theory ), or as being without substance while the heavenly bodies float freely ( the hsuan yeh theory ), the earth was at all times flat, although perhaps bulging up slightly. the model of an egg was often used by chinese astronomers such as zhang heng ( 78 – 139 ad ) to describe the heavens as spherical : the heavens are like a hen ' s egg and as round as a crossbow bullet ; the earth is like the yolk of the egg, and lies in the centre. this analogy with a curved egg led some modern historians, notably joseph needham, to conjecture that chinese astronomers were, after all, aware of the earth ' s sphericity. the egg reference, however, was rather meant to clarify the relative position of the flat earth to the heavens : in a passage of zhang heng ' s cosmogony not translated by needham, zhang himself says : " heaven takes its body from the yang, so it is round and in motion. earth takes its body from the yin, so it is flat and quiescent ". 
the point of the egg analogy is simply to stress that the earth is completely enclosed by heaven, rather than merely covered from above as the kai tian describes. chinese astronomers, many of them brilliant men by any standards, continued to think in flat - earth terms until the seventeenth century ; this surprising fact might be the starting - point for a re - examination of the apparent facility with which the idea of a spherical earth found acceptance in fifth - century bc greece. further examples cited by needham supposed to demonstrate dissenting voices from the ancient chinese consensus actually refer without exception to the earth being square, not to it being flat. accordingly, the 13th - century scholar li ye, who argued that the movements of the round heaven would be hindered by a square earth, did not advocate a spherical earth, but rather that its edge should be rounded off so as to be circular. however, needham disagrees, affirming that li ye believed the earth to be spherical, similar in shape to the heavens but much smaller. this was preconceived by the 4th - century scholar yu xi, who argued for the infinity of outer space surrounding the earth and that the latter could be either square or round, in accordance to the shape of the heavens. when chinese geographers of the 17th century, influenced by european cartography and astronomy, showed the earth as a sphere that could be circumnavigated by sailing around the globe, they and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example in the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 ' s aerodynamic properties. it is inherently unstable, and cannot be flown without a fly - by - wire control system. 
similarly, coating the cockpit canopy with a thin film transparent conductor ( vapor - deposited gold or indium tin oxide ) helps to reduce the aircraft ' s radar profile, because radar waves would normally enter the cockpit, reflect off objects ( the inside of a cockpit has a complex shape, with a pilot helmet alone forming a sizeable return ), and possibly return to the radar, but the conductive coating creates a controlled shape that deflects the incoming radar waves away from the radar. the coating is thin enough that it has in supersymmetric theories, the presence of axions usually implies the existence of a non - compact, ( pseudo ) moduli space. in gauge mediated models, the axion would seem a particularly promising dark matter candidate. the cosmology of the moduli then constrains the gravitino mass and the axion decay constant ; the former can ' t be much below 10 mev ; the latter can ' t be much larger than 10 ^ { 13 } gev. axinos, when identifiable, are typically heavy and do not play an important role in cosmology. defective body parts. inside the body, artificial heart valves are in common use with artificial hearts and lungs seeing less common use but under active technology development. other medical devices and aids that can be considered prosthetics include hearing aids, artificial eyes, palatal obturator, gastric bands, and dentures. prostheses are specifically not orthoses, although given certain circumstances a prosthesis might end up performing some or all of the same functionary benefits as an orthosis. prostheses are technically the complete finished item. for instance, a c - leg knee alone is not a prosthesis, but only a prosthetic component. the complete prosthesis would consist of the attachment system to the residual limb – usually a " socket ", and all the attachment hardware components all the way down to and including the terminal device. despite the technical difference, the terms are often used interchangeably. the terms " prosthetic " and " orthotic " are adjectives used to describe devices such as a prosthetic knee. the terms " prosthetics " and " orthotics " are used to describe the respective allied health fields. an occupational therapist ' s role in prosthetics include therapy, training and evaluations. prosthetic training includes orientation to prosthetics components and terminology, donning and doffing, wearing schedule, and how to care for residual limb and the prosthesis. = = = exoskeletons = = = a powered exoskeleton is a wearable mobile machine that is powered by a system of electric motors, pneumatics, levers, hydraulics, or a combination of technologies that allow for limb movement with increased strength and endurance. its design aims to provide back support, sense the user ' s motion, and send a signal to motors which manage the gears. the exoskeleton supports the shoulder, waist and thigh, and assists movement for lifting and holding heavy items, while lowering back stress. = = = adaptive seating and positioning = = = people with balance and motor function challenges often need specialized equipment to sit or stand safely and securely. this equipment is frequently specialized for specific settings such as in a classroom or nursing home. positioning is often important in seating arrangements to ensure that user ' s body pressure is distributed equally without inhibiting movement in a desired way. positioning devices have been developed to aid in allowing people to stand and bear weight on their legs without risk of a fall. 
static black holes in two - dimensional string theory can carry tachyon hair. configurations which are non - singular at the event horizon have non - vanishing asymptotic energy density. such solutions can be smoothly extended through the event horizon and have non - vanishing energy flux emerging from the past singularity. dynamical processes will not change the amount of tachyon hair on a black hole. in particular, there will be no tachyon hair on a black hole formed in gravitational collapse if the initial geometry is the linear dilaton vacuum. there also exist static solutions with finite total energy, which have singular event horizons. simple dynamical arguments suggest that black holes formed in gravitational collapse will not have tachyon hair of this type. describe the heavens as spherical : the heavens are like a hen ' s egg and as round as a crossbow bullet ; the earth is like the yolk of the egg, and lies in the centre. this analogy with a curved egg led some modern historians, notably joseph needham, to conjecture that chinese astronomers were, after all, aware of the earth ' s sphericity. the egg reference, however, was rather meant to clarify the relative position of the flat earth to the heavens : in a passage of zhang heng ' s cosmogony not translated by needham, zhang himself says : " heaven takes its body from the yang, so it is round and in motion. earth takes its body from the yin, so it is flat and quiescent ". the point of the egg analogy is simply to stress that the earth is completely enclosed by heaven, rather than merely covered from above as the kai tian describes. chinese astronomers, many of them brilliant men by any standards, continued to think in flat - earth terms until the seventeenth century ; this surprising fact might be the starting - point for a re - examination of the apparent facility with which the idea of a spherical earth found acceptance in fifth - century bc greece. further examples cited by needham supposed to demonstrate dissenting voices from the ancient chinese consensus actually refer without exception to the earth being square, not to it being flat. accordingly, the 13th - century scholar li ye, who argued that the movements of the round heaven would be hindered by a square earth, did not advocate a spherical earth, but rather that its edge should be rounded off so as to be circular. however, needham disagrees, affirming that li ye believed the earth to be spherical, similar in shape to the heavens but much smaller. this was preconceived by the 4th - century scholar yu xi, who argued for the infinity of outer space surrounding the earth and that the latter could be either square or round, in accordance to the shape of the heavens. when chinese geographers of the 17th century, influenced by european cartography and astronomy, showed the earth as a sphere that could be circumnavigated by sailing around the globe, they did so with formulaic terminology previously used by zhang heng to describe the spherical shape of the sun and moon ( i. e. that they were as round as a crossbow bullet ). as noted in the book huainanzi, in the 2nd century bc, chinese astronomers effectively inverted eratosthenes ' calculation or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. 
any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example in the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 ' s aerodynamic properties. it is inherently unstable, and cannot be flown without a fly - by - wire control system. similarly, coating the cockpit canopy with a thin film transparent conductor ( vapor - deposited gold or indium tin oxide ) helps to reduce the aircraft ' s radar profile, because radar waves would normally enter the cockpit, reflect off objects ( the inside of a cockpit has a complex shape, with a pilot helmet alone forming a sizeable return ), and possibly return to the radar, but the conductive coating creates a controlled shape that deflects the incoming radar waves away from the radar. the coating is thin enough that it has no adverse effect on pilot vision. = = = = ships = = = = ships have also adopted similar methods. though the earlier american arleigh burke - class destroyers incorporated some signature - reduction features. the norwegian skjold - class corvettes was the first coastal defence and the french la fayette - class frigates the photons ( bosons ) confined in a hollow waveguide containing an atomic gas could show spin - charge separation, which is more commonly associated with one - dimensional fermions. muck ) from the edge of the workspace to a water - filled pit, connected by a tube ( called the muck tube ) to the surface. a crane at the surface removes the soil with a clamshell bucket. the water pressure in the tube balances the air pressure, with excess air escaping up the muck tube. the pressurized air flow must be constant to ensure regular air changes for the workers and prevent excessive inflow of mud or water at the base of the caisson. when the caisson hits bedrock, the sandhogs exit through the airlock and fill the box with concrete, forming a solid foundation pier. a pneumatic ( compressed - air ) caisson has the advantage of providing dry working conditions, which is better for placing concrete. it is also well suited for foundations for which other methods might cause settlement of adjacent structures. 
construction workers who leave the pressurized environment of the caisson must decompress at a rate that allows symptom - free release of inert gases dissolved in the body tissues if they are to avoid decompression sickness, a condition first identified in caisson workers, and originally named " caisson disease " in recognition of the occupational hazard. construction of the brooklyn bridge, which was built with the help of pressurised caissons, resulted in numerous workers being either killed or permanently injured by caisson disease during its construction. barotrauma of the ears, sinus cavities and lungs and dysbaric osteonecrosis are other risks. = = other uses = = caissons have also been used in the installation of hydraulic elevators where a single - stage ram is installed below the ground level. caissons, codenamed phoenix, were an integral part of the mulberry harbours used during the world war ii allied invasion of normandy. = = other meanings = = boat lift caissons : the word caisson is also used as a synonym for the moving trough part of caisson locks, canal lifts and inclines in which boats and ships rest while being lifted from one canal elevation to another ; the water is retained on the inside of the caisson, or excluded from the caisson, according to the respective operating principle. structural caissons : caisson is also sometimes used as a colloquial term for a reinforced concrete structure formed by pouring into a hollow cylindrical form, typically by placing a caisson form below grade in an open excavation and pouring once backfill is complete, or by Question: Feathers, wings, and the hollow bones of birds are examples of A) adaptations for flight B) responses to stimuli C) unnecessary body parts D) reproductive structures
A) adaptations for flight
Context: of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop ##thic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. 
the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures . the first major technologies were tied to survival, hunting, and food preparation. stone tools and weapons, fire, and clothing were technological developments of major importance during this period. human ancestors have been using stone and other tools since long before the emergence of homo sapiens approximately 300, 000 years ago. the earliest direct evidence of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. 
the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, ##ning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were irrelevant, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. 
during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and the study of mummies. scientists and historians have been able to form significant inferences about the lifestyle and culture of various prehistoric peoples, and especially their technology. = = = ancient = = = = = = = copper and bronze ages = = = = metallic copper occurs on the surface of weathered copper ore deposits and copper was used before copper smelting was known. copper smelting is believed to have originated when the technology of pottery kilns allowed sufficiently high temperatures. the concentration of various elements such as arsenic increase with depth in copper ore deposits and smelting of these ores yields arsenical bronze, which can be sufficiently more readily than they could participate in hunter - gatherer activities. with this increase in population and availability of labor came an increase in labor specialization. what triggered the progression from early neolithic villages to the first cities, such as uruk, and the first civilizations, such as sumer, is not specifically known ; however, the emergence of increasingly hierarchical social structures and specialized labor, of trade and war among adjacent cultures, and the need for collective action to overcome environmental challenges such as irrigation, are all thought to have played a role. the invention of writing led to the spread of cultural knowledge and became the basis for history, libraries, schools, and scientific research. continuing improvements led to the furnace and bellows and provided, for the first time, the ability to smelt and forge gold, copper, silver, and lead – native metals found in relatively pure form in nature. the advantages of copper tools over stone, bone and wooden tools were quickly apparent to early humans, and native copper was probably used from near the beginning of neolithic times ( about 10 kya ). native copper does not naturally occur in large amounts, but copper ores are quite common and some of them produce metal easily when burned in wood or charcoal fires. eventually, the working of metals led to the discovery of alloys such as bronze and brass ( about 4, 000 bce ). the first use of iron alloys such as steel dates to around 1, 800 bce. 
= = = ancient = = = after harnessing fire, humans discovered other forms of energy. the earliest known use of wind power is the sailing ship ; the earliest record of a ship under sail is that of a nile boat dating to around 7, 000 bce. from prehistoric times, egyptians likely used the power of the annual flooding of the nile to irrigate their lands, gradually learning to regulate much of it through purposely built irrigation channels and " catch " basins. the ancient sumerians in mesopotamia used a complex system of canals and levees to divert water from the tigris and euphrates rivers for irrigation. archaeologists estimate that the wheel was invented independently and concurrently in mesopotamia ( in present - day iraq ), the northern caucasus ( maykop culture ), and central europe. time estimates range from 5, 500 to 3, 000 bce with most experts putting it closer to 4, 000 bce. the oldest artifacts with drawings depicting wheeled carts date from about 3, 500 bce. more recently, the oldest - known wooden wheel in the world as of 2024 was found in the ljubljana marsh of slovenia hand axes emerged. this early stone age is described as the lower paleolithic. the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools, made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare. stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and the study of mummies. scientists and historians have been able to form significant inferences about the lifestyle and culture of various prehistoric peoples, and especially their technology. 
= = = ancient = = = = = = = copper and bronze ages = = = = metallic copper occurs on the surface of weathered copper ore deposits and copper discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding seawards, so that in the po river in italy, for instance, pebbles and gravel are found for about 140 miles below turin, sand along the next 100 miles, and silt and mud in the last 110 miles ( 176 km ). = = channelization = = the removal of obstructions, natural or artificial ( e. g., trunks of trees, boulders and accumulations of gravel ) from a river bed furnishes a simple and efficient means of increasing the discharging capacity of its channel. such removals will consequently lower the height of floods upstream. every impediment to the flow, in proportion to its extent, raises the level of the river above it so as to produce the additional artificial fall necessary to convey the flow through the restricted channel, thereby reducing the total available fall. reducing the length of the channel by substituting straight cuts for a winding course is the only way in which the effective fall can be increased. this involves some loss of capacity in the channel as a whole, and in the case of a large river with a considerable flow it is difficult to maintain a straight cut owing to the tendency of the current to erode the banks and form again a sinuous channel. even if the cut is preserved by protecting the banks, made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by clamshell excavator bucket on crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. 
anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axis ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called muck ) from the edge of the workspace to a water - filled pit, connected by a tube ( called the muck tube ) to the surface. a crane at the surface removes the soil with a clamshell bucket. the water pressure in the tube balances the air pressure, with excess air escaping up cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century, notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress was made in water supply and sanitation and the engineering skills of the romans were largely neglected throughout europe. the first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in paisley, scotland, john gibb, installed an experimental filter, selling his unwanted surplus to the public. the first treated public water supply in the world was installed by engineer james simpson for the chelsea waterworks company in london in 1829. the first screw - down water tap was patented in 1845 by guest and chrimes, a brass foundry in rotherham. 
the practice of water treatment soon became mainstream, and the virtues of the system were made starkly apparent after the investigations of the physician john snow during the 1854 broad street cholera outbreak demonstrated the role of the water supply in spreading the cholera epidemic. = = = second industrial revolution ( 1860s – 1914 ) = = = the 19th century saw astonishing developments in transportation, construction, manufacturing and communication technologies originating in europe. after a recession at the end of the 1830s and a general slowdown in major inventions, the second industrial revolution was a period of rapid innovation and industrialization that began in the 1860s or around 1870 and lasted until world war i. it included rapid development of chemical, electrical, petroleum Question: A student climbs up a rocky mountain trail in Maine. She sees many small pieces of rock on the path. Which action most likely made the small pieces of rock? A) sand blowing into cracks B) leaves pressing down tightly C) ice breaking large rocks apart D) shells and bones sticking together
C) ice breaking large rocks apart
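The caisson passages in the rows above describe pneumatic caissons, in which compressed air keeps water and mud out of the working chamber and workers leaving that pressurized environment must decompress to avoid caisson disease. As a rough, hedged illustration of the pressures involved, the Python sketch below computes the hydrostatic gauge pressure the chamber air must roughly balance at a few depths; the depths, water density, and resulting figures are illustrative assumptions, not values taken from the text.

```python
# Hedged sketch: gauge pressure a pneumatic caisson's working chamber must hold.
# All numeric values are illustrative assumptions, not taken from the passage.
RHO_WATER = 1000.0  # kg/m^3, assumed fresh-water density
G = 9.81            # m/s^2, gravitational acceleration

def chamber_gauge_pressure_pa(depth_m: float) -> float:
    """Hydrostatic pressure (Pa above atmospheric) that the compressed air
    must roughly match to keep water out of the chamber at this depth."""
    return RHO_WATER * G * depth_m

if __name__ == "__main__":
    for depth in (10.0, 20.0, 30.0):  # assumed depths below the water table, in metres
        bar = chamber_gauge_pressure_pa(depth) / 1e5
        print(f"{depth:4.0f} m depth -> about {bar:.1f} bar above atmospheric")
```

At roughly one extra atmosphere for every ten metres of depth, the chamber pressure rises quickly, which is why the text stresses slow, symptom-free decompression for workers leaving the caisson.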
Context: temperature changes up to 1000 Β°c. = = processing steps = = the traditional ceramic process generally follows this sequence : milling β†’ batching β†’ mixing β†’ forming β†’ drying β†’ firing β†’ assembly. milling is the process by which materials are reduced from a large size to a smaller size. milling may involve breaking up cemented material ( in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break up or particle shearing ), compression ( which applies a forces that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and break up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first, which applies a forces that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and break up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. 
forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first, heat removes water. this step needs careful control, as rapid heating causes cracks and surface defects. the dried part is smaller than the green part, and is brittle, necessitating careful handling, since a small impact will cause crumbling and breaking. sintering is where the dried parts pass through a controlled heating process, and the oxides are chemically changed to cause bonding and densification. the fired part will be smaller than the dried part. = = forming methods = = ceramic forming techniques include throwing, slipcasting, tape casting, freeze - casting, injection molding, dry pressing, isostatic pressing, hot isostatic pressing in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break up or particle shearing ), compression ( which applies a forces that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and break up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first, heat removes water. 
this step needs careful control, as rapid heating causes cracks and surface defects. the dried part is smaller than the green part, and is brittle, necessitating careful handling, since a small impact will cause crumbling and breaking. sintering is where the dried parts pass through a controlled heating process, and the material collides and break up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first, heat removes water. this step needs careful control, as rapid heating causes cracks and surface defects. the dried part is smaller than the green part, and is brittle, necessitating careful handling, since a small impact will cause crumbling and breaking. sintering is where the dried parts pass through a controlled heating process, and the oxides are chemically changed to cause bonding and densification. the fired part will be smaller than the dried part. = = forming methods = = ceramic forming techniques include throwing, slipcasting, tape casting, freeze - casting, injection molding, dry pressing, isostatic pressing, hot isostatic pressing ( hip ), 3d printing and others. methods for forming ceramic powders into complex shapes are desirable in many areas of technology. such methods are required for producing advanced, high - temperature structural parts such as heat engine components and turbines. materials other than ceramics which are used in these processes may include : wood, metal, nuclear jets containing relativistic ` ` hot ' ' particles close to the central engine cool dramatically by producing high energy radiation. the radiative dissipation is similar to the famous compton drag acting upon ` ` cold ' ' thermal particles in a relativistic bulk flow. highly relativistic protons induce anisotropic showers raining electromagnetic power down onto the putative accretion disk. thus, the radiative signature of hot hadronic jets is x - ray irradiation of cold thermal matter. the synchrotron radio emission of the accelerated electrons is self - absorbed due to the strong magnetic fields close to the magnetic nozzle. 
the standard theory of ideal gases ignores the interaction of the gas particles with the thermal radiation ( photon gas ) that fills the otherwise vacuum space between them. this is an unphysical feature since every material absorbs and radiates thermal energy. this interaction may be important in gases since the latter, unlike solids and liquids are capable of undergoing conspicuous volume changes. taking it into account makes the behaviour of the ideal gases more realistic and removes gibbs ' paradox. endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e βˆ’ e / k t { \ displaystyle e ^ { - e / kt } } – that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Ξ΄ g ≀ 0 { \ displaystyle \ delta g \ leq 0 \, } ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole – dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer in one or more of these kinds of structures, it is invariably accompanied by an increase or decrease of energy of the substances involved. some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light ; thus the products of a reaction may have more or less energy than the reactants. 
a reaction is said to be exergonic if the final state is lower on the energy scale than the initial state ; in the case of endergonic reactions the situation is the reverse. a reaction is said to be exothermic if the reaction releases heat to the surroundings ; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e βˆ’ e / k t { \ displaystyle e ^ { - e / kt } } – that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Ξ΄ g ≀ 0 { \ displaystyle \ delta g \ leq 0 \, } ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid . a reaction is said to be exergonic if the final state is lower on the energy scale than the initial state ; in the case of endergonic reactions the situation is the reverse. a reaction is said to be exothermic if the reaction releases heat to the surroundings ; in the case of endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e βˆ’ e / k t { \ displaystyle e ^ { - e / kt } } – that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. 
a reaction is feasible only if the total change in the gibbs free energy is negative, Ξ΄ g ≀ 0 { \ displaystyle \ delta g \ leq 0 \, } ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole – dipole interactions. the transfer of the cross section of elastic electron - proton scattering taking place in an electron gas is calculated within the closed time path method. it is found to be the sum of two terms, one being the expression in the vacuum except that it involves dressing due to the electron gas. the other term is due to the scattering particles - electron gas entanglement. this term dominates the usual one when the exchange energy is in the vicinity of the fermi energy. furthermore it makes the trajectories of the colliding particles more consistent and the collision more irreversible, rendering the scattering more classical in this regime. Question: Which of the following heat exchange processes involves the collision of particles? A) insulation B) conduction C) convection D) radiation
B) conduction
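The chemistry passage in this row quotes the Boltzmann population factor e^{-E/kT} and notes that the exponential dependence of reaction rate on temperature is known as the Arrhenius equation. The hedged Python sketch below simply evaluates that factor at two temperatures to show how steep the dependence is; the 50 kJ/mol activation energy and the two temperatures are assumed example values, not taken from the passage.

```python
import math

# Boltzmann population factor exp(-E / (k*T)): the probability weight, quoted in
# the passage above, of a molecule having energy >= E at temperature T.
K_B = 1.380649e-23             # J/K, Boltzmann constant
E_ACT = 50e3 / 6.02214076e23   # J per molecule, assuming a 50 kJ/mol barrier (illustrative)

def boltzmann_factor(temperature_k: float) -> float:
    return math.exp(-E_ACT / (K_B * temperature_k))

if __name__ == "__main__":
    t_low, t_high = 300.0, 350.0  # K, assumed temperatures
    f_low, f_high = boltzmann_factor(t_low), boltzmann_factor(t_high)
    print(f"factor at {t_low:.0f} K: {f_low:.3e}")
    print(f"factor at {t_high:.0f} K: {f_high:.3e}")
    print(f"Arrhenius-style rate ratio: roughly {f_high / f_low:.0f}x faster at the higher temperature")
```

Over a 50 K increase the factor grows by more than an order of magnitude here, which is the exponential temperature dependence the passage describes.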
Context: the fundamental constants could not influence different elements uniformly, and a comparison between each of the elements ' resulting unique chronological timescales would then give inconsistent time estimates. in refutation of young earth claims of inconstant decay rates affecting the reliability of radiometric dating, roger c. wiens, a physicist specializing in isotope dating states : there are only three quite technical instances where a half - life changes, and these do not affect the dating methods : " only one technical exception occurs under terrestrial conditions, and this is not for an isotope used for dating.... the artificially - produced isotope, beryllium - 7 has been shown to change by up to 1. 5 %, depending on its chemical environment.... heavier atoms are even less subject to these minute changes, so the dates of rocks made by electron - capture decays would only be off by at most a few hundredths of a percent. " "... another case is material inside of stars, which is in a plasma state where electrons are not bound to atoms. in the extremely hot stellar environment, a completely different kind of decay can occur. ' bound - state beta decay ' occurs when the nucleus emits an electron into a bound electronic state close to the nucleus.... all normal matter, such as everything on earth, the moon, meteorites, etc. has electrons in normal positions, so these instances never apply to rocks, or anything colder than several hundred thousand degrees. " " the last case also involves very fast - moving matter. it has been demonstrated by atomic clocks in very fast spacecraft. these atomic clocks slow down very slightly ( only a second or so per year ) as predicted by einstein ' s theory of relativity. no rocks in our solar system are going fast enough to make a noticeable change in their dates. " = = = = radiohaloes = = = = in the 1970s, young earth creationist robert v. gentry proposed that radiohaloes in certain granites represented evidence for the earth being created instantaneously rather than gradually. this idea has been criticized by physicists and geologists on many grounds including that the rocks gentry studied were not primordial and that the radionuclides in question need not have been in the rocks initially. thomas a. baillieul, a geologist and retired senior environmental scientist with the united states department of energy, disputed gentry ' s claims in an article entitled, " ' polonium haloes ' refuted : a review of ' radioactive halos in a radio options ( e. g., voting behavior, choice of a punishment for another participant ). reaction time. the time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. for example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing. psychophysical responses. psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. they typically involve making judgments of some physical property, e. g. the loudness of a sound. correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. some examples include : sameness judgments for colors, tones, textures, etc. threshold differences for colors, tones, textures, etc. eye tracking. 
this methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. the fixation point of the eyes is linked to an individual ' s focus of attention. thus, by monitoring eye movements, we can study what information is being processed at a given time. eye tracking allows us to study cognitive processes on extremely short time scales. eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed. = = = brain imaging = = = brain imaging involves analyzing activity within the brain while performing various tasks. this allows us to link behavior and brain function to help understand how information is processed. different types of imaging techniques vary in their temporal ( time - based ) and spatial ( location - based ) resolution. brain imaging is often used in cognitive neuroscience. single - photon emission computed tomography and positron emission tomography. spect and pet use radioactive isotopes, which are injected into the subject ' s bloodstream and taken up by the brain. by observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. pet has similar spatial resolution to fmri, but it has extremely poor temporal resolution. electroencephalography. eeg measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. this technique has an extremely high temporal resolution, but a relatively poor spatial resolution. functional magnetic resonance imaging. fmri measures the relative amount of oxygenated blood flowing to different parts of the brain. more oxygen the world is changing at an ever - increasing pace. and it has changed in a much more fundamental way than one would think, primarily because it has become more connected and interdependent than in our entire history. every new product, every new invention can be combined with those that existed before, thereby creating an explosion of complexity : structural complexity, dynamic complexity, functional complexity, and algorithmic complexity. how to respond to this challenge? and what are the costs? they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. 
after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria ( in the archaebacteria kingdom ), a term that has fallen out of use. archaeal cells have unique properties separating them from the other two domains, bacteria and eukaryota. archaea and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. epigenetic marks may be added or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell ' s life. some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. during morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. a single fertilised egg cell, the zygote, gives rise to the many different plant cell types including parenchyma, xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. the process results from the epigenetic activation of some genes and inhibition of others. unlike animals, many plant cells, particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give rise to a new individual plant. exceptions include highly lignified cells, the sclerenchyma and xylem which are dead at maturity, and the phloem sieve tubes which lack nuclei. 
while plants use many of the same epigenetic mechanisms as animals, such as chromatin remodelling, an alternative hypothesis is that plants set their gene expression patterns using positional information from the environment and surrounding cells to determine their developmental fate. epigenetic changes can lead to paramutations, which do not follow the mendelian heritage rules. these epigenetic marks are carried from one generation to the next, the universe is found to have undergone several phases in which the gravitational constant had different behaviors. during some epochs the energy density of the universe remained constant and the universe remained static. in the radiation dominated epoch the radiation field satisfies stefan ' s formula while the scale factor varies linearly with time. the model enhances the formation of the structure in the universe as observed today. listing of diseases in the family that may impact the patient. a family tree is sometimes used. history of present illness ( hpi ) : the chronological order of events of symptoms and further clarification of each symptom. distinguishable from history of previous illness, often called past medical history ( pmh ). medical history comprises hpi and pmh. medications ( rx ) : what drugs the patient takes including prescribed, over - the - counter, and home remedies, as well as alternative and herbal medicines or remedies. allergies are also recorded. past medical history ( pmh / pmhx ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, the decay rate for isotopes subject to extreme pressures, those differences were too small to significantly impact date estimates. the constancy of the decay rates is also governed by first principles in quantum mechanics, wherein any deviation in the rate would require a change in the fundamental constants. 
according to these principles, a change in the fundamental constants could not influence different elements uniformly, and a comparison between each of the elements ' resulting unique chronological timescales would then give inconsistent time estimates. in refutation of young earth claims of inconstant decay rates affecting the reliability of radiometric dating, roger c. wiens, a physicist specializing in isotope dating states : there are only three quite technical instances where a half - life changes, and these do not affect the dating methods : " only one technical exception occurs under terrestrial conditions, and this is not for an isotope used for dating.... the artificially - produced isotope, beryllium - 7 has been shown to change by up to 1. 5 %, depending on its chemical environment.... heavier atoms are even less subject to these minute changes, so the dates of rocks made by electron - capture decays would only be off by at most a few hundredths of a percent. " "... another case is material inside of stars, which is in a plasma state where electrons are not bound to atoms. in the extremely hot stellar environment, a completely different kind of decay can occur. ' bound - state beta decay ' occurs when the nucleus emits an electron into a bound electronic state close to the nucleus.... all normal matter, such as everything on earth, the moon, meteorites, etc. has electrons in normal positions, so these instances never apply to rocks, or anything colder than several hundred thousand degrees. " " the last case also involves very fast - moving matter. it has been demonstrated by atomic clocks in very fast spacecraft. these atomic clocks slow down very slightly ( only a second or so per year ) as predicted by einstein ' s theory of relativity. no rocks in our solar system are going fast enough to make a noticeable change in their dates. " = = = = radiohaloes = = = = in the 1970s, young earth creationist robert v. gentry proposed that radiohaloes in certain granites represented evidence for the earth being created instantaneously rather than gradually. this idea has been criticized by physicists and geologists on many grounds including that the rocks gentry studied were not primordial and that the radionucl a legal document in many jurisdictions. follow - ups may be shorter but follow the same general procedure, and specialists follow a similar process. the diagnosis and treatment may take only a few minutes or a few weeks, depending on the complexity of the issue. the components of the medical interview and encounter are : chief complaint ( cc ) : the reason for the current medical visit. these are the symptoms. they are in the patient ' s own words and are recorded along with the duration of each one. also called chief concern or presenting complaint. current activity : occupation, hobbies, what the patient actually does. family history ( fh ) : listing of diseases in the family that may impact the patient. a family tree is sometimes used. history of present illness ( hpi ) : the chronological order of events of symptoms and further clarification of each symptom. distinguishable from history of previous illness, often called past medical history ( pmh ). medical history comprises hpi and pmh. medications ( rx ) : what drugs the patient takes including prescribed, over - the - counter, and home remedies, as well as alternative and herbal medicines or remedies. allergies are also recorded. 
past medical history ( pmh / pmhx ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( we have written a java applet to illustrate the meaning of curved geometry. the applet provides a mapping interface similar to mapquest or google maps ; features include the ability to navigate through a space and place permanent point objects and / or shapes at arbitrary positions. the underlying two - dimensional space has a constant, positive curvature, which causes the apparent paths and shapes of the objects in the map to appear distorted in ways that change as you view them from different relative angles and distances. Question: The most effective way to show a change happening over time is to display your results using a A) line graph. B) Venn diagram. C) pie chart. D) flow chart.
A) line graph.
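A short sketch of the recommended choice: a line graph connects successive observations, so a trend over time is visible at a glance. The monthly temperature values below are hypothetical and purely illustrative.

import matplotlib.pyplot as plt

# hypothetical monthly temperature readings (degrees c), illustrative only
months = list(range(1, 13))
temperature_c = [2, 3, 7, 12, 17, 21, 24, 23, 19, 13, 7, 3]

plt.plot(months, temperature_c, marker="o")  # the connected line shows change over time
plt.xlabel("month")
plt.ylabel("temperature (degrees c)")
plt.title("hypothetical temperature change over one year")
plt.show()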
Context: eat them. plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. this is what ecologists call the first trophic level. the modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity ' s ability to feed the world and provide food security for future generations. botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. ethnobotany is the study of the relationships between plants and people. when applied to the investigation of historical plant – people relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. some of the earliest plant - people relationships arose between the indigenous people of canada in identifying edible plants from inedible plants. this relationship the indigenous people had with plants was recorded by ethnobotanists. = = plant biochemistry = = plant biochemistry is the study of the chemical processes used by plants. some of these processes are used in their primary metabolism like the photosynthetic calvin cycle and crassulacean acid metabolism. others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds. plants and various other groups of photosynthetic eukaryotes collectively known as " algae " have unique organelles known as chloroplasts. chloroplasts are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal ancestors. chloroplasts and cyanobacteria contain the blue - green pigment chlorophyll a. chlorophyll a ( as well as its plant and green algal - specific cousin chlorophyll b ) absorbs light in the blue - violet and orange / red parts of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour was done using the spinning wheel and weaving was done on a hand - and - foot - operated loom. it took from three to five spinners to supply one weaver. the invention of the flying shuttle in 1733 doubled the output of a weaver, creating a shortage of spinners. the spinning frame for wool was invented in 1738. the spinning jenny, invented in 1764, was a machine that used multiple spinning wheels ; however, it produced low quality thread. the water frame patented by richard arkwright in 1767, produced a better quality thread than the spinning jenny. the spinning mule, patented in 1779 by samuel crompton, produced a high quality thread. the power loom was invented by edmund cartwright in 1787. in the mid - 1750s, the steam engine was applied to the water power - constrained iron, copper and lead industries for powering blast bellows. these industries were located near the mines, some of which were using steam engines for mine pumping. steam engines were too powerful for leather bellows, so cast iron blowing cylinders were developed in 1768. 
steam powered blast furnaces achieved higher temperatures, allowing the use of more lime in iron blast furnace feed. ( lime rich slag was not free - flowing at the previously used temperatures. ) with a sufficient lime ratio, sulfur from coal or coke fuel reacts with the slag so that the sulfur does not contaminate the iron. coal and coke were cheaper and more abundant fuel. as a result, iron production rose significantly during the last decades of the 18th century. coal converted to coke fueled higher temperature blast furnaces and produced cast iron in much larger amounts than before, allowing the creation of a range of structures such as the iron bridge. cheap coal meant that industry was no longer constrained by water resources driving the mills, although it continued as a valuable source of power. the steam engine helped drain the mines, so more coal reserves could be accessed, and the output of coal increased. the development of the high - pressure steam engine made locomotives possible, and a transport revolution followed. the steam engine which had existed since the early 18th century, was practically applied to both steamboat and railway transportation. the liverpool and manchester railway, the first purpose - built railway line, opened in 1830, the rocket locomotive of robert stephenson being one of its first working locomotives used. manufacture of ships ' pulley blocks by all - metal machines at the portsmouth block mills in 1803 instigated the age of sustained mass production. machine tools used by engineers to manufacture parts began in the first decade of the century, notably by richard roberts and joseph whitworth. the development of interchangeable parts through what is now called the american system of manufacturing began in the firearms industry at the u. s. 
federal arsenals in the early 19th century, and became widely used by the end of the century. until the enlightenment era, little progress was made in water supply and sanitation and the engineering skills of the romans were largely neglected throughout europe. the first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in paisley, scotland, john gibb, installed an experimental filter, selling his unwanted surplus to the public. the first treated public water supply in the world was installed by engineer james simpson for the chelsea waterworks company in london in 1829. the first screw - down water tap was patented in 1845 by guest and chrimes, a brass foundry in rotherham. the practice of water treatment soon became mainstream, have primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. these have been engineered for resistance to pathogens and herbicides and better nutrient profiles. gm livestock have also been experimentally developed ; in november 2013 none were available on the market, but in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in india and other countries. = = = industrial = = = industrial biotechnology ( known mainly in europe as white biotechnology ) is the application of biotechnology for industrial purposes, including industrial fermentation. it includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper and pulp, textiles and biofuels. in the current decades, significant progress has been done in creating genetically modified organisms ( gmos ) that enhance the diversity of applications and economical viability of industrial biotechnology. 
by using renewable raw materials to produce a variety of chemicals and fuels, industrial biotechnology is actively advancing towards lowering greenhouse the theoretical reasons at the root of ligo ' s experimental failure in searching gravitational waves ( gw ' s ) from binary black hole ( bbh ) inspirals. interaction between tannin and bovine serum albumin ( bsa ) was examined by the fluorescent quenching. the process of elimination between bsa and tannin was the one of a stationary state, and the coupling coefficient was one. the working strength between the tannin and the beef serum was hydrophobic one. new crop traits as well as a far greater control over a food ' s genetic structure than previously afforded by methods such as selective breeding and mutation breeding. commercial sale of genetically modified foods began in 1994, when calgene first marketed its flavr savr delayed ripening tomato. to date most genetic modification of foods have primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. these have been engineered for resistance to pathogens and herbicides and better nutrient profiles. gm livestock have also been experimentally developed ; in november 2013 none were available on the market, but in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in india and other countries. = = = industrial = = = industrial biotechnology ( known mainly in europe as white biotechnology ) is the application of biotechnology for industrial purposes, including industrial fermentation. it includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper driven by cheap energy in the form of coal, produced in ever - increasing amounts from the abundant resources of britain. 
the british industrial revolution is characterized by developments in the areas of textile machinery, mining, metallurgy, transport and the invention of machine tools. we describe a natural way to plant cherry - and plumtrees at prescribed generic locations in an orchard. ultramagnetized neutron stars or magnetars are magnetically powered neutron stars. their strong magnetic fields dominate the physical processes in their crusts and their surroundings. the past few years have seen several advances in our theoretical and observational understanding of these objects. in spite of a surfeit of observations, their spectra are still poorly understood. i will discuss the emission from strongly magnetized condensed matter surfaces of neutron stars, recent advances in our expectations of the surface composition of magnetars and a model for the non - thermal emission from these objects. Question: The diet of a brown bear includes roots, grasses, berries, nuts, fish, insects, and mammals. Based on this description, in what category is a brown bear? A) carnivore B) herbivore C) omnivore D) decomposer
C) omnivore
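The rule behind this answer can be stated explicitly: a consumer that eats both plant and animal matter is an omnivore. The sketch below is illustrative; the food groupings are chosen for this example rather than taken from the passage.

def trophic_category(diet):
    # classify a consumer as herbivore, carnivore, or omnivore from its diet
    plant_foods = {"roots", "grasses", "berries", "nuts"}
    animal_foods = {"fish", "insects", "mammals"}
    eats_plants = any(item in plant_foods for item in diet)
    eats_animals = any(item in animal_foods for item in diet)
    if eats_plants and eats_animals:
        return "omnivore"
    if eats_plants:
        return "herbivore"
    return "carnivore"

print(trophic_category(["roots", "grasses", "berries", "nuts", "fish", "insects", "mammals"]))  # -> omnivore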
Context: by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. the dominant classification system is called linnaean taxonomy. it includes ranks and binomial nomenclature. the nomenclature of botanical organisms is codified in the international code of nomenclature for algae, fungi, and plants ( icn ) and administered by the international botanical congress. kingdom plantae belongs to domain eukaryota and is broken down recursively until each species is separately classified. the order is : kingdom ; phylum ( or division ) ; class ; order ; family ; genus ( plural genera ) ; species. the scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. for example, the tiger lily is lilium columbianum. lilium is the genus, and columbianum the specific epithet. the combination is the name of the species. when writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. additionally, the entire term is ordinarily italicised ( or underlined when italics are not available ). the evolutionary relationships and heredity of a group of organisms is called its phylogeny. phylogenetic studies attempt to discover phylogenies. the basic approach is to use similarities based on shared inheritance to determine relationships. as an example, species of pereskia are trees or bushes with prominent leaves. they do not obviously resemble a typical leafless cactus such as an echinocactus. however, both pereskia and echinocactus have spines produced from areoles ( highly specialised pad - like structures ) suggesting that the two genera are indeed related. judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. the dominant classification system is called linnaean taxonomy. it includes ranks and binomial nomenclature. the nomenclature of botanical organisms is codified in the international code of nomenclature for algae, fungi, and plants ( icn ) and administered by the international botanical congress. kingdom plantae belongs to domain eukaryota and is broken down recursively until each species is separately classified. the order is : kingdom ; phylum ( or division ) ; class ; order ; family ; genus ( plural genera ) ; species. 
the scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. for example, the tiger lily is lilium columbianum. lilium is the genus, and columbianum the specific epithet. the combination is the name of the species. when writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. additionally, the entire term is ordinarily italicised ( or underlined when italics are not available ). the evolutionary relationships and heredity of a group of organisms is called its phylogeny. phylogenetic studies attempt to discover phylogenies. the basic approach is to use similarities based on shared inheritance to determine relationships. as an example, species of pereskia are trees or bushes with prominent leaves. they do not obviously resemble a typical leafless cactus such as an echinocactus. however, both pereskia and echinocactus have spines produced from areoles ( highly specialised pad - like structures ) suggesting that the two genera are indeed related. judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. the cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups ( homoplasies ) or those left over from ancestors ( plesiomorphies ) – and derived characters, which ##al nomenclature. the nomenclature of botanical organisms is codified in the international code of nomenclature for algae, fungi, and plants ( icn ) and administered by the international botanical congress. kingdom plantae belongs to domain eukaryota and is broken down recursively until each species is separately classified. the order is : kingdom ; phylum ( or division ) ; class ; order ; family ; genus ( plural genera ) ; species. the scientific name of a plant represents its genus and its species within the genus, resulting in a single worldwide name for each organism. for example, the tiger lily is lilium columbianum. lilium is the genus, and columbianum the specific epithet. the combination is the name of the species. when writing the scientific name of an organism, it is proper to capitalise the first letter in the genus and put all of the specific epithet in lowercase. additionally, the entire term is ordinarily italicised ( or underlined when italics are not available ). the evolutionary relationships and heredity of a group of organisms is called its phylogeny. phylogenetic studies attempt to discover phylogenies. the basic approach is to use similarities based on shared inheritance to determine relationships. as an example, species of pereskia are trees or bushes with prominent leaves. they do not obviously resemble a typical leafless cactus such as an echinocactus. however, both pereskia and echinocactus have spines produced from areoles ( highly specialised pad - like structures ) suggesting that the two genera are indeed related. 
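The capitalisation and italicisation conventions described above can be captured in a short helper. This is only a sketch: the function name is invented for the example, and asterisks or underscores stand in for italics and underlining in plain text.

def format_binomial(genus, epithet, italics_available=True):
    # capitalise the genus, lowercase the specific epithet, and mark the whole
    # name as italicised (or underlined when italics are not available)
    name = f"{genus.capitalize()} {epithet.lower()}"
    return f"*{name}*" if italics_available else f"_{name}_"

print(format_binomial("lilium", "COLUMBIANUM"))  # -> *Lilium columbianum*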
judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. the cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups ( homoplasies ) or those left over from ancestors ( plesiomorphies ) – and derived characters, which have been passed down from innovations in a shared ancestor ( apomorphies ). only derived characters, such as the spine - producing areoles of cacti, provide evidence for descent from a common ancestor. the results of cladistic analyses are expressed as cladograms : tree - like diagrams showing the , fungi ( mycology ) – including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. the new photosynthetic plants ( along with their algal relatives ) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen - free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years. among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life ' s basic ingredients : energy, carbon, oxygen, nitrogen and water, and ways that our plant stewardship can help address the global environmental issues of resource management, conservation, human food security, biologically invasive organisms, carbon sequestration, climate change, and sustainability. = = = human nutrition = = = virtually all staple foods come either directly from primary production by plants, or indirectly from animals that eat them. plants and other photosynthetic organisms are at the base of most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting them into a form that can be used by animals. this is what ecologists call the first trophic level. the modern forms of the major staple foods, such as hemp, teff, maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as well as hemp, flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years from among wild ancestral plants with the most desirable characteristics. botanists study how plants produce food and how to increase yields, for example through plant breeding, making their work important to humanity ' s ability to feed the world and provide food security for future generations. 
botanists also study weeds, which are a considerable problem in agriculture, and the biology and control of plant the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) – including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the groups of organisms. divisions related to the broader historical sense of botany include bacteriology, mycology ( or fungology ), and phycology – respectively, the study of bacteria, fungi, and algae – with lichenology as a subfield of mycology. the narrower sense of botany as the study of embryophytes ( land plants ) is called phytology. bryology is the study of mosses ( and in the broader sense also liverworts and hornworts ). pteridology ( or filicology ) is the study of ferns and allied plants. a number of other taxa of ranks varying from family to subgenus have terms for their study, including agrostology ( or graminology ) for the study of grasses, synantherology for the study of composites, and batology for the study of brambles. study can also be divided by guild rather than clade or grade. for example, dendrology is the study of woody plants. many divisions of biology have botanical subfields. these are commonly denoted by prefixing the word plant ( e. g. 
plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics ), or prefixing or substituting the prefix phyto - ( e. g. phytochemistry, phytogeography ). the study of fossil plants is called palaeobotany. other fields are denoted by adding or substituting the word botany ( e. g. systematic botany ). phytosociology is a subfield of plant ecology that classifies and studies communities of plants. the intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. different parts of plants also give rise to their own subfields, including xylology, carpology ( or fructology ), and palynology, these being the study of wood, fruit and pollen / spores respectively. botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical the usual modelling of the syllogisms of the organon by a calculus of classes does not include relations. aristotle may however have envisioned them in the first two books as the category of relatives, where he allowed them to compose with themselves. composition is the main operation in combinatory logic, which therefore offers itself for a new kind of modelling. the resulting calculus includes also composition of predicates by logical connectives. ranks varying from family to subgenus have terms for their study, including agrostology ( or graminology ) for the study of grasses, synantherology for the study of composites, and batology for the study of brambles. study can also be divided by guild rather than clade or grade. for example, dendrology is the study of woody plants. many divisions of biology have botanical subfields. these are commonly denoted by prefixing the word plant ( e. g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics ), or prefixing or substituting the prefix phyto - ( e. g. phytochemistry, phytogeography ). the study of fossil plants is called palaeobotany. other fields are denoted by adding or substituting the word botany ( e. g. systematic botany ). phytosociology is a subfield of plant ecology that classifies and studies communities of plants. the intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. different parts of plants also give rise to their own subfields, including xylology, carpology ( or fructology ), and palynology, these being the study of wood, fruit and pollen / spores respectively. botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. 
plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing , dendrology is the study of woody plants. many divisions of biology have botanical subfields. these are commonly denoted by prefixing the word plant ( e. g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics ), or prefixing or substituting the prefix phyto - ( e. g. phytochemistry, phytogeography ). the study of fossil plants is called palaeobotany. other fields are denoted by adding or substituting the word botany ( e. g. systematic botany ). phytosociology is a subfield of plant ecology that classifies and studies communities of plants. the intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. different parts of plants also give rise to their own subfields, including xylology, carpology ( or fructology ), and palynology, these being the study of wood, fruit and pollen / spores respectively. botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both short - term, like pollination and predation, or long - term ; both often strongly influence the evolution of the species involved. a long - term interaction is called a symbiosis. symbioses range from mutualism, beneficial to both partners, to competition, harmful to both partners. every species participates as a consumer, resource, or both in consumer – resource interactions, which form the core of food chains or food webs. 
there are different trophic levels within any food web, with the lowest level being the primary producers ( or autotrophs ) such as plants and algae that convert energy and inorganic material into organic compounds, which can then be used by the rest of the community. at the next level are the heterotrophs, which are the species that obtain energy by breaking apart organic compounds from other organisms. heterotrophs that consume plants are primary consumers ( or herbivores ) whereas heterotrophs that consume herbivores are secondary consumers ( or carnivores ). and those that eat secondary consumers are tertiary consumers and so on. omnivorous heterotrophs are able to consume at multiple levels. finally, there are decomposers that feed on the waste products or dead bodies of organisms. on average, the total amount of energy incorporated into the biomass of a trophic level per unit of time is about one - tenth of the energy of the trophic level that it consumes. waste and dead material used by decomposers as well as heat lost from metabolism make up the other ninety percent of energy that is not consumed by the next trophic level. = = = biosphere = = = in the global ecosystem or biosphere, matter exists as different interacting compartments, which can be biotic or abiotic as well as accessible or inaccessible, depending on their forms and locations. for example, matter from terrestrial autotrophs are both biotic and accessible to other organisms whereas the matter in rocks and minerals are abiotic and inaccessible. a biogeochemical cycle is a pathway by which specific elements of matter are turned over or moved through the biotic ( biosphere ) and the abiotic ( lithosphere, atmosphere, and hydrosphere ) compartments of earth. there are biogeochemical cycles for nitrogen, carbon, and water. = = = conservation = = = conservation biology is the study of the conservation of earth ' s biodiversity with the aim of protecting species, their habitats, and ecosystems from excessive rates Question: A bracket fungus grows on a dead tree and breaks it down into chemical nutrients. What term best classifies the role of a bracket fungus in this ecosystem? A) producer B) consumer C) parasite D) decomposer
D) decomposer
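The roughly one-tenth energy transfer between trophic levels described in this passage can be illustrated with a short calculation; the starting figure of 10,000 energy units for the primary producers is hypothetical.

def energy_by_trophic_level(primary_production, levels, transfer_efficiency=0.10):
    # apply the ~10% rule: each level incorporates about one-tenth of the energy
    # of the level it consumes; the rest is lost as waste, dead material, and heat
    energies = [primary_production]
    for _ in range(levels - 1):
        energies.append(energies[-1] * transfer_efficiency)
    return energies

# primary producers, then primary, secondary, and tertiary consumers
print(energy_by_trophic_level(10_000, 4))  # -> [10000, 1000.0, 100.0, 10.0]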
Context: the gas giant planets in the solar system have a retinue of icy moons, and we expect giant exoplanets to have similar satellite systems. if a jupiter - like planet were to migrate toward its parent star the icy moons orbiting it would evaporate, creating atmospheres and possible habitable surface oceans. here, we examine how long the surface ice and possible oceans would last before being hydrodynamically lost to space. the hydrodynamic loss rate from the moons is determined, in large part, by the stellar flux available for absorption, which increases as the giant planet and icy moons migrate closer to the star. at some planet - star distance the stellar flux incident on the icy moons becomes so great that they enter a runaway greenhouse state. this runaway greenhouse state rapidly transfers all available surface water to the atmosphere as vapor, where it is easily lost from the small moons. however, for icy moons of ganymede ' s size around a sun - like star we found that surface water ( either ice or liquid ) can persist indefinitely outside the runaway greenhouse orbital distance. in contrast, the surface water on smaller moons of europa ' s size will only persist on timescales greater than 1 gyr at distances ranging 1. 49 to 0. 74 au around a sun - like star for bond albedos of 0. 2 and 0. 8, where the lower albedo becomes relevant if ice melts. consequently, small moons can lose their icy shells, which would create a torus of h atoms around their host planet that might be detectable in future observations. while the modern stellar imf shows a rapid decline with increasing mass, theoretical investigations suggest that very massive stars ( > 100 solar masses ) may have been abundant in the early universe. other calculations also indicate that, lacking metals, these same stars reach their late evolutionary stages without appreciable mass loss. after central helium burning, they encounter the electron - positron pair instability, collapse, and burn oxygen and silicon explosively. if sufficient energy is released by the burning, these stars explode as brilliant supernovae with energies up to 100 times that of an ordinary core collapse supernova. they also eject up to 50 solar masses of radioactive ni56. stars less massive than 140 solar masses or more massive than 260 solar masses should collapse into black holes instead of exploding, thus bounding the pair - creation supernovae with regions of stellar mass that are nucleosynthetically sterile. pair - instability supernovae might be detectable in the near infrared out to redshifts of 20 or more and their ashes should leave a distinctive nucleosynthetic pattern. this process may release or absorb energy. when the resulting nucleus is lighter than that of iron, energy is normally released ; when the nucleus is heavier than that of iron, energy is generally absorbed. this process of fusion occurs in stars, which derive their energy from hydrogen and helium. they form, through stellar nucleosynthesis, the light elements ( lithium to calcium ) as well as some of the heavy elements ( beyond iron and nickel, via the s - process ). the remaining abundance of heavy elements, from nickel to uranium and beyond, is due to supernova nucleosynthesis, the r - process. of course, these natural processes of astrophysics are not examples of nuclear " technology ". because of the very strong repulsion of nuclei, fusion is difficult to achieve in a controlled fashion. 
hydrogen bombs, formally known as thermonuclear weapons, obtain their enormous destructive power from fusion, but their energy cannot be controlled. controlled fusion is achieved in particle accelerators ; this is how many synthetic elements are produced. a fusor can also produce controlled fusion and is a useful neutron source. however, both of these devices operate at a net energy loss. controlled, viable fusion power has proven elusive, despite the occasional hoax. technical and theoretical difficulties have hindered the development of working civilian fusion technology, though research continues to this day around the world. nuclear fusion was initially pursued only in theoretical stages during world war ii, when scientists on the manhattan project ( led by edward teller ) investigated it as a method to build a bomb. the project abandoned fusion after concluding that it would require a fission reaction to detonate. it took until 1952 for the first full hydrogen bomb to be detonated, so - called because it used reactions between deuterium and tritium. fusion reactions are much more energetic per unit mass of fuel than fission reactions, but starting the fusion chain reaction is much more difficult. = = nuclear weapons = = a nuclear weapon is an explosive device that derives its destructive force from nuclear reactions, either fission or a combination of fission and fusion. both reactions release vast quantities of energy from relatively small amounts of matter. even small nuclear devices can devastate a city by blast, fire and radiation. nuclear weapons are considered weapons of mass destruction, and their use and control has been a major aspect of international policy since their debut. the design of a nuclear weapon is more complicated than it might seem. such a weapon must hold one or more subcritical fissile masses stable for deployment, then induce criticality temperature changes up to 1000 Β°c. = = processing steps = = the traditional ceramic process generally follows this sequence : milling β†’ batching β†’ mixing β†’ forming β†’ drying β†’ firing β†’ assembly. milling is the process by which materials are reduced from a large size to a smaller size. milling may involve breaking up cemented material ( in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break up or particle shearing ), compression ( which applies a forces that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and break up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. 
forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first, heat removes water. this step needs careful control, as rapid heating causes cracks and surface defects. the dried part is smaller than the green part, and is brittle, necessitating careful handling, since a small impact will cause crumbling and breaking. sintering is where the dried parts pass through a controlled heating process, and a 4mj planet with a 15. 8day orbital period has been detected from very precise radial velocity measurements with the coralie echelle spectrograph. a second remote and more massive companion has also been detected. 
all the planetary companions so far detected in orbit closer than 0. 08 au have a parent star with a statistically higher metal content compared to the metallicity distribution of other stars with planets. different processes occuring during their formation may provide a possible explanation for this observation. also launched missions to mercury in 2004, with the messenger probe demonstrating as the first use of a solar sail. nasa also launched probes to the outer solar system starting in the 1960s. pioneer 10 was the first probe to the outer planets, flying by jupiter, while pioneer 11 provided the first close up view of the planet. both probes became the first objects to leave the solar system. the voyager program launched in 1977, conducting flybys of jupiter and saturn, neptune, and uranus on a trajectory to leave the solar system. the galileo spacecraft, deployed from the space shuttle flight sts - 34, was the first spacecraft to orbit jupiter, discovering evidence of subsurface oceans on the europa and observed that the moon may hold ice or liquid water. a joint nasa - european space agency - italian space agency mission, cassini – huygens, was sent to saturn ' s moon titan, which, along with mars and europa, are the only celestial bodies in the solar system suspected of being capable of harboring life. cassini discovered three new moons of saturn and the huygens probe entered titan ' s atmosphere. the mission discovered evidence of liquid hydrocarbon lakes on titan and subsurface water oceans on the moon of enceladus, which could harbor life. finally launched in 2006, the new horizons mission was the first spacecraft to visit pluto and the kuiper belt. beyond interplanetary probes, nasa has launched many space telescopes. launched in the 1960s, the orbiting astronomical observatory were nasa ' s first orbital telescopes, providing ultraviolet, gamma - ray, x - ray, and infrared observations. nasa launched the orbiting geophysical observatory in the 1960s and 1970s to look down at earth and observe its interactions with the sun. the uhuru satellite was the first dedicated x - ray telescope, mapping 85 % of the sky and discovering a large number of black holes. launched in the 1990s and early 2000s, the great observatories program are among nasa ' s most powerful telescopes. the hubble space telescope was launched in 1990 on sts - 31 from the discovery and could view galaxies 15 billion light years away. a major defect in the telescope ' s mirror could have crippled the program, had nasa not used computer enhancement to compensate for the imperfection and launched five space shuttle servicing flights to replace the damaged components. the compton gamma ray observatory was launched from the atlantis on sts - 37 in 1991, discovering a possible source of antimatter at the center of the milky way and observing that the majority of gamma - ray bursts three major planets, venus, earth, and mercury formed out of the solar nebula. a fourth planetesimal, theia, also formed near earth where it collided in a giant impact, rebounding as the planet mars. during this impact earth lost $ { \ approx } 4 $ \ % of its crust and mantle that is now is found on mars and the moon. at the antipode of the giant impact, $ \ approx $ 60 \ % of earth ' s crust, atmosphere, and a large amount of mantle were ejected into space forming the moon. the lost crust never reformed and became the earth ' s ocean basins. 
the theia impact site corresponds to indian ocean gravitational anomaly on earth and the hellas basin on mars. the dynamics of the giant impact are consistent with the rotational rates and axial tilts of both earth and mars. the giant impact removed sufficient co $ _ 2 $ from earth ' s atmosphere to avoid a runaway greenhouse effect, initiated plate tectonics, and gave life time to form near geothermal vents at the continental margins. mercury formed near venus where on a close approach it was slingshot into the sun ' s convective zone losing 94 \ % of its mass, much of which remains there today. black carbon, from co $ _ 2 $ decomposed by the intense heat, is still found on the surface of mercury. arriving at 616 km / s, mercury dramatically altered the sun ' s rotational energy, explaining both its anomalously slow rotation rate and axial tilt. these results are quantitatively supported by mass balances, the current locations of the terrestrial planets, and the orientations of their major orbital axes. large scale manned space flight within the solar system is still confronted with the solution of two problems : 1. a propulsion system to transport large payloads with short transit times between different planetary orbits. 2. a cost effective lifting of large payloads into earth orbit. for the solution of the first problem a deuterium fusion bomb propulsion system is proposed where a thermonuclear detonation wave is ignited in a small cylindrical assembly of deuterium with a gigavolt - multimegampere proton beam, drawn from the magnetically insulated spacecraft acting in the ultrahigh vacuum of space as a gigavolt capacitor. for the solution of the second problem, the ignition is done by argon ion lasers driven by high explosives, with the lasers destroyed in the fusion explosion and becoming part of the exhaust. ring mass density and the corresponding circular velocity in thin disk model are known to be integral transforms of one another. but it may be less familiar that the transforms can be reduced to one - fold integrals with identical weight functions. it may be of practical value that the integral for the surface density does not involve the velocity derivative, unlike the equivalent and widely known toomre ' s formula. Question: The comet Shoemaker-Levy struck the planet Jupiter in July of 1994. The process of a comet striking a planet is an example of a net decrease in kinetic energy. Kinetic energy was ultimately converted into A) light. B) radiation. C) thermal energy. D) electromagnetic energy.
C) thermal energy.
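Note on the thin-disk passage in the row above: the statement that the ring surface density and the circular velocity are integral transforms of one another is usually written, in the standard textbook (Hankel-transform) form, as

\[ v_c^2(R) = -R \int_0^\infty S(k)\, J_1(kR)\, k\, dk, \qquad S(k) = -2\pi G \int_0^\infty \Sigma(R')\, J_0(kR')\, R'\, dR' , \]

where \( \Sigma \) is the surface density, \( v_c \) the circular velocity, and \( J_0, J_1 \) are Bessel functions. This is only an illustrative sketch of the general transform pair; the one-fold integrals with identical weight functions mentioned in the abstract are a further reduction that is not reproduced here.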
Context: plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s ... a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material.
crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β€” created by the internal motions of the core β€” produces the magnetosphere which protects earth ' , crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. 
atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest an important question of theoretical physics is whether sound is able to propagate in vacuums at all and if this is the case, then it must lead to the reinterpretation of one zero - restmass particle which corresponds to vacuum - sound waves. taking the electron - neutrino as the corresponding particle, its observed non - vanishing rest - energy may only appear for neutrino - propagation inside material media. the idea may also influence the physics of dense matter, restricting the maximum speed of sound, both in vacuums and in matter to the speed of light. subsea engineering and the ability to detect, track and destroy submarines ( anti - submarine warfare ) required the parallel development of a host of marine scientific instrumentation and sensors. visible light is not transferred far underwater, so the medium for transmission of data is primarily acoustic. high - frequency sound is used to measure the depth of the ocean, determine the nature of the seafloor, and detect submerged objects. the higher the frequency, the higher the definition of the data that is returned. sound navigation and ranging or sonar was developed during the first world war to detect submarines, and has been greatly refined through to the present day. submarines similarly use sonar equipment to detect and target other submarines and surface ships, and to detect submerged obstacles such as seamounts that pose a navigational obstacle. simple echo - sounders point straight down and can give an accurate reading of ocean depth ( or look up at the underside of sea - ice ). more advanced echo sounders use a fan - shaped beam or sound, or multiple beams to derive highly detailed images of the ocean floor. high power systems can penetrate the soil and seabed rocks to give information about the geology of the seafloor, and are widely used in geophysics for the discovery of hydrocarbons, or for engineering survey. for close - range underwater communications, optical transmission is possible, mainly using blue lasers. these have a high bandwidth compared with acoustic systems, but the range is usually only a few tens of metres, and ideally at night. as well as acoustic communications and navigation, sensors have been developed to measure ocean parameters such as temperature, salinity, oxygen levels and other properties including nitrate levels, levels of trace chemicals and environmental dna. the industry trend has been towards smaller, more accurate and more affordable systems so that they can be purchased and used by university departments and small companies as well as large corporations, research organisations and governments. the sensors and instruments are fitted to autonomous and remotely - operated systems as well as ships, and are enabling these systems to take on tasks that hitherto required an expensive human - crewed platform. manufacture of marine sensors and instruments mainly takes place in asia, europe and north america. products are advertised in specialist journals, and through trade shows such as oceanology international and ocean business which help raise awareness of the products. 
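Note on the echo-sounder passage above: the depth measurement it describes is, at its core, a travel-time calculation. As a rough illustration (1,500 m/s is the commonly quoted speed of sound in seawater; the 4 s echo time is an invented example):

\[ d = \frac{c\,t}{2} \approx \frac{1500\ \mathrm{m/s} \times 4\ \mathrm{s}}{2} = 3000\ \mathrm{m} , \]

that is, the instrument times the round trip of the pulse to the seabed and back and halves it; multibeam systems repeat the same calculation across a fan of angled beams to build up a swath of depths.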
= = = environmental engineering = = = in every coastal and offshore project, environmental sustainability is an important consideration for the preservation of ocean ecosystems and natural resources. instances in which marine engineers benefit from knowledge of environmental engineering include creation of fisheries, clean ##morphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. 
atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β€” created by the internal motions of the core β€” produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make ocean, determine the nature of the seafloor, and detect submerged objects. the higher the frequency, the higher the definition of the data that is returned. sound navigation and ranging or sonar was developed during the first world war to detect submarines, and has been greatly refined through to the present day. submarines similarly use sonar equipment to detect and target other submarines and surface ships, and to detect submerged obstacles such as seamounts that pose a navigational obstacle. simple echo - sounders point straight down and can give an accurate reading of ocean depth ( or look up at the underside of sea - ice ). more advanced echo sounders use a fan - shaped beam or sound, or multiple beams to derive highly detailed images of the ocean floor. high power systems can penetrate the soil and seabed rocks to give information about the geology of the seafloor, and are widely used in geophysics for the discovery of hydrocarbons, or for engineering survey. for close - range underwater communications, optical transmission is possible, mainly using blue lasers. these have a high bandwidth compared with acoustic systems, but the range is usually only a few tens of metres, and ideally at night. as well as acoustic communications and navigation, sensors have been developed to measure ocean parameters such as temperature, salinity, oxygen levels and other properties including nitrate levels, levels of trace chemicals and environmental dna. 
the industry trend has been towards smaller, more accurate and more affordable systems so that they can be purchased and used by university departments and small companies as well as large corporations, research organisations and governments. the sensors and instruments are fitted to autonomous and remotely - operated systems as well as ships, and are enabling these systems to take on tasks that hitherto required an expensive human - crewed platform. manufacture of marine sensors and instruments mainly takes place in asia, europe and north america. products are advertised in specialist journals, and through trade shows such as oceanology international and ocean business which help raise awareness of the products. = = = environmental engineering = = = in every coastal and offshore project, environmental sustainability is an important consideration for the preservation of ocean ecosystems and natural resources. instances in which marine engineers benefit from knowledge of environmental engineering include creation of fisheries, clean - up of oil spills, and creation of coastal solutions. = = = offshore systems = = = a number of systems designed fully or in part by marine engineers are used offshore - far away from coastlines. = = = = offshore oil platforms = = = = the design of offshore oil platforms involves a number of produces. the mastering engineer makes any final adjustments to the overall sound of the record in the final step before commercial duplication. mastering engineers use principles of equalization, compression and limiting to fine - tune the sound timbre and dynamics and to achieve a louder recording. sound designer – broadly an artist who produces soundtracks or sound effects content for media. live sound engineer front of house ( foh ) engineer, or a1. – a person dealing with live sound reinforcement. this usually includes planning and installation of loudspeakers, cabling and equipment and mixing sound during the show. this may or may not include running the foldback sound. a live / sound reinforcement engineer hears source material and tries to correlate that sonic experience with system performance. wireless microphone engineer, or a2. this position is responsible for wireless microphones during a theatre production, a sports event or a corporate event. foldback or monitor engineer – a person running foldback sound during a live event. the term foldback comes from the old practice of folding back audio signals from the front of house ( foh ) mixing console to the stage so musicians can hear themselves while performing. monitor engineers usually have a separate audio system from the foh engineer and manipulate audio signals independently from what the audience hears so they can satisfy the requirements of each performer on stage. in - ear systems, digital and analog mixing consoles, and a variety of speaker enclosures are typically used by monitor engineers. in addition, most monitor engineers must be familiar with wireless or rf ( radio - frequency ) equipment and often must communicate personally with the artist ( s ) during each performance. systems engineer – responsible for the design setup of modern pa systems, which are often very complex. a systems engineer is usually also referred to as a crew chief on tour and is responsible for the performance and day - to - day job requirements of the audio crew as a whole along with the foh audio system. 
this is a sound - only position concerned with implementation, not to be confused with the interdisciplinary field of system engineering, which typically requires a college degree. re - recording mixer – a person in post - production who mixes audio tracks for feature films or television programs. = = equipment = = an audio engineer is proficient with different types of recording media, such as analog tape, digital multi - track recorders and workstations, plug - ins and computer knowledge. with the advent of the digital age, it is increasingly important for the audio engineer to understand software and hardware integration, from synchronization to analog to digital transfers radio waves. the radio waves carry the information to the receiver location. at the receiver, the radio wave induces a tiny oscillating voltage in the receiving antenna – a weaker replica of the current in the transmitting antenna. this voltage is applied to the radio receiver, which amplifies the weak radio signal so it is stronger, then demodulates it, extracting the original modulation signal from the modulated carrier wave. the modulation signal is converted by a transducer back to a human - usable form : an audio signal is converted to sound waves by a loudspeaker or earphones, a video signal is converted to images by a display, while a digital signal is applied to a computer or microprocessor, which interacts with human users. the radio waves from many transmitters pass through the air simultaneously without interfering with each other because each transmitter ' s radio waves oscillate at a different frequency, measured in hertz ( hz ), kilohertz ( khz ), megahertz ( mhz ) or gigahertz ( ghz ). the receiving antenna typically picks up the radio signals of many transmitters. the receiver uses tuned circuits to select the radio signal desired out of all the signals picked up by the antenna and reject the others. a tuned circuit acts like a resonator, similar to a tuning fork. it has a natural resonant frequency at which it oscillates. the resonant frequency of the receiver ' s tuned circuit is adjusted by the user to the frequency of the desired radio station ; this is called tuning. the oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. radio signals at other frequencies are blocked by the tuned circuit and not passed on. = = = bandwidth = = = a modulated radio wave, carrying an information signal, occupies a range of frequencies. the information in a radio signal is usually concentrated in narrow frequency bands called sidebands ( sb ) just above and below the carrier frequency. the width in hertz of the frequency range that the radio signal occupies, the highest frequency minus the lowest frequency, is called its bandwidth ( bw ). for any given signal - to - noise ratio, a given bandwidth can carry the same amount of information regardless of where in the radio frequency spectrum it is located ; bandwidth is a measure of information - carrying capacity. the bandwidth required by a radio transmission depends on the data rate of Question: An earthquake in the Earth's crust under the ocean releases sound waves. Which statement accurately describes how the sound waves spread? A) They spread in all directions away from their source. B) They remain trapped near the source by water pressure. C) They travel mostly horizontally along the ocean floor. D) They travel mostly upward toward the surface of the water.
A) They spread in all directions away from their source.
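Two formulas make the tuning and bandwidth claims in the radio passage above concrete. A tuned LC circuit resonates at \( f_0 = 1/(2\pi\sqrt{LC}) \); with illustrative (invented) values L = 200 µH and C = 100 pF this gives \( f_0 \approx 1.1 \) MHz, in the AM broadcast band, and varying the capacitance is what "tuning" does. The statement that bandwidth, not spectral position, sets information-carrying capacity is the content of the Shannon-Hartley relation \( C = B \log_2(1 + S/N) \), where B is the bandwidth and S/N the signal-to-noise ratio; the carrier frequency does not appear in it.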
Context: enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β€” created by the internal motions of the core β€” produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the have evolved from the earliest emergence of life to present day. earth formed about 4. 5 billion years ago and all life on earth, both living and extinct, descended from a last universal common ancestor that lived about 3. 5 billion years ago. geologists have developed a geologic time scale that divides the history of the earth into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. 
biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became in this article i explain in detail a method for making small amounts of liquid oxygen in the classroom if there is no access to a cylinder of compressed oxygen gas. i also discuss two methods for identifying the fact that it is liquid oxygen as opposed to liquid nitrogen. . microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. 
= = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria ( in the archaebacteria kingdom ), a term that has fallen out of use. archaeal cells have unique properties separating them from the other two domains, bacteria and eukaryota. archaea ##rozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. 
these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokar , tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. 
microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive we have combined measurements of the kinematics, morphology, and oxygen abundance of the ionized gas in \ izw18, one of the most metal - poor galaxies known, to examine the star formation history and chemical mixing processes. c. 4000 bc, associated with the maadi culture. this represents the earliest evidence for smelting in africa. the varna necropolis, bulgaria, is a burial site located in the western industrial zone of varna, approximately 4 km from the city centre, internationally considered one of the key archaeological sites in world prehistory. the oldest gold treasure in the world, dating from 4, 600 bc to 4, 200 bc, was discovered at the site. the gold piece dating from 4, 500 bc, found in 2019 in durankulak, near varna is another important example. other signs of early metals are found from the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not be clearly ascertained and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. 
this includes the ancient and medieval kingdoms and empires of the middle east and near east, ancient iran, ancient egypt, ancient nubia, and anatolia in present - day turkey, ancient nok, carthage, the celts, greeks and romans of ancient europe, medieval europe, ancient and medieval china, ancient and medieval india, ancient and medieval japan, amongst others. a 16th century book by georg agricola, de re metallica, describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. agricola has been described as the " father of metallurgy ". = = extraction = = extractive metallurgy is the practice of removing valuable metals from an ore and refining the extracted raw metals into a purer form. in order to convert a metal oxide or sulphide to a purer metal, the ore must be reduced physically, chemically, or electroly the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) – including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the Question: Which provides the oldest evidence for oxygen accumulation in Earth's atmosphere? A) the earliest fossils of animals B) the earliest sediments of oxidized rock C) impact craters of oxidized-iron asteroids D) extensive volcanic calderas of similar age
B) the earliest sediments of oxidized rock
Context: 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 Β°c ( 4. 5 Β°f ). the specialty of processing food by ionizing radiation is the fact, that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may use also gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. 
for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley – to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer ( univ. calif. at san francisco ) and stanley n. cohen ( stanford ) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. the commercial viability of a biotechnology industry was significantly expanded on june 16, 1980, when the united states supreme court ruled that a genetically modified microorganism could be patented in the case of diamond v. chakrabarty. indian - born ananda chakrabarty, working for general electric, had modified a bacterium ( of the genus pseudomonas ) capable of breaking down crude oil, which he proposed to do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry, likewise. the amount of energy imparted for effective food irradiation is low compared to cooking the same ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 Β°c ( 4. 5 Β°f ). the specialty of processing food by ionizing radiation is the fact, that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. 
" food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may use also gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal technology developed, medicine became more reliant upon medications. throughout history and in europe right until the late 18th century, not only plant products were used as medicine, but also animal ( including human ) body parts and fluids. pharmacology developed in part from herbalism and some drugs are still derived from plants ( atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc. ). vaccines were discovered by edward jenner and louis pasteur. the first antibiotic was arsphenamine ( salvarsan ) discovered by paul ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. the first major class of antibiotics was the sulfa drugs, derived by german chemists originally from azo dyes. pharmacology has become increasingly sophisticated ; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side - effects. genomics and knowledge of human genetics and human evolution is having increasingly significant influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology, evolution, and genetics are influencing medical technology, practice and decision - making. evidence - based medicine is a contemporary movement to establish the most effective algorithms of practice ( ways of doing things ) through the use of systematic reviews and meta - analysis. the movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. the cochrane collaboration leads this movement. a 2001 review of 160 cochrane systematic reviews revealed that, according to two readers, 21. 3 % of the reviews concluded insufficient evidence, 20 % concluded evidence of no effect, and 22. 5 % concluded positive effect. = = quality, efficiency, and access = = evidence - based medicine, prevention of medical error ( and other " iatrogenesis " ), and avoidance of unnecessary health care are a priority in modern medical systems. these topics generate significant political and public policy attention, particularly in the united states where healthcare is regarded as excessively costly but population health metrics lag similar nations. globally, many developing countries lack access to care and access to medicines. 
as of 2015, most wealthy developed countries provide health care to all citizens, with a few exceptions such as the united states where lack of health insurance molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may use also gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is not the only deadly component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. civilian nuclear and radiological accidents primarily involve nuclear power plants. most common are nuclear leaks that expose workers to hazardous material. a nuclear meltdown refers to the more serious hazard of ##tion, and pasteurization in order to become products that can be sold. there are three levels of food processing : primary, secondary, and tertiary. primary food processing involves turning agricultural products into other products that can be turned into food, secondary food processing is the making of food from readily available ingredients, and tertiary food processing is commercial production of ready - to eat or heat - and - serve foods. drying, pickling, salting, and fermenting foods were some of the oldest food processing techniques used to preserve food by preventing yeasts, molds, and bacteria to cause spoiling. 
methods for preserving food have evolved to meet current standards of food safety but still use the same processes as the past. biochemical engineers also work to improve the nutritional value of food products, such as in golden rice, which was developed to prevent vitamin a deficiency in certain areas where this was an issue. efforts to advance preserving technologies can also ensure lasting retention of nutrients as foods are stored. packaging plays a key role in preserving as well as ensuring the safety of the food by protecting the product from contamination, physical damage, and tampering. packaging can also make it easier to transport and serve food. a common job for biochemical engineers working in the food industry is to design ways to perform all these processes on a large scale in order to meet the demands of the population. responsibilities for this career path include designing and performing experiments, optimizing processes, consulting with groups to develop new technologies, and preparing project plans for equipment and facilities. = = = pharmaceuticals = = = in the pharmaceutical industry, bioprocess engineering plays a crucial role in the large - scale production of biopharmaceuticals, such as monoclonal antibodies, vaccines, and therapeutic proteins. the development and optimization of bioreactors and fermentation systems are essential for the mass production of these products, ensuring consistent quality and high yields. for example, recombinant proteins like insulin and erythropoietin are produced through cell culture systems using genetically modified cells. the bioprocess engineer ’ s role is to optimize variables like temperature, ph, nutrient availability, and oxygen levels to maximize the efficiency of these systems. the growing field of gene therapy also relies on bioprocessing techniques to produce viral vectors, which are used to deliver therapeutic genes to patients. this involves scaling up processes from laboratory to industrial scale while maintaining safety and regulatory compliance. as the demand for biopharmaceutical products increases, advancements ; kitasato shibasaburo ( japan ) ; jean - martin charcot, claude bernard, paul broca ( france ) ; adolfo lutz ( brazil ) ; nikolai korotkov ( russia ) ; sir william osler ( canada ) ; and harvey cushing ( united states ). as science and technology developed, medicine became more reliant upon medications. throughout history and in europe right until the late 18th century, not only plant products were used as medicine, but also animal ( including human ) body parts and fluids. pharmacology developed in part from herbalism and some drugs are still derived from plants ( atropine, ephedrine, warfarin, aspirin, digoxin, vinca alkaloids, taxol, hyoscine, etc. ). vaccines were discovered by edward jenner and louis pasteur. the first antibiotic was arsphenamine ( salvarsan ) discovered by paul ehrlich in 1908 after he observed that bacteria took up toxic dyes that human cells did not. the first major class of antibiotics was the sulfa drugs, derived by german chemists originally from azo dyes. pharmacology has become increasingly sophisticated ; modern biotechnology allows drugs targeted towards specific physiological processes to be developed, sometimes designed for compatibility with the body to reduce side - effects. 
genomics and knowledge of human genetics and human evolution is having increasingly significant influence on medicine, as the causative genes of most monogenic genetic disorders have now been identified, and the development of techniques in molecular biology, evolution, and genetics are influencing medical technology, practice and decision - making. evidence - based medicine is a contemporary movement to establish the most effective algorithms of practice ( ways of doing things ) through the use of systematic reviews and meta - analysis. the movement is facilitated by modern global information science, which allows as much of the available evidence as possible to be collected and analyzed according to standard protocols that are then disseminated to healthcare providers. the cochrane collaboration leads this movement. a 2001 review of 160 cochrane systematic reviews revealed that, according to two readers, 21. 3 % of the reviews concluded insufficient evidence, 20 % concluded evidence of no effect, and 22. 5 % concluded positive effect. considered the father of modern neuroscience. from new zealand and australia came maurice wilkins, howard florey, and frank macfarlane burnet. others that did significant work include william williams keen, william coley, james d. watson ( united states ) ; salvador luria ( italy ) ; alexandre yersin ( switzerland ) ; kitasato shibasaburo ( japan ) ; jean - martin charcot, claude bernard, paul broca ( france ) ; adolfo lutz ( brazil ) ; nikolai korotkov ( russia ) ; sir william osler ( canada ) ; and harvey cushing ( united states ). 
food irradiation is the process of exposing food to ionizing radiation in order to destroy microorganisms, bacteria, viruses, or insects that might be present in the food. the radiation sources used include radioisotope gamma ray sources, x - ray generators and electron accelerators. further applications include sprout inhibition, delay of ripening, increase of juice yield, and improvement of re - hydration. irradiation is a more general term of deliberate exposure of materials to radiation to achieve a technical goal ( in this context ' ionizing radiation ' is implied ). as such it is also used on non - food items, such as medical hardware, plastics, tubes for gas - pipelines, hoses for floor - heating, shrink - foils for food packaging, automobile parts, wires and cables ( isolation ), tires, and even gemstones. compared to the amount of food irradiated, the volume of those every - day applications is huge but not noticed by the consumer. the genuine effect of processing food by ionizing radiation relates to damages to the dna, the basic genetic information for life. microorganisms can no longer proliferate and continue their malignant or pathogenic activities. spoilage causing micro - organisms cannot continue their activities. insects do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry, likewise. the amount of energy imparted for effective food irradiation is low compared to cooking the same ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 °c ( 4. 5 °f ) ( a short numerical check of this figure appears at the end of this passage ). the specialty of processing food by ionizing radiation is the fact, that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. 
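The ~2.5 °C figure quoted above for a 10 kGy dose can be checked with a one-line heat-capacity estimate. The sketch below is a minimal illustration, assuming (as the passage does) that food is thermally equivalent to water; the specific-heat value is standard physics rather than something stated in the text.

```python
# Back-of-envelope check of the "10 kGy warms food by ~2.5 degC" claim.
# 1 gray = 1 J of absorbed energy per kg of material; food is treated as
# thermally equivalent to water, as the passage assumes.

dose_gy = 10_000      # 10 kGy expressed in gray (J/kg)
c_water = 4186        # specific heat of water, J/(kg*K)

delta_t = dose_gy / c_water
print(f"temperature rise for a 10 kGy dose: {delta_t:.1f} K")
# -> about 2.4 K, consistent with the ~2.5 degC (4.5 degF) quoted above.
```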
in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley – to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer ( univ. calif. at san francisco ) and stanley n. cohen ( stanford ) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. the commercial viability of a biotechnology industry was significantly expanded on june 16, 1980, when the united states supreme court ruled that a genetically modified microorganism could be patented in the case of diamond v. chakrabarty. indian - born ananda chakrabarty, working for general electric, had modified a bacterium ( of the genus pseudomonas ) capable of breaking down crude oil, which he proposed to use in treating oil spills. ( chakrabarty ' s work did not involve gene manipulation but rather the transfer of entire organelles between strains of the pseudomonas bacterium ). the mosfet invented at bell labs between 1955 and 1960, two years later, leland c. clark and champ lyons invented the first biosensor in 1962. biosensor mosfets were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters. the first biofet was the ion - sensitive field - effect transistor ( isfet ), invented by piet bergveld Question: Louis Pasteur discovered that the bacteria in a substance can be killed by heating the substance for a short period of time. Which of these practices benefited most from Pasteur's discovery? A) storing foods for longer periods of time B) building ovens and other heating devices C) creating medicines that cure infections D) transporting living organisms without injuring them
A) storing foods for longer periods of time
Context: genetics is the scientific study of inheritance. mendelian inheritance, specifically, is the process by which genes and traits are passed on from parents to offspring. it has several principles. the first is that genetic characteristics, alleles, are discrete and have alternate forms ( e. g., purple vs. white or tall vs. dwarf ), each inherited from one of two parents. based on the law of dominance and uniformity, which states that some alleles are dominant while others are recessive ; an organism with at least one dominant allele will display the phenotype of that dominant allele. during gamete formation, the alleles for each gene segregate, so that each gamete carries only one allele for each gene. heterozygotic individuals produce gametes with an equal frequency of two alleles. finally, the law of independent assortment states that genes of different traits can segregate independently during the formation of gametes, i. e., genes are unlinked. an exception to this rule would include traits that are sex - linked. test crosses can be performed to experimentally determine the underlying genotype of an organism with a dominant phenotype. a punnett square can be used to predict the results of a test cross ( a minimal worked example follows this passage ). the chromosome theory of inheritance, which states that genes are found on chromosomes, was supported by thomas morgan ' s experiments with fruit flies, which established the sex linkage between eye color and sex in these insects. = = = genes and dna = = = a gene is a unit of heredity that corresponds to a region of deoxyribonucleic acid ( dna ) that carries genetic information that controls form or function of an organism. dna is composed of two polynucleotide chains that coil around each other to form a double helix. it is found as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. the set of chromosomes in a cell is collectively known as its genome. in eukaryotes, dna is mainly in the cell nucleus. in prokaryotes, the dna is held within the nucleoid. 
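As a companion to the test-cross discussion above, the sketch below enumerates a monohybrid Punnett square. It is a minimal illustration under assumed allele symbols ('A'/'a') and example crosses chosen for illustration; nothing here is prescribed by the passage.

```python
from collections import Counter
from itertools import product

def punnett_square(parent1: str, parent2: str) -> Counter:
    """Enumerate offspring genotypes for a single-gene cross.

    Each parent genotype is a two-letter string, e.g. 'Aa'; during gamete
    formation the two alleles segregate, so each parent contributes one
    allele per offspring (Mendel's law of segregation).
    """
    offspring = Counter()
    for a, b in product(parent1, parent2):
        # Sort so 'aA' and 'Aa' count as the same genotype.
        genotype = "".join(sorted([a, b]))
        offspring[genotype] += 1
    return offspring

# Test cross: a dominant-phenotype individual of unknown genotype ('Aa' here)
# crossed with a homozygous recessive tester ('aa') gives a 1:1 ratio.
print(punnett_square("Aa", "aa"))   # Counter({'Aa': 2, 'aa': 2})
# A heterozygote self-cross gives the familiar 3:1 phenotype ratio.
print(punnett_square("Aa", "Aa"))   # Counter({'Aa': 2, 'AA': 1, 'aa': 1})
```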
the genetic information is held within genes, and the complete assemblage in an organism is called its genotype. dna replication is a semiconservative process whereby each strand serves as a template for a new strand of dna. mutations are heritable changes in dna. they can arise spontaneously as a result of replication errors that were not corrected by proofreading or can be induced by an environmental mutagen such as a chemical ( e. g., nitrous acid, benzopyrene ) or radiation ( e. g., x - ray, gamma ray, ultraviolet radiation, particles emitted by unstable isotopes ). mutations can lead to phenotypic effects such as loss - of - function, gain - of - function, and conditional mutations. some mutations are beneficial, as they are a source of genetic variation for evolution. others are harmful if they were to result in a loss of function of genes needed for survival. = = = gene expression = = = gene expression is the molecular process by which a genotype encoded in dna gives rise to an observable phenotype in the proteins of an organism ' s body. this process is summarized by the central dogma of molecular biology, which was formulated by francis crick in 1958. according to the central dogma, genetic information flows from dna and reduces its radar profile. the flying wing design most closely resembles a so - called infinite flat plate ( as vertical control surfaces dramatically increase rcs ), the perfect stealth shape, as it would have no angles to reflect back radar waves. in addition to altering the tail, stealth design must bury the engines within the wing or fuselage, or in some cases where stealth is applied to an extant aircraft, install baffles in the air intakes, so that the compressor blades are not visible to radar. a stealthy shape must be devoid of complex bumps or protrusions of any kind, meaning that weapons, fuel tanks, and other stores must not be carried externally. any stealthy vehicle becomes un - stealthy when a door or hatch opens. parallel alignment of edges or even surfaces is also often used in stealth designs. the technique involves using a small number of edge orientations in the shape of the structure. for example, on the f - 22a raptor, the leading edges of the wing and the tail planes are set at the same angle. other smaller structures, such as the air intake bypass doors and the air refueling aperture, also use the same angles. the effect of this is to return a narrow radar signal in a very specific direction away from the radar emitter rather than returning a diffuse signal detectable at many angles. the effect is sometimes called " glitter " after the very brief signal seen when the reflected beam passes across a detector. it can be difficult for the radar operator to distinguish between a glitter event and a digital glitch in the processing system. stealth airframes sometimes display distinctive serrations on some exposed edges, such as the engine ports. the yf - 23 has such serrations on the exhaust ports. this is another example in the parallel alignment of features, this time on the external airframe. the shaping requirements detracted greatly from the f - 117 ' s aerodynamic properties. it is inherently unstable, and cannot be flown without a fly - by - wire control system. 
similarly, coating the cockpit canopy with a thin film transparent conductor ( vapor - deposited gold or indium tin oxide ) helps to reduce the aircraft ' s radar profile, because radar waves would normally enter the cockpit, reflect off objects ( the inside of a cockpit has a complex shape, with a pilot helmet alone forming a sizeable return ), and possibly return to the radar, but the conductive coating creates a controlled shape that deflects the incoming radar waves away from the radar. the coating is thin enough that it has angles. stealth aircraft such as the f - 117 use a different arrangement, tilting the tail surfaces to reduce corner reflections formed between them. a more radical method is to omit the tail, as in the b - 2 spirit. the b - 2 ' s clean, low - drag flying wing configuration gives it exceptional range and reduces its radar profile. charles darwin in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. 
" an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid phenotypic analysis. the new genetic material can be inserted randomly within the host genome or targeted to a specific location. the technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. this tends to occur at a relatively low frequency in plants and animals and generally requires the use of selectable markers. the frequency of gene targeting can be greatly enhanced through genome editing. genome editing uses artificially engineered nucleases that create specific double - stranded breaks at desired locations in the genome, and use the cell ' s endogenous mechanisms to repair the induced break by the natural processes of homologous recombination and nonhomologous end - joining. there are four families of engineered nucleases : meganucleases, zinc finger nucleases, transcription activator - like effector nucleases ( talens ), and the cas9 - guiderna system ( adapted from crispr ). talen and crispr are the two most commonly used and each has its own advantages. talens have greater target specificity, while crispr is easier to design and more efficient. in addition to enhancing gene targeting, engineered nucleases can be used to introduce mutations at endogenous genes that generate a gene knockout. 
= = applications = = genetic engineering has applications in medicine, research, industry and agriculture and can be used on a wide range of plants, animals and microorganisms. bacteria, the first organisms to be genetically modified, can have plasmid dna inserted containing new genes that code for medicines or enzymes that process food and other substrates. plants have been modified for insect protection, herbicide resistance, virus resistance, enhanced nutrition, tolerance to environmental pressures and the production of edible vaccines. most commercialised gmos are insect resistant or herbicide tolerant crop plants. genetically modified animals have been used for research, model animals and the production of agricultural or pharmaceutical products. the genetically modified animals include animals with genes knocked out, increased susceptibility to disease, hormones for extra growth and the ability to express proteins in their milk. = = = medicine = = = genetic engineering has many applications to medicine that include the manufacturing of drugs, creation of model animals that mimic human conditions and gene therapy. one of the earliest uses of genetic engineering was to mass - produce human insulin in bacteria. this application has now been applied to human growth hormones, follicle stimulating hormones ( for treating infertility ), human albumin, creation of the first bioprinter in 2003 by the university of missouri when they printed spheroids without the need of scaffolds, 3 - d bioprinting became more conventionally used in medical field than ever before. so far, scientists have been able to print mini organoids and organs - on - chips that have rendered practical insights into the functions of a human body. pharmaceutical companies are using these models to test drugs before moving on to animal studies. however, a fully functional and structurally similar organ has not been printed yet. a team at university of utah has reportedly printed ears and successfully transplanted those onto children born with defects that left their ears partially developed. today hydrogels are considered the preferred choice of bio - inks for 3 - d bioprinting since they mimic cells ' natural ecm while also containing strong mechanical properties capable of sustaining 3 - d structures. furthermore, hydrogels in conjunction with 3 - d bioprinting allow researchers to produce different scaffolds which can be used to form new tissues or organs. 3 - d printed tissues still face many challenges such as adding vasculature. meanwhile, 3 - d printing parts of tissues definitely will improve our understanding of the human body, thus accelerating both basic and clinical research. = = examples = = as defined by langer and vacanti, examples of tissue engineering fall into one or more of three categories : " just cells, " " cells and scaffold, " or " tissue - inducing factors. " in vitro meat : edible artificial animal muscle tissue cultured in vitro. bioartificial liver device, " temporary liver ", extracorporeal liver assist device ( elad ) : the human hepatocyte cell line ( c3a line ) in a hollow fiber bioreactor can mimic the hepatic function of the liver for acute instances of liver failure. a fully capable elad would temporarily function as an individual ' s liver, thus avoiding transplantation and allowing regeneration of their own liver. artificial pancreas : research involves using islet cells to regulate the body ' s blood sugar, particularly in cases of diabetes. 
biochemical factors may be used to cause human pluripotent stem cells to differentiate ( turn into ) cells that function similarly to beta cells, which are in an islet cell in charge of producing insulin. artificial bladders : anthony atala ( wake forest university ) has successfully implanted artificial bladders, constructed of cultured cells seeded onto a bladder - shaped scaffold, the aim of this note is to prove the analogue of poincar \ ' e duality in the chiral hodge cohomology. classes according to pore size : the form and shape of the membrane pores are highly dependent on the manufacturing process and are often difficult to specify. therefore, for characterization, test filtrations are carried out and the pore diameter refers to the diameter of the smallest particles which could not pass through the membrane. the rejection can be determined in various ways and provides an indirect measurement of the pore size. one possibility is the filtration of macromolecules ( often dextran, polyethylene glycol or albumin ), another is measurement of the cut - off by gel permeation chromatography. these methods are used mainly to measure membranes for ultrafiltration applications. another testing method is the filtration of particles with defined size and their measurement with a particle sizer or by laser induced breakdown spectroscopy ( libs ). a vivid characterization is to measure the rejection of dextran blue or other colored molecules. the retention of bacteriophage and bacteria, the so - called " bacteria challenge test ", can also provide information about the pore size. to determine the pore diameter, physical methods such as porosimeter ( mercury, liquid - liquid porosimeter and bubble point test ) are also used, but a certain form of the pores ( such as cylindrical or concatenated spherical holes ) is assumed. such methods are used for membranes whose pore geometry does not match the ideal, and we get " nominal " pore diameter, which characterizes the membrane, but does not necessarily reflect its actual filtration behavior and selectivity. the selectivity is highly dependent on the separation process, the composition of the membrane and its electrochemical properties in addition to the pore size. with high selectivity, isotopes can be enriched ( uranium enrichment ) in nuclear engineering or industrial gases like nitrogen can be recovered ( gas separation ). ideally, even racemics can be enriched with a suitable membrane. when choosing membranes selectivity has priority over a high permeability, as low flows can easily be offset by increasing the filter surface with a modular structure. in gas phase filtration different deposition mechanisms are operative, so that particles having sizes below the pore size of the membrane can be retained as well. = = membrane classification = = bio - membrane is classified in two categories, synthetic membrane and natural membrane. synthetic membranes further classified in organic and inorganic membranes. organic membrane sub classified polymeric membranes and inorganic membrane sub classified ceramic polymers. = = synthesis of biomass membrane pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves ) psychiatric ( orientation, mental state, mood, evidence of abnormal perception or thought ). 
respiratory ( large airways and lungs ) skin vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation it is to likely focus on areas of interest highlighted in the medical history and may not include everything listed above. the treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. a follow - up may be advised. depending upon the health insurance plan and the managed care system, various forms of " utilization review ", such as prior authorization of tests, may place barriers on accessing expensive services. the medical decision - making ( mdm ) process includes the analysis and synthesis of all the above data to come up with a list of possible diagnoses ( the differential diagnoses ), along with an idea of what needs to be done to obtain a definitive diagnosis that would explain the patient ' s problem. on subsequent visits, the process may be repeated in an abbreviated manner to obtain any new history, symptoms, physical findings, lab or imaging results, or specialist consultations. = = institutions = = contemporary medicine is, in general, conducted within health care systems. legal, credentialing, and financing frameworks are established by individual governments, augmented on occasion by international organizations, such as churches. the characteristics of any given health care system have a significant impact on the way medical care is provided. from ancient times, christian emphasis on practical charity gave rise to the development of systematic nursing and hospitals, and the catholic church today remains the largest non - government provider of medical services in the world. advanced industrial countries ( with the exception of the united states ) and many developing countries provide medical services through a system of universal health care that aims to guarantee care for all through a single - payer health care system or compulsory private or cooperative health insurance. this is intended to ensure that the entire population has access to medical care on the basis of need rather than ability to pay. delivery may be via private medical practices, state - owned hospitals and clinics, or charities, Question: How do traits of animals such as ear shape, nose shape, and hair color most often get passed to offspring? A) sexual reproduction B) asexual reproduction C) adaptation D) instinct
A) sexual reproduction
Context: cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface — giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field — created by the internal motions of the core — produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecosystems are affected by humans. glaciology is the study of the cryosphere, including glaciers and coverage of the earth by ice and snow. 
higher concentrations of atmospheric nitrous oxide ( n2o ) are expected to slightly warm earth ' s surface because of increases in radiative forcing. radiative forcing is the difference in the net upward thermal radiation flux from the earth through a transparent atmosphere and radiation through an otherwise identical atmosphere with greenhouse gases. radiative forcing, normally measured in w / m ^ 2, depends on latitude, longitude and altitude, but it is often quoted for the tropopause, about 11 km of altitude for temperate latitudes, or for the top of the atmosphere at around 90 km. for current concentrations of greenhouse gases, the radiative forcing per added n2o molecule is about 230 times larger than the forcing per added carbon dioxide ( co2 ) molecule. this is due to the heavy saturation of the absorption band of the relatively abundant greenhouse gas, co2, compared to the much smaller saturation of the absorption bands of the trace greenhouse gas n2o. but the rate of increase of co2 molecules, about 2. 5 ppm / year ( ppm = part per million by mole ), is about 3000 times larger than the rate of increase of n2o molecules, which has held steady at around 0. 00085 ppm / year since 1985. so, the contribution of nitrous oxide to the annual increase in forcing is 230 / 3000 or about 1 / 13 that of co2. if the main greenhouse gases, co2, ch4 and n2o have contributed about 0. 1 c / decade of the warming observed over the past few decades, this would correspond to about 0. 00064 k per year or 0. 064 k per century of warming from n2o. proposals to place harsh restrictions on nitrous oxide emissions because of warming fears are not justified by these facts. restrictions would cause serious harm ; for example, by jeopardizing world food supplies. 
in the year 1598 philipp uffenbach published a printed diptych sundial, which is a forerunner of franz ritter ' s horizontal sundial. uffenbach ' s sundial contains, apart from the usual information on a sundial, ascending signs of the zodiac, several of the brightest stars, an almucantar and, most important, the oldest gnomonic world map known so far. the sundial is constructed for the polar height of 50 1 / 6 degrees, the height of frankfurt / main, the town of his citizenship. molecular nitrogen is the most commonly assumed background gas that supports habitability on rocky planets. despite its chemical inertness, the nitrogen molecule is broken by lightning, hot volcanic vents, and bolide impacts, and can be converted into soluble nitrogen compounds and then sequestered in the ocean. the very stability of nitrogen, and that of nitrogen - based habitability, is thus called into question. here we determine the lifetime of molecular nitrogen vis - a - vis aqueous sequestration, by developing a novel model that couples atmospheric photochemistry and oceanic chemistry. we find that hno, the dominant nitrogen compound produced in anoxic atmospheres, is converted to n2o in the ocean, rather than oxidized to nitrites or nitrates as previously assumed. this n2o is then released back into the atmosphere and quickly converted to n2. 
we also find that the deposition rate of no is severely limited by the kinetics of the aqueous - phase reaction that converts no to nitrites in the ocean. putting these insights together, we conclude that the atmosphere must produce nitrogen species at least as oxidized as no2 and hno2 to enable aqueous sequestration. the lifetime of molecular nitrogen in anoxic atmospheres is determined to be > 1 billion years on temperate planets of both sun - like and m dwarf stars. this result upholds the validity of molecular nitrogen as a universal background gas on rocky planets. a minimum atmospheric temperature, or tropopause, occurs at a pressure of around 0. 1 bar in the atmospheres of earth, titan, jupiter, saturn, uranus and neptune, despite great differences in atmospheric composition, gravity, internal heat and sunlight. in all these bodies, the tropopause separates a stratosphere with a temperature profile that is controlled by the absorption of shortwave solar radiation, from a region below characterised by convection, weather, and clouds. however, it is not obvious why the tropopause occurs at the specific pressure near 0. 1 bar. here we use a physically - based model to demonstrate that, at atmospheric pressures lower than 0. 1 bar, transparency to thermal radiation allows shortwave heating to dominate, creating a stratosphere. at higher pressures, atmospheres become opaque to thermal radiation, causing temperatures to increase with depth and convection to ensue. a common dependence of infrared opacity on pressure, arising from the shared physics of molecular absorption, sets the 0. 1 bar tropopause. we hypothesize that a tropopause at a pressure of approximately 0. 1 bar is characteristic of many thick atmospheres, including exoplanets and exomoons in our galaxy and beyond. judicious use of this rule could help constrain the atmospheric structure, and thus the surface environments and habitability, of exoplanets. if a fintie group g acts topologically and faithfully on r ^ 3, then g is a subgroup of o ( 3 ) the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. 
the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) – including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the open problems from the 15th annual acm symposium on computational geometry. Question: Why is the ozone content of the stratosphere important to living organisms? A) Ozone absorbs infrared radiation from the Sun. B) Ozone absorbs ultraviolet radiation from the Sun. C) Ozone is necessary to create oxygen for living things. D) Ozone in the atmosphere prevents radiation of heat from Earth.
B) Ozone absorbs ultraviolet radiation from the Sun.
Context: and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. other organisms form mutually beneficial relationships with plants. for example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. = = = plants, climate and environmental change = = = plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. for example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. estimates of atmospheric co2 concentrations since the palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. ozone depletion can expose plants to higher levels of ultraviolet radiation - b ( uv - b ), resulting in lower growth rates. moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. = = genetics = = inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. gregor mendel discovered the genetic laws of inheritance by studying do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry, likewise. the amount of energy imparted for effective food irradiation is low compared to cooking the same ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 Β°c ( 4. 5 Β°f ). the specialty of processing food by ionizing radiation is the fact, that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. 
this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may use also gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal higher concentrations of atmospheric nitrous oxide ( n2o ) are expected to slightly warm earth ' s surface because of increases in radiative forcing. radiative forcing is the difference in the net upward thermal radiation flux from the earth through a transparent atmosphere and radiation through an otherwise identical atmosphere with greenhouse gases. radiative forcing, normally measured in w / m ^ 2, depends on latitude, longitude and altitude, but it is often quoted for the tropopause, about 11 km of altitude for temperate latitudes, or for the top of the atmosphere at around 90 km. for current concentrations of greenhouse gases, the radiative forcing per added n2o molecule is about 230 times larger than the forcing per added carbon dioxide ( co2 ) molecule. this is due to the heavy saturation of the absorption band of the relatively abundant greenhouse gas, co2, compared to the much smaller saturation of the absorption bands of the trace greenhouse gas n2o. but the rate of increase of co2 molecules, about 2. 5 ppm / year ( ppm = part per million by mole ), is about 3000 times larger than the rate of increase of n2o molecules, which has held steady at around 0. 00085 ppm / year since 1985. so, the contribution of nitrous oxide to the annual increase in forcing is 230 / 3000 or about 1 / 13 that of co2. if the main greenhouse gases, co2, ch4 and n2o have contributed about 0. 1 c / decade of the warming observed over the past few decades, this would correspond to about 0. 00064 k per year or 0. 064 k per century of warming from n2o. proposals to place harsh restrictions on nitrous oxide emissions because of warming fears are not justified by these facts. 
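The nitrous-oxide comparison above can be rechecked directly from the numbers quoted in the passage. The sketch below simply redoes that arithmetic; variable names are mine, and the closing comment is only an order-of-magnitude cross-check rather than a reproduction of the passage's exact 0.064 K figure.

```python
# Numbers quoted in the passage above.
forcing_per_n2o_vs_co2 = 230   # radiative forcing per added N2O molecule vs per added CO2 molecule
co2_rate = 2.5                 # CO2 increase, ppm per year
n2o_rate = 0.00085             # N2O increase, ppm per year

rate_ratio = co2_rate / n2o_rate                       # roughly 2900-3000, "about 3000" in the text
n2o_share_vs_co2 = forcing_per_n2o_vs_co2 / rate_ratio
print(f"CO2 molecules are being added ~{rate_ratio:.0f}x faster than N2O")
print(f"N2O's contribution to the forcing increase is ~1/{1 / n2o_share_vs_co2:.0f} of CO2's")

# Scaling the ~0.1 C/decade attributed to CO2, CH4 and N2O by this 1/13 share gives
# on the order of 0.01 * (1/13) ~= 0.0008 K/yr, i.e. roughly 0.08 K per century,
# the same order of magnitude as the ~0.064 K/century quoted in the passage
# (the passage's smaller value also accounts for CH4's portion of the total).
```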
such restrictions would cause serious harm ; for example, by jeopardizing world food supplies. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is variation in total solar irradiance is thought to have little effect on the earth ' s surface temperature because of the thermal time constant - - the characteristic response time of the earth ' s global surface temperature to changes in forcing. this time constant is large enough to smooth annual variations but not necessarily variations having a longer period such as those due to solar inertial motion ; the magnitude of these surface temperature variations is estimated. the transition of our energy system to renewable energies is necessary in order not to heat up the climate any further and to achieve climate neutrality. the use of wind energy plays an important role in this transition in germany. but how much wind energy can be used and what are the possible consequences for the atmosphere if more and more wind energy is used?
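the role of the thermal time constant mentioned above can be illustrated with a first - order response model ( a generic low - pass sketch only, not the paper ' s actual calculation ; the 5 - year time constant is an assumed, illustrative value ) :

import math

tau = 5.0                      # assumed thermal time constant in years ( illustrative )
for period in (1.0, 22.0):     # annual forcing vs. a longer, decadal - scale cycle
    omega = 2 * math.pi / period
    attenuation = 1 / math.sqrt(1 + (omega * tau) ** 2)
    print(period, round(attenuation, 3))   # 1.0 -> ~0.032, 22.0 -> ~0.574

a first - order system damps the annual cycle to a few percent of its amplitude while letting much longer - period variations through, which is the qualitative point made about annual versus longer - period irradiance variations.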
the higher the price of a product, the less of it people would be prepared to buy ( other things unchanged ). as the price of a commodity falls, consumers move toward it from relatively more expensive goods ( the substitution effect ). in addition, purchasing power from the price decline increases ability to buy ( the income effect ). other factors can change demand ; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. all determinants are predominantly taken as constant factors of demand and supply. supply is the relation between the price of a good and the quantity available for sale at that price. it may be represented as a table or graph relating price and quantity supplied. producers, for example business firms, are hypothesised to be profit maximisers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. supply is typically represented as a function relating price and quantity, if other factors are unchanged. that is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. the higher price makes it profitable to increase production. just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. the " law of supply " states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factor inputs of production are all taken to be constant for a specific time period of evaluation of supply. market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. at a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. this is posited to bid the price up. at a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. this pushes the price down. the model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilise at the price that makes quantity supplied equal to quantity demanded. similarly, demand - and - supply theory predicts a new price - quantity combination from a shift in demand ( as to the figure ), or in supply. ( a small numerical sketch of such an equilibrium follows this passage. ) = = = firms = = = people frequently do not trade directly on markets. instead, on the supply side, they may work in and produce through firms. the rapidly developing research field of organic analogue sensors aims to replace traditional semiconductors with naturally occurring materials. photosensors, or photodetectors, change their electrical properties in response to the light levels they are exposed to. organic photosensors can be functionalised to respond to specific wavelengths, from ultra - violet to red light. performing cyclic voltammetry on fungal mycelium and fruiting bodies under different lighting conditions shows no appreciable response to changes in lighting condition. however, functionalising the specimen using pedot : pss yields a photosensor that produces large, instantaneous current spikes when the light conditions change. future work would look at interfacing this organic photosensor with an appropriate digital back - end for interpreting and processing the response. the world is changing at an ever - increasing pace.
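returning to the supply - and - demand passage above, a minimal numerical sketch of a market equilibrium, using made - up linear demand and supply curves ( the coefficients are illustrative assumptions, not data from the text ) :

# demand falls with price, supply rises with price ( illustrative coefficients )
def quantity_demanded(price):
    return 100 - 2 * price

def quantity_supplied(price):
    return 10 + 4 * price

# equilibrium is where quantity supplied equals quantity demanded:
# 100 - 2p = 10 + 4p  ->  p = 15, q = 70
eq_price = (100 - 10) / (2 + 4)
print(eq_price, quantity_demanded(eq_price), quantity_supplied(eq_price))  # 15.0 70.0 70.0

# below the equilibrium price there is a shortage, above it a surplus,
# matching the adjustment story told in the passage
print(quantity_demanded(10) - quantity_supplied(10))   # 30.0  ( shortage at p = 10 )
print(quantity_demanded(20) - quantity_supplied(20))   # -30.0 ( surplus at p = 20 )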
the world has also changed in a much more fundamental way than one would think, primarily because it has become more connected and interdependent than at any time in our history. every new product, every new invention can be combined with those that existed before, thereby creating an explosion of complexity : structural complexity, dynamic complexity, functional complexity, and algorithmic complexity. how to respond to this challenge? and what are the costs? phenotypic analysis. the new genetic material can be inserted randomly within the host genome or targeted to a specific location. the technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. this tends to occur at a relatively low frequency in plants and animals and generally requires the use of selectable markers. the frequency of gene targeting can be greatly enhanced through genome editing. genome editing uses artificially engineered nucleases that create specific double - stranded breaks at desired locations in the genome, and use the cell ' s endogenous mechanisms to repair the induced break by the natural processes of homologous recombination and nonhomologous end - joining. there are four families of engineered nucleases : meganucleases, zinc finger nucleases, transcription activator - like effector nucleases ( talens ), and the cas9 - guide rna system ( adapted from crispr ). talen and crispr are the two most commonly used and each has its own advantages. talens have greater target specificity, while crispr is easier to design and more efficient. in addition to enhancing gene targeting, engineered nucleases can be used to introduce mutations at endogenous genes that generate a gene knockout. = = applications = = genetic engineering has applications in medicine, research, industry and agriculture and can be used on a wide range of plants, animals and microorganisms. bacteria, the first organisms to be genetically modified, can have plasmid dna inserted containing new genes that code for medicines or enzymes that process food and other substrates. plants have been modified for insect protection, herbicide resistance, virus resistance, enhanced nutrition, tolerance to environmental pressures and the production of edible vaccines. most commercialised gmos are insect resistant or herbicide tolerant crop plants. genetically modified animals have been used for research, model animals and the production of agricultural or pharmaceutical products. the genetically modified animals include animals with genes knocked out, increased susceptibility to disease, hormones for extra growth and the ability to express proteins in their milk. = = = medicine = = = genetic engineering has many applications to medicine that include the manufacturing of drugs, creation of model animals that mimic human conditions and gene therapy. one of the earliest uses of genetic engineering was to mass - produce human insulin in bacteria. this application has now been applied to human growth hormones, follicle stimulating hormones ( for treating infertility ), human albumin, Question: If an environment becomes warmer and drier, the change that would most likely help a species adapt would be an increase in its A) amount of body fat. B) amount of body hair. C) ability to climb trees. D) ability to store water.
D) ability to store water.
Context: time - dependent distribution of the global extinction of megafauna is compared with the growth of human population. there is no correlation between the two processes. furthermore, the size of human population and its growth rate were far too small to have any significant impact on the environment and on the life of megafauna. they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. later, around 1. 7 billion years ago, multicellular organisms began to appear, with differentiated cells performing specialised functions. algae - like multicellular land plants are dated back to about 1 billion years ago, although evidence suggests that microorganisms formed the earliest terrestrial ecosystems, at least 2. 7 billion years ago. microorganisms are thought to have paved the way for the inception of land plants in the ordovician period. land plants were so successful that they are thought to have contributed to the late devonian extinction event. ediacara biota appear during the ediacaran period, while vertebrates, along with most other modern phyla originated about 525 million years ago during the cambrian explosion. during the permian period, synapsids, including the ancestors of mammals, dominated the land, but most of this group became extinct in the permian – triassic extinction event 252 million years ago. during the recovery from this catastrophe, archosaurs became the most abundant land vertebrates ; one archosaur group, the dinosaurs, dominated the jurassic and cretaceous periods. after the cretaceous – paleogene extinction event 66 million years ago killed off the non - avian dinosaurs, mammals increased rapidly in size and diversity. such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify. = = diversity = = = = = bacteria and archaea = = = bacteria are a type of cell that constitute a large domain of prokaryotic microorganisms. typically a few micrometers in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. bacteria were among the first life forms to appear on earth, and are present in most of its habitats. bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of the earth ' s crust. bacteria also live in symbiotic and parasitic relationships with plants and animals. most bacteria have not been characterised, and only about 27 percent of the bacterial phyla have species that can be grown in the laboratory. archaea constitute the other domain of prokaryotic cells and were initially classified as bacteria, receiving the name archaebacteria ( in the archaebacteria kingdom ), a term that has fallen out of use. archaeal cells have unique properties separating them from the other two domains, bacteria and eukaryota. archaea have evolved from the earliest emergence of life to present day. earth formed about 4. 5 billion years ago and all life on earth, both living and extinct, descended from a last universal common ancestor that lived about 3. 5 billion years ago. geologists have developed a geologic time scale that divides the history of the earth into major divisions, starting with four eons ( hadean, archean, proterozoic, and phanerozoic ), the first three of which are collectively known as the precambrian, which lasted approximately 4 billion years. 
each eon can be divided into eras, with the phanerozoic eon that began 539 million years ago being subdivided into paleozoic, mesozoic, and cenozoic eras. these three eras together comprise eleven periods ( cambrian, ordovician, silurian, devonian, carboniferous, permian, triassic, jurassic, cretaceous, tertiary, and quaternary ). the similarities among all known present - day species indicate that they have diverged through the process of evolution from their common ancestor. biologists regard the ubiquity of the genetic code as evidence of universal common descent for all bacteria, archaea, and eukaryotes. microbial mats of coexisting bacteria and archaea were the dominant form of life in the early archean eon and many of the major steps in early evolution are thought to have taken place in this environment. the earliest evidence of eukaryotes dates from 1. 85 billion years ago, and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. the first major technologies were tied to survival, hunting, and food preparation. stone tools and weapons, fire, and clothing were technological developments of major importance during this period. human ancestors have been using stone and other tools since long before the emergence of homo sapiens approximately 300, 000 years ago. the earliest direct evidence of tool usage was found in ethiopia within the great rift valley, dating back to 2. 5 million years ago. the earliest methods of stone tool making, known as the oldowan " industry ", date back to at least 2. 3 million years ago. this era of stone tool use is called the paleolithic, or " old stone age ", and spans all of human history up to the development of agriculture approximately 12, 000 years ago. to make a stone tool, a " core " of hard stone with specific flaking properties ( such as flint ) was struck with a hammerstone. this flaking produced sharp edges which could be used as tools, primarily in the form of choppers or scrapers. these tools greatly aided the early humans in their hunter - gatherer lifestyle to perform a variety of tasks including butchering carcasses ( and breaking bones to get at the marrow ) ; chopping wood ; cracking open nuts ; skinning an animal for its hide, and even forming other tools out of softer materials such as bone and wood. the earliest stone tools were crude, being little more than a fractured rock. in the acheulian era, beginning approximately 1. 65 million years ago, methods of working these stones into specific shapes, such as hand axes, emerged. this early stone age is described as the lower paleolithic.
the middle paleolithic, approximately 300, 000 years ago, saw the introduction of the prepared - core technique, where multiple blades could be rapidly formed from a single core stone. the upper paleolithic, beginning approximately 40, 000 years ago, saw the introduction of pressure flaking, where a wood, bone, or antler punch could be used to shape a stone very finely. the end of the last ice age about 10, 000 years ago is taken as the end point of the upper paleolithic and the beginning of the epipaleolithic / mesolithic. the mesolithic technology included the use of microliths as composite stone tools, along with wood, bone, and antler tools. the later stone age, during which the rudiments of agricultural technology were developed, is called the neolithic period. during this period, polished stone tools were made from a variety of hard rocks such as flint, jade, jadeite, and greenstone, largely by working exposures as quarries, but later the valuable rocks were pursued by tunneling underground, the first steps in mining technology. the polished axes were used for forest clearance and the establishment of crop farming and were so effective as to remain in use when bronze and iron appeared. these stone axes were used alongside a continued use of stone tools such as a range of projectiles, knives, and scrapers, as well as tools made from organic materials such as wood, bone, and antler. stone age cultures developed music and engaged in organized warfare.
stone age humans developed ocean - worthy outrigger canoe technology, leading to migration across the malay archipelago, across the indian ocean to madagascar and also across the pacific ocean, which required knowledge of the ocean currents, weather patterns, sailing, and celestial navigation. although paleolithic cultures left no written records, the shift from nomadic life to settlement and agriculture can be inferred from a range of archaeological evidence. such evidence includes ancient tools, cave paintings, and other prehistoric art, such as the venus of willendorf. human remains also provide direct evidence, both through the examination of bones, and ##wi, turkana, dating from 3. 3 million years ago. stone tools diversified through the pleistocene period, which ended ~ 12, 000 years ago. the earliest evidence of warfare between two groups is recorded at the site of nataruk in turkana, kenya, where human skeletons with major traumatic injuries to the head, neck, ribs, knees and hands, including an embedded obsidian bladelet on a skull, are evidence of inter - group conflict between groups of nomadic hunter - gatherers 10, 000 years ago. humans entered the bronze age as they learned to smelt copper into an alloy with tin to make weapons. in asia where copper - tin ores are rare, this development was delayed until trading in bronze began in the third millennium bce. in the middle east and southern european regions, the bronze age follows the neolithic period, but in other parts of the world, the copper age is a transition from neolithic to the bronze age. although the iron age generally follows the bronze age, in some areas the iron age intrudes directly on the neolithic from outside the region, with the exception of sub - saharan africa where it was developed independently. the first large - scale use of iron weapons began in asia minor around the 14th century bce and in central europe around the 11th century bce followed by the middle east ( about 1000 bce ) and india and china. the assyrians are credited with the introduction of horse cavalry in warfare and the extensive use of iron weapons by 1100 bce. assyrians were also the first to use iron - tipped arrows. = = = post - classical technology = = = the wujing zongyao ( essentials of the military arts ), written by zeng gongliang, ding du, and others at the order of emperor renzong around 1043 during the song dynasty illustrate the eras focus on advancing intellectual issues and military technology due to the significance of warfare between the song and the liao, jin, and yuan to their north. the book covers topics of military strategy, training, and the production and employment of advanced weaponry. advances in military technology aided the song dynasty in its defense against hostile neighbors to the north. the flamethrower found its origins in byzantine - era greece, employing greek fire ( a chemically complex, highly flammable petrol fluid ) in a device with a siphon hose by the 7th century. : 77 the earliest reference to greek fire in china was made in 917, written by wu renchen in his spring and autumn annals of the ten kingdoms. : 80 in 91 the magellanic clouds were known before magellan ' s voyage exactly 500 years ago, and were not given that name by magellan himself or his chronicler antonio pigafetta. they were, of course, already known by local populations in south america, such as the mapuche and tupi - guaranis. 
the portuguese called them clouds of the cape, and scientific circles had long used the name of nubecula minor and major. we trace how and when the name magellanic clouds came into common usage by following the history of exploration of the southern hemisphere and the southern sky by european explorers. while the name of magellan was quickly associated to the strait he discovered ( within about 20 years only ), the clouds got their final scientific name only at the end of the 19th century, when scientists finally abandoned latin as their communication language. Question: Thousands of years ago, several species of large mammals existed in North America. These species became extinct not long after the first human settlement of North America. Which human activity most likely contributed to the extinction of these mammals? A) hunting B) waterway pollution C) habitat destruction D) competition for resources
A) hunting
Context: behavioral responses to different stimuli, one can understand something about how those stimuli are processed. lewandowski & strohmetz ( 2009 ) reviewed a collection of innovative uses of behavioral measurement in psychology including behavioral traces, behavioral observations, and behavioral choice. behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present ( e. g., litter in a parking lot or readings on an electric meter ). behavioral observations involve the direct witnessing of the actor engaging in the behavior ( e. g., watching how close a person sits next to another person ). behavioral choices are when a person selects between two or more options ( e. g., voting behavior, choice of a punishment for another participant ). reaction time. the time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. for example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial instead of parallel processing. psychophysical responses. psychophysical experiments are an old psychological technique, which has been adopted by cognitive psychology. they typically involve making judgments of some physical property, e. g. the loudness of a sound. correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. some examples include : sameness judgments for colors, tones, textures, etc. threshold differences for colors, tones, textures, etc. eye tracking. this methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. the fixation point of the eyes is linked to an individual ' s focus of attention. thus, by monitoring eye movements, we can study what information is being processed at a given time. eye tracking allows us to study cognitive processes on extremely short time scales. eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed. = = = brain imaging = = = brain imaging involves analyzing activity within the brain while performing various tasks. this allows us to link behavior and brain function to help understand how information is processed. different types of imaging techniques vary in their temporal ( time - based ) and spatial ( location - based ) resolution. brain imaging is often used in cognitive neuroscience. single - photon emission computed tomography and positron emission tomography. spect and pet use radioactive isotopes, which are injected into the subject ' s bloodstream and taken up by the brain. by observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than other areas. pet has similar spatial resolution to fmri, but it has extremely poor temporal resolution. electroencephalography. eeg measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. this technique has an extremely high temporal resolution, but a relatively poor spatial resolution. functional magnetic resonance imaging. fmri measures the relative amount of oxygenated blood flowing to different parts of the brain. more oxygen reference to recent papers and experimental feasibility are added. the paper will not be published in a hard - copy journal. i reject the following null hypothesis : { h0 : your data are normal }. such drastic decision is motivated by theoretical reasons, and applies to your current data, the past ones, and the future ones. while this situation may appear embarrassing, it does not invalidate any of your results. moreover, it allows to save time and energy that are currently spent in vain by performing the following unnecessary tasks : ( i ) carrying out normality tests ; ( ii ) pretending to do something if normality is rejected ; and ( iii ) arguing about normality with referee # 2. superdielectric behavior was observed in pastes made of high surface area alumina filled to the level of incipient wetness with water containing dissolved sodium chloride ( table salt ). in some cases the dielectric constants were greater than 10 ^ 10. an alternative explanation of 1 / f - noise in manganites is suggested and discussed oxygen and / or steam, to grow a thin surface layer of silicon dioxide. = = = patterning = = = patterning is the transfer of a pattern into a material. = = = lithography = = = lithography in a mems context is typically the transfer of a pattern into a photosensitive material by selective exposure to a radiation source such as light. a photosensitive material is a material that experiences a change in its physical properties when exposed to a radiation source. if a photosensitive material is selectively exposed to radiation ( e. g. by masking some of the radiation ) the pattern of the radiation on the material is transferred to the material exposed, as the properties of the exposed and unexposed regions differs. this exposed region can then be removed or treated providing a mask for the underlying substrate. photolithography is typically used with metal or other thin film deposition, wet and dry etching. sometimes, photolithography is used to create structure without any kind of post etching. one example is su8 based lens where su8 based square blocks are generated.
then the photoresist is melted to form a semi - sphere which acts as a lens. electron beam lithography ( often abbreviated as e - beam lithography ) is the practice of scanning a beam of electrons in a patterned fashion across a surface covered with a film ( called the resist ), ( " exposing " the resist ) and of selectively removing either exposed or non - exposed regions of the resist ( " developing " ). the purpose, as with photolithography, is to create very small structures in the resist that can subsequently be transferred to the substrate material, often by etching. it was developed for manufacturing integrated circuits, and is also used for creating nanotechnology architectures. the primary advantage of electron beam lithography is that it is one of the ways to beat the diffraction limit of light and make features in the nanometer range. this form of maskless lithography has found wide usage in photomask - making used in photolithography, low - volume production of semiconductor components, and research & development. the key limitation of electron beam lithography is throughput, i. e., the very long time it takes to expose an entire silicon wafer or glass substrate. a long exposure time leaves the user vulnerable to beam drift or instability which may occur during the exposure. also, the turn - around time for reworking or re - design is this paper was withdrawn because subsequent measurements produced results not always consistent with the ones presented in the paper. various versions of club are shown to be different. a question of soukup, fuchino and juhasz, is it consistent to have a stick without club, is answered as a consequence. the more detailed version of the paper, which is coming up, also answers a question of galvin. Question: Which of the following is an example of a behavioral adaptation? A) hooves of a horse B) migration of birds C) a spider web D) a bee hive
B) migration of birds
Context: ##ctonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s a watershed ( called a " divide " in north america ) over which rainfall flows down towards the river traversing the lowest part of the valley, whereas the rain falling on the far slope of the watershed flows away to another river draining an adjacent basin. river basins vary in extent according to the configuration of the country, ranging from the insignificant drainage areas of streams rising on high ground near the coast and flowing straight down into the sea, up to immense tracts of continents, where rivers rising on the slopes of mountain ranges far inland have to traverse vast stretches of valleys and plains before reaching the ocean. the size of the largest river basin of any country depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. 
when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. 
but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with from the insignificant drainage areas of streams rising on high ground near the coast and flowing straight down into the sea, up to immense tracts of continents, where rivers rising on the slopes of mountain ranges far inland have to traverse vast stretches of valleys and plains before reaching the ocean. the size of the largest river basin of any country depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their also known as the gradient or slope. 
when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in ##morphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. 
beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter.
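The passage above states that, for the same fall (gradient), a larger river flows faster because friction against its bed and banks matters less in proportion to its volume. The passage gives no formula; the standard Manning open-channel relation and the channel dimensions below are outside assumptions, used only as a sketch of that behaviour rather than anything taken from the source.

def mean_velocity(width_m, depth_m, slope, n=0.030):
    # Manning's formula: v = (1/n) * R^(2/3) * S^(1/2)
    # R = hydraulic radius (flow area / wetted perimeter), S = slope (the "fall"), n = roughness.
    area = width_m * depth_m                  # rectangular cross-section (assumed shape)
    wetted_perimeter = width_m + 2 * depth_m
    r = area / wetted_perimeter
    return (1.0 / n) * r ** (2.0 / 3.0) * slope ** 0.5

slope = 0.0005  # the same fall, 0.5 m per km, for both channels
print(round(mean_velocity(width_m=10, depth_m=1, slope=slope), 2))    # small river, roughly 0.7 m/s
print(round(mean_velocity(width_m=200, depth_m=5, slope=slope), 2))   # large river, roughly 2.1 m/s

The larger channel has the greater hydraulic radius, so the same slope yields the higher mean velocity, which is the behaviour the passage describes.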
in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform significantly greater strength and fracture toughness. another major change in the body during the firing or sintering process will be the establishment of the polycrystalline nature of the solid. significant grain growth tends to occur during sintering, with this growth depending on temperature and duration of the sintering process. the growth of grains will result in some form of grain size distribution, which will have a significant impact on the ultimate physical properties of the material. in particular, abnormal grain growth in which certain grains grow very large in a matrix of finer grains will significantly alter the physical and mechanical properties of the obtained ceramic. in the sintered body, grain sizes are a product of the thermal processing parameters as well as the initial particle size, or possibly the sizes of aggregates or particle clusters which arise during the initial stages of processing. the ultimate microstructure ( and thus the physical properties ) of the final product will be limited by and subject to the form of the structural template or precursor which is created in the initial stages of chemical synthesis and physical forming. hence the importance of chemical powder and polymer processing as it pertains to the synthesis of industrial ceramics, glasses and glass - ceramics. there are numerous possible refinements of the sintering process. some of the most common involve pressing the green body to give the densification a head start and reduce the sintering time needed. sometimes organic binders such as polyvinyl alcohol are added to hold the green body together ; these burn out during the firing ( at 200 – 350 Β°c ). sometimes organic lubricants are added during pressing to increase densification. it is common to combine these, and add binders and lubricants to a powder, then press. ( the formulation of these organic chemical additives is an art in itself. this is particularly important in the manufacture of high performance ceramics such as those used by the billions for electronics, in capacitors, inductors, sensors, etc. ) a slurry can be used in place of a powder, and then cast into a desired shape, dried and then sintered. indeed, traditional pottery is done with this type of method, using a plastic mixture worked with the hands. if a mixture of different materials is used together in a ceramic, the sintering temperature is sometimes above the melting point of one minor component – a liquid phase sintering. this results in shorter sintering times compared to solid state sintering. 
such liquid phase sintering involves faster diffusion processes and may result in abnormal grain impediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season, it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility of securing uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river molecular diffusion processes give rise to significant changes in the primary microstructural features. this includes the gradual elimination of porosity, which is typically accompanied by a net shrinkage and overall densification of the component. thus, the pores in the object may close up, resulting in a denser product of significantly greater strength and fracture toughness. another major change in the body during the firing or sintering process will be the establishment of the polycrystalline nature of the solid. significant grain growth tends to occur during sintering, with this growth depending on temperature and duration of the sintering process. the growth of grains will result in some form of grain size distribution, which will have a significant impact on the ultimate physical properties of the material.
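The sintering passage above notes that grain growth depends on the temperature and duration of firing. A common way to make that dependence concrete is the classical power-law grain-growth model with an Arrhenius rate constant; the exponent, activation energy, prefactor and units below are placeholder assumptions for illustration, not values taken from the text.

import math

R_GAS = 8.314  # J/(mol K)

def grain_size(d0_um, t_hours, temp_k, n=2, k0=1.0e9, q=300e3):
    # Isothermal grain growth: d^n - d0^n = k(T) * t, with k(T) = k0 * exp(-Q / (R*T)).
    k = k0 * math.exp(-q / (R_GAS * temp_k))   # assumed units: um^n per hour
    return (d0_um ** n + k * t_hours) ** (1.0 / n)

for temp_k in (1600.0, 1700.0, 1800.0):
    # higher temperature (or a longer hold) -> larger final grain size
    print(temp_k, round(grain_size(d0_um=0.5, t_hours=2.0, temp_k=temp_k), 2))

A coarser grain structure then changes strength and toughness in the way the surrounding text describes, which is why firing schedules are controlled so carefully.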
in particular, abnormal grain growth in which certain grains grow very large in a matrix of finer grains will significantly alter the physical and mechanical properties of the obtained ceramic. in the sintered body, grain sizes are a product of the thermal processing parameters as well as the initial particle size, or possibly the sizes of aggregates or particle clusters which arise during the initial stages of processing. the ultimate microstructure ( and thus the physical properties ) of the final product will be limited by and subject to the form of the structural template or precursor which is created in the initial stages of chemical synthesis and physical forming. hence the importance of chemical powder and polymer processing as it pertains to the synthesis of industrial ceramics, glasses and glass - ceramics. there are numerous possible refinements of the sintering process. some of the most common involve pressing the green body to give the densification a head start and reduce the sintering time needed. sometimes organic binders such as polyvinyl alcohol are added to hold the green body together ; these burn out during the firing ( at 200 – 350 Β°c ). sometimes organic lubricants are added during pressing to increase densification. it is common to combine these, and add binders and lubricants to a powder, then press. ( the formulation of these organic chemical additives is an art in itself. this is particularly important in the manufacture of high performance ceramics such as those used by the billions for electronics, in capacitors, inductors, sensors, etc. ) a slurry can be used in place of a powder, and then cast into a desired shape, dried and then sintered. indeed, traditional pottery is done with this type of method, using a plastic mixture worked with the hands. Question: Which two processes could result in the formation of high mountains with well-rounded peaks? A) volcanic eruptions and global warming B) earthquakes and tidal activity C) collision of crustal plates and erosion D) production of greenhouse gases and weathering
C) collision of crustal plates and erosion
Context: ( create a critical mass ) for detonation. it also is quite difficult to ensure that such a chain reaction consumes a significant fraction of the fuel before the device flies apart. the procurement of a nuclear fuel is also more difficult than it might seem, since sufficiently unstable substances for this process do not currently occur naturally on earth in suitable amounts. one isotope of uranium, namely uranium - 235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium - 238. the latter accounts for more than 99 % of the weight of natural uranium. therefore, some method of isotope separation based on the weight of three neutrons must be performed to enrich ( isolate ) uranium - 235. alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. ultimately, the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. a radiological weapon has never been deployed. while considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. there have been over 2, 000 nuclear tests conducted since 1945. in 1963, all nuclear and many non - time - dependent distribution of the global extinction of megafauna is compared with the growth of human population. there is no correlation between the two processes. furthermore, the size of human population and its growth rate were far too small to have any significant impact on the environment and on the life of megafauna. on earth in suitable amounts. one isotope of uranium, namely uranium - 235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium - 238. the latter accounts for more than 99 % of the weight of natural uranium. therefore, some method of isotope separation based on the weight of three neutrons must be performed to enrich ( isolate ) uranium - 235. alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. 
terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. ultimately, the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. a radiological weapon has never been deployed. while considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. there have been over 2, 000 nuclear tests conducted since 1945. in 1963, all nuclear and many non - nuclear states signed the limited test ban treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. the treaty permitted underground nuclear testing. france continued atmospheric testing until 1974, while china continued up until 1980. the last underground test by the united states was in 1992, the soviet union , and this often works with little to no disruption. to minimize collisions with wi - fi and non - wi - fi devices, wi - fi employs carrier - sense multiple access with collision avoidance ( csma / ca ), where transmitters listen before transmitting and delay transmission of packets if they detect that other devices are active on the channel, or if noise is detected from adjacent channels or non - wi - fi sources. nevertheless, wi - fi networks are still susceptible to the hidden node and exposed node problem. a standard speed wi - fi signal occupies five channels in the 2. 4 ghz band. interference can be caused by overlapping channels. any two channel numbers that differ by five or more, such as 2 and 7, do not overlap ( no adjacent - channel interference ). the oft - repeated adage that channels 1, 6, and 11 are the only non - overlapping channels is, therefore, not accurate. channels 1, 6, and 11 are the only group of three non - overlapping channels in north america. however, whether the overlap is significant depends on physical spacing. channels that are four apart interfere a negligible amount – much less than reusing channels ( which causes co - channel interference ) – if transmitters are at least a few metres apart. in europe and japan where channel 13 is available, using channels 1, 5, 9, and 13 for 802. 11g and 802. 11n is viable and recommended. 
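A small check of the channel arithmetic in the passage above: 2.4 GHz Wi-Fi channels 1-13 are spaced 5 MHz apart while a standard-speed signal occupies roughly 22 MHz, so two channels overlap only when their numbers differ by less than five. The 22 MHz figure and the helper names below are assumptions for this sketch, not part of the source.

def centre_mhz(channel):
    # Centre frequency of 2.4 GHz channels 1-13 (channel 14 is a special case, ignored here).
    return 2407 + 5 * channel

def overlaps(ch_a, ch_b, occupied_mhz=22):
    return abs(centre_mhz(ch_a) - centre_mhz(ch_b)) < occupied_mhz

print(overlaps(1, 6))   # False: 25 MHz apart, no adjacent-channel interference
print(overlaps(2, 7))   # False: any pair differing by 5 or more is also at least 25 MHz apart
print(overlaps(1, 3))   # True: only 10 MHz apart, so the spectra overlap

On this arithmetic, 1, 6 and 11 are simply the only set of three mutually non-overlapping channels that fits in the North American plan; the 1, 5, 9, 13 plan mentioned for Europe and Japan relies on the somewhat narrower OFDM signals of 802.11g/n, for which 20 MHz spacing leaves negligible overlap.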
however, multiple 2. 4 ghz 802. 11b and 802. 11g access - points default to the same channel on initial startup, contributing to congestion on certain channels. wi - fi pollution, or an excessive number of access points in the area, can prevent access and interfere with other devices ' use of other access points as well as with decreased signal - to - noise ratio ( snr ) between access points. these issues can become a problem in high - density areas, such as large apartment complexes or office buildings with multiple wi - fi access points. other devices use the 2. 4 ghz band : microwave ovens, ism band devices, security cameras, zigbee devices, bluetooth devices, video senders, cordless phones, baby monitors, and, in some countries, amateur radio, all of which can cause significant additional interference. it is also an issue when municipalities or other large entities ( such as universities ) seek to provide large area coverage. on some 5 ghz bands interference from radar systems can occur in some places. for base stations that support those bands they employ dynamic frequency selection an oscillation with a period of around 500 kb in guanine and cytosine content ( gc % ) is observed in the dna sequence of human chromosome 21. this oscillation is localized in the rightmost one - eighth region of the chromosome, from 43. 5 mb to 46. 5 mb. five cycles of oscillation are observed in this region with six gc - rich peaks and five gc - poor valleys. the gc - poor valleys comprise regions with low density of cpg islands and, alternating between the two dna strands, low gene density regions. consequently, the long - range oscillation of gc % result in spacing patterns of both cpg island density, and to a lesser extent, gene densities. the channel, or if noise is detected from adjacent channels or non - wi - fi sources. nevertheless, wi - fi networks are still susceptible to the hidden node and exposed node problem. a standard speed wi - fi signal occupies five channels in the 2. 4 ghz band. interference can be caused by overlapping channels. any two channel numbers that differ by five or more, such as 2 and 7, do not overlap ( no adjacent - channel interference ). the oft - repeated adage that channels 1, 6, and 11 are the only non - overlapping channels is, therefore, not accurate. channels 1, 6, and 11 are the only group of three non - overlapping channels in north america. however, whether the overlap is significant depends on physical spacing. channels that are four apart interfere a negligible amount – much less than reusing channels ( which causes co - channel interference ) – if transmitters are at least a few metres apart. in europe and japan where channel 13 is available, using channels 1, 5, 9, and 13 for 802. 11g and 802. 11n is viable and recommended. however, multiple 2. 4 ghz 802. 11b and 802. 11g access - points default to the same channel on initial startup, contributing to congestion on certain channels. wi - fi pollution, or an excessive number of access points in the area, can prevent access and interfere with other devices ' use of other access points as well as with decreased signal - to - noise ratio ( snr ) between access points. these issues can become a problem in high - density areas, such as large apartment complexes or office buildings with multiple wi - fi access points. other devices use the 2. 
4 ghz band : microwave ovens, ism band devices, security cameras, zigbee devices, bluetooth devices, video senders, cordless phones, baby monitors, and, in some countries, amateur radio, all of which can cause significant additional interference. it is also an issue when municipalities or other large entities ( such as universities ) seek to provide large area coverage. on some 5 ghz bands interference from radar systems can occur in some places. for base stations that support those bands they employ dynamic frequency selection which listens for radar, and if it is found, it will not permit a network on that band. these bands can be used by low power transmitters without a licence, and with few restrictions. however, while unintended interference is common, users that have been found to cause deliberate interference ( particularly for attempting to = = = nuclear fission = = = in natural nuclear radiation, the byproducts are very small compared to the nuclei from which they originate. nuclear fission is the process of splitting a nucleus into roughly equal parts, and releasing energy and neutrons in the process. if these neutrons are captured by another unstable nucleus, they can fission as well, leading to a chain reaction. the average number of neutrons released per nucleus that go on to fission another nucleus is referred to as k. values of k larger than 1 mean that the fission reaction is releasing more neutrons than it absorbs, and therefore is referred to as a self - sustaining chain reaction. a mass of fissile material large enough ( and in a suitable configuration ) to induce a self - sustaining chain reaction is called a critical mass. when a neutron is captured by a suitable nucleus, fission may occur immediately, or the nucleus may persist in an unstable state for a short time. if there are enough immediate decays to carry on the chain reaction, the mass is said to be prompt critical, and the energy release will grow rapidly and uncontrollably, usually leading to an explosion. when discovered on the eve of world war ii, this insight led multiple countries to begin programs investigating the possibility of constructing an atomic bomb β€” a weapon which utilized fission reactions to generate far more energy than could be created with chemical explosives. the manhattan project, run by the united states with the help of the united kingdom and canada, developed multiple fission weapons which were used against japan in 1945 at hiroshima and nagasaki. during the project, the first fission reactors were developed as well, though they were primarily for weapons manufacture and did not generate electricity. in 1951, the first nuclear fission power plant was the first to produce electricity at the experimental breeder reactor no. 1 ( ebr - 1 ), in arco, idaho, ushering in the " atomic age " of more intensive human energy use. however, if the mass is critical only when the delayed neutrons are included, then the reaction can be controlled, for example by the introduction or removal of neutron absorbers. this is what allows nuclear reactors to be built. fast neutrons are not easily captured by nuclei ; they must be slowed ( slow neutrons ), generally by collision with the nuclei of a neutron moderator, before they can be easily captured. today, this type of fission is commonly used to generate electricity. = = = nuclear fusion = = = if nuclei are forced to collide, they can undergo nuclear fusion. the r - process of nucleosynthesis requires a large neutron - to - seed nucleus ratio. 
this does not, however, require that there be an excess of neutrons over protons. if the expansion of the material is sufficiently rapid and the entropy per nucleon is sufficiently high, the nucleosynthesis enters a heavy - element synthesis regime heretofore unexplored. in this extreme regime, characterized by a persistent disequilibrium between free nucleons and the abundant alpha particles, heavy r - process nuclei can form even in matter with more protons than neutrons. this observation bears on the issue of the site of the r - process, on the variability of abundance yields from r - process events, and on constraints on neutrino physics derived from nucleosynthesis. it also clarifies the difference between nucleosynthesis in the early universe and that in less extreme stellar explosive environments. jwst / nircam obtained high angular - resolution ( 0. 05 - 0. 1 ' ' ), deep near - infrared 1 - - 5 micron imaging of supernova ( sn ) 1987a taken 35 years after the explosion. in the nircam images, we identify : 1 ) faint h2 crescents, which are emissions located between the ejecta and the equatorial ring, 2 ) a bar, which is a substructure of the ejecta, and 3 ) the bright 3 - 5 micron continuum emission exterior to the equatorial ring. the emission of the remnant in the nircam 1 - 2. 3 micron images is mostly due to line emission, which is mostly emitted in the ejecta and in the hot spots within the equatorial ring. in contrast, the nircam 3 - 5 micron images are dominated by continuum emission. in the ejecta, the continuum is due to dust, obscuring the centre of the ejecta. in contrast, in the ring and exterior to the ring, synchrotron emission contributes a substantial fraction to the continuum. dust emission contributes to the continuum at outer spots and diffuse emission exterior to the ring, but little within the ring. this shows that dust cooling and destruction time scales are shorter than the synchrotron cooling time scale, and the time scale of hydrogen recombination in the ring is even longer than the synchrotron cooling time scale. with the advent of high sensitivity and high angular resolution images provided by jwst / nircam, our observations of sn 1987a demonstrate that nircam opens up a window to study particle - acceleration and shock physics in unprecedented detail, probed by near - infrared synchrotron emission, building a precise picture of how a sn evolves. strangelets ( stable lumps of quark matter ) can have masses and charges much higher than those of nuclei, but have very low charge - to - mass ratios. this is confirmed in a relativistic thomas - fermi model. the high charge allows astrophysical strangelet acceleration to energies orders of magnitude higher than for protons. in addition, strangelets are much less susceptible to the interactions with the cosmic microwave background that suppress the flux of cosmic ray protons and nuclei above energies of $ 10 ^ { 19 } $ - - $ 10 ^ { 20 } $ ev ( the gzk - cutoff ). this makes strangelets an interesting possibility for explaining ultra - high energy cosmic rays. Question: The brown tree snake is a nonnative species found on the South Pacific island of Guam. The brown tree snake population in Guam is so large that it negatively affects the humans there. Which statement best explains why the brown tree snake has flourished in Guam? A) There are many animals for food. B) There are no natural snake predators. C) The climate is ideal for snake reproduction. D) The vegetation provides good habitat for hunting.
B) There are no natural snake predators.
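The context above defines the multiplication factor k as the average number of released neutrons that go on to cause another fission, with k greater than 1 giving a self-sustaining chain reaction. A toy generation-by-generation count, purely illustrative and not a reactor model, makes the subcritical / critical / supercritical distinction concrete.

def neutron_population(k, generations=10, n0=1000):
    # Each fission generation multiplies the free-neutron count by k on average.
    counts = [float(n0)]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

print([round(n) for n in neutron_population(k=0.9)])   # subcritical: the population dies away
print([round(n) for n in neutron_population(k=1.0)])   # critical: steady, as in a controlled reactor
print([round(n) for n in neutron_population(k=1.5)])   # supercritical: rapid growth, the runaway case described above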
Context: becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. 
the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding sea approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. 
the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. 
but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers – civil works program river morphology and stream restoration references - wildland hydrology at the library of congress web archives ( archived 2002 - 08 - 13 ) the injuries of the inundations they have been designed to prevent, as the escape of floods from the raised river must occur sooner or later. inadequate planning controls which have permitted development on floodplains have been blamed for the flooding of domestic properties. channelization was done under the auspices or overall direction of engineers employed by the local authority or the national government. one of the most heavily channelized areas in the united states is west tennessee, where every major stream with one exception ( the hatchie river ) has been partially or completely channelized. channelization of a stream may be undertaken for several reasons. 
one is to make a stream more suitable for navigation or for navigation by larger vessels with deep draughts. another is to restrict water to a certain area of a stream ' s natural bottom lands so that the bulk of such lands can be made available for agriculture. a third reason is flood control, with the idea of giving a stream a sufficiently large and deep channel so that flooding beyond those limits will be minimal or nonexistent, at least on a routine basis. one major reason is to reduce natural erosion ; as a natural waterway curves back and forth, it usually deposits sand and gravel on the inside of the corners where the water flows slowly, and cuts sand, gravel, subsoil, and precious topsoil from the outside corners where it flows rapidly due to a change in direction. unlike sand and gravel, the topsoil that is eroded does not get deposited on the inside of the next corner of the river. it simply washes away. = = loss of wetlands = = channelization has several predictable and negative effects. one of them is loss of wetlands. wetlands are an excellent habitat for multiple forms of wildlife, and additionally serve as a " filter " for much of the world ' s surface fresh water. another is the fact that channelized streams are almost invariably straightened. for example, the channelization of florida ' s kissimmee river has been cited as a cause contributing to the loss of wetlands. this straightening causes the streams to flow more rapidly, which can, in some instances, vastly increase soil erosion. it can also increase flooding downstream from the channelized area, as larger volumes of water traveling more rapidly than normal can reach choke points over a shorter period of time than they otherwise would, with a net effect of flood control in one area coming at the expense of aggravated flooding in another. in addition, studies have shown that stream channelization results in declines of river fish populations. : 3 - 1ff a weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. 
the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding seawards, so that in the po river in italy, for instance, pebbles and gravel are found for about 140 miles below turin, sand along the next 100 miles, and silt and mud in the last 110 miles ( 176 km ). = = channelization = = the removal of obstructions, natural or artificial a watershed ( called a " divide " in north america ) over which rainfall flows down towards the river traversing the lowest part of the valley, whereas the rain falling on the far slope of the watershed flows away to another river draining an adjacent basin. river basins vary in extent according to the configuration of the country, ranging from the insignificant drainage areas of streams rising on high ground near the coast and flowing straight down into the sea, up to immense tracts of continents, where rivers rising on the slopes of mountain ranges far inland have to traverse vast stretches of valleys and plains before reaching the ocean. the size of the largest river basin of any country depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. 
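The passage above says only that the transporting power of a current varies with its velocity. A classical rule of thumb often quoted in this context, though not stated in the passage, is that the weight of the largest particle a stream can move grows roughly as the sixth power of the velocity; the snippet below is just that proportionality with arbitrary example velocities.

def relative_competence(v, v_ref=1.0):
    # Sixth-power rule of thumb: movable particle weight ~ velocity ** 6 (proportionality only).
    return (v / v_ref) ** 6

for v in (0.5, 1.0, 2.0, 4.0):   # m/s, illustrative values only
    print(v, relative_competence(v))

Doubling the velocity multiplies the movable weight by about 64, which is consistent with the picture in the passage of torrents shifting boulders near the source while the lower reaches carry only sand and silt.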
in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform impediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season, it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility of securing uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side.
the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river Question: How are droughts always different from floods? A) They have different locations. B) They happen in different ecosystems. C) They have different amounts of water. D) They happen at different times of the year.
C) They have different amounts of water.
Context: porosimetry are utilized. = = introduction = = membrane technology covers all engineering approaches for the transport of substances between two fractions with the help of semi - permeable membranes. in general, mechanical separation processes for separating gaseous or liquid streams use membrane technology. in recent years, different methods have been used to remove environmental pollutants, such as adsorption, oxidation, and membrane separation. different kinds of pollution occur in the environment, such as air pollution and waste water pollution. industries are required to prevent industrial pollution because more than 70 % of environmental pollution is attributed to them ; it is their responsibility to follow government rules such as the air ( prevention and control of pollution ) act, 1981, and to carry out prevention and safety processes before releasing their waste into the environment. biomass - based membrane technology is one of the most promising technologies for pollutant removal because of its low cost, higher efficiency, and lack of secondary pollutants. typically polysulfone, polyvinylidene fluoride, and polypropylene are used in the membrane preparation process. these membrane materials are non - renewable and non - biodegradable, which creates harmful environmental pollution. researchers are therefore trying to synthesize eco - friendly membranes that avoid this pollution ; membranes made from biodegradable, naturally available materials, such as biomass - based membranes, can be used to remove pollutants. = = = membrane overview = = = membrane separation processes operate without heating and therefore use less energy than conventional thermal separation processes such as distillation, sublimation or crystallization. the separation process is purely physical and both fractions ( permeate and retentate ) can be obtained as useful products. cold separation using membrane technology is widely used in the food technology, biotechnology and pharmaceutical industries. furthermore, using membranes enables separations to take place that would be impossible using thermal separation methods. for example, it is impossible to separate the constituents of azeotropic liquids or solutes which form isomorphic crystals by distillation or recrystallization, but such separations can be achieved using membrane technology. depending on the type of membrane, the selective separation of certain individual substances or substance mixtures is possible. important technical applications include the production of drinking water by reverse osmosis. in waste water treatment, membrane technology is becoming increasingly important. ultra / microfiltration can be very effective in removing colloids and macromolecules. = = = environmental remediation = = = environmental remediation is the process through which contaminants or pollutants in soil, water and other media are removed to improve environmental quality. the main focus is the reduction of hazardous substances within the environment. some of the areas involved in environmental remediation include soil contamination, hazardous waste, groundwater contamination, and oil, gas and chemical spills. the three most common types of environmental remediation are soil, water, and sediment remediation. soil remediation consists of removing contaminants in soil, as these pose great risks to humans and the ecosystem.
some examples of such contaminants are heavy metals, pesticides, and radioactive materials. depending on the contaminant, the remedial processes can be physical, chemical, thermal, or biological. water remediation is one of the most important, as water is an essential natural resource. depending on the source of water there will be different contaminants. surface water contamination mainly consists of agricultural, animal, and industrial waste, as well as acid mine drainage. there has been a rise in the need for water remediation due to the increased discharge of industrial waste, leading to a demand for sustainable water solutions. the market for water remediation is expected to grow steadily, to $ 19. 6 billion by 2030. sediment remediation consists of removing contaminated sediments. it is broadly similar to soil remediation, except that it is often more sophisticated because it involves additional contaminants. to reduce the contaminants, physical, chemical, and biological processes that help with source control are typically used, but if these processes are not executed correctly, there is a risk of contamination resurfacing. = = = solid waste management = = = solid waste management is the purification, consumption, reuse, disposal, and treatment of solid waste undertaken by the government or the ruling bodies of a city or town. it refers to the collection, treatment, and disposal of non - soluble, solid waste material. solid waste is associated with industrial, institutional, commercial and residential activities. hazardous solid waste, when improperly disposed of, can encourage the infestation of insects and rodents, contributing to the spread of diseases. some of the most common types of solid waste management include landfills, vermicomposting, composting, recycling, and incineration. however, a major barrier for solid waste management practices is the high costs associated with recycling based on 1 / 10 and 1 / 100 weight percentages of the carbon and other alloying elements they contain. thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced. solid materials are generally grouped into three basic classifications : ceramics, metals, and polymers. this broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. an item that is often made from each of these material types is the beverage container. the material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. ceramic ( glass ) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and easily recycled, but are also heavy and fracture easily. metal ( aluminum alloy ) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. however, the cans are opaque, expensive to produce, and easily dented and punctured. polymers ( polyethylene plastic ) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass. = = = ceramics and glasses = = = another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance.
many ceramics and glasses exhibit covalent or ionic - covalent bonding with sio2 ( silica ) as a fundamental building block. ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. the vast majority of commercial glasses contain a metal oxide fused with silica. at the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. windowpanes and eyeglasses are important examples. fibers of glass are also used for long - range telecommunication and optical transmission. scratch resistant corning gorilla glass is a well - known example of the application of materials science to drastically improve the properties of common components. engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. hot pressing provides higher density material. chemical vapor deposition can place a film of a ceramic on another in space, can adversely affect the earth ' s environment. some hypergolic rocket propellants, such as hydrazine, are highly toxic prior to combustion, but decompose into less toxic compounds after burning. rockets using hydrocarbon fuels, such as kerosene, release carbon dioxide and soot in their exhaust. carbon dioxide emissions are insignificant compared to those from other sources ; on average, the united states consumed 803 million us gal ( 3. 0 million m3 ) of liquid fuels per day in 2014, while a single falcon 9 rocket first stage burns around 25, 000 us gallons ( 95 m3 ) of kerosene fuel per launch. even if a falcon 9 were launched every single day, it would only represent 0. 006 % of liquid fuel consumption ( and carbon dioxide emissions ) for that day. additionally, the exhaust from lox - and lh2 - fueled engines, like the ssme, is almost entirely water vapor. nasa addressed environmental concerns with its canceled constellation program in accordance with the national environmental policy act in 2011. in contrast, ion engines use harmless noble gases like xenon for propulsion. an example of nasa ' s environmental efforts is the nasa sustainability base. additionally, the exploration sciences building was awarded the leed gold rating in 2010. on may 8, 2003, the environmental protection agency recognized nasa as the first federal agency to directly use landfill gas to produce energy at one of its facilities β€” the goddard space flight center, greenbelt, maryland. in 2018, nasa along with other companies including sensor coating systems, pratt & whitney, monitor coating and utrc launched the project caution ( coatings for ultra high temperature detection ). this project aims to enhance the temperature range of the thermal history coating up to 1, 500 Β°c ( 2, 730 Β°f ) and beyond. the final goal of this project is improving the safety of jet engines as well as increasing efficiency and reducing co2 emissions. = = = climate change = = = nasa also researches and publishes on climate change. its statements concur with the global scientific consensus that the climate is warming. bob walker, who has advised former us president donald trump on space issues, has advocated that nasa should focus on space exploration and that its climate study operations should be transferred to other agencies such as noaa. former nasa atmospheric scientist j. 
marshall shepherd countered that earth science study was built into nasa ' s mission at its creation in the 1958 national aeronautics and space act. nasa won the 2020 webby people ' s voice award for green in the category , buses, trucks, etc. it includes branch study of mechanical, electronic, software and safety elements. some of the engineering attributes and disciplines that are of importance to the automotive engineer include : safety engineering : safety engineering is the assessment of various crash scenarios and their impact on the vehicle occupants. these are tested against very stringent governmental regulations. some of these requirements include : seat belt and air bag functionality testing, front and side - impact testing, and tests of rollover resistance. assessments are done with various methods and tools, including computer crash simulation ( typically finite element analysis ), crash - test dummy, and partial system sled and full vehicle crashes. fuel economy / emissions : fuel economy is the measured fuel efficiency of the vehicle in miles per gallon or kilometers per liter. emissions - testing covers the measurement of vehicle emissions, including hydrocarbons, nitrogen oxides ( nox ), carbon monoxide ( co ), carbon dioxide ( co2 ), and evaporative emissions. nvh engineering ( noise, vibration, and harshness ) : nvh involves customer feedback ( both tactile [ felt ] and audible [ heard ] ) concerning a vehicle. while sound can be interpreted as a rattle, squeal, or hot, a tactile response can be seat vibration or a buzz in the steering wheel. this feedback is generated by components either rubbing, vibrating, or rotating. nvh response can be classified in various ways : powertrain nvh, road noise, wind noise, component noise, and squeak and rattle. note, there are both good and bad nvh qualities. the nvh engineer works to either eliminate bad nvh or change the " bad nvh " to good ( i. e., exhaust tones ). vehicle electronics : automotive electronics is an increasingly important aspect of automotive engineering. modern vehicles employ dozens of electronic systems. these systems are responsible for operational controls such as the throttle, brake and steering controls ; as well as many comfort - and - convenience systems such as the hvac, infotainment, and lighting systems. it would not be possible for automobiles to meet modern safety and fuel - economy requirements without electronic controls. performance : performance is a measurable and testable value of a vehicle ' s ability to perform in various conditions. performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate ( e. g. standing start 1 / 4 mile elapsed time, 0 – 60 mph, etc. ) release the energy they contain, essentially the opposite of photosynthesis. molecules are moved within plants by transport processes that operate at a variety of spatial scales. subcellular transport of ions, electrons and molecules such as water and enzymes occurs across cell membranes. minerals and water are transported from roots to other parts of the plant in the transpiration stream. diffusion, osmosis, and active transport and mass flow are all different ways transport can occur. examples of elements that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur. in vascular plants, these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the xylem. 
most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. sucrose produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones are transported by a variety of processes. = = = plant hormones = = = plants are not passive, but respond to external signals such as light, touch, and injury by moving or growing towards or away from the stimulus, as appropriate. tangible evidence of touch sensitivity is the almost instantaneous collapse of leaflets of mimosa pudica, the insect traps of venus flytrap and bladderworts, and the pollinia of orchids. the hypothesis that plant growth and development is coordinated by plant hormones or plant growth regulators first emerged in the late 19th century. darwin experimented on the movements of plant shoots and roots towards light and gravity, and concluded " it is hardly an exaggeration to say that the tip of the radicle.. acts like the brain of one of the lower animals.. directing the several movements ". about the same time, the role of auxins ( from the greek auxein, to grow ) in control of plant growth was first outlined by the dutch scientist frits went. the first known auxin, indole - 3 - acetic acid ( iaa ), which promotes cell growth, was only isolated from plants about 50 years later. this compound mediates the tropic responses of shoots and roots towards light and gravity. the finding in 1939 that plant callus could be maintained in culture containing iaa, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones were key steps in the development of plant biotechnology and genetic modification. cytokinins are a class of plant hormones named for their control of cell division ( especially earliest known depiction of a gun is a sculpture from a cave in sichuan, dating to 1128, that portrays a figure carrying a vase - shaped bombard, firing flames and a cannonball. however, the oldest existent archaeological discovery of a metal barrel handgun is from the chinese heilongjiang excavation, dated to 1288. : 293 the chinese also discovered the explosive potential of packing hollowed cannonball shells with gunpowder. written later by jiao yu in his huolongjing ( mid - 14th century ), this manuscript recorded an earlier song - era cast - iron cannon known as the ' flying - cloud thunderclap eruptor ' ( fei yun pi - li pao ). the manuscript stated that : as noted before, the change in terminology for these new weapons during the song period were gradual. the early song cannons were at first termed the same way as the chinese trebuchet catapult. a later ming dynasty scholar known as mao yuanyi would explain this use of terminology and true origins of the cannon in his text of the wubei zhi, written in 1628 : the 14th - century huolongjing was also one of the first chinese texts to carefully describe to the use of explosive land mines, which had been used by the late song chinese against the mongols in 1277, and employed by the yuan dynasty afterwards. the innovation of the detonated land mine was accredited to one luo qianxia in the campaign of defense against the mongol invasion by kublai khan, : 192 later chinese texts revealed that the chinese land mine employed either a rip cord or a motion booby trap of a pin releasing falling weights that rotated a steel flint wheel, which in turn created sparks that ignited the train of fuses for the land mines. 
: 199 furthermore, the song employed the earliest known gunpowder - propelled rockets in warfare during the late 13th century, : 477 its earliest form being the archaic fire arrow. when the northern song capital of kaifeng fell to the jurchens in 1126, it was written by xia shaozeng that 20, 000 fire arrows were handed over to the jurchens in their conquest. an even earlier chinese text of the wujing zongyao ( " collection of the most important military techniques " ), written in 1044 by the song scholars zeng kongliang and yang weide, described the use of three spring or triple bow arcuballista that fired arrow bolts holding gunpowder packets near the head of the arrow. : 154 going back yet even farther, and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is not the only deadly component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. civilian nuclear and radiological accidents primarily involve nuclear power plants. most common are nuclear leaks that expose workers to hazardous material. a nuclear meltdown refers to the more serious hazard of releasing nuclear material into the surrounding environment. the most significant meltdowns occurred at three mile island in pennsylvania and chernobyl in the soviet ukraine. the earthquake and tsunami on march 11, 2011 caused serious damage to three nuclear reactors and a spent fuel storage pond at the fukushima daiichi nuclear power plant in japan. military reactors that experienced similar accidents were windscale in the united kingdom and sl - 1 in the united states. military accidents usually involve the loss or unexpected detonation of nuclear weapons. the castle bravo test in 1954 produced a larger yield than expected, which contaminated nearby islands, a japanese fishing boat ( with one fatality ), and raised concerns about contaminated fish in japan. in the 1950s through 1970s, several nuclear bombs were lost from submarines and aircraft, some of which have never been recovered. the last twenty years have seen a marked decline in such accidents. = = examples of environmental benefits = = proponents of nuclear energy note that annually, nuclear - generated electricity reduces 470 million metric tons of carbon dioxide emissions that would otherwise come from fossil fuels. additionally, the amount of comparatively low waste that nuclear energy does create is safely disposed of by the large scale nuclear energy production facilities or it is repurposed / recycled for other energy uses. proponents of nuclear energy also bring to attention the opportunity cost of utilizing other forms of electricity. for example, the environmental protection agency estimates that coal kills 30, 000 people a year, as a result of its environmental impact, while 60 people died in the chernobyl disaster. 
a real world example of impact provided by proponents of nuclear energy is it is shown that self avoiding walk on the seven regular infinite planar triangulation has linear expected displacement. the first three greek letters. some of these kinds of radiation could pass through ordinary matter, and all of them could be harmful in large amounts. all of the early researchers received various radiation burns, much like sunburn, and thought little of it. the new phenomenon of radioactivity was seized upon by the manufacturers of quack medicine ( as had the discoveries of electricity and magnetism, earlier ), and a number of patent medicines and treatments involving radioactivity were put forward. gradually it was realized that the radiation produced by radioactive decay was ionizing radiation, and that even quantities too small to burn could pose a severe long - term hazard. many of the scientists working on radioactivity died of cancer as a result of their exposure. radioactive patent medicines mostly disappeared, but other applications of radioactive materials persisted, such as the use of radium salts to produce glowing dials on meters. as the atom came to be better understood, the nature of radioactivity became clearer. some larger atomic nuclei are unstable, and so decay ( release matter or energy ) after a random interval. the three forms of radiation that becquerel and the curies discovered are also more fully understood. alpha decay is when a nucleus releases an alpha particle, which is two protons and two neutrons, equivalent to a helium nucleus. beta decay is the release of a beta particle, a high - energy electron. gamma decay releases gamma rays, which unlike alpha and beta radiation are not matter but electromagnetic radiation of very high frequency, and therefore energy. this type of radiation is the most dangerous and most difficult to block. all three types of radiation occur naturally in certain elements. it has also become clear that the ultimate source of most terrestrial energy is nuclear, either through radiation from the sun caused by stellar thermonuclear reactions or by radioactive decay of uranium within the earth, the principal source of geothermal energy. = = = nuclear fission = = = in natural nuclear radiation, the byproducts are very small compared to the nuclei from which they originate. nuclear fission is the process of splitting a nucleus into roughly equal parts, and releasing energy and neutrons in the process. if these neutrons are captured by another unstable nucleus, they can fission as well, leading to a chain reaction. the average number of neutrons released per nucleus that go on to fission another nucleus is referred to as k. values of k larger than 1 mean that the fission reaction is releasing more neutrons than it absorbs, and therefore is referred to as a self Question: Which of the following is a harmful waste material that leaves the blood and travels through the lungs before leaving the body? A) CO2. B) O2. C) H2O. D) NaCl.
A) CO2.
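the fission passage above defines the multiplication factor k as the average number of neutrons released per fission that go on to cause another fission, with k larger than 1 meaning the chain reaction releases more neutrons than it absorbs. purely as an illustration, and not part of the source passage ( the function name and starting values below are assumptions ), the minimal python sketch tabulates how a neutron population would evolve over a few generations for subcritical, critical and supercritical values of k.

```python
# minimal sketch ( illustration only, not from the source passage ) :
# neutron population over successive generations, n_next = k * n_current.
def neutron_population(k, n0=1000, generations=10):
    """return the neutron count after each generation for multiplication factor k."""
    counts = [float(n0)]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

if __name__ == "__main__":
    for k in (0.9, 1.0, 1.1):  # subcritical, critical, supercritical
        final = neutron_population(k)[-1]
        print(f"k = {k}: population after 10 generations ~ {final:.0f}")
```

for k = 0.9 the population decays to about 349, for k = 1.0 it stays at 1000, and for k = 1.1 it grows to about 2594, matching the passage ' s statement that only k greater than 1 gives a growing ( self - sustaining ) reaction.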
Context: , dendrology is the study of woody plants. many divisions of biology have botanical subfields. these are commonly denoted by prefixing the word plant ( e. g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics ), or prefixing or substituting the prefix phyto - ( e. g. phytochemistry, phytogeography ). the study of fossil plants is called palaeobotany. other fields are denoted by adding or substituting the word botany ( e. g. systematic botany ). phytosociology is a subfield of plant ecology that classifies and studies communities of plants. the intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. different parts of plants also give rise to their own subfields, including xylology, carpology ( or fructology ), and palynology, these being the study of wood, fruit and pollen / spores respectively. botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both ranks varying from family to subgenus have terms for their study, including agrostology ( or graminology ) for the study of grasses, synantherology for the study of composites, and batology for the study of brambles. study can also be divided by guild rather than clade or grade. for example, dendrology is the study of woody plants. many divisions of biology have botanical subfields. these are commonly denoted by prefixing the word plant ( e. g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics ), or prefixing or substituting the prefix phyto - ( e. g. phytochemistry, phytogeography ). the study of fossil plants is called palaeobotany. other fields are denoted by adding or substituting the word botany ( e. g. systematic botany ). phytosociology is a subfield of plant ecology that classifies and studies communities of plants. the intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. 
different parts of plants also give rise to their own subfields, including xylology, carpology ( or fructology ), and palynology, these being the study of wood, fruit and pollen / spores respectively. botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed seeds. woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues : wood ( secondary xylem ) and bark ( secondary phloem and cork ). all gymnosperms and many angiosperms are woody plants. some plants reproduce sexually, some asexually, and some via both means. although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. furthermore, structures can be seen as processes, that is, process combinations. = = systematic botany = = systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. it involves, or is related to, biological classification, scientific taxonomy and phylogenetics. biological classification is the method by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. 
the dominant classification system is called linnaean taxonomy. it includes ranks and binomial nomenclature. the nomenclature of botanical organisms is codified in the international code of nomenclature for algae, fungi, and plants ( icn ) and administered by the international botanical congress. kingdom plantae belongs to domain eukaryota and is broken down recursively until each species is separately classified. the order is : hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. stolons and tubers are examples of shoots that can grow roots. roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. in the event that one of the systems is lost, the other can often regrow it. in fact it is possible to grow an entire plant from a single leaf, as is the case with plants in streptocarpus sect. saintpaulia, or even a single cell – which can dedifferentiate into a callus ( a mass of unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. groups of organisms. divisions related to the broader historical sense of botany include bacteriology, mycology ( or fungology ), and phycology – respectively, the study of bacteria, fungi, and algae – with lichenology as a subfield of mycology. the narrower sense of botany as the study of embryophytes ( land plants ) is called phytology. bryology is the study of mosses ( and in the broader sense also liverworts and hornworts ). pteridology ( or filicology ) is the study of ferns and allied plants. a number of other taxa of ranks varying from family to subgenus have terms for their study, including agrostology ( or graminology ) for the study of grasses, synantherology for the study of composites, and batology for the study of brambles. study can also be divided by guild rather than clade or grade. for example, dendrology is the study of woody plants. many divisions of biology have botanical subfields. 
these are commonly denoted by prefixing the word plant ( e. g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics ), or prefixing or substituting the prefix phyto - ( e. g. phytochemistry, phytogeography ). the study of fossil plants is called palaeobotany. other fields are denoted by adding or substituting the word botany ( e. g. systematic botany ). phytosociology is a subfield of plant ecology that classifies and studies communities of plants. the intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. different parts of plants also give rise to their own subfields, including xylology, carpology ( or fructology ), and palynology, these being the study of wood, fruit and pollen / spores respectively. botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical of embryophytes ( land plants ) is called phytology. bryology is the study of mosses ( and in the broader sense also liverworts and hornworts ). pteridology ( or filicology ) is the study of ferns and allied plants. a number of other taxa of ranks varying from family to subgenus have terms for their study, including agrostology ( or graminology ) for the study of grasses, synantherology for the study of composites, and batology for the study of brambles. study can also be divided by guild rather than clade or grade. for example, dendrology is the study of woody plants. many divisions of biology have botanical subfields. these are commonly denoted by prefixing the word plant ( e. g. plant taxonomy, plant ecology, plant anatomy, plant morphology, plant systematics ), or prefixing or substituting the prefix phyto - ( e. g. phytochemistry, phytogeography ). the study of fossil plants is called palaeobotany. other fields are denoted by adding or substituting the word botany ( e. g. systematic botany ). phytosociology is a subfield of plant ecology that classifies and studies communities of plants. the intersection of fields from the above pair of categories gives rise to fields such as bryogeography, the study of the distribution of mosses. different parts of plants also give rise to their own subfields, including xylology, carpology ( or fructology ), and palynology, these being the study of wood, fruit and pollen / spores respectively. botany also overlaps on the one hand with agriculture, horticulture and silviculture, and on the other hand with medicine and pharmacology, giving rise to fields such as agronomy, horticultural botany, phytopathology, and phytopharmacology. = = scope and importance = = the study of plants is vital because they underpin almost all animal life on earth by generating a large proportion of the oxygen and food that provide humans and other organisms with aerobic respiration with the chemical energy they need to exist. 
plants, algae and cyanobacteria are the major groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are used in much sunlight the plant receives each day. this can result in adaptive changes in a process known as photomorphogenesis. phytochromes are the photoreceptors in a plant that are sensitive to light. = = plant anatomy and morphology = = plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. all plants are multicellular eukaryotes, their dna stored in nuclei. the characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable of creating cells of the other and producing adventitious shoots or roots. stolons and tubers are examples of shoots that can grow roots. roots that spread out close to the surface, such as those of willows, can produce shoots and ultimately new plants. in the event that one of the systems is lost ( or underlined when italics are not available ). the evolutionary relationships and heredity of a group of organisms is called its phylogeny. phylogenetic studies attempt to discover phylogenies. the basic approach is to use similarities based on shared inheritance to determine relationships. as an example, species of pereskia are trees or bushes with prominent leaves. they do not obviously resemble a typical leafless cactus such as an echinocactus. however, both pereskia and echinocactus have spines produced from areoles ( highly specialised pad - like structures ) suggesting that the two genera are indeed related. judging relationships based on shared characters requires care, since plants may resemble one another through convergent evolution in which characters have arisen independently. 
some euphorbias have leafless, rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure of their flowers make it clear that the two groups are not closely related. the cladistic method takes a systematic approach to characters, distinguishing between those that carry no information about shared evolutionary history – such as those evolved separately in different groups ( homoplasies ) or those left over from ancestors ( plesiomorphies ) – and derived characters, which have been passed down from innovations in a shared ancestor ( apomorphies ). only derived characters, such as the spine - producing areoles of cacti, provide evidence for descent from a common ancestor. the results of cladistic analyses are expressed as cladograms : tree - like diagrams showing the pattern of evolutionary branching and descent. from the 1990s onwards, the predominant approach to constructing phylogenies for living plants has been molecular phylogenetics, which uses molecular characters, particularly dna sequences, rather than morphological characters like the presence or absence of spines and areoles. the difference is that the genetic code itself is used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to. clive stace describes this as having " direct access to the genetic basis of evolution. " as a simple example, prior to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than animals. genetic evidence suggests that the true evolutionary relationship of multicelled organisms is as shown in the cladogram below – fungi are more closely related to animals than to plants. in 1998, the angiosperm phylogeny group published a phylogeny for flowering plants based on an analysis of unspecialised cells ) that can grow into a new plant. in vascular plants, the xylem and phloem are the conductive tissues that transport resources between shoots and roots. roots are often adapted to store food such as sugars or starch, as in sugar beets and carrots. stems mainly provide support to the leaves and reproductive structures, but can store water in succulent plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants or in the process of layering. leaves gather sunlight and carry out photosynthesis. large, flat, flexible, green leaves are called foliage leaves. gymnosperms, such as conifers, cycads, ginkgo, and gnetophytes are seed - producing plants with open seeds. angiosperms are seed - producing plants that produce flowers and have enclosed seeds. woody plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues : wood ( secondary xylem ) and bark ( secondary phloem and cork ). all gymnosperms and many angiosperms are woody plants. some plants reproduce sexually, some asexually, and some via both means. although reference to major morphological categories such as root, stem, leaf, and trichome are useful, one has to keep in mind that these categories are linked through intermediate forms so that a continuum between the categories results. furthermore, structures can be seen as processes, that is, process combinations. = = systematic botany = = systematic botany is part of systematic biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined by their evolutionary history. 
it involves, or is related to, biological classification, scientific taxonomy and phylogenetics. biological classification is the method by which botanists group organisms into categories such as genera or species. biological classification is a form of scientific taxonomy. modern taxonomy is rooted in the work of carl linnaeus, who grouped species according to shared physical characteristics. these groupings have since been revised to align better with the darwinian principle of common descent – grouping organisms by ancestry rather than superficial characteristics. while scientists do not always agree on how to classify organisms, molecular phylogenetics, which uses dna sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue to do so. the dominant classification system is called linnaean taxonomy. it includes ranks and binomi with one allele inducing a change on the other. = = plant evolution = = the chloroplasts of plants have a number of biochemical, structural and genetic similarities to cyanobacteria, ( commonly but incorrectly known as " blue - green algae " ) and are thought to be derived from an ancient endosymbiotic relationship between an ancestral eukaryotic cell and a cyanobacterial resident. the algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others. there are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast structure and nutrient reserves. the algal division charophyta, sister to the green algal division chlorophyta, is considered to contain the ancestor of true plants. the charophyte class charophyceae and the land plant sub - kingdom embryophyta together form the monophyletic group or clade streptophytina. nonvascular land plants are embryophytes that lack the vascular tissues xylem and phloem. they include mosses, liverworts and hornworts. pteridophytic vascular plants with true xylem and phloem that reproduced by spores germinating into free - living gametophytes evolved during the silurian period and diversified into several lineages during the late silurian and early devonian. representatives of the lycopods have survived to the present day. by the end of the devonian period, several groups, including the lycopods, sphenophylls and progymnosperms, had independently evolved " megaspory " – their spores were of two distinct sizes, larger megaspores and smaller microspores. their reduced gametophytes developed from megaspores retained within the spore - producing organs ( megasporangia ) of the sporophyte, a condition known as endospory. seeds consist of an endosporic megasporangium surrounded by one or two sheathing layers ( integuments ). the young sporophyte develops within the seed, which on germination splits to release it. the earliest known seed plants date from the latest devonian famennian stage. following the evolution of the seed habit, seed plants diversified, giving rise to a number of now - extinct groups, including seed ferns, as well as the modern gym Question: How are a tree and grass alike? A) Both make wood. B) Both have roots. C) Both need moonlight. D) Both have short lives.
B) Both have roots.
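the botany passages above describe linnaean taxonomy as a system of ranks, with kingdom plantae inside domain eukaryota being " broken down recursively until each species is separately classified " and species named with binomial nomenclature. as an assumed example only ( the species chosen and the subset of ranks shown are not taken from the source ), the python sketch below represents such a rank hierarchy for the english oak, oaks being one of the woody plants the passage mentions.

```python
# minimal sketch ( assumed example, not from the source passages ) :
# a linnaean rank hierarchy stored as an ordered mapping, from most to least inclusive.
from collections import OrderedDict

english_oak = OrderedDict([
    ("domain", "eukaryota"),
    ("kingdom", "plantae"),
    ("order", "fagales"),
    ("family", "fagaceae"),
    ("genus", "quercus"),
    ("species", "quercus robur"),  # binomial nomenclature : genus + specific epithet
])

for rank, taxon in english_oak.items():
    print(f"{rank}: {taxon}")
```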
Context: i transform the trapdoor problem of hfe into a linear algebra problem. has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well – not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = = the atom is the basic unit of chemistry. it consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. the nucleus is made up of positively charged protons and uncharged neutrons ( together called nucleons ), while the electron cloud consists of negatively charged electrons which orbit the nucleus. in a neutral atom, the negatively charged electrons balance out the positive charge of the protons. the nucleus is dense ; the mass of a nucleon is approximately 1, 836 times that of an electron, yet the radius of an atom is about 10, 000 times that of its nucleus. the atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state ( s ), coordination number, and preferred types of bonds to form ( e. g., metallic, ionic, covalent ). = = = = element = = = = a chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol z. the mass number is the sum of the number of protons and neutrons in a nucleus. although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number ; atoms of an element which have different mass numbers are known as isotopes. for example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. the standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. the periodic table is arranged in groups, or columns, and periods, or rows. the periodic table is useful in identifying periodic trends. = = = = compound = = = = a compound is a pure chemical substance composed of more than one element. the properties of a compound bear little similarity to those of its elements. the standard nomenclature of compounds is set by the international union of pure and applied chemistry ( iupac ). organic compounds are named defective body parts. inside the body, artificial heart valves are in common use with artificial hearts and lungs seeing less common use but under active technology development. other medical devices and aids that can be considered prosthetics include hearing aids, artificial eyes, palatal obturator, gastric bands, and dentures. prostheses are specifically not orthoses, although given certain circumstances a prosthesis might end up performing some or all of the same functionary benefits as an orthosis. prostheses are technically the complete finished item. for instance, a c - leg knee alone is not a prosthesis, but only a prosthetic component. the complete prosthesis would consist of the attachment system to the residual limb – usually a " socket ", and all the attachment hardware components all the way down to and including the terminal device. despite the technical difference, the terms are often used interchangeably. 
the terms " prosthetic " and " orthotic " are adjectives used to describe devices such as a prosthetic knee. the terms " prosthetics " and " orthotics " are used to describe the respective allied health fields. an occupational therapist ' s role in prosthetics include therapy, training and evaluations. prosthetic training includes orientation to prosthetics components and terminology, donning and doffing, wearing schedule, and how to care for residual limb and the prosthesis. = = = exoskeletons = = = a powered exoskeleton is a wearable mobile machine that is powered by a system of electric motors, pneumatics, levers, hydraulics, or a combination of technologies that allow for limb movement with increased strength and endurance. its design aims to provide back support, sense the user ' s motion, and send a signal to motors which manage the gears. the exoskeleton supports the shoulder, waist and thigh, and assists movement for lifting and holding heavy items, while lowering back stress. = = = adaptive seating and positioning = = = people with balance and motor function challenges often need specialized equipment to sit or stand safely and securely. this equipment is frequently specialized for specific settings such as in a classroom or nursing home. positioning is often important in seating arrangements to ensure that user ' s body pressure is distributed equally without inhibiting movement in a desired way. positioning devices have been developed to aid in allowing people to stand and bear weight on their legs without risk of a fall. the hun tian theory ), or as being without substance while the heavenly bodies float freely ( the hsuan yeh theory ), the earth was at all times flat, although perhaps bulging up slightly. the model of an egg was often used by chinese astronomers such as zhang heng ( 78 – 139 ad ) to describe the heavens as spherical : the heavens are like a hen ' s egg and as round as a crossbow bullet ; the earth is like the yolk of the egg, and lies in the centre. this analogy with a curved egg led some modern historians, notably joseph needham, to conjecture that chinese astronomers were, after all, aware of the earth ' s sphericity. the egg reference, however, was rather meant to clarify the relative position of the flat earth to the heavens : in a passage of zhang heng ' s cosmogony not translated by needham, zhang himself says : " heaven takes its body from the yang, so it is round and in motion. earth takes its body from the yin, so it is flat and quiescent ". the point of the egg analogy is simply to stress that the earth is completely enclosed by heaven, rather than merely covered from above as the kai tian describes. chinese astronomers, many of them brilliant men by any standards, continued to think in flat - earth terms until the seventeenth century ; this surprising fact might be the starting - point for a re - examination of the apparent facility with which the idea of a spherical earth found acceptance in fifth - century bc greece. further examples cited by needham supposed to demonstrate dissenting voices from the ancient chinese consensus actually refer without exception to the earth being square, not to it being flat. accordingly, the 13th - century scholar li ye, who argued that the movements of the round heaven would be hindered by a square earth, did not advocate a spherical earth, but rather that its edge should be rounded off so as to be circular. 
however, needham disagrees, affirming that li ye believed the earth to be spherical, similar in shape to the heavens but much smaller. this was preconceived by the 4th - century scholar yu xi, who argued for the infinity of outer space surrounding the earth and that the latter could be either square or round, in accordance to the shape of the heavens. when chinese geographers of the 17th century, influenced by european cartography and astronomy, showed the earth as a sphere that could be circumnavigated by sailing around the globe, they functions of the human body, if necessary, through the use of technology. modern medicine can replace several of the body ' s functions through the use of artificial organs and can significantly alter the function of the human body through artificial devices such as, for example, brain implants and pacemakers. the fields of bionics and medical bionics are dedicated to the study of synthetic implants pertaining to natural systems. conversely, some engineering disciplines view the human body as a biological machine worth studying and are dedicated to emulating many of its functions by replacing biology with technology. this has led to fields such as artificial intelligence, neural networks, fuzzy logic, and robotics. there are also substantial interdisciplinary interactions between engineering and medicine. both fields provide solutions to real world problems. this often requires moving forward before phenomena are completely understood in a more rigorous scientific sense and therefore experimentation and empirical knowledge is an integral part of both. medicine, in part, studies the function of the human body. the human body, as a biological machine, has many functions that can be modeled using engineering methods. the heart for example functions much like a pump, the skeleton is like a linked structure with levers, the brain produces electrical signals etc. these similarities as well as the increasing importance and application of engineering principles in medicine, led to the development of the field of biomedical engineering that uses concepts developed in both disciplines. newly emerging branches of science, such as systems biology, are adapting analytical tools traditionally used for engineering, such as systems modeling and computational analysis, to the description of biological systems. = = = art = = = there are connections between engineering and art, for example, architecture, landscape architecture and industrial design ( even to the extent that these disciplines may sometimes be included in a university ' s faculty of engineering ). the art institute of chicago, for instance, held an exhibition about the art of nasa ' s aerospace design. robert maillart ' s bridge design is perceived by some to have been deliberately artistic. at the university of south florida, an engineering professor, through a grant with the national science foundation, has developed a course that connects art and engineering. among famous historical figures, leonardo da vinci is a well - known renaissance artist and engineer, and a prime example of the nexus between art and engineering. = = = business = = = business engineering deals with the relationship between professional engineering, it systems, business administration and change management. engineering management or " management engineering " is a specialized field of management concerned with engineering practice or the engineering industry sector. 
the demand for management of three neutrons must be performed to enrich ( isolate ) uranium - 235. alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. ultimately, the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. a radiological weapon has never been deployed. while considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. there have been over 2, 000 nuclear tests conducted since 1945. in 1963, all nuclear and many non - nuclear states signed the limited test ban treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. the treaty permitted underground nuclear testing. france continued atmospheric testing until 1974, while china continued up until 1980. the last underground test by the united states was in 1992, the soviet union in 1990, the united kingdom in 1991, and both france and china continued testing until 1996. after signing the comprehensive test ban treaty in 1996 ( which had as of 2011 not entered into force ), all of these states have pledged to discontinue all nuclear testing. non - signatories india and pakistan last like it, assist physical therapists by providing task - specific practice of walking in people following neurological injury. = = = prosthesis = = = a prosthesis, prosthetic, or prosthetic limb is a device that replaces a missing body part. it is part of the field of biomechatronics, the science of using mechanical devices with human muscular, musculoskeletal, and nervous systems to assist or enhance motor control lost by trauma, disease, or defect. prostheses are typically used to replace parts lost by injury ( traumatic ) or missing from birth ( congenital ) or to supplement defective body parts. inside the body, artificial heart valves are in common use with artificial hearts and lungs seeing less common use but under active technology development. 
other medical devices and aids that can be considered prosthetics include hearing aids, artificial eyes, palatal obturator, gastric bands, and dentures. prostheses are specifically not orthoses, although given certain circumstances a prosthesis might end up performing some or all of the same functionary benefits as an orthosis. prostheses are technically the complete finished item. for instance, a c - leg knee alone is not a prosthesis, but only a prosthetic component. the complete prosthesis would consist of the attachment system to the residual limb – usually a " socket ", and all the attachment hardware components all the way down to and including the terminal device. despite the technical difference, the terms are often used interchangeably. the terms " prosthetic " and " orthotic " are adjectives used to describe devices such as a prosthetic knee. the terms " prosthetics " and " orthotics " are used to describe the respective allied health fields. an occupational therapist ' s role in prosthetics include therapy, training and evaluations. prosthetic training includes orientation to prosthetics components and terminology, donning and doffing, wearing schedule, and how to care for residual limb and the prosthesis. = = = exoskeletons = = = a powered exoskeleton is a wearable mobile machine that is powered by a system of electric motors, pneumatics, levers, hydraulics, or a combination of technologies that allow for limb movement with increased strength and endurance. its design aims to provide back support, sense the user ' s motion, and send a signal to motors which manage the metal hydrides have earlier been suggested for utilization in solar cells. with this as a motivation we have prepared thin films of yttrium hydride by reactive magnetron sputter deposition. the resulting films are metallic for low partial pressure of hydrogen during the deposition, and black or yellow - transparent for higher partial pressure of hydrogen. both metallic and semiconducting transparent yhx films have been prepared directly in - situ without the need of capping layers and post - deposition hydrogenation. optically the films are similar to what is found for yhx films prepared by other techniques, but the crystal structure of the transparent films differ from the well - known yh3 phase, as they have an fcc lattice instead of hcp. describe the heavens as spherical : the heavens are like a hen ' s egg and as round as a crossbow bullet ; the earth is like the yolk of the egg, and lies in the centre. this analogy with a curved egg led some modern historians, notably joseph needham, to conjecture that chinese astronomers were, after all, aware of the earth ' s sphericity. the egg reference, however, was rather meant to clarify the relative position of the flat earth to the heavens : in a passage of zhang heng ' s cosmogony not translated by needham, zhang himself says : " heaven takes its body from the yang, so it is round and in motion. earth takes its body from the yin, so it is flat and quiescent ". the point of the egg analogy is simply to stress that the earth is completely enclosed by heaven, rather than merely covered from above as the kai tian describes. chinese astronomers, many of them brilliant men by any standards, continued to think in flat - earth terms until the seventeenth century ; this surprising fact might be the starting - point for a re - examination of the apparent facility with which the idea of a spherical earth found acceptance in fifth - century bc greece. 
further examples cited by needham supposed to demonstrate dissenting voices from the ancient chinese consensus actually refer without exception to the earth being square, not to it being flat. accordingly, the 13th - century scholar li ye, who argued that the movements of the round heaven would be hindered by a square earth, did not advocate a spherical earth, but rather that its edge should be rounded off so as to be circular. however, needham disagrees, affirming that li ye believed the earth to be spherical, similar in shape to the heavens but much smaller. this was preconceived by the 4th - century scholar yu xi, who argued for the infinity of outer space surrounding the earth and that the latter could be either square or round, in accordance to the shape of the heavens. when chinese geographers of the 17th century, influenced by european cartography and astronomy, showed the earth as a sphere that could be circumnavigated by sailing around the globe, they did so with formulaic terminology previously used by zhang heng to describe the spherical shape of the sun and moon ( i. e. that they were as round as a crossbow bullet ). as noted in the book huainanzi, in the 2nd century bc, chinese astronomers effectively inverted eratosthenes ' calculation the influence of a neutrinoless electron to positron conversion on a cooling of strongly magnetized iron white dwarfs is studied. Question: What organ system in the human body contains the pituitary gland, hypothalamus gland, and thyroid gland? A) reproductive system B) excretory system C) endocrine system D) circulatory system
C) endocrine system
Context: organic compounds, such as sugars, to ammonia, metal ions or even hydrogen gas. salt - tolerant archaea ( the haloarchaea ) use sunlight as an energy source, and other species of archaea fix carbon, but unlike plants and cyanobacteria, no known species of archaea does both. archaea reproduce asexually by binary fission, fragmentation, or budding ; unlike bacteria, no known species of archaea form endospores. the first observed archaea were extremophiles, living in extreme environments, such as hot springs and salt lakes with no other organisms. improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. archaea are particularly numerous in the oceans, and the archaea in plankton may be one of the most abundant groups of organisms on the planet. archaea are a major part of earth ' s life. they are part of the microbiota of all organisms. in the human microbiome, they are important in the gut, mouth, and on the skin. their morphological, metabolic, and geographical diversity permits them to play multiple ecological roles : carbon fixation ; nitrogen cycling ; organic compound turnover ; and maintaining microbial symbiotic and syntrophic communities, for example. = = = eukaryotes = = = eukaryotes are hypothesized to have split from archaea, which was followed by their endosymbioses with bacteria ( or symbiogenesis ) that gave rise to mitochondria and chloroplasts, both of which are now part of modern - day eukaryotic cells. the major lineages of eukaryotes diversified in the precambrian about 1. 5 billion years ago and can be classified into eight major clades : alveolates, excavates, stramenopiles, plants, rhizarians, amoebozoans, fungi, and animals. five of these clades are collectively known as protists, which are mostly microscopic eukaryotic organisms that are not plants, fungi, or animals. while it is likely that protists share a common ancestor ( the last eukaryotic common ancestor ), protists by themselves do not constitute a separate clade as some protists may be more closely related to plants, fungi, or animals than they are to other protists. like groupings such as algae, species occupying the same geographical area at the same time. a biological interaction is the effect that a pair of organisms living together in a community have on each other. they can be either of the same species ( intraspecific interactions ), or of different species ( interspecific interactions ). these effects may be short - term, like pollination and predation, or long - term ; both often strongly influence the evolution of the species involved. a long - term interaction is called a symbiosis. symbioses range from mutualism, beneficial to both partners, to competition, harmful to both partners. every species participates as a consumer, resource, or both in consumer – resource interactions, which form the core of food chains or food webs. there are different trophic levels within any food web, with the lowest level being the primary producers ( or autotrophs ) such as plants and algae that convert energy and inorganic material into organic compounds, which can then be used by the rest of the community. at the next level are the heterotrophs, which are the species that obtain energy by breaking apart organic compounds from other organisms. heterotrophs that consume plants are primary consumers ( or herbivores ) whereas heterotrophs that consume herbivores are secondary consumers ( or carnivores ). 
and those that eat secondary consumers are tertiary consumers and so on. omnivorous heterotrophs are able to consume at multiple levels. finally, there are decomposers that feed on the waste products or dead bodies of organisms. on average, the total amount of energy incorporated into the biomass of a trophic level per unit of time is about one - tenth of the energy of the trophic level that it consumes. waste and dead material used by decomposers as well as heat lost from metabolism make up the other ninety percent of energy that is not consumed by the next trophic level. = = = biosphere = = = in the global ecosystem or biosphere, matter exists as different interacting compartments, which can be biotic or abiotic as well as accessible or inaccessible, depending on their forms and locations. for example, matter from terrestrial autotrophs are both biotic and accessible to other organisms whereas the matter in rocks and minerals are abiotic and inaccessible. a biogeochemical cycle is a pathway by which specific elements of matter are turned over or moved through the biotic ( biosphere ) and the abiotic ( lithos the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) – including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the ancestor of plants by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant cells. the new photosynthetic plants ( along with their algal relatives ) accelerated the rise in atmospheric oxygen started by the cyanobacteria, changing the ancient oxygen - free, reducing, atmosphere to one in which free oxygen has been abundant for more than 2 billion years. among the important botanical questions of the 21st century are the role of plants as primary producers in the global cycling of life ' s basic ingredients : energy, carbon, oxygen, nitrogen and water, and ways biology is the scientific study of life and living organisms. 
it is a broad natural science that encompasses a wide range of fields and unifying principles that explain the structure, function, growth, origin, evolution, and distribution of life. central to biology are five fundamental themes : the cell as the basic unit of life, genes and heredity as the basis of inheritance, evolution as the driver of biological diversity, energy transformation for sustaining life processes, and the maintenance of internal stability ( homeostasis ). biology examines life across multiple levels of organization, from molecules and cells to organisms, populations, and ecosystems. subdisciplines include molecular biology, physiology, ecology, evolutionary biology, developmental biology, and systematics, among others. each of these fields applies a range of methods to investigate biological phenomena, including observation, experimentation, and mathematical modeling. modern biology is grounded in the theory of evolution by natural selection, first articulated by charles darwin, and in the molecular understanding of genes encoded in dna. the discovery of the structure of dna and advances in molecular genetics have transformed many areas of biology, leading to applications in medicine, agriculture, biotechnology, and environmental science. life on earth is believed to have originated over 3. 7 billion years ago. today, it includes a vast diversity of organisms – from single - celled archaea and bacteria to complex multicellular plants, fungi, and animals. biologists classify organisms based on shared characteristics and evolutionary relationships, using taxonomic and phylogenetic frameworks. these organisms interact with each other and with their environments in ecosystems, where they play roles in energy flow and nutrient cycling. as a constantly evolving field, biology incorporates new discoveries and technologies that enhance the understanding of life and its processes, while contributing to solutions for challenges such as disease, climate change, and biodiversity loss. = = history = = the earliest of roots of science, which included medicine, can be traced to ancient egypt and mesopotamia in around 3000 to 1200 bce. their contributions shaped ancient greek natural philosophy. ancient greek philosophers such as aristotle ( 384 – 322 bce ) contributed extensively to the development of biological knowledge. he explored biological causation and the diversity of life. his successor, theophrastus, began the scientific study of plants. scholars of the medieval islamic world who wrote on biology included al - jahiz ( 781 – 869 ), al - dinawari ( 828 – 896 ), who wrote on botany, and rhazes ( 865 – 925 ) who wrote on anatomy and physiology. medicine was especially well and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. 
plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. other organisms form mutually beneficial relationships with plants. for example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. = = = plants, climate and environmental change = = = plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. for example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. estimates of atmospheric co2 concentrations since the palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. ozone depletion can expose plants to higher levels of ultraviolet radiation - b ( uv - b ), resulting in lower growth rates. moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. = = genetics = = inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. gregor mendel discovered the genetic laws of inheritance by studying and heredity as the basis of inheritance, evolution as the driver of biological diversity, energy transformation for sustaining life processes, and the maintenance of internal stability ( homeostasis ). biology examines life across multiple levels of organization, from molecules and cells to organisms, populations, and ecosystems. subdisciplines include molecular biology, physiology, ecology, evolutionary biology, developmental biology, and systematics, among others. each of these fields applies a range of methods to investigate biological phenomena, including observation, experimentation, and mathematical modeling. modern biology is grounded in the theory of evolution by natural selection, first articulated by charles darwin, and in the molecular understanding of genes encoded in dna. the discovery of the structure of dna and advances in molecular genetics have transformed many areas of biology, leading to applications in medicine, agriculture, biotechnology, and environmental science. life on earth is believed to have originated over 3. 7 billion years ago. today, it includes a vast diversity of organisms – from single - celled archaea and bacteria to complex multicellular plants, fungi, and animals. biologists classify organisms based on shared characteristics and evolutionary relationships, using taxonomic and phylogenetic frameworks. these organisms interact with each other and with their environments in ecosystems, where they play roles in energy flow and nutrient cycling. 
as a constantly evolving field, biology incorporates new discoveries and technologies that enhance the understanding of life and its processes, while contributing to solutions for challenges such as disease, climate change, and biodiversity loss. = = history = = the earliest of roots of science, which included medicine, can be traced to ancient egypt and mesopotamia in around 3000 to 1200 bce. their contributions shaped ancient greek natural philosophy. ancient greek philosophers such as aristotle ( 384 – 322 bce ) contributed extensively to the development of biological knowledge. he explored biological causation and the diversity of life. his successor, theophrastus, began the scientific study of plants. scholars of the medieval islamic world who wrote on biology included al - jahiz ( 781 – 869 ), al - dinawari ( 828 – 896 ), who wrote on botany, and rhazes ( 865 – 925 ) who wrote on anatomy and physiology. medicine was especially well studied by islamic scholars working in greek philosopher traditions, while natural history drew heavily on aristotelian thought. biology began to quickly develop with anton van leeuwenhoek ' s dramatic improvement of the microscope. it was then that scholars discovered spermatozoa, bacteria, infusoria and the diversity of microscopic ? if the latter, an important question is how the internal experiences of others can be measured. self - reports of feelings and beliefs may not be reliable because, even in cases in which there is no apparent incentive for subjects to intentionally deceive in their answers, self - deception or selective memory may affect their responses. then even in the case of accurate self - reports, how can responses be compared across individuals? even if two individuals respond with the same answer on a likert scale, they may be experiencing very different things. other issues in philosophy of psychology are philosophical questions about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as part of cognitive science, or philosophy of mind. for example, are humans rational creatures? is there any sense in which they have free will, and how does that relate to the experience of making choices? philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, psycholinguistics, and artificial intelligence, questioning what they can and cannot explain in psychology. philosophy of psychology is a relatively young field, because psychology only became a discipline of its own in the late 1800s. in particular, neurophilosophy has just recently become its own field with the works of paul churchland and patricia churchland. philosophy of mind, by contrast, has been a well - established discipline since before psychology was a field of study at all. it is concerned with questions about the very nature of mind, the qualities of experience, and particular issues like the debate between dualism and monism. = = = philosophy of social science = = = the philosophy of social science is the study of the logic and method of the social sciences, such as sociology and cultural anthropology. philosophers of social science are concerned with the differences and similarities between the social and the natural sciences, causal relationships between social phenomena, the possible existence of social laws, and the ontological significance of structure and agency. 
the french philosopher, auguste comte ( 1798 – 1857 ), established the epistemological perspective of positivism in the course in positivist philosophy, a series of texts published between 1830 and 1842. the first three volumes of the course dealt chiefly with the natural sciences already in existence ( geoscience, astronomy, physics, chemistry, biology ), whereas the latter two emphasised the inevitable coming of social science : " sociologie ". for comte, the natural sciences had to necessarily arrive first, before humanity could adequately channel its efforts into the most challenging and complex " queen science " of human society the structural components of cells. as a by - product of photosynthesis, plants release oxygen into the atmosphere, a gas that is required by nearly all living things to carry out cellular respiration. in addition, they are influential in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. plants are crucial to the future of human society as they provide food, oxygen, biochemicals, and products for people, as well as creating and preserving soil. historically, all living things were classified as either animals or plants and botany covered the study of all organisms not considered animals. botanists examine both the internal functions and processes within plant organelles, cells, tissues, whole plants, plant populations and plant communities. at each of these levels, a botanist may be concerned with the classification ( taxonomy ), phylogeny and evolution, structure ( anatomy and morphology ), or function ( physiology ) of plant life. the strictest definition of " plant " includes only the " land plants " or embryophytes, which include seed plants ( gymnosperms, including the pines, and flowering plants ) and the free - sporing cryptogams including ferns, clubmosses, liverworts, hornworts and mosses. embryophytes are multicellular eukaryotes descended from an ancestor that obtained its energy from sunlight by photosynthesis. they have life cycles with alternating haploid and diploid phases. the sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte itself is nurtured by its parent sporophyte. other groups of organisms that were previously studied by botanists include bacteria ( now studied in bacteriology ), fungi ( mycology ) – including lichen - forming fungi ( lichenology ), non - chlorophyte algae ( phycology ), and viruses ( virology ). however, attention is still given to these groups by botanists, and fungi ( including lichens ) and photosynthetic protists are usually covered in introductory botany courses. palaeobotanists study ancient plants in the fossil record to provide information about the evolutionary history of plants. cyanobacteria, the first oxygen - releasing photosynthetic organisms on earth, are thought to have given rise to the as medical hardware, plastics, tubes for gas - pipelines, hoses for floor - heating, shrink - foils for food packaging, automobile parts, wires and cables ( isolation ), tires, and even gemstones. compared to the amount of food irradiated, the volume of those every - day applications is huge but not noticed by the consumer. the genuine effect of processing food by ionizing radiation relates to damages to the dna, the basic genetic information for life. 
microorganisms can no longer proliferate and continue their malignant or pathogenic activities. spoilage - causing micro - organisms cannot continue their activities. insects do not survive or become incapable of procreation. plants cannot continue the natural ripening or aging process. all these effects are beneficial to the consumer and the food industry alike. the amount of energy imparted for effective food irradiation is low compared to cooking the same food ; even at a typical dose of 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 °c ( 4. 5 °f ) – a worked estimate of this warming is sketched below. the special feature of processing food by ionizing radiation is that the energy density per atomic transition is very high : it can cleave molecules and induce ionization ( hence the name ), which cannot be achieved by mere heating. this is the reason for new beneficial effects, but at the same time for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation and bad nvh qualities. the nvh engineer works to either eliminate bad nvh or change the " bad nvh " to good ( i. e., exhaust tones ). vehicle electronics : automotive electronics is an increasingly important aspect of automotive engineering. modern vehicles employ dozens of electronic systems. these systems are responsible for operational controls such as the throttle, brake and steering controls ; as well as many comfort - and - convenience systems such as the hvac, infotainment, and lighting systems. it would not be possible for automobiles to meet modern safety and fuel - economy requirements without electronic controls. performance : performance is a measurable and testable value of a vehicle ' s ability to perform in various conditions. performance can be considered in a wide variety of tasks, but it generally considers how quickly a car can accelerate ( e. g. standing start 1 / 4 mile elapsed time, 0 – 60 mph, etc. ), its top speed, how short and quickly a car can come to a complete stop from a set speed ( e. g. 70 - 0 mph ), how much g - force a car can generate without losing grip, recorded lap - times, cornering speed, brake fade, etc. performance can also reflect the amount of control in inclement weather ( snow, ice, rain ). shift quality : shift quality is the driver ' s perception of the vehicle ' s response to an automatic transmission shift event. 
this is influenced by the powertrain ( internal combustion engine, transmission ), and the vehicle ( driveline, suspension, engine and powertrain mounts, etc. ) shift feel is both a tactile ( felt ) and audible ( heard ) response of the vehicle. shift quality is experienced as various events : transmission shifts are felt as an upshift at acceleration ( 1 – 2 ), or a downshift maneuver in passing ( 4 – 2 ). shift engagements of the vehicle are also evaluated, as in park to reverse, etc. durability / corrosion engineering : durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. tests include mileage accumulation, severe driving conditions, and corrosive salt baths. drivability : drivability is the vehicle ' s response to general driving conditions. cold starts and stalls, rpm dips, idle response, launch hesitations and stumbles, and performance levels all contribute to the overall drivability of any given vehicle. cost : the cost of a vehicle program is typically split into the effect Question: Which term describes an organism's ability to maintain a stable internal environment? A) reproduction B) extinction C) locomotion D) regulation
D) regulation
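The roughly ten - percent energy transfer between trophic levels described in the ecology passage above can be made concrete with a short calculation. The sketch below is illustrative only : the 0.10 transfer efficiency is the rule of thumb quoted in the passage, while the starting figure of 10,000 energy units for primary producers and the function name are assumptions made for this example.

# illustrative sketch of the ~10% energy-transfer rule between trophic levels
# (the 0.10 efficiency and the starting value are assumptions for the example)

def energy_by_trophic_level(producer_energy, efficiency=0.10, levels=4):
    """Return the energy reaching each trophic level, starting from primary producers."""
    energies = [producer_energy]
    for _ in range(levels - 1):
        # roughly 90% is lost as heat, waste and dead material at each step
        energies.append(energies[-1] * efficiency)
    return energies

names = ["primary producers", "primary consumers", "secondary consumers", "tertiary consumers"]
for name, e in zip(names, energy_by_trophic_level(10_000.0)):
    print(f"{name}: {e:,.0f} energy units")
# prints 10,000 -> 1,000 -> 100 -> 10, i.e. each level keeps about one-tenth of the previous one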
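The warming figure quoted in the food - irradiation passage above ( about 2.5 °C for a 10 kGy dose in water - like food ) follows directly from the definition of the gray, 1 Gy = 1 J of absorbed energy per kg. A minimal worked estimate, assuming the specific heat of liquid water ( about 4186 J per kg per °C ) :

# worked estimate of the temperature rise from a 10 kGy irradiation dose,
# treating the food as water (an assumption the passage itself makes)
dose_gy = 10_000.0          # 10 kGy expressed in gray = joules absorbed per kilogram
c_water = 4186.0            # specific heat of water, J/(kg * °C)
delta_t_c = dose_gy / c_water
delta_t_f = delta_t_c * 9.0 / 5.0
print(f"temperature rise: {delta_t_c:.1f} °C ({delta_t_f:.1f} °F)")
# prints roughly 2.4 °C (4.3 °F), consistent with the ~2.5 °C figure quoted in the passage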
Context: monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. mouse hybridomas, cells fused together to create monoclonal antibodies, have been adapted through genetic engineering to create human monoclonal antibodies. genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences. genetic engineering is also used to create animal models of human diseases. genetically modified mice are the most common genetically engineered animal model. they have been used to study and model cancer ( the oncomouse ), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging and parkinson disease. potential cures can be tested against these mouse models. gene therapy is the genetic engineering of humans, generally by replacing defective genes with effective ones. clinical research using somatic gene therapy has been conducted with several diseases, including x - linked scid, chronic lymphocytic leukemia ( cll ), and parkinson ' s disease. in 2012, alipogene tiparvovec became the first gene therapy treatment to be approved for clinical use. in 2015 a virus was used to insert a healthy gene into the skin cells of a boy suffering from a rare skin disease, epidermolysis bullosa, in order to grow, and then graft healthy skin onto 80 percent of the boy ' s body which was affected by the illness. germline gene therapy would result in any change being inheritable, which has raised concerns within the scientific community. in 2015, crispr was used to edit the dna of non - viable human embryos, leading scientists of major world academies to call for a moratorium on inheritable human genome edits. there are also concerns that the technology could be used not just for treatment, but for enhancement, modification or alteration of a human beings ' appearance, adaptability, intelligence, character or behavior. the distinction between cure and enhancement can also be difficult to establish. in november 2018, he jiankui announced that he had edited the genomes of two human embryos, to attempt to disable the ccr5 gene, which codes for a receptor that hiv uses to enter cells. the work was widely condemned as unethical, dangerous, and premature. currently, germline modification is banned in 40 countries. scientists that do this type of research will often let embryos grow for a few days without allowing it to develop into a baby. researchers are altering the genome of pigs to induce the growth of human organs, with the aim of increasing the success of autologous in nature, and can be used in a myriad of ways, from helping repair skeletal tissue to replenishing beta cells in diabetic patients. allogenic : cells are obtained from the body of a donor of the same species as the recipient. while there are some ethical constraints to the use of human cells for in vitro studies ( i. e. human brain tissue chimera development ), the employment of dermal fibroblasts from human foreskin demonstrates an immunologically safe and thus a viable choice for allogenic tissue engineering of the skin. xenogenic : these cells are derived isolated cells from alternate species from the recipient. a notable example of xenogeneic tissue utilization is cardiovascular implant construction via animal cells. chimeric human - animal farming raises ethical concerns around the potential for improved consciousness from implanting human organs in animals. syngeneic or isogenic : these cells describe those borne from identical genetic code. 
this imparts an immunologic benefit similar to autologous cell lines ( see above ). autologous cells can be considered syngenic, but the classification also extends to non - autologously derived cells such as those from an identical twin, from genetically identical ( cloned ) research models, or induced stem cells ( isc ) as related to the donor. = = = stem cells = = = stem cells are undifferentiated cells with the ability to divide in culture and give rise to different forms of specialized cells. stem cells are divided into " adult " and " embryonic " stem cells according to their source. while there is still a large ethical debate related to the use of embryonic stem cells, it is thought that another alternative source – induced pluripotent stem cells – may be useful for the repair of diseased or damaged tissues, or may be used to grow new organs. totipotent cells are stem cells which can divide into further stem cells or differentiate into any cell type in the body, including extra - embryonic tissue. pluripotent cells are stem cells which can differentiate into any cell type in the body except extra - embryonic tissue. induced pluripotent stem cells ( ipscs ) are subclass of pluripotent stem cells resembling embryonic stem cells ( escs ) that have been derived from adult differentiated cells. ipscs are created by altering the expression of transcriptional factors in adult cells until they become like embryonic stem cells. multipotent stem cells can be differentiated into any cell s immune system recognizes these re - implanted cells as its own, and does not target them for attack. autologous cell dependence on host cell health and donor site morbidity may be deterrents to their use. adipose - derived and bone marrow - derived mesenchymal stem cells are commonly autologous in nature, and can be used in a myriad of ways, from helping repair skeletal tissue to replenishing beta cells in diabetic patients. allogenic : cells are obtained from the body of a donor of the same species as the recipient. while there are some ethical constraints to the use of human cells for in vitro studies ( i. e. human brain tissue chimera development ), the employment of dermal fibroblasts from human foreskin demonstrates an immunologically safe and thus a viable choice for allogenic tissue engineering of the skin. xenogenic : these cells are derived isolated cells from alternate species from the recipient. a notable example of xenogeneic tissue utilization is cardiovascular implant construction via animal cells. chimeric human - animal farming raises ethical concerns around the potential for improved consciousness from implanting human organs in animals. syngeneic or isogenic : these cells describe those borne from identical genetic code. this imparts an immunologic benefit similar to autologous cell lines ( see above ). autologous cells can be considered syngenic, but the classification also extends to non - autologously derived cells such as those from an identical twin, from genetically identical ( cloned ) research models, or induced stem cells ( isc ) as related to the donor. = = = stem cells = = = stem cells are undifferentiated cells with the ability to divide in culture and give rise to different forms of specialized cells. stem cells are divided into " adult " and " embryonic " stem cells according to their source. 
while there is still a large ethical debate related to the use of embryonic stem cells, it is thought that another alternative source – induced pluripotent stem cells – may be useful for the repair of diseased or damaged tissues, or may be used to grow new organs. totipotent cells are stem cells which can divide into further stem cells or differentiate into any cell type in the body, including extra - embryonic tissue. pluripotent cells are stem cells which can differentiate into any cell type in the body except extra - embryonic tissue. induced pluripotent stem cells ( ipscs ) ##ry. immunology is the study of the immune system, which includes the innate and adaptive immune system in humans, for example. lifestyle medicine is the study of the chronic conditions, and how to prevent, treat and reverse them. medical physics is the study of the applications of physics principles in medicine. microbiology is the study of microorganisms, including protozoa, bacteria, fungi, and viruses. molecular biology is the study of molecular underpinnings of the process of replication, transcription and translation of the genetic material. neuroscience includes those disciplines of science that are related to the study of the nervous system. a main focus of neuroscience is the biology and physiology of the human brain and spinal cord. some related clinical specialties include neurology, neurosurgery and psychiatry. nutrition science ( theoretical focus ) and dietetics ( practical focus ) is the study of the relationship of food and drink to health and disease, especially in determining an optimal diet. medical nutrition therapy is done by dietitians and is prescribed for diabetes, cardiovascular diseases, weight and eating disorders, allergies, malnutrition, and neoplastic diseases. pathology as a science is the study of disease – the causes, course, progression and resolution thereof. pharmacology is the study of drugs and their actions. photobiology is the study of the interactions between non - ionizing radiation and living organisms. physiology is the study of the normal functioning of the body and the underlying regulatory mechanisms. radiobiology is the study of the interactions between ionizing radiation and living organisms. toxicology is the study of hazardous effects of drugs and poisons. = = = specialties = = = in the broadest meaning of " medicine ", there are many different specialties. in the uk, most specialities have their own body or college, which has its own entrance examination. these are collectively known as the royal colleges, although not all currently use the term " royal ". the development of a speciality is often driven by new technology ( such as the development of effective anaesthetics ) or ways of working ( such as emergency departments ) ; the new specialty leads to the formation of a unifying body of doctors and the prestige of administering their own examination. within medical circles, specialities usually fit into one of two broad categories : " medicine " and " surgery ". " medicine " refers to the practice of non - operative medicine, and most of its subspecialties require preliminary training in internal medicine. in the uk of cells = = = autologous : the donor and the recipient of the cells are the same individual. cells are harvested, cultured or stored, and then reintroduced to the host. as a result of the host ' s own cells being reintroduced, an antigenic response is not elicited. 
the body ' s immune system recognizes these re - implanted cells as its own, and does not target them for attack. autologous cell dependence on host cell health and donor site morbidity may be deterrents to their use. adipose - derived and bone marrow - derived mesenchymal stem cells are commonly autologous in nature, and can be used in a myriad of ways, from helping repair skeletal tissue to replenishing beta cells in diabetic patients. allogenic : cells are obtained from the body of a donor of the same species as the recipient. while there are some ethical constraints to the use of human cells for in vitro studies ( i. e. human brain tissue chimera development ), the employment of dermal fibroblasts from human foreskin demonstrates an immunologically safe and thus a viable choice for allogenic tissue engineering of the skin. xenogenic : these cells are derived isolated cells from alternate species from the recipient. a notable example of xenogeneic tissue utilization is cardiovascular implant construction via animal cells. chimeric human - animal farming raises ethical concerns around the potential for improved consciousness from implanting human organs in animals. syngeneic or isogenic : these cells describe those borne from identical genetic code. this imparts an immunologic benefit similar to autologous cell lines ( see above ). autologous cells can be considered syngenic, but the classification also extends to non - autologously derived cells such as those from an identical twin, from genetically identical ( cloned ) research models, or induced stem cells ( isc ) as related to the donor. = = = stem cells = = = stem cells are undifferentiated cells with the ability to divide in culture and give rise to different forms of specialized cells. stem cells are divided into " adult " and " embryonic " stem cells according to their source. while there is still a large ethical debate related to the use of embryonic stem cells, it is thought that another alternative source – induced pluripotent stem cells – may be useful for the repair of diseased or damaged tissues, or may be used to grow new organs. totipotent cells . long - term memory allows us to store information over prolonged periods ( days, weeks, years ). we do not yet know the practical limit of long - term memory capacity. short - term memory allows us to store information over short time scales ( seconds or minutes ). memory is also often grouped into declarative and procedural forms. declarative memory β€” grouped into subsets of semantic and episodic forms of memory β€” refers to our memory for facts and specific knowledge, specific meanings, and specific experiences ( e. g. " are apples food? ", or " what did i eat for breakfast four days ago? " ). procedural memory allows us to remember actions and motor sequences ( e. g. how to ride a bicycle ) and is often dubbed implicit knowledge or memory. cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and the interrelationship between cognition and memory. one example of this could be, what mental processes does a person go through to retrieve a long - lost memory? or, what differentiates between the cognitive process of recognition ( seeing hints of something before remembering it, or memory in context ) and recall ( retrieving a memory, as in " fill - in - the - blank " )? = = = perception and action = = = perception is the ability to take in information via the senses, and process it in some way. 
vision and hearing are two dominant senses that allow us to perceive the environment. some questions in the study of visual perception, for example, include : ( 1 ) how are we able to recognize objects?, ( 2 ) why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? one tool for studying visual perception is by looking at how people process optical illusions. the image on the right of a necker cube is an example of a bistable percept, that is, the cube can be interpreted as being oriented in two different directions. the study of haptic ( tactile ), olfactory, and gustatory stimuli also fall into the domain of perception. action is taken to refer to the output of a system. in humans, this is accomplished through motor responses. spatial planning and movement, speech production, and complex motor movements are all aspects of action. = = = consciousness = = = = = research methods = = many different methodologies are used to study cognitive science. as the field is highly interdisciplinary, research often cuts across include the manufacturing of drugs, creation of model animals that mimic human conditions and gene therapy. one of the earliest uses of genetic engineering was to mass - produce human insulin in bacteria. this application has now been applied to human growth hormones, follicle stimulating hormones ( for treating infertility ), human albumin, monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. mouse hybridomas, cells fused together to create monoclonal antibodies, have been adapted through genetic engineering to create human monoclonal antibodies. genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences. genetic engineering is also used to create animal models of human diseases. genetically modified mice are the most common genetically engineered animal model. they have been used to study and model cancer ( the oncomouse ), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging and parkinson disease. potential cures can be tested against these mouse models. gene therapy is the genetic engineering of humans, generally by replacing defective genes with effective ones. clinical research using somatic gene therapy has been conducted with several diseases, including x - linked scid, chronic lymphocytic leukemia ( cll ), and parkinson ' s disease. in 2012, alipogene tiparvovec became the first gene therapy treatment to be approved for clinical use. in 2015 a virus was used to insert a healthy gene into the skin cells of a boy suffering from a rare skin disease, epidermolysis bullosa, in order to grow, and then graft healthy skin onto 80 percent of the boy ' s body which was affected by the illness. germline gene therapy would result in any change being inheritable, which has raised concerns within the scientific community. in 2015, crispr was used to edit the dna of non - viable human embryos, leading scientists of major world academies to call for a moratorium on inheritable human genome edits. there are also concerns that the technology could be used not just for treatment, but for enhancement, modification or alteration of a human beings ' appearance, adaptability, intelligence, character or behavior. the distinction between cure and enhancement can also be difficult to establish. 
in november 2018, he jiankui announced that he had edited the genomes of two human embryos, to attempt to disable the ccr5 gene, which codes for a receptor that hiv uses to enter cells. the work was widely condemned as unethical, dangerous, ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, spinal cord and peripheral nerves ) psychiatric ( orientation, mental state, mood, evidence of abnormal perception or thought ). respiratory ( large airways and lungs ) skin vital signs including height, weight, body temperature, blood pressure, pulse, respiration rate, and hemoglobin oxygen saturation it is to likely focus on areas of interest highlighted in the medical history and may not include everything listed above. the treatment plan may include ordering additional medical laboratory tests and medical imaging studies, starting therapy, referral to a specialist, or watchful observation. a follow - up may be advised. depending upon the health insurance plan and the managed care system listing of diseases in the family that may impact the patient. a family tree is sometimes used. history of present illness ( hpi ) : the chronological order of events of symptoms and further clarification of each symptom. distinguishable from history of previous illness, often called past medical history ( pmh ). medical history comprises hpi and pmh. medications ( rx ) : what drugs the patient takes including prescribed, over - the - counter, and home remedies, as well as alternative and herbal medicines or remedies. allergies are also recorded. past medical history ( pmh / pmhx ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. 
review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, pallor or clubbing ) genitalia ( and pregnancy if the patient is or could be pregnant ) head, eye, ear, nose, and throat ( heent ) musculoskeletal ( including spine and extremities ) neurological ( consciousness, awareness, brain, vision, cranial nerves, ) : the reason for the current medical visit. these are the symptoms. they are in the patient ' s own words and are recorded along with the duration of each one. also called chief concern or presenting complaint. current activity : occupation, hobbies, what the patient actually does. family history ( fh ) : listing of diseases in the family that may impact the patient. a family tree is sometimes used. history of present illness ( hpi ) : the chronological order of events of symptoms and further clarification of each symptom. distinguishable from history of previous illness, often called past medical history ( pmh ). medical history comprises hpi and pmh. medications ( rx ) : what drugs the patient takes including prescribed, over - the - counter, and home remedies, as well as alternative and herbal medicines or remedies. allergies are also recorded. past medical history ( pmh / pmhx ) : concurrent medical problems, past hospitalizations and operations, injuries, past infectious diseases or vaccinations, history of known allergies. review of systems ( ros ) or systems inquiry : a set of additional questions to ask, which may be missed on hpi : a general enquiry ( have you noticed any weight loss, change in sleep quality, fevers, lumps and bumps? etc. ), followed by questions on the body ' s main organ systems ( heart, lungs, digestive tract, urinary tract, etc. ). social history ( sh ) : birthplace, residences, marital history, social and economic status, habits ( including diet, medications, tobacco, alcohol ). the physical examination is the examination of the patient for medical signs of disease that are objective and observable, in contrast to symptoms that are volunteered by the patient and are not necessarily objectively observable. the healthcare provider uses sight, hearing, touch, and sometimes smell ( e. g., in infection, uremia, diabetic ketoacidosis ). 
four actions are the basis of physical examination : inspection, palpation ( feel ), percussion ( tap to determine resonance characteristics ), and auscultation ( listen ), generally in that order, although auscultation occurs prior to percussion and palpation for abdominal assessments. the clinical examination involves the study of : abdomen and rectum cardiovascular ( heart and blood vessels ) general appearance of the patient and specific indicators of disease ( nutritional status, presence of jaundice, Question: Which of the following can provide the human body with long-term immunity against some diseases? A) antibiotics B) vitamins C) vaccines D) red blood cells
C) vaccines
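The history - taking components listed in the passage above ( chief complaint, history of present illness, past medical history, medications and allergies, family history, social history, review of systems ) describe a simple structured record. Below is a minimal sketch of one way such a record could be represented ; the class name, field layout and example values are illustrative assumptions, not part of the source.

from dataclasses import dataclass, field

@dataclass
class MedicalHistory:
    # field names mirror the components listed in the passage; values here are illustrative only
    chief_complaint: str                                       # cc : reason for the visit, in the patient's own words
    history_of_present_illness: str                            # hpi : chronological account of the current symptoms
    past_medical_history: list = field(default_factory=list)   # pmh : prior problems, operations, vaccinations
    medications: list = field(default_factory=list)            # rx : prescribed, over-the-counter, herbal remedies
    allergies: list = field(default_factory=list)
    family_history: list = field(default_factory=list)         # fh : diseases in the family that may impact the patient
    social_history: dict = field(default_factory=dict)         # sh : occupation, habits, diet, tobacco, alcohol
    review_of_systems: dict = field(default_factory=dict)      # ros : organ-system screening questions

# hypothetical example record
example = MedicalHistory(
    chief_complaint="cough for two weeks",
    history_of_present_illness="dry cough, worse at night, no fever",
    medications=["no regular medications"],
    social_history={"tobacco": "never", "occupation": "teacher"},
)
print(example.chief_complaint)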
Context: best - known and controversial applications of genetic engineering is the creation and use of genetically modified crops or genetically modified livestock to produce genetically modified food. crops have been developed to increase production, increase tolerance to abiotic stresses, alter the composition of the food, or to produce novel products. the first crops to be released commercially on a large scale provided protection from insect pests or tolerance to herbicides. fungal and virus resistant crops have also been developed or are in development. this makes the insect and weed management of crops easier and can indirectly increase crop yield. gm crops that directly improve yield by accelerating growth or making the plant more hardy ( by improving salt, cold or drought tolerance ) are also under development. in 2016 salmon have been genetically modified with growth hormones to reach normal adult size much faster. gmos have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. the amflora potato produces a more industrially useful blend of starches. soybeans and canola have been genetically modified to produce more healthy oils. the first commercialised gm food was a tomato that had delayed ripening, increasing its shelf life. plants and animals have been engineered to produce materials they do not normally make. pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves ; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the fda approved a drug produced in goat milk. = = = other applications = = = genetic engineering has potential applications in conservation and natural area management. gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. with the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice. genetic engineering is also being used to create microbial art. some bacteria have been genetically engineered to create black and white photographs. novelty items such as lavender - colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering. = = regulation = = the regulation of genetic engineering phenotypic analysis. the new genetic material can be inserted randomly within the host genome or targeted to a specific location. the technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. this tends to occur at a relatively low frequency in plants and animals and generally requires the use of selectable markers. the frequency of gene targeting can be greatly enhanced through genome editing. 
genome editing uses artificially engineered nucleases that create specific double - stranded breaks at desired locations in the genome, and use the cell ' s endogenous mechanisms to repair the induced break by the natural processes of homologous recombination and nonhomologous end - joining. there are four families of engineered nucleases : meganucleases, zinc finger nucleases, transcription activator - like effector nucleases ( talens ), and the cas9 - guiderna system ( adapted from crispr ). talen and crispr are the two most commonly used and each has its own advantages. talens have greater target specificity, while crispr is easier to design and more efficient. in addition to enhancing gene targeting, engineered nucleases can be used to introduce mutations at endogenous genes that generate a gene knockout. = = applications = = genetic engineering has applications in medicine, research, industry and agriculture and can be used on a wide range of plants, animals and microorganisms. bacteria, the first organisms to be genetically modified, can have plasmid dna inserted containing new genes that code for medicines or enzymes that process food and other substrates. plants have been modified for insect protection, herbicide resistance, virus resistance, enhanced nutrition, tolerance to environmental pressures and the production of edible vaccines. most commercialised gmos are insect resistant or herbicide tolerant crop plants. genetically modified animals have been used for research, model animals and the production of agricultural or pharmaceutical products. the genetically modified animals include animals with genes knocked out, increased susceptibility to disease, hormones for extra growth and the ability to express proteins in their milk. = = = medicine = = = genetic engineering has many applications to medicine that include the manufacturing of drugs, creation of model animals that mimic human conditions and gene therapy. one of the earliest uses of genetic engineering was to mass - produce human insulin in bacteria. this application has now been applied to human growth hormones, follicle stimulating hormones ( for treating infertility ), human albumin, genetic engineering takes the gene directly from one organism and delivers it to the other. this is much faster, can be used to insert any genes from any organism ( even ones from different domains ) and prevents other undesirable genes from also being added. genetic engineering could potentially fix severe genetic disorders in humans by replacing the defective gene with a functioning one. it is an important tool in research that allows the function of specific genes to be studied. drugs, vaccines and other products have been harvested from organisms engineered to produce them. crops have been developed that aid food security by increasing yield, nutritional value and tolerance to environmental stresses. the dna can be introduced directly into the host organism or into a cell that is then fused or hybridised with the host. this relies on recombinant nucleic acid techniques to form new combinations of heritable genetic material followed by the incorporation of that material either indirectly through a vector system or directly through micro - injection, macro - injection or micro - encapsulation. 
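as a concrete illustration of the cas9 - guide rna targeting described above, the short sketch below scans a dna string for candidate 20 - nucleotide protospacers that sit immediately upstream of an "ngg" pam, the motif recognised by spcas9. this is a minimal sketch only ; the sequence, function name and guide length are illustrative assumptions, not taken from the passage.

```python
# minimal sketch: locate candidate spcas9 target sites in a dna string.
# spcas9 requires a 20-nt protospacer immediately followed by an "ngg" pam,
# so candidates are simply 20-mers whose next three bases match N-G-G.

def find_spcas9_targets(dna: str, guide_len: int = 20):
    """Return (start, protospacer, pam) tuples for NGG PAM sites on one strand."""
    dna = dna.upper()
    hits = []
    for i in range(len(dna) - guide_len - 2):
        pam = dna[i + guide_len : i + guide_len + 3]
        if pam[1:] == "GG":  # "NGG": any base, then two guanines
            hits.append((i, dna[i : i + guide_len], pam))
    return hits


if __name__ == "__main__":
    seq = "TTACGATCGATCGGCTAGCTAGGCTAACGGTTGCATCGATCAGG"  # hypothetical locus
    for start, protospacer, pam in find_spcas9_targets(seq):
        print(f"protospacer at {start}: {protospacer}  pam: {pam}")
```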
genetic engineering does not normally include traditional breeding, in vitro fertilisation, induction of polyploidy, mutagenesis and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process. however, some broad definitions of genetic engineering include selective breeding. cloning and stem cell research, although not considered genetic engineering, are closely related and genetic engineering can be used within them. synthetic biology is an emerging discipline that takes genetic engineering a step further by introducing artificially synthesised material into an organism. plants, animals or microorganisms that have been changed through genetic engineering are termed genetically modified organisms or gmos. if genetic material from another species is added to the host, the resulting organism is called transgenic. if genetic material from the same species or a species that can naturally breed with the host is used the resulting organism is called cisgenic. if genetic engineering is used to remove genetic material from the target organism the resulting organism is termed a knockout organism. in europe genetic modification is synonymous with genetic engineering while within the united states of america and canada genetic modification can also be used to refer to more conventional breeding methods. = = history = = humans have altered the genomes of species for thousands of years through selective breeding, or artificial selection : 1 : 1 as contrasted with natural selection. more recently, mutation breeding has used exposure to chemicals or radiation to produce a high frequency of random mutations, for selective breeding purposes. genetic engineering as the direct manipulation of dna by humans outside breeding and new crop traits as well as a far greater control over a food ' s genetic structure than previously afforded by methods such as selective breeding and mutation breeding. commercial sale of genetically modified foods began in 1994, when calgene first marketed its flavr savr delayed ripening tomato. to date most genetic modification of foods have primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. these have been engineered for resistance to pathogens and herbicides and better nutrient profiles. gm livestock have also been experimentally developed ; in november 2013 none were available on the market, but in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. 
however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in animal agriculture. additionally, agricultural biotechnology can expedite breeding processes in order to yield faster results and provide greater quantities of food. transgenic biofortification in cereals has been considered as a promising method to combat malnutrition in india and other countries. = = = industrial = = = industrial biotechnology ( known mainly in europe as white biotechnology ) is the application of biotechnology for industrial purposes, including industrial fermentation. it includes the practice of using cells such as microorganisms, or components of cells like enzymes, to generate industrially useful products in sectors such as chemicals, food and feed, detergents, paper for natural scientists, with the creation of transgenic organisms one of the most important tools for analysis of gene function. genes and other genetic information from a wide range of organisms can be inserted into bacteria for storage and modification, creating genetically modified bacteria in the process. bacteria are cheap, easy to grow, clonal, multiply quickly, relatively easy to transform and can be stored at - 80 °c almost indefinitely. once a gene is isolated it can be stored inside the bacteria providing an unlimited supply for research. organisms are genetically engineered to discover the functions of certain genes. this could be the effect on the phenotype of the organism, where the gene is expressed or what other genes it interacts with. these experiments generally involve loss of function, gain of function, tracking and expression. loss of function experiments, such as in a gene knockout experiment, in which an organism is engineered to lack the activity of one or more genes. in a simple knockout a copy of the desired gene has been altered to make it non - functional. embryonic stem cells incorporate the altered gene, which replaces the already present functional copy. these stem cells are injected into blastocysts, which are implanted into surrogate mothers. this allows the experimenter to analyse the defects caused by this mutation and thereby determine the role of particular genes. it is used especially frequently in developmental biology. when this is done by creating a library of genes with point mutations at every position in the area of interest, or even every position in the whole gene, this is called " scanning mutagenesis ". the simplest method, and the first to be used, is " alanine scanning ", where every position in turn is mutated to the unreactive amino acid alanine. gain of function experiments, the logical counterpart of knockouts. these are sometimes performed in conjunction with knockout experiments to more finely establish the function of the desired gene.
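the " alanine scanning " approach mentioned above lends itself to a very small worked example : every position of a protein sequence is replaced, one at a time, by alanine, producing the library of single - substitution variants that would then be tested for loss of activity. the sketch below is a minimal, hypothetical illustration ; the peptide and the function name are invented for the example.

```python
# minimal sketch of alanine scanning: each residue of a (hypothetical) peptide is
# replaced in turn by alanine ("A"), yielding the single-point variant library.

def alanine_scan(protein: str):
    """Yield (position, variant) pairs with one residue at a time replaced by alanine."""
    protein = protein.upper()
    for i, residue in enumerate(protein):
        if residue == "A":
            continue  # this position is already alanine, so there is nothing to scan
        yield i, protein[:i] + "A" + protein[i + 1:]


if __name__ == "__main__":
    wild_type = "MKTFLVE"  # hypothetical peptide, not from the passage
    for pos, variant in alanine_scan(wild_type):
        print(f"position {pos + 1}: {wild_type[pos]} -> A   {variant}")
```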
the process is much the same as that in knockout engineering, except that the construct is designed to increase the function of the gene, usually by providing extra copies of the gene or inducing synthesis of the protein more frequently. gain of function is used to tell whether or not a protein is sufficient for a function, but does not always mean it is required, especially when dealing with genetic or functional redundancy. tracking experiments, which seek to gain information about the localisation and interaction of the desired protein. one way to do this is to replace the wild - type gene with a ' fusion ' gene, which is a juxtaposition genetic engineering, also called genetic modification or genetic manipulation, is the modification and manipulation of an organism ' s genes using technology. it is a set of technologies used to change the genetic makeup of cells, including the transfer of genes within and across species boundaries to produce improved or novel organisms. new dna is obtained by either isolating and copying the genetic material of interest using recombinant dna methods or by artificially synthesising the dna. a construct is usually created and used to insert this dna into the host organism. the first recombinant dna molecule was made by paul berg in 1972 by combining dna from the monkey virus sv40 with the lambda virus. as well as inserting genes, the process can be used to remove, or " knock out ", genes. the new dna can be inserted randomly, or targeted to a specific part of the genome. an organism that is generated through genetic engineering is considered to be genetically modified ( gm ) and the resulting entity is a genetically modified organism ( gmo ). the first gmo was a bacterium generated by herbert boyer and stanley cohen in 1973. rudolf jaenisch created the first gm animal when he inserted foreign dna into a mouse in 1974. the first company to focus on genetic engineering, genentech, was founded in 1976 and started the production of human proteins. genetically engineered human insulin was produced in 1978 and insulin - producing bacteria were commercialised in 1982. genetically modified food has been sold since 1994, with the release of the flavr savr tomato. the flavr savr was engineered to have a longer shelf life, but most current gm crops are modified to increase resistance to insects and herbicides. glofish, the first gmo designed as a pet, was sold in the united states in december 2003. in 2016 salmon modified with a growth hormone were sold. genetic engineering has been applied in numerous fields including research, medicine, industrial biotechnology and agriculture. in research, gmos are used to study gene function and expression through loss of function, gain of function, tracking and expression experiments. by knocking out genes responsible for certain conditions it is possible to create animal model organisms of human diseases. as well as producing hormones, vaccines and other drugs, genetic engineering has the potential to cure genetic diseases through gene therapy. chinese hamster ovary ( cho ) cells are used in industrial genetic engineering. additionally mrna vaccines are made through genetic engineering to prevent infections by viruses such as covid - 19. the same techniques that are used to produce drugs can also have industrial applications such naturally take up foreign dna. this ability can be induced in other bacteria via stress ( e. g. 
thermal or electric shock ), which increases the cell membrane ' s permeability to dna ; up - taken dna can either integrate with the genome or exist as extrachromosomal dna. dna is generally inserted into animal cells using microinjection, where it can be injected through the cell ' s nuclear envelope directly into the nucleus, or through the use of viral vectors. plant genomes can be engineered by physical methods or by use of agrobacterium for the delivery of sequences hosted in t - dna binary vectors. in plants the dna is often inserted using agrobacterium - mediated transformation, taking advantage of the agrobacteriums t - dna sequence that allows natural insertion of genetic material into plant cells. other methods include biolistics, where particles of gold or tungsten are coated with dna and then shot into young plant cells, and electroporation, which involves using an electric shock to make the cell membrane permeable to plasmid dna. as only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. in plants this is accomplished through the use of tissue culture. in animals it is necessary to ensure that the inserted dna is present in the embryonic stem cells. bacteria consist of a single cell and reproduce clonally so regeneration is not necessary. selectable markers are used to easily differentiate transformed from untransformed cells. these markers are usually present in the transgenic organism, although a number of strategies have been developed that can remove the selectable marker from the mature transgenic plant. further testing using pcr, southern hybridization, and dna sequencing is conducted to confirm that an organism contains the new gene. these tests can also confirm the chromosomal location and copy number of the inserted gene. the presence of the gene does not guarantee it will be expressed at appropriate levels in the target tissue so methods that look for and measure the gene products ( rna and protein ) are also used. these include northern hybridisation, quantitative rt - pcr, western blot, immunofluorescence, elisa and phenotypic analysis. the new genetic material can be inserted randomly within the host genome or targeted to a specific location. the technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. this tends to occur at a relatively low frequency in plants and animals and generally kilometers ( 4, 200, 000 to 395, 400, 000 acres ). 10 % of the world ' s crop lands were planted with gm crops in 2010. as of 2011, 11 different transgenic crops were grown commercially on 395 million acres ( 160 million hectares ) in 29 countries such as the us, brazil, argentina, india, canada, china, paraguay, pakistan, south africa, uruguay, bolivia, australia, philippines, myanmar, burkina faso, mexico and spain. genetically modified foods are foods produced from organisms that have had specific changes introduced into their dna with the methods of genetic engineering. these techniques have allowed for the introduction of new crop traits as well as a far greater control over a food ' s genetic structure than previously afforded by methods such as selective breeding and mutation breeding. commercial sale of genetically modified foods began in 1994, when calgene first marketed its flavr savr delayed ripening tomato. 
to date most genetic modification of foods have primarily focused on cash crops in high demand by farmers such as soybean, corn, canola, and cotton seed oil. these have been engineered for resistance to pathogens and herbicides and better nutrient profiles. gm livestock have also been experimentally developed ; in november 2013 none were available on the market, but in 2015 the fda approved the first gm salmon for commercial production and consumption. there is a scientific consensus that currently available food derived from gm crops poses no greater risk to human health than conventional food, but that each gm food needs to be tested on a case - by - case basis before introduction. nonetheless, members of the public are much less likely than scientists to perceive gm foods as safe. the legal and regulatory status of gm foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. gm crops also provide a number of ecological benefits, if not used in excess. insect - resistant crops have proven to lower pesticide usage, therefore reducing the environmental impact of pesticides as a whole. however, opponents have objected to gm crops per se on several grounds, including environmental concerns, whether food produced from gm crops is safe, whether gm crops are needed to address the world ' s food needs, and economic concerns raised by the fact these organisms are subject to intellectual property law. biotechnology has several applications in the realm of food security. crops like golden rice are engineered to have higher nutritional content, and there is potential for food products with longer shelf lives. though not a form of agricultural biotechnology, vaccines can help prevent diseases found in process by which a genotype encoded in dna gives rise to an observable phenotype in the proteins of an organism ' s body. this process is summarized by the central dogma of molecular biology, which was formulated by francis crick in 1958. according to the central dogma, genetic information flows from dna to rna to protein. there are two gene expression processes : transcription ( dna to rna ) and translation ( rna to protein ). = = = gene regulation = = = the regulation of gene expression by environmental factors and during different stages of development can occur at each step of the process such as transcription, rna splicing, translation, and post - translational modification of a protein. gene expression can be influenced by positive or negative regulation, depending on which of the two types of regulatory proteins called transcription factors bind to the dna sequence close to or at a promoter. a cluster of genes that share the same promoter is called an operon, found mainly in prokaryotes and some lower eukaryotes ( e. g., caenorhabditis elegans ). in positive regulation of gene expression, the activator is the transcription factor that stimulates transcription when it binds to the sequence near or at the promoter. negative regulation occurs when another transcription factor called a repressor binds to a dna sequence called an operator, which is part of an operon, to prevent transcription. repressors can be inhibited by compounds called inducers ( e. g., allolactose ), thereby allowing transcription to occur. specific genes that can be activated by inducers are called inducible genes, in contrast to constitutive genes that are almost constantly active. 
in contrast to both, structural genes encode proteins that are not involved in gene regulation. in addition to regulatory events involving the promoter, gene expression can also be regulated by epigenetic changes to chromatin, which is a complex of dna and protein found in eukaryotic cells. = = = genes, development, and evolution = = = development is the process by which a multicellular organism ( plant or animal ) goes through a series of changes, starting from a single cell, and taking on various forms that are characteristic of its life cycle. there are four key processes that underlie development : determination, differentiation, morphogenesis, and growth. determination sets the developmental fate of a cell, which becomes more restrictive during development. differentiation is the process by which specialized cells arise from less specialized cells such as stem , subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs, may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent. this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants that are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non - mendelian. chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. = = = molecular genetics = = = a considerable amount of new knowledge about plant function comes from Question: Which event is most likely to increase the genetic variation in a population of organisms? A) improved hunting techniques B) greater environmental stress C) increased immigration D) introduced predators
C) increased immigration
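the gene - expression passage in the context above summarises the central dogma ( dna to rna to protein ) as two steps, transcription and translation. the sketch below walks both steps for a toy coding sequence ; the partial codon table and the example gene are illustrative assumptions only, and a real codon table has 64 entries.

```python
# minimal sketch of the central dogma: transcription rewrites a DNA coding strand
# as mRNA (T -> U), and translation reads the mRNA codon by codon from the first
# AUG until a stop codon is reached.

CODON_TABLE = {
    "AUG": "M",  # methionine (start)
    "GCU": "A",  # alanine
    "GAA": "E",  # glutamate
    "UUU": "F",  # phenylalanine
    "UAA": "*", "UAG": "*", "UGA": "*",  # stop codons
}


def transcribe(coding_dna: str) -> str:
    """Coding-strand DNA -> mRNA (simplified: substitute U for T)."""
    return coding_dna.upper().replace("T", "U")


def translate(mrna: str) -> str:
    """mRNA -> one-letter protein string, from the first AUG to the first stop."""
    start = mrna.find("AUG")
    if start == -1:
        return ""
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)


if __name__ == "__main__":
    gene = "ATGGCTGAATTTTAA"  # hypothetical coding sequence
    mrna = transcribe(gene)
    print(mrna, "->", translate(mrna))  # AUGGCUGAAUUUUAA -> MAEF
```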
Context: becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under the injuries of the inundations they have been designed to prevent, as the escape of floods from the raised river must occur sooner or later. inadequate planning controls which have permitted development on floodplains have been blamed for the flooding of domestic properties. channelization was done under the auspices or overall direction of engineers employed by the local authority or the national government. one of the most heavily channelized areas in the united states is west tennessee, where every major stream with one exception ( the hatchie river ) has been partially or completely channelized. channelization of a stream may be undertaken for several reasons. one is to make a stream more suitable for navigation or for navigation by larger vessels with deep draughts. 
another is to restrict water to a certain area of a stream ' s natural bottom lands so that the bulk of such lands can be made available for agriculture. a third reason is flood control, with the idea of giving a stream a sufficiently large and deep channel so that flooding beyond those limits will be minimal or nonexistent, at least on a routine basis. one major reason is to reduce natural erosion ; as a natural waterway curves back and forth, it usually deposits sand and gravel on the inside of the corners where the water flows slowly, and cuts sand, gravel, subsoil, and precious topsoil from the outside corners where it flows rapidly due to a change in direction. unlike sand and gravel, the topsoil that is eroded does not get deposited on the inside of the next corner of the river. it simply washes away. = = loss of wetlands = = channelization has several predictable and negative effects. one of them is loss of wetlands. wetlands are an excellent habitat for multiple forms of wildlife, and additionally serve as a " filter " for much of the world ' s surface fresh water. another is the fact that channelized streams are almost invariably straightened. for example, the channelization of florida ' s kissimmee river has been cited as a cause contributing to the loss of wetlands. this straightening causes the streams to flow more rapidly, which can, in some instances, vastly increase soil erosion. it can also increase flooding downstream from the channelized area, as larger volumes of water traveling more rapidly than normal can reach choke points over a shorter period of time than they otherwise would, with a net effect of flood control in one area coming at the expense of aggravated flooding in another. in addition, studies have shown that stream channelization results in declines of river fish populations. : 3 - 1ff a navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. 
another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding sea and their competitive or mutualistic interactions with other species. some ecologists even rely on empirical data from indigenous people that is gathered by ethnobotanists. this information can relay a great deal of information on how the land once was thousands of years ago and how it has changed over that time. the goals of plant ecology are to understand the causes of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change. plants depend on certain edaphic ( soil ) and climatic factors in their environment but can modify these factors too. for example, they can change their environment ' s albedo, increase runoff interception, stabilise mineral soils and develop their organic content, and affect local temperature. plants compete with other organisms in their ecosystem for resources. they interact with their neighbours at a variety of spatial scales in groups, populations and communities that collectively constitute vegetation. regions with characteristic vegetation types and dominant plants as well as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest. herbivores eat plants, but plants can defend themselves and some species are parasitic or even carnivorous. other organisms form mutually beneficial relationships with plants. for example, mycorrhizal fungi and rhizobia provide plants with nutrients in exchange for food, ants are recruited by ant plants to provide protection, honey bees, bats and other animals pollinate flowers and humans and other animals act as dispersal vectors to spread spores and seeds. = = = plants, climate and environmental change = = = plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. for example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. estimates of atmospheric co2 concentrations since the palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. ozone depletion can expose plants to higher levels of ultraviolet radiation - b ( uv - b ), resulting in lower growth rates. 
moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. = = genetics = = inheritance in plants follows the same fundamental principles of genetics as in other multicellular organisms. gregor mendel discovered the genetic laws of inheritance by studying approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with from the oil of jasminum grandiflorum which regulates wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. in addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. this can result in adaptive changes in a process known as photomorphogenesis. phytochromes are the photoreceptors in a plant that are sensitive to light. 
= = plant anatomy and morphology = = plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. all plants are multicellular eukaryotes, their dna stored in nuclei. the characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ) generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. 
in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. 
the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers – civil works program river morphology and stream restoration references - wildland hydrology at the library of congress web archives ( archived 2002 - 08 - 13 ) , lightning strikes, tornadoes, building fires, wildfires, and mass shootings disabling most of the system if not the entirety of it. geographic redundancy locations can be more than 621 miles ( 999 km ) continental, more than 62 miles apart and less than 93 miles ( 150 km ) apart, less than 62 miles apart, but not on the same campus, or different buildings that are more than 300 feet ( 91 m ) apart on the same campus. the following methods can reduce the risks of damage by a fire conflagration : large buildings at least 80 feet ( 24 m ) to 110 feet ( 34 m ) apart, but sometimes a minimum of 210 feet ( 64 m ) apart. 
high - rise buildings at least 82 feet ( 25 m ) apart ; open spaces clear of flammable vegetation within 200 feet ( 61 m ) on each side of objects ; different wings on the same building, in rooms that are separated by more than 300 feet ( 91 m ) ; different floors on the same wing of a building, in rooms that are horizontally offset by a minimum of 70 feet ( 21 m ), with fire walls between the rooms that are on different floors ; two rooms separated by another room, leaving at least a 70 - foot gap between the two rooms ; there should be a minimum of two separated fire walls and on opposite sides of a corridor. geographic redundancy is used by amazon web services ( aws ), google cloud platform ( gcp ), microsoft azure, netflix, dropbox, salesforce, linkedin, paypal, twitter, facebook, apple icloud, cisco meraki, and many others to provide geographic redundancy, high availability, fault tolerance and to ensure availability and reliability for their cloud services. as another example, to minimize risk of damage from severe windstorms or water damage, buildings can be located at least 2 miles ( 3. 2 km ) away from the shore, with an elevation of at least 5 feet ( 1. 5 m ) above sea level. for additional protection, they can be located at least 100 feet ( 30 m ) away from flood plain areas. = = functions of redundancy = = the two functions of redundancy are passive redundancy and active redundancy. both functions prevent performance decline from exceeding specification limits without human intervention using extra capacity. passive redundancy uses excess capacity to reduce the impact of component failures. one common form of passive redundancy is the extra strength of cabling and struts used in bridges. Question: Where would animals and plants be most affected by a flood? A) low areas B) high areas C) warm areas D) cold areas
A) low areas
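the geographic - redundancy spacing rules quoted in the context above reduce to simple distance checks between candidate sites. the sketch below computes a great - circle ( haversine ) distance and tests it against one of the quoted tiers ( more than 62 miles but less than 93 miles apart ) ; the coordinates and the choice of tier are illustrative assumptions.

```python
# minimal sketch of a geographic-redundancy spacing check: great-circle (haversine)
# distance between two candidate sites, compared with one spacing tier quoted above.

from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_MILES = 3958.8


def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (latitude, longitude) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))


if __name__ == "__main__":
    primary = (40.0, -75.0)    # hypothetical primary site
    secondary = (41.0, -75.5)  # hypothetical secondary site
    d = haversine_miles(*primary, *secondary)
    print(f"separation: {d:.1f} miles; 62-93 mile tier met: {62 < d < 93}")
```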
Context: plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s , crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics.
areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced.
as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to are the cryosphere ( corresponding to ice ) as a distinct portion of the hydrosphere and the pedosphere ( corresponding to soil ) as an active and intermixed sphere. the following fields of science are generally categorized within the earth sciences : geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " geosphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes.
geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. 
geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere ) between the surface and the exosphere ( about 1000 km ). the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction. plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere are created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere eventually return to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. 
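The composition figures quoted above can be cross-checked with a few lines of code. The sketch below (Python) simply sums the three main gases as stated in the passage and reports the small remainder left for trace gases such as co2 and water vapor; the percentages are the passage's own values, not independently verified ones.

```python
# Rough cross-check of the atmospheric composition quoted above.
# The percentages are taken directly from the passage.
main_gases = {"nitrogen": 78.0, "oxygen": 20.9, "argon": 0.92}

total_main = sum(main_gases.values())
trace_remainder = 100.0 - total_main

print(f"main gases account for {total_main:.2f}% of the atmosphere")
print(f"leaving roughly {trace_remainder:.2f}% for co2, water vapor and other trace gases")
```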
in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field, created by the internal motions of the core, produces the magnetosphere which protects earth. earth science or geoscience includes all fields of natural science related to the planet earth. this is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of earth ' s four spheres : the biosphere, hydrosphere / cryosphere, atmosphere, and geosphere ( or lithosphere ). earth science can be considered to be a branch of planetary science but with a much older history. = = geology = = geology is broadly the study of earth ' s structure, substance, and processes. geology is largely the study of the lithosphere, or earth ' s surface, including the crust and rocks. it includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. it incorporates aspects of chemistry, physics, and biology as elements of geology interact. historical geology is the application of geology to interpret earth history and how it has changed over time. geochemistry studies the chemical components and processes of the earth. geophysics studies the physical properties of the earth. paleontology studies fossilized biological material in the lithosphere. planetary geology studies geoscience as it pertains to extraterrestrial bodies. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. 
geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere ) between the surface and the exosphere ( about 1000 km ). major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. = = = earth science breakup = = = = = see also = = = = references = = = = = sources = = = = = , glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). 
" sub - disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry. " glaciology covers the icy parts of the earth ( or cryosphere ). atmospheric sciences cover the gaseous parts of the earth ( or atmosphere ) between the surface and the exosphere ( about 1000 km ). major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics. = = = earth science breakup = = = = = see also = = = = references = = = = = sources = = = = = further reading = = = = external links = = earth science picture of the day, a service of universities space research association, sponsored by nasa goddard space flight center. geoethics in planetary and space exploration. geology buzz : earth science archived 2021 - 11 - 04 at the wayback machine Question: Which seismic wave phenomenon, found at structural boundaries, allows scientists to interpret the interior structure of Earth? A) refraction of waves B) generation of new waves C) transformation of transverse waves into longitudinal waves D) transformation of mechanical waves into electromagnetic waves
A) refraction of waves
Context: weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding seawards, so that in the po river in italy, for instance, pebbles and gravel are found for about 140 miles below turin, sand along the next 100 miles, and silt and mud in the last 110 miles ( 176 km ). = = channelization = = the removal of obstructions, natural or artificial 10 kgy most food, which is ( with regard to warming ) physically equivalent to water, would warm by only about 2. 5 Β°c ( 4. 5 Β°f ). the specialty of processing food by ionizing radiation is the fact, that the energy density per atomic transition is very high, it can cleave molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. 
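The passage's claim that a 10 kGy dose warms water-like food by only about 2.5 °C can be reproduced with a one-line energy balance. The sketch below is a back-of-envelope check; the specific heat capacity of water (about 4.18 kJ per kg per K) is an assumption not stated in the text.

```python
# Back-of-envelope check of the warming caused by a 10 kGy irradiation dose.
# 1 gray = 1 J of absorbed energy per kg, so 10 kGy deposits 10 kJ in each kg.
# Assumption (not in the passage): food behaves thermally like water,
# with a specific heat capacity of ~4.18 kJ/(kg*K).
dose_kgy = 10.0                   # absorbed dose in kilogray
energy_kj_per_kg = dose_kgy       # 10 kGy -> 10 kJ/kg
specific_heat_water = 4.18        # kJ/(kg*K)

delta_t_c = energy_kj_per_kg / specific_heat_water
print(f"temperature rise: about {delta_t_c:.1f} degC")  # ~2.4 degC, consistent with the ~2.5 degC quoted
```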
a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may use also gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is the gas giant planets in the solar system have a retinue of icy moons, and we expect giant exoplanets to have similar satellite systems. if a jupiter - like planet were to migrate toward its parent star the icy moons orbiting it would evaporate, creating atmospheres and possible habitable surface oceans. here, we examine how long the surface ice and possible oceans would last before being hydrodynamically lost to space. the hydrodynamic loss rate from the moons is determined, in large part, by the stellar flux available for absorption, which increases as the giant planet and icy moons migrate closer to the star. at some planet - star distance the stellar flux incident on the icy moons becomes so great that they enter a runaway greenhouse state. this runaway greenhouse state rapidly transfers all available surface water to the atmosphere as vapor, where it is easily lost from the small moons. however, for icy moons of ganymede ' s size around a sun - like star we found that surface water ( either ice or liquid ) can persist indefinitely outside the runaway greenhouse orbital distance. in contrast, the surface water on smaller moons of europa ' s size will only persist on timescales greater than 1 gyr at distances ranging 1. 49 to 0. 74 au around a sun - like star for bond albedos of 0. 2 and 0. 8, where the lower albedo becomes relevant if ice melts. consequently, small moons can lose their icy shells, which would create a torus of h atoms around their host planet that might be detectable in future observations. temperature changes up to 1000 Β°c. = = processing steps = = the traditional ceramic process generally follows this sequence : milling β†’ batching β†’ mixing β†’ forming β†’ drying β†’ firing β†’ assembly. milling is the process by which materials are reduced from a large size to a smaller size. 
milling may involve breaking up cemented material ( in which case individual particles retain their shape ) or pulverization ( which involves grinding the particles themselves to a smaller size ). milling is generally done by mechanical means, including attrition ( which is particle - to - particle collision that results in agglomerate break up or particle shearing ), compression ( which applies a forces that results in fracturing ), and impact ( which employs a milling medium or the particles themselves to cause fracturing ). attrition milling equipment includes the wet scrubber ( also called the planetary mill or wet attrition mill ), which has paddles in water creating vortexes in which the material collides and break up. compression mills include the jaw crusher, roller crusher and cone crusher. impact mills include the ball mill, which has media that tumble and fracture the material, or the resonantacoustic mixer. shaft impactors cause particle - to particle attrition and compression. batching is the process of weighing the oxides according to recipes, and preparing them for mixing and drying. mixing occurs after batching and is performed with various machines, such as dry mixing ribbon mixers ( a type of cement mixer ), resonantacoustic mixers, mueller mixers, and pug mills. wet mixing generally involves the same equipment. forming is making the mixed material into shapes, ranging from toilet bowls to spark plug insulators. forming can involve : ( 1 ) extrusion, such as extruding " slugs " to make bricks, ( 2 ) pressing to make shaped parts, ( 3 ) slip casting, as in making toilet bowls, wash basins and ornamentals like ceramic statues. forming produces a " green " part, ready for drying. green parts are soft, pliable, and over time will lose shape. handling the green product will change its shape. for example, a green brick can be " squeezed ", and after squeezing it will stay that way. drying is removing the water or binder from the formed material. spray drying is widely used to prepare powder for pressing operations. other dryers are tunnel dryers and periodic dryers. controlled heat is applied in this two - stage process. first, of the 21st century. characteristics of speculative fiction have been recognized in older works whose authors ' intentions are now known, or in the social contexts of the stories they tell. an example is the ancient greek dramatist, euripides ( c. 480 – c. 406 bce ), whose play medea seems to have offended athenian audiences ; in this play, he speculated that the titular sorceress medea killed her own children, as opposed to their being killed by other corinthians after her departure. in historiography, what is now called speculative fiction has previously been termed historical invention, historical fiction, and similar names. these terms have been extensively applied in literary criticism to the works of william shakespeare. for example, in a midsummer night ' s dream, he places several characters from different locations and times into the fairyland of the fictional merovingian germanic sovereign oberon ; these characters include the athenian duke theseus, the amazonian queen hippolyta, the english fairy puck, and the roman god cupid. in mythography, the concept of speculative fiction has been termed mythopoesis or mythopoeia. this process involves the creative design and development of lore and mythology for works of fiction. the term ' s definition comes from use by j. r. r. 
tolkien ; his series of novels, the lord of the rings, shows an application of the process. themes common in mythopoeia, such as the supernatural, alternate history, and sexuality, continue to be explored in works produced in modern speculative fiction. speculative fiction in the general sense of hypothetical history, explanation, or ahistorical storytelling has been attributed to authors in ostensibly non - fiction modes since herodotus of halicarnassus ( fl. 5th century bce ) with his histories ; it was already both created and edited out by early encyclopedic writers such sima qian ( c. 145 or 135 bce – 86 bce ), author of shiji. these examples highlight a caveat β€” many works that are now viewed as speculative fiction long predated the labelling of the genre. in the broadest sense, the genre ' s concept does two things : it captures both conscious and unconscious aspects of human psychology in making sense of the world, and it responds to the world by creating imaginative, inventive, and artistic expressions. such expressions can contribute to practical societal progress through interpersonal influences ; social and cultural movements ; scientific research and advances ; and the philosophy of science. in english - language the recent report on laser cooling of liquid may contradict the law of energy conservation. uv ice photodesorption is an important non - thermal desorption pathway in many interstellar environments that has been invoked to explain observations of cold molecules in disks, clouds and cloud cores. systematic laboratory studies of the photodesorption rates, between 7 and 14 ev, from co : n2 binary ices, have been performed at the desirs vacuum uv beamline of the synchrotron facility soleil. the photodesorption spectral analysis demonstrates that the photodesorption process is indirect, i. e. the desorption is induced by a photon absorption in sub - surface molecular layers, while only surface molecules are actually desorbing. the photodesorption spectra of co and n2 in binary ices therefore depend on the absorption spectra of the dominant species in the subsurface ice layer, which implies that the photodesorption efficiency and energy dependence are dramatically different for mixed and layered ices compared to pure ices. in particular, a thin ( 1 - 2 ml ) n2 ice layer on top of co will effectively quench co photodesorption, while enhancing n2 photodesorption by a factors of a few ( compared to the pure ices ) when the ice is exposed to a typical dark cloud uv field, which may help to explain the different distributions of co and n2h + in molecular cloud cores. this indirect photodesorption mechanism may also explain observations of small amounts of complex organics in cold interstellar environments. molecules and induce ionization ( hence the name ) which cannot be achieved by mere heating. this is the reason for new beneficial effects, however at the same time, for new concerns. the treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. however, the use of the term, cold pasteurization, to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. detractors of food irradiation have concerns about the health hazards of induced radioactivity. 
a report for the industry advocacy group american council on science and health entitled " irradiated foods " states : " the types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. food undergoing irradiation does not become any more radioactive than luggage passing through an airport x - ray scanner or teeth that have been x - rayed. " food irradiation is currently permitted by over 40 countries and volumes are estimated to exceed 500, 000 metric tons ( 490, 000 long tons ; 550, 000 short tons ) annually worldwide. food irradiation is essentially a non - nuclear technology ; it relies on the use of ionizing radiation which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may use also gamma - rays from nuclear decay. there is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. = = accidents = = nuclear accidents, because of the powerful forces involved, are often very dangerous. historically, the first incidents involved fatal radiation exposure. marie curie died from aplastic anemia which resulted from her high levels of exposure. two scientists, an american and canadian respectively, harry daghlian and louis slotin, died after mishandling the same plutonium mass. unlike conventional weapons, the intense light, heat, and explosive force is not the only deadly component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. civilian nuclear and radiological accidents primarily involve nuclear power plants. most common are nuclear leaks that expose workers to hazardous material. a nuclear meltdown refers to the more serious hazard of i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβˆ’ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid – base reactions are hydroxide ( ohβˆ’ ) and phosphate ( po43βˆ’ ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be classified as an acid or a base. there are several different theories which explain acid – base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. according to brΓΈnsted – lowry acid – base theory, acids are substances that donate a positive hydrogen ion to another substance in a chemical reaction ; by extension, a base is the substance which receives that hydrogen ion. a third common theory is lewis acid – base theory, which is based on the formation of new chemical bonds. 
lewis theory explains that an acid is a substance which is capable of accepting a pair of electrons from another substance during the process of bond formation, while a base is a substance which can provide a pair of electrons to form a new bond. there are several other ways in which a substance may be classified as an acid or a base, as is evident in the history of this concept. acid strength is commonly measured by two methods. one measurement, based on the arrhenius definition of acidity, is ph, which is a measurement of the hydronium ion concentration in a solution, as expressed on a negative logarithmic scale. thus, solutions that have a low ph have a high hydronium ion concentration and can be said to be more acidic. the other measurement, based on the brΓΈnsted – lowry definition, is the acid dissociation constant ( ka ), which measures the relative ability of a substance to act as an navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. 
accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding sea Question: When ice melts, it becomes a A) gas. B) solid. C) liquid. D) plasma.
C) liquid.
Context: is said to have occurred. a chemical reaction is therefore a concept related to the " reaction " of a substance when it comes in close contact with another, whether as a mixture or a solution ; exposure to some form of energy, or both. it results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels β€” often laboratory glassware. chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. chemical reactions usually involve the making or breaking of chemical bonds. oxidation, reduction, dissociation, acid – base neutralization and molecular rearrangement are some examples of common chemical reactions. a chemical reaction can be symbolically depicted through a chemical equation. while in a non - nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged , but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. however, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the earth are chemical compounds without molecules. these other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. 
examples of such substances are mineral salts ( such as table salt ), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. one of the main characteristics of a molecule is its geometry often called its structure. while the structure of diatomic, triatomic or tetra - atomic molecules may be trivial, ( linear, angular pyramidal etc. ) the structure of polyatomic molecules, that are constituted of more than six atoms ( of several elements ) can be crucial for its chemical nature. = = = = substance and mixture = = = = a chemical substance is a kind of matter with a definite composition and set of properties. a collection of substances is called a mixture. examples of mixtures are air and alloys. = = = = mole and amount of substance = = = = the mole is a unit of measurement that denotes an amount of substance ( also called chemical amount ). one mole is defined to contain exactly 6. 02214076Γ—1023 particles ( atoms, molecules, ions, or electrons ), where the number of particles per mole is known as the avogadro constant. molar concentration is the amount of a particular substance per volume of solution, and is commonly reported in mol / dm3. = = = phase = = = in addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. for the most part, the chemical classifications are independent of these bulk phase classifications ; however, some more exotic phases are incompatible with certain chemical properties. a phase is a set of states of a chemical system that have similar bulk structural properties, over a range of conditions, such as pressure or temperature. physical properties, such as density and refractive index tend to fall within values characteristic of the phase . oxidation, reduction, dissociation, acid – base neutralization and molecular rearrangement are some examples of common chemical reactions. a chemical reaction can be symbolically depicted through a chemical equation. while in a non - nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. 
' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβˆ’ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid – base reactions are hydroxide ( ohβˆ’ ) and phosphate ( po43βˆ’ ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be electrons, creating radicals. most radicals are comparatively reactive, but some, such as nitric oxide ( no ) can be stable. the " inert " or noble gas elements ( helium, neon, argon, krypton, xenon and radon ) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. however, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the earth are chemical compounds without molecules. these other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. examples of such substances are mineral salts ( such as table salt ), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. one of the main characteristics of a molecule is its geometry often called its structure. while the structure of diatomic, triatomic or tetra - atomic molecules may be trivial, ( linear, angular pyramidal etc. ) the structure of polyatomic molecules, that are constituted of more than six atoms ( of several elements ) can be crucial for its chemical nature. = = = = substance and mixture = = = = a chemical substance is a kind of matter with a definite composition and set of properties. a collection of substances is called a mixture. examples of mixtures are air and alloys. = = = = mole and amount of substance = = = = the mole is a unit of measurement that denotes an amount of substance ( also called chemical amount ). one mole is defined to contain exactly 6. 02214076Γ—1023 particles ( atoms, molecules, ions, or electrons ), where the number of particles per mole is known as the avogadro constant. molar concentration is the amount of a particular substance per volume of solution, and is commonly reported in mol / dm3. = = = phase = = = in addition to the specific chemical properties that distinguish different chemical classifications, chemicals can exist in several phases. for the most part, the chemical classifications are independent of these bulk phase with the system environment, which may be designed vessels β€” often laboratory glassware. 
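The mole and molar concentration definitions above lend themselves to a short worked example. The sketch below uses the Avogadro constant quoted in the passage; the sample size (0.5 mol) and solution volume (2 dm3) are hypothetical illustration numbers, not values from the text.

```python
# Worked example for the mole and molar concentration definitions above.
AVOGADRO = 6.02214076e23          # particles per mole, as quoted in the passage

amount_mol = 0.5                  # hypothetical amount of substance
volume_dm3 = 2.0                  # hypothetical solution volume in dm^3 (litres)

particles = amount_mol * AVOGADRO
molar_concentration = amount_mol / volume_dm3   # reported in mol/dm^3, as in the passage

print(f"{amount_mol} mol contains about {particles:.3e} particles")
print(f"molar concentration: {molar_concentration} mol/dm3")
```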
chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. chemical reactions usually involve the making or breaking of chemical bonds. oxidation, reduction, dissociation, acid – base neutralization and molecular rearrangement are some examples of common chemical reactions. a chemical reaction can be symbolically depicted through a chemical equation. while in a non - nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβˆ’ ions forming sodium chloride, or nacl. examples of analyzing their radiation spectra. the term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. = = = reaction = = = when a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. a chemical reaction is therefore a concept related to the " reaction " of a substance when it comes in close contact with another, whether as a mixture or a solution ; exposure to some form of energy, or both. it results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels β€” often laboratory glassware. chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. 
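The statement that a non-nuclear chemical equation has the same number and kind of atoms on each side can be illustrated with a tiny atom-counting check. The reaction used below (hydrogen burning to water) is a standard textbook example and is not taken from the passage.

```python
# Minimal atom-count check for a balanced chemical equation:
# 2 H2 + O2 -> 2 H2O (textbook example, not from the passage).
from collections import Counter

def count_atoms(side):
    """side: list of (coefficient, {element: atoms per formula unit})."""
    totals = Counter()
    for coeff, formula in side:
        for element, n in formula.items():
            totals[element] += coeff * n
    return totals

reactants = [(2, {"H": 2}), (1, {"O": 2})]
products = [(2, {"H": 2, "O": 1})]

print(count_atoms(reactants))                            # Counter({'H': 4, 'O': 2})
print(count_atoms(products))                             # Counter({'H': 4, 'O': 2})
print(count_atoms(reactants) == count_atoms(products))   # True: atoms balance on both sides
```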
chemical reactions usually involve the making or breaking of chemical bonds. oxidation, reduction, dissociation, acid – base neutralization and molecular rearrangement are some examples of common chemical reactions. a chemical reaction can be symbolically depicted through a chemical equation. while in a non - nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept, free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Δg ≤ 0 ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole – dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. 
however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. for example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy. the existence of characteristic energy levels for different chemical substances is useful for their identification by the analysis of spectral lines. different kinds of spectra are often used in chemical spectroscopy, e. g. ir, microwave, nmr, esr, etc. spectroscopy is also used to identify the composition of remote objects – like stars and distant galaxies – by a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβˆ’ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid – base reactions are hydroxide ( ohβˆ’ ) and phosphate ( po43βˆ’ ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be classified as an acid or a base. there are several different theories which explain acid – base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. 
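The Arrhenius picture above connects to the pH scale described earlier in the passage, which expresses the hydronium ion concentration on a negative logarithmic scale. The short sketch below computes pH from an illustrative concentration of 1.0e-3 mol/dm3, a value chosen for the example rather than taken from the text.

```python
import math

# pH as the negative base-10 logarithm of the hydronium ion concentration,
# following the passage's description. The concentration is an illustrative value.
hydronium_mol_per_dm3 = 1.0e-3

ph = -math.log10(hydronium_mol_per_dm3)
print(f"[h3o+] = {hydronium_mol_per_dm3} mol/dm3  ->  pH = {ph:.1f}")  # pH = 3.0
# A lower pH corresponds to a higher hydronium concentration, i.e. a more acidic solution.
```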
set of chemical reactions with other substances. however, this definition only works well for substances that are composed of molecules, which is not true of many substances ( see below ). molecules are typically a set of atoms bound together by covalent bonds, such that the structure is electrically neutral and all valence electrons are paired with other electrons either in bonds or in lone pairs. thus, molecules exist as electrically neutral units, unlike ions. when this rule is broken, giving the " molecule " a charge, the result is sometimes named a molecular ion or a polyatomic ion. however, the discrete and separate nature of the molecular concept usually requires that molecular ions be present only in well - separated form, such as a directed beam in a vacuum in a mass spectrometer. charged polyatomic collections residing in solids ( for example, common sulfate or nitrate ions ) are generally not considered " molecules " in chemistry. some molecules contain one or more unpaired electrons, creating radicals. most radicals are comparatively reactive, but some, such as nitric oxide ( no ) can be stable. the " inert " or noble gas elements ( helium, neon, argon, krypton, xenon and radon ) are composed of lone atoms as their smallest discrete unit, but the other isolated chemical elements consist of either molecules or networks of atoms bonded to each other in some way. identifiable molecules compose familiar substances such as water, air, and many organic compounds like alcohol, sugar, gasoline, and the various pharmaceuticals. however, not all substances or chemical compounds consist of discrete molecules, and indeed most of the solid substances that make up the solid crust, mantle, and core of the earth are chemical compounds without molecules. these other types of substances, such as ionic compounds and network solids, are organized in such a way as to lack the existence of identifiable molecules per se. instead, these substances are discussed in terms of formula units or unit cells as the smallest repeating structure within the substance. examples of such substances are mineral salts ( such as table salt ), solids like carbon and diamond, metals, and familiar silica and silicate minerals such as quartz and granite. one of the main characteristics of a molecule is its geometry often called its structure. while the structure of diatomic, triatomic or tetra - atomic molecules may be trivial, ( linear, angular pyramidal etc. ) the structure of polyatomic molecules, that are constituted of more than six atoms ( of several elements ) can be crucial for its chemical nature. or molecules that show characteristic chemical properties in a compound. physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. in particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, spectroscopy, and more recently, astrochemistry. physical chemistry has large overlap with molecular physics. physical chemistry involves the use of infinitesimal calculus in deriving equations. it is usually associated with quantum chemistry and theoretical chemistry. physical chemistry is a distinct discipline from chemical physics, but again, there is very strong overlap. theoretical chemistry is the study of chemistry via fundamental theoretical reasoning ( usually within mathematics or physics ). 
in particular the application of quantum mechanics to chemistry is called quantum chemistry. since the end of the second world war, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. theoretical chemistry has large overlap with ( theoretical and experimental ) condensed matter physics and molecular physics. other subdivisions include electrochemistry, femtochemistry, flavor chemistry, flow chemistry, immunohistochemistry, hydrogenation chemistry, mathematical chemistry, molecular mechanics, natural product chemistry, organometallic chemistry, petrochemistry, photochemistry, physical organic chemistry, polymer chemistry, radiochemistry, sonochemistry, supramolecular chemistry, synthetic chemistry, and many others. = = = interdisciplinary = = = interdisciplinary fields include agrochemistry, astrochemistry ( and cosmochemistry ), atmospheric chemistry, chemical engineering, chemical biology, chemo - informatics, environmental chemistry, geochemistry, green chemistry, immunochemistry, marine chemistry, materials science, mechanochemistry, medicinal chemistry, molecular biology, nanotechnology, oenology, pharmacology, phytochemistry, solid - state chemistry, surface science, thermochemistry, and many others. = = = industry = = = the chemical industry represents an important economic activity worldwide. the global top 50 chemical producers in 2013 had sales of us $ 980. 5 billion with a profit margin of 10. 3 %. = = = professional societies = = = = = see also = = = = references = = = = bibliography = = = = further reading = = popular reading atkins, p. w. galileo ' s finger ( oxford university press ) Question: Which combination of letters could be used as a chemical symbol for an element? A) BR B) Chl C) Dy D) FeO
C) Dy
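The context preceding this item states the thermodynamic feasibility criterion only in words and a garbled formula: a reaction is feasible when the Gibbs free energy change is negative or zero, and is at equilibrium when it is exactly zero. The short Python sketch below is a minimal illustration of that criterion using Delta G = Delta H - T * Delta S; the numeric values are hypothetical placeholders chosen for illustration and do not come from this dataset.

```python
def gibbs_free_energy_change(delta_h_kj, delta_s_kj_per_k, temperature_k):
    """Delta G = Delta H - T * Delta S, with all energies in kJ/mol."""
    return delta_h_kj - temperature_k * delta_s_kj_per_k


def classify_reaction(delta_g_kj, tol=1e-9):
    """Apply the criterion quoted in the context: feasible if Delta G <= 0,
    at equilibrium if Delta G is (numerically) zero."""
    if abs(delta_g_kj) < tol:
        return "at equilibrium"
    return "feasible (spontaneous)" if delta_g_kj < 0 else "not feasible as written"


if __name__ == "__main__":
    # Hypothetical values: an exothermic reaction (Delta H < 0) with a positive
    # entropy change stays feasible as temperature rises.
    for T in (298.15, 1000.0):
        dG = gibbs_free_energy_change(delta_h_kj=-92.0,
                                      delta_s_kj_per_k=0.05,
                                      temperature_k=T)
        print(f"T = {T:7.2f} K  ->  Delta G = {dG:8.2f} kJ/mol  ({classify_reaction(dG)})")
```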
Context: enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β€” created by the internal motions of the core β€” produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the cools and solidifies. through subduction, oceanic crust and lithosphere vehemently returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface β€” giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 
92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β€” created by the internal motions of the core β€” produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. = = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make a minimum atmospheric temperature, or tropopause, occurs at a pressure of around 0. 1 bar in the atmospheres of earth, titan, jupiter, saturn, uranus and neptune, despite great differences in atmospheric composition, gravity, internal heat and sunlight. in all these bodies, the tropopause separates a stratosphere with a temperature profile that is controlled by the absorption of shortwave solar radiation, from a region below characterised by convection, weather, and clouds. however, it is not obvious why the tropopause occurs at the specific pressure near 0. 1 bar. here we use a physically - based model to demonstrate that, at atmospheric pressures lower than 0. 1 bar, transparency to thermal radiation allows shortwave heating to dominate, creating a stratosphere. at higher pressures, atmospheres become opaque to thermal radiation, causing temperatures to increase with depth and convection to ensue. a common dependence of infrared opacity on pressure, arising from the shared physics of molecular absorption, sets the 0. 1 bar tropopause. we hypothesize that a tropopause at a pressure of approximately 0. 1 bar is characteristic of many thick atmospheres, including exoplanets and exomoons in our galaxy and beyond. judicious use of this rule could help constrain the atmospheric structure, and thus the surface environments and habitability, of exoplanets. consisting of several distinct layers, often referred to as spheres : the lithosphere, the hydrosphere, the atmosphere, and the biosphere, this concept of spheres is a useful tool for understanding the earth ' s surface and its various processes these correspond to rocks, water, air and life. also included by some are the cryosphere ( corresponding to ice ) as a distinct portion of the hydrosphere and the pedosphere ( corresponding to soil ) as an active and intermixed sphere. the following fields of science are generally categorized within the earth sciences : geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. 
major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth are the cryosphere ( corresponding to ice ) as a distinct portion of the hydrosphere and the pedosphere ( corresponding to soil ) as an active and intermixed sphere. the following fields of science are generally categorized within the earth sciences : geology describes the rocky parts of the earth ' s crust ( or lithosphere ) and its historic development. major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology. physical geography focuses on geography as an earth science. physical geography is the study of earth ' s seasons, climate, atmosphere, soil, streams, landforms, and oceans. physical geography can be divided into several branches or related fields, as follows : geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology. geophysics and geodesy investigate the shape of the earth, its reaction to forces and its magnetic and gravity fields. geophysicists explore the earth ' s core and mantle as well as the tectonic and seismic activity of the lithosphere. geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity. geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. geochemists use the tools and principles of chemistry to study the earth ' s composition, structure, processes, and other physical aspects. 
major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry. soil science covers the outermost layer of the earth ' s crust that is subject to soil formation processes ( or pedosphere ). major subdivisions in this field of study include edaphology and pedology. ecology covers the interactions between organisms and their environment. this field of study differentiates the study of earth from other planets in the solar system, earth being the only planet teeming with life. hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the earth and its atmosphere ( or hydrosphere ). " higher concentrations of atmospheric nitrous oxide ( n2o ) are expected to slightly warm earth ' s surface because of increases in radiative forcing. radiative forcing is the difference in the net upward thermal radiation flux from the earth through a transparent atmosphere and radiation through an otherwise identical atmosphere with greenhouse gases. radiative forcing, normally measured in w / m ^ 2, depends on latitude, longitude and altitude, but it is often quoted for the tropopause, about 11 km of altitude for temperate latitudes, or for the top of the atmosphere at around 90 km. for current concentrations of greenhouse gases, the radiative forcing per added n2o molecule is about 230 times larger than the forcing per added carbon dioxide ( co2 ) molecule. this is due to the heavy saturation of the absorption band of the relatively abundant greenhouse gas, co2, compared to the much smaller saturation of the absorption bands of the trace greenhouse gas n2o. but the rate of increase of co2 molecules, about 2. 5 ppm / year ( ppm = part per million by mole ), is about 3000 times larger than the rate of increase of n2o molecules, which has held steady at around 0. 00085 ppm / year since 1985. so, the contribution of nitrous oxide to the annual increase in forcing is 230 / 3000 or about 1 / 13 that of co2. if the main greenhouse gases, co2, ch4 and n2o have contributed about 0. 1 c / decade of the warming observed over the past few decades, this would correspond to about 0. 00064 k per year or 0. 064 k per century of warming from n2o. proposals to place harsh restrictions on nitrous oxide emissions because of warming fears are not justified by these facts. restrictions would cause serious harm ; for example, by jeopardizing world food supplies. acid rain. climatology studies the climate and climate change. the troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up earth ' s atmosphere. 75 % of the mass in the atmosphere is located within the troposphere, the lowest layer. in all, the atmosphere is made up of about 78. 0 % nitrogen, 20. 9 % oxygen, and 0. 92 % argon, and small amounts of other gases including co2 and water vapor. water vapor and co2 cause the earth ' s atmosphere to catch and hold the sun ' s energy through the greenhouse effect. this makes earth ' s surface warm enough for liquid water and life. in addition to trapping heat, the atmosphere also protects living organisms by shielding the earth ' s surface from cosmic rays. the magnetic field β€” created by the internal motions of the core β€” produces the magnetosphere which protects earth ' s atmosphere from the solar wind. as the earth is 4. 5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere. 
= = earth ' s magnetic field = = = = hydrology = = hydrology is the study of the hydrosphere and the movement of water on earth. it emphasizes the study of how humans use and interact with freshwater supplies. study of water ' s movement is closely related to geomorphology and other branches of earth science. applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. oceanography is the study of oceans. hydrogeology is the study of groundwater. it includes the mapping of groundwater supplies and the analysis of groundwater contaminants. applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. the earliest exploitation of groundwater resources dates back to 3000 bc, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. ecohydrology is the study of ecological systems in the hydrosphere. it can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecoystems are affected by humans. glaciology is the study of the cryosphere, including glaciers and coverage of the earth by ice and snow. concerns of gla the higher microwave band 3 – 6 ghz, and millimeter wave band, around 28 and 39 ghz. since these frequencies have a shorter range than previous cellphone bands, the cells will be smaller than the cells in previous cellular networks which could be many miles across. millimeter - wave cells will only be a few blocks long, and instead of a cell base station and antenna tower, they will have many small antennas attached to utility poles and buildings. satellite phone ( satphone ) – a portable wireless telephone similar to a cell phone, connected to the telephone network through a radio link to an orbiting communications satellite instead of through cell towers. they are more expensive than cell phones ; but their advantage is that, unlike a cell phone which is limited to areas covered by cell towers, satphones can be used over most or all of the geographical area of the earth. in order for the phone to communicate with a satellite using a small omnidirectional antenna, first - generation systems use satellites in low earth orbit, about 400 – 700 miles ( 640 – 1, 100 km ) above the surface. with an orbital period of about 100 minutes, a satellite can only be in view of a phone for about 4 – 15 minutes, so the call is " handed off " to another satellite when one passes beyond the local horizon. therefore, large numbers of satellites, about 40 to 70, are required to ensure that at least one satellite is in view continuously from each point on earth. other satphone systems use satellites in geostationary orbit in which only a few satellites are needed, but these cannot be used at high latitudes because of terrestrial interference. cordless phone – a landline telephone in which the handset is portable and communicates with the rest of the phone by a short - range full duplex radio link, instead of being attached by a cord. both the handset and the base station have low - power radio transceivers that handle the short - range bidirectional radio link. as of 2022, cordless phones in most nations use the dect transmission standard. 
land mobile radio system – short - range mobile or portable half - duplex radio transceivers operating in the vhf or uhf band that can be used without a license. they are often installed in vehicles, with the mobile units communicating with a dispatcher at a fixed base station. special systems with reserved frequencies are used by first responder services ; police, fire, ambulance, and emergency services, and other government services. other systems are made for long, and instead of a cell base station and antenna tower, they will have many small antennas attached to utility poles and buildings. satellite phone ( satphone ) – a portable wireless telephone similar to a cell phone, connected to the telephone network through a radio link to an orbiting communications satellite instead of through cell towers. they are more expensive than cell phones ; but their advantage is that, unlike a cell phone which is limited to areas covered by cell towers, satphones can be used over most or all of the geographical area of the earth. in order for the phone to communicate with a satellite using a small omnidirectional antenna, first - generation systems use satellites in low earth orbit, about 400 – 700 miles ( 640 – 1, 100 km ) above the surface. with an orbital period of about 100 minutes, a satellite can only be in view of a phone for about 4 – 15 minutes, so the call is " handed off " to another satellite when one passes beyond the local horizon. therefore, large numbers of satellites, about 40 to 70, are required to ensure that at least one satellite is in view continuously from each point on earth. other satphone systems use satellites in geostationary orbit in which only a few satellites are needed, but these cannot be used at high latitudes because of terrestrial interference. cordless phone – a landline telephone in which the handset is portable and communicates with the rest of the phone by a short - range full duplex radio link, instead of being attached by a cord. both the handset and the base station have low - power radio transceivers that handle the short - range bidirectional radio link. as of 2022, cordless phones in most nations use the dect transmission standard. land mobile radio system – short - range mobile or portable half - duplex radio transceivers operating in the vhf or uhf band that can be used without a license. they are often installed in vehicles, with the mobile units communicating with a dispatcher at a fixed base station. special systems with reserved frequencies are used by first responder services ; police, fire, ambulance, and emergency services, and other government services. other systems are made for use by commercial firms such as taxi and delivery services. vhf systems use channels in the range 30 – 50 mhz and 150 – 172 mhz. uhf systems use the 450 – 470 mhz band and in some areas the 470 – 512 mhz range. in general, vhf systems have a longer range than uhf but require longer antennas. modeling of the x - ray spectra of the galactic superluminal jet sources grs 1915 + 105 and gro j1655 - 40 reveal a three - layered atmospheric structure in the inner region of their accretion disks. above the cold and optically thick disk of a temperature 0. 2 - 0. 5 kev, there is a warm layer with a temperature of 1. 0 - 1. 5 kev and an optical depth around 10. sometimes there is also a much hotter, optically thin corona above the warm layer, with a temperature of 100 kev or higher and an optical depth around unity. 
the structural similarity between the accretion disks and the solar atmosphere suggests that similar physical processes may be operating in these different systems. Question: How is the stratosphere different from the troposphere? A) The clouds in the stratosphere produce more rain. B) The air in the stratosphere has a greater density. C) The composition of the stratosphere lacks ozone. D) The temperature of the stratosphere warms at higher altitude.
D) The temperature of the stratosphere warms at higher altitude.
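The nitrous-oxide passage in the context above reasons numerically: per added molecule, N2O produces about 230 times the radiative forcing of CO2, but CO2 is accumulating roughly 3000 times faster, so N2O adds only about 1/13 as much forcing per year as CO2. The sketch below simply re-runs that back-of-the-envelope arithmetic with the numbers quoted in the passage; it is a consistency check of the stated ratio, not an independent climate calculation.

```python
# Approximate figures quoted in the context passage.
FORCING_RATIO_PER_MOLECULE = 230.0   # forcing per added N2O molecule vs per added CO2 molecule
CO2_RATE_PPM_PER_YEAR = 2.5          # annual increase of CO2 (ppm by mole)
N2O_RATE_PPM_PER_YEAR = 0.00085      # annual increase of N2O (ppm by mole)

rate_ratio = CO2_RATE_PPM_PER_YEAR / N2O_RATE_PPM_PER_YEAR
n2o_share_vs_co2 = FORCING_RATIO_PER_MOLECULE / rate_ratio

print(f"CO2 molecules are being added ~{rate_ratio:,.0f}x faster than N2O molecules")
print(f"N2O share of the annual forcing increase is ~{n2o_share_vs_co2:.3f} of CO2's "
      f"(roughly 1/{1 / n2o_share_vs_co2:.0f}), matching the ~1/13 figure in the text")
```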
Context: is said to have occurred. a chemical reaction is therefore a concept related to the " reaction " of a substance when it comes in close contact with another, whether as a mixture or a solution ; exposure to some form of energy, or both. it results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels β€” often laboratory glassware. chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. chemical reactions usually involve the making or breaking of chemical bonds. oxidation, reduction, dissociation, acid – base neutralization and molecular rearrangement are some examples of common chemical reactions. a chemical reaction can be symbolically depicted through a chemical equation. while in a non - nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged with the system environment, which may be designed vessels β€” often laboratory glassware. chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. chemical reactions usually involve the making or breaking of chemical bonds. oxidation, reduction, dissociation, acid – base neutralization and molecular rearrangement are some examples of common chemical reactions. a chemical reaction can be symbolically depicted through a chemical equation. while in a non - nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. 
the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβˆ’ ions forming sodium chloride, or nacl. examples of the walls of a victim ' s stomach. toxicology, a subfield of forensic chemistry, focuses on detecting and identifying drugs, poisons, and other toxic substances in biological samples. forensic toxicologists work on cases involving drug overdoses, poisoning, and substance abuse. their work is critical in determining whether harmful substances play a role in a person ’ s death or impairment. read more james marsh was the first to apply this new science to the art of forensics. he was called by the prosecution in a murder trial to give evidence as a chemist in 1832. the defendant, john bodle, was accused of poisoning his grandfather with arsenic - laced coffee. marsh performed the standard test by mixing a suspected sample with hydrogen sulfide and hydrochloric acid. while he was able to detect arsenic as yellow arsenic trisulfide, when it was shown to the jury it had deteriorated, allowing the suspect to be acquitted due to reasonable doubt. annoyed by that, marsh developed a much better test. he combined a sample containing arsenic with sulfuric acid and arsenic - free zinc, resulting in arsine gas. the gas was ignited, and it decomposed to pure metallic arsenic, which, when passed to a cold surface, would appear as a silvery - black deposit. so sensitive was the test, known formally as the marsh test, that it could detect as little as one - fiftieth of a milligram of arsenic. he first described this test in the edinburgh philosophical journal in 1836. = = = ballistics and firearms = = = ballistics is " the science of the motion of projectiles in flight ". in forensic science, analysts examine the patterns left on bullets and cartridge casings after being ejected from a weapon. 
when fired, a bullet is left with indentations and markings that are unique to the barrel and firing pin of the firearm that ejected the bullet. this examination can help scientists identify possible makes and models of weapons connected to a crime. henry goddard at scotland yard pioneered the use of bullet comparison in 1835. he noticed a flaw in the bullet that killed the victim and was able to trace this back to the mold that was used in the manufacturing process. = = = anthropometry = = = the french police officer alphonse bertillon was the first to apply the anthropological technique of anthropometry to law enforcement, thereby creating an identification system based on physical measurements. before that time, criminals could be identified only by name or photograph. dissatisfied with the ad hoc methods used to identify captured a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβˆ’ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid – base reactions are hydroxide ( ohβˆ’ ) and phosphate ( po43βˆ’ ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be classified as an acid or a base. there are several different theories which explain acid – base behavior. the simplest is arrhenius theory, which states that an acid is a substance that produces hydronium ions when it is dissolved in water, and a base is one that produces hydroxide ions when dissolved in water. wounds or dead bodies should be examined, not avoided. the book became the first form of literature to help determine the cause of death. 
in one of song ci ' s accounts ( washing away of wrongs ), the case of a person murdered with a sickle was solved by an investigator who instructed each suspect to bring his sickle to one location. ( he realized it was a sickle by testing various blades on an animal carcass and comparing the wounds. ) flies, attracted by the smell of blood, eventually gathered on a single sickle. in light of this, the owner of that sickle confessed to the murder. the book also described how to distinguish between a drowning ( water in the lungs ) and strangulation ( broken neck cartilage ), and described evidence from examining corpses to determine if a death was caused by murder, suicide or accident. methods from around the world involved saliva and examination of the mouth and tongue to determine innocence or guilt, as a precursor to the polygraph test. in ancient india, some suspects were made to fill their mouths with dried rice and spit it back out. similarly, in ancient china, those accused of a crime would have rice powder placed in their mouths. in ancient middle - eastern cultures, the accused were made to lick hot metal rods briefly. it is thought that these tests had some validity since a guilty person would produce less saliva and thus have a drier mouth ; the accused would be considered guilty if rice was sticking to their mouths in abundance or if their tongues were severely burned due to lack of shielding from saliva. = = education and training = = initial glance, forensic intelligence may appear as a nascent facet of forensic science facilitated by advancements in information technologies such as computers, databases, and data - flow management software. however, a more profound examination reveals that forensic intelligence represents a genuine and emerging inclination among forensic practitioners to actively participate in investigative and policing strategies. in doing so, it elucidates existing practices within scientific literature, advocating for a paradigm shift from the prevailing conception of forensic science as a conglomerate of disciplines merely aiding the criminal justice system. instead, it urges a perspective that views forensic science as a discipline studying the informative potential of traces β€” remnants of criminal activity. embracing this transformative shift poses a significant challenge for education, necessitating a shift in learners ' mindset to accept concepts and methodologies in forensic intelligence. recent calls advocating for the integration of forensic scientists into the criminal justice system, as well as policing and intelligence missions, undersco . oxidation, reduction, dissociation, acid – base neutralization and molecular rearrangement are some examples of common chemical reactions. a chemical reaction can be symbolically depicted through a chemical equation. while in a non - nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. 
many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( i. e. ' microscopic chemical events ' ). = = = ions and salts = = = an ion is a charged species, an atom or a molecule, that has lost or gained one or more electrons. when an atom loses an electron and thus has more protons than electrons, the atom is a positively charged ion or cation. when an atom gains an electron and thus has more electrons than protons, the atom is a negatively charged ion or anion. cations and anions can form a crystalline lattice of neutral salts, such as the na + and clβˆ’ ions forming sodium chloride, or nacl. examples of polyatomic ions that do not split up during acid – base reactions are hydroxide ( ohβˆ’ ) and phosphate ( po43βˆ’ ). plasma is composed of gaseous matter that has been completely ionized, usually through high temperature. = = = acidity and basicity = = = a substance can often be his sickle to one location. ( he realized it was a sickle by testing various blades on an animal carcass and comparing the wounds. ) flies, attracted by the smell of blood, eventually gathered on a single sickle. in light of this, the owner of that sickle confessed to the murder. the book also described how to distinguish between a drowning ( water in the lungs ) and strangulation ( broken neck cartilage ), and described evidence from examining corpses to determine if a death was caused by murder, suicide or accident. methods from around the world involved saliva and examination of the mouth and tongue to determine innocence or guilt, as a precursor to the polygraph test. in ancient india, some suspects were made to fill their mouths with dried rice and spit it back out. similarly, in ancient china, those accused of a crime would have rice powder placed in their mouths. in ancient middle - eastern cultures, the accused were made to lick hot metal rods briefly. it is thought that these tests had some validity since a guilty person would produce less saliva and thus have a drier mouth ; the accused would be considered guilty if rice was sticking to their mouths in abundance or if their tongues were severely burned due to lack of shielding from saliva. = = education and training = = initial glance, forensic intelligence may appear as a nascent facet of forensic science facilitated by advancements in information technologies such as computers, databases, and data - flow management software. however, a more profound examination reveals that forensic intelligence represents a genuine and emerging inclination among forensic practitioners to actively participate in investigative and policing strategies. 
in doing so, it elucidates existing practices within scientific literature, advocating for a paradigm shift from the prevailing conception of forensic science as a conglomerate of disciplines merely aiding the criminal justice system. instead, it urges a perspective that views forensic science as a discipline studying the informative potential of traces β€” remnants of criminal activity. embracing this transformative shift poses a significant challenge for education, necessitating a shift in learners ' mindset to accept concepts and methodologies in forensic intelligence. recent calls advocating for the integration of forensic scientists into the criminal justice system, as well as policing and intelligence missions, underscore the necessity for the establishment of educational and training initiatives in the field of forensic intelligence. this article contends that a discernible gap exists between the perceived and actual comprehension of forensic intelligence among law enforcement and forensic science managers, positing that this asymmetry can be rectified only through educational interventions. endothermic reactions, the reaction absorbs heat from the surroundings. chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. the speed of a chemical reaction ( at given temperature t ) is related to the activation energy e, by the boltzmann ' s population factor e βˆ’ e / k t { \ displaystyle e ^ { - e / kt } } – that is the probability of a molecule to have energy greater than or equal to e at the given temperature t. this exponential dependence of a reaction rate on temperature is known as the arrhenius equation. the activation energy necessary for a chemical reaction to occur can be in the form of heat, light, electricity or mechanical force in the form of ultrasound. a related concept free energy, which also incorporates entropy considerations, is a very useful means for predicting the feasibility of a reaction and determining the state of equilibrium of a chemical reaction, in chemical thermodynamics. a reaction is feasible only if the total change in the gibbs free energy is negative, Ξ΄ g ≀ 0 { \ displaystyle \ delta g \ leq 0 \, } ; if it is equal to zero the chemical reaction is said to be at equilibrium. there exist only limited possible states of energy for electrons, atoms and molecules. these are determined by the rules of quantum mechanics, which require quantization of energy of a bound system. the atoms / molecules in a higher energy state are said to be excited. the molecules / atoms of substance in an excited energy state are often much more reactive ; that is, more amenable to chemical reactions. the phase of a substance is invariably determined by its energy and the energy of its surroundings. when the intermolecular forces of a substance are such that the energy of the surroundings is not sufficient to overcome them, it occurs in a more ordered phase like liquid or solid as is the case with water ( h2o ) ; a liquid at room temperature because its molecules are bound by hydrogen bonds. whereas hydrogen sulfide ( h2s ) is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole – dipole interactions. the transfer of energy from one chemical substance to another depends on the size of energy quanta emitted from one substance. 
however, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer analyzing their radiation spectra. the term chemical energy is often used to indicate the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. = = = reaction = = = when a chemical substance is transformed as a result of its interaction with another substance or with energy, a chemical reaction is said to have occurred. a chemical reaction is therefore a concept related to the " reaction " of a substance when it comes in close contact with another, whether as a mixture or a solution ; exposure to some form of energy, or both. it results in some energy exchange between the constituents of the reaction as well as with the system environment, which may be designed vessels β€” often laboratory glassware. chemical reactions can result in the formation or dissociation of molecules, that is, molecules breaking apart to form two or more molecules or rearrangement of atoms within or across molecules. chemical reactions usually involve the making or breaking of chemical bonds. oxidation, reduction, dissociation, acid – base neutralization and molecular rearrangement are some examples of common chemical reactions. a chemical reaction can be symbolically depicted through a chemical equation. while in a non - nuclear chemical reaction the number and kind of atoms on both sides of the equation are equal, for a nuclear reaction this holds true only for the nuclear particles viz. protons and neutrons. the sequence of steps in which the reorganization of chemical bonds may be taking place in the course of a chemical reaction is called its mechanism. a chemical reaction can be envisioned to take place in a number of steps, each of which may have a different speed. many reaction intermediates with variable stability can thus be envisaged during the course of a reaction. reaction mechanisms are proposed to explain the kinetics and the relative product mix of a reaction. many physical chemists specialize in exploring and proposing the mechanisms of various chemical reactions. several empirical rules, like the woodward – hoffmann rules often come in handy while proposing a mechanism for a chemical reaction. according to the iupac gold book, a chemical reaction is " a process that results in the interconversion of chemical species. " accordingly, a chemical reaction may be an elementary reaction or a stepwise reaction. an additional caveat is made, in that this definition includes cases where the interconversion of conformers is experimentally observable. such detectable chemical reactions normally involve sets of molecular entities as indicated by this definition, but it is often conceptually convenient to use the term also for changes involving single molecular entities ( during aqueous corrosion, atoms in the solid react chemically with oxygen, leading either to the formation of an oxide film or to the dissolution of the host material. commonly, the first step in corrosion involves an oxygen atom from the dissociated water that reacts with the surface atoms and breaks near surface bonds. in contrast, hydrogen on the surface often functions as a passivating species. 
here, we discovered that the roles of o and h are reversed in the early corrosion stages on a si terminated sic surface. o forms stable species on the surface, and chemical attack occurs by h that breaks the si - c bonds. this so - called hydrogen scission reaction is enabled by a newly discovered metastable bridging hydroxyl group that can form during water dissociation. the si atom that is displaced from the surface during water attack subsequently forms h2sio3, which is a known precursor to the formation of silica and silicic acid. this study suggests that the roles of h and o in oxidation need to be reconsidered. Question: If a student gets a chemical splashed into an eye, what is the most appropriate first action to be taken? A) wipe it with a paper towel B) call 911 for emergency services C) have someone go get the school nurse D) flush the eye with water at an eyewash station
D) flush the eye with water at an eyewash station
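The chemistry context above invokes the Boltzmann population factor exp(-E/kT) to explain why reaction rates depend so strongly on temperature (the Arrhenius picture). The sketch below evaluates that factor for an illustrative activation energy at a few temperatures; the 50 kJ/mol value is a hypothetical placeholder chosen only to make the exponential sensitivity visible, not a figure taken from the source.

```python
import math

K_BOLTZMANN = 1.380649e-23   # J/K (exact SI value)
AVOGADRO = 6.02214076e23     # 1/mol

def boltzmann_factor(activation_energy_j_per_mol, temperature_k):
    """exp(-E / kT), converting a per-mole activation energy to per-molecule."""
    e_per_molecule = activation_energy_j_per_mol / AVOGADRO
    return math.exp(-e_per_molecule / (K_BOLTZMANN * temperature_k))

if __name__ == "__main__":
    Ea = 50_000.0  # hypothetical activation energy, 50 kJ/mol
    for T in (300.0, 350.0, 400.0):
        print(f"T = {T:5.1f} K  ->  exp(-Ea/kT) = {boltzmann_factor(Ea, T):.3e}")
    # A modest temperature rise increases the factor (and, per the Arrhenius
    # relation described in the context, the reaction rate) by one or more
    # orders of magnitude.
```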
Context: industrial applications. this branch of biotechnology is the most used for the industries of refining and combustion principally on the production of bio - oils with photosynthetic micro - algae. green biotechnology is biotechnology applied to agricultural processes. an example would be the selection and domestication of plants via micropropagation. another example is the designing of transgenic plants to grow under specific environments in the presence ( or absence ) of chemicals. one hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. an example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. it is commonly considered as the next phase of green revolution, which can be seen as a platform to eradicate world hunger by using technologies which enable the production of more fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of poll ##tion, and pasteurization in order to become products that can be sold. there are three levels of food processing : primary, secondary, and tertiary. primary food processing involves turning agricultural products into other products that can be turned into food, secondary food processing is the making of food from readily available ingredients, and tertiary food processing is commercial production of ready - to eat or heat - and - serve foods. drying, pickling, salting, and fermenting foods were some of the oldest food processing techniques used to preserve food by preventing yeasts, molds, and bacteria to cause spoiling. 
methods for preserving food have evolved to meet current standards of food safety but still use the same processes as the past. biochemical engineers also work to improve the nutritional value of food products, such as in golden rice, which was developed to prevent vitamin a deficiency in certain areas where this was an issue. efforts to advance preserving technologies can also ensure lasting retention of nutrients as foods are stored. packaging plays a key role in preserving as well as ensuring the safety of the food by protecting the product from contamination, physical damage, and tampering. packaging can also make it easier to transport and serve food. a common job for biochemical engineers working in the food industry is to design ways to perform all these processes on a large scale in order to meet the demands of the population. responsibilities for this career path include designing and performing experiments, optimizing processes, consulting with groups to develop new technologies, and preparing project plans for equipment and facilities. = = = pharmaceuticals = = = in the pharmaceutical industry, bioprocess engineering plays a crucial role in the large - scale production of biopharmaceuticals, such as monoclonal antibodies, vaccines, and therapeutic proteins. the development and optimization of bioreactors and fermentation systems are essential for the mass production of these products, ensuring consistent quality and high yields. for example, recombinant proteins like insulin and erythropoietin are produced through cell culture systems using genetically modified cells. the bioprocess engineer ’ s role is to optimize variables like temperature, ph, nutrient availability, and oxygen levels to maximize the efficiency of these systems. the growing field of gene therapy also relies on bioprocessing techniques to produce viral vectors, which are used to deliver therapeutic genes to patients. this involves scaling up processes from laboratory to industrial scale while maintaining safety and regulatory compliance. as the demand for biopharmaceutical products increases, advancements grain sizes are a product of the thermal processing parameters as well as the initial particle size, or possibly the sizes of aggregates or particle clusters which arise during the initial stages of processing. the ultimate microstructure ( and thus the physical properties ) of the final product will be limited by and subject to the form of the structural template or precursor which is created in the initial stages of chemical synthesis and physical forming. hence the importance of chemical powder and polymer processing as it pertains to the synthesis of industrial ceramics, glasses and glass - ceramics. there are numerous possible refinements of the sintering process. some of the most common involve pressing the green body to give the densification a head start and reduce the sintering time needed. sometimes organic binders such as polyvinyl alcohol are added to hold the green body together ; these burn out during the firing ( at 200 – 350 Β°c ). sometimes organic lubricants are added during pressing to increase densification. it is common to combine these, and add binders and lubricants to a powder, then press. ( the formulation of these organic chemical additives is an art in itself. this is particularly important in the manufacture of high performance ceramics such as those used by the billions for electronics, in capacitors, inductors, sensors, etc. 
) a slurry can be used in place of a powder, and then cast into a desired shape, dried and then sintered. indeed, traditional pottery is done with this type of method, using a plastic mixture worked with the hands. if a mixture of different materials is used together in a ceramic, the sintering temperature is sometimes above the melting point of one minor component – a liquid phase sintering. this results in shorter sintering times compared to solid state sintering. such liquid phase sintering involves in faster diffusion processes and may result in abnormal grain growth. = = strength of ceramics = = a material ' s strength is dependent on its microstructure. the engineering processes to which a material is subjected can alter its microstructure. the variety of strengthening mechanisms that alter the strength of a material include the mechanism of grain boundary strengthening. thus, although yield strength is maximized with decreasing grain size, ultimately, very small grain sizes make the material brittle. considered in tandem with the fact that the yield strength is the parameter that predicts plastic deformation in the material, one can make informed decisions on how to increase the strength of a material depending on its microstructural high machining costs. there is a possibility for melt casting to be used for many of these approaches. potentially even more desirable is using melt - derived particles. in this method, quenching is done in a solid solution or in a fine eutectic structure, in which the particles are then processed by more typical ceramic powder processing methods into a useful body. there have also been preliminary attempts to use melt spraying as a means of forming composites by introducing the dispersed particulate, whisker, or fiber phase in conjunction with the melt spraying process. other methods besides melt infiltration to manufacture ceramic composites with long fiber reinforcement are chemical vapor infiltration and the infiltration of fiber preforms with organic precursor, which after pyrolysis yield an amorphous ceramic matrix, initially with a low density. with repeated cycles of infiltration and pyrolysis one of those types of ceramic matrix composites is produced. chemical vapor infiltration is used to manufacture carbon / carbon and silicon carbide reinforced with carbon or silicon carbide fibers. besides many process improvements, the first of two major needs for fiber composites is lower fiber costs. the second major need is fiber compositions or coatings, or composite processing, to reduce degradation that results from high - temperature composite exposure under oxidizing conditions. = = applications = = the products of technical ceramics include tiles used in the space shuttle program, gas burner nozzles, ballistic protection, nuclear fuel uranium oxide pellets, bio - medical implants, jet engine turbine blades, and missile nose cones. its products are often made from materials other than clay, chosen for their particular physical properties. these may be classified as follows : oxides : silica, alumina, zirconia non - oxides : carbides, borides, nitrides, silicides composites : particulate or whisker reinforced matrices, combinations of oxides and non - oxides ( e. g. polymers ). ceramics can be used in many technological industries. one application is the ceramic tiles on nasa ' s space shuttle, used to protect it and the future supersonic space planes from the searing heat of re - entry into the earth ' s atmosphere. 
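The strength-of-ceramics passage above says that yield strength rises as grain size falls, until very small grains make the material brittle. The grain-size dependence it describes is conventionally summarized by the Hall-Petch relation; the text does not name it, so the form below is the standard textbook statement rather than something taken from this document:

\sigma_y = \sigma_0 + k_y \, d^{-1/2}

where \sigma_y is the yield strength, \sigma_0 the lattice friction stress resisting dislocation motion, k_y a material-dependent strengthening coefficient, and d the average grain diameter. Halving d raises the grain-boundary term by a factor of about 1.4, which is why controlling grain growth during sintering matters for the strength of the finished part.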
they are also used widely in electronics and optics. in addition to the applications listed here, ceramics are also used as a coating in various engineering cases. an example would be a ceramic bearing coating over a titanium frame used for an aircraft. recently the field has come to include the studies of single fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of pollutants. brown biotechnology is related to the management of arid lands and deserts. one application is the creation of enhanced seeds that resist extreme environmental conditions of arid regions, which is related to the innovation, creation of agriculture techniques and management of resources. violet biotechnology is related to law, ethical and philosophical issues around biotechnology. microbial biotechnology has been proposed for the rapidly emerging area of biotechnology applications in space and microgravity ( space bioeconomy ) dark biotechnology is the color associated with bioterrorism or biological weapons and biowarfare which uses microorganisms, and toxins to cause diseases and death in humans, livestock and crops. = = = medicine = = = in medicine, modern biotechnology has many applications in areas such as pharmaceutical drug discoveries and production, pharmacogenomics, and genetic testing ( or genetic screening ). in 2021, nearly 40 % of the total company value of pharmaceutical biotech companies worldwide were active in oncology the broad definition of " utilizing a biotechnological system to make products ". indeed, the cultivation of plants may be viewed as the earliest biotechnological enterprise. agriculture has been theorized to have become the dominant way of producing food since the neolithic revolution. through early biotechnology, the earliest farmers selected and bred the best - suited crops ( e. 
g., those with the highest yields ) to produce enough food to support a growing population. as crops and fields became increasingly large and difficult to maintain, it was discovered that specific organisms and their by - products could effectively fertilize, restore nitrogen, and control pests. throughout the history of agriculture, farmers have inadvertently altered the genetics of their crops through introducing them to new environments and breeding them with other plants β€” one of the first forms of biotechnology. these processes also were included in early fermentation of beer. these processes were introduced in early mesopotamia, egypt, china and india, and still use the same basic biological methods. in brewing, malted grains ( containing enzymes ) convert starch from grains into sugar and then adding specific yeasts to produce beer. in this process, carbohydrates in the grains broke down into alcohols, such as ethanol. later, other cultures produced the process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united grains will result in some form of grain size distribution, which will have a significant impact on the ultimate physical properties of the material. in particular, abnormal grain growth in which certain grains grow very large in a matrix of finer grains will significantly alter the physical and mechanical properties of the obtained ceramic. in the sintered body, grain sizes are a product of the thermal processing parameters as well as the initial particle size, or possibly the sizes of aggregates or particle clusters which arise during the initial stages of processing. the ultimate microstructure ( and thus the physical properties ) of the final product will be limited by and subject to the form of the structural template or precursor which is created in the initial stages of chemical synthesis and physical forming. hence the importance of chemical powder and polymer processing as it pertains to the synthesis of industrial ceramics, glasses and glass - ceramics. there are numerous possible refinements of the sintering process. some of the most common involve pressing the green body to give the densification a head start and reduce the sintering time needed. 
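The brewing passage above states that enzymes in malted grain convert starch into sugars and that yeast then breaks those carbohydrates down into alcohols such as ethanol. The overall stoichiometry of ethanol fermentation of glucose is the standard one below (a textbook summary, not a formula given in the text):

\mathrm{C_6H_{12}O_6 \;\rightarrow\; 2\,C_2H_5OH + 2\,CO_2}

Each glucose molecule yields two molecules of ethanol and two of carbon dioxide; the same gas release is what makes fermented dough rise, consistent with the passage's mention of leavened bread.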
sometimes organic binders such as polyvinyl alcohol are added to hold the green body together ; these burn out during the firing ( at 200 – 350 Β°c ). sometimes organic lubricants are added during pressing to increase densification. it is common to combine these, and add binders and lubricants to a powder, then press. ( the formulation of these organic chemical additives is an art in itself. this is particularly important in the manufacture of high performance ceramics such as those used by the billions for electronics, in capacitors, inductors, sensors, etc. ) a slurry can be used in place of a powder, and then cast into a desired shape, dried and then sintered. indeed, traditional pottery is done with this type of method, using a plastic mixture worked with the hands. if a mixture of different materials is used together in a ceramic, the sintering temperature is sometimes above the melting point of one minor component – a liquid phase sintering. this results in shorter sintering times compared to solid state sintering. such liquid phase sintering involves in faster diffusion processes and may result in abnormal grain growth. = = strength of ceramics = = a material ' s strength is dependent on its microstructure. the engineering processes to which a material is subjected can alter its microstructure. the variety of strengthening mechanisms that alter the strength of a material include the mechanism of grain boundary strengthening. thus, although yield . an example of this would be bt corn. whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate. it is commonly considered as the next phase of green revolution, which can be seen as a platform to eradicate world hunger by using technologies which enable the production of more fertile and resistant, towards biotic and abiotic stress, plants and ensures application of environmentally friendly fertilizers and the use of biopesticides, it is mainly focused on the development of agriculture. on the other hand, some of the uses of green biotechnology involve microorganisms to clean and reduce waste. red biotechnology is the use of biotechnology in the medical and pharmaceutical industries, and health preservation. this branch involves the production of vaccines and antibiotics, regenerative therapies, creation of artificial organs and new diagnostics of diseases. as well as the development of hormones, stem cells, antibodies, sirna and diagnostic tests. white biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. an example is the designing of an organism to produce a useful chemical. another example is the using of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous / polluting chemicals. white biotechnology tends to consume less in resources than traditional processes used to produce industrial goods. yellow biotechnology refers to the use of biotechnology in food production ( food industry ), for example in making wine ( winemaking ), cheese ( cheesemaking ), and beer ( brewing ) by fermentation. it has also been used to refer to biotechnology applied to insects. this includes biotechnology - based approaches for the control of harmful insects, the characterisation and utilisation of active ingredients or genes of insects for research, or application in agriculture and medicine and various other approaches. 
gray biotechnology is dedicated to environmental applications, and focused on the maintenance of biodiversity and the remotion of pollutants. brown biotechnology is related to the management of arid lands and deserts. one application is the creation of enhanced seeds that resist extreme environmental conditions of arid regions, which is related to the innovation, creation of agriculture techniques and management of resources. violet biotechnology is related to law, ethical and philosophical issues around biotechnology. microbial biotechnology has been proposed for the rapidly emerging area of biotechnology applications in space and microgravity ( space bioeconomy ) dark biotechnology is the color associated with bioterrorism or biological weapons and biowarfare which uses microorganisms, and toxins to cause diseases and death in humans, livestock and electric motors, servo - mechanisms, and other electrical systems in conjunction with special software. a common example of a mechatronics system is a cd - rom drive. mechanical systems open and close the drive, spin the cd and move the laser, while an optical system reads the data on the cd and converts it to bits. integrated software controls the process and communicates the contents of the cd to the computer. robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. these robots may be of any shape and size, but all are preprogrammed and interact physically with the world. to create a robot, an engineer typically employs kinematics ( to determine the robot ' s range of motion ) and mechanics ( to determine the stresses within the robot ). robots are used extensively in industrial automation engineering. they allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform them economically, and to ensure better quality. many companies employ assembly lines of robots, especially in automotive industries and some factories are so robotized that they can run by themselves. outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. robots are also sold for various residential applications, from recreation to domestic applications. = = = structural analysis = = = structural analysis is the branch of mechanical engineering ( and also civil engineering ) devoted to examining why and how objects fail and to fix the objects and their performance. structural failures occur in two general modes : static failure, and fatigue failure. static structural failure occurs when, upon being loaded ( having a force applied ) the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. fatigue failure occurs because of imperfections in the object : a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle ( propagation ) until the crack is large enough to cause ultimate failure. failure is not simply defined as when a part breaks, however ; it is defined as when a part does not operate as intended. some systems, such as the perforated top sections of some plastic bags, are designed to break. if these systems do not break, failure analysis might be employed to determine the cause. 
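The structural-analysis passage above describes fatigue failure as a microscopic surface crack that grows slightly with each loading cycle until it is large enough to cause ultimate failure. A common way to model that growth, not named in the text, is the Paris law da/dN = C (delta_K)^m; the sketch below integrates it numerically to estimate a fatigue life. The material constants, geometry factor, and stress range are illustrative assumptions, not values from the document.

import math

# Paris-law fatigue crack growth: da/dN = C * (delta_K)^m
C = 1.0e-11          # crack growth coefficient, m/cycle per (MPa*sqrt(m))^m (assumed)
m = 3.0              # Paris exponent (assumed)
delta_sigma = 100.0  # cyclic stress range, MPa (assumed)
Y = 1.12             # geometry factor for a shallow surface crack (assumed)

a = 0.5e-3           # initial crack length, m (the "microscopic crack")
a_crit = 10.0e-3     # crack length treated as failure (assumed)
block = 1000         # integrate in blocks of 1000 cycles
cycles = 0

while a < a_crit:
    delta_K = Y * delta_sigma * math.sqrt(math.pi * a)  # stress intensity range
    a += C * delta_K ** m * block                       # growth over one block of cycles
    cycles += block

print(f"estimated fatigue life: about {cycles:,} cycles")

The point of the sketch is only that the growth rate rises as the crack lengthens, which is why a part can run for a long time and then fail quickly once the crack is established.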
structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure process of lactic acid fermentation, which produced other preserved foods, such as soy sauce. fermentation was also used in this time period to produce leavened bread. although the process of fermentation was not fully understood until louis pasteur ' s work in 1857, it is still the first use of biotechnology to convert a food source into another form. before the time of charles darwin ' s work and life, animal and plant scientists had already used selective breeding. darwin added to that body of work with his scientific observations about the ability of science to change species. these accounts contributed to darwin ' s theory of natural selection. for thousands of years, humans have used selective breeding to improve the production of crops and livestock to use them for food. in selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. for example, this technique was used with corn to produce the largest and sweetest crops. in the early twentieth century scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. in 1917, chaim weizmann first used a pure microbiological culture in an industrial process, that of manufacturing corn starch using clostridium acetobutylicum, to produce acetone, which the united kingdom desperately needed to manufacture explosives during world war i. biotechnology has also led to the development of antibiotics. in 1928, alexander fleming discovered the mold penicillium. his work led to the purification of the antibiotic formed by the mold by howard florey, ernst boris chain and norman heatley – to form what we today know as penicillin. in 1940, penicillin became available for medicinal use to treat bacterial infections in humans. the field of modern biotechnology is generally thought of as having been born in 1971 when paul berg ' s ( stanford ) experiments in gene splicing had early success. herbert w. boyer ( univ. calif. at san francisco ) and stanley n. cohen ( stanford ) significantly advanced the new technology in 1972 by transferring genetic material into a bacterium, such that the imported material would be reproduced. the commercial viability of a biotechnology industry was significantly expanded on june 16, 1980, when the united states supreme court ruled that a genetically modified microorganism could be patented in the case of diamond v. chakrabarty. indian - born ananda chakrabarty, working for general electric, had modified a bacterium ( of the genus pseudomonas ) capable of breaking down crude oil, which he proposed to Question: Which of the following best describes an advantage of using a mass production manufacturing system instead of a custom manufacturing system? A) Customers can provide specific feedback to workers. B) Workers become skilled in all aspects of assembly. C) Goods can be easily modified for customers. D) Products can be made at a lower cost.
D) Products can be made at a lower cost.
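The pharmaceuticals passage in the context above notes that the bioprocess engineer's job is to tune variables such as temperature, pH, nutrient availability, and oxygen so that a cell-culture system gives consistent, high yields. As a minimal illustration of the kind of model used when reasoning about such a system, the sketch below simulates batch growth with Monod kinetics; the kinetic parameters and concentrations are invented for illustration and are not taken from the text.

# batch cell growth with Monod kinetics (all parameter values are illustrative)
mu_max = 0.4   # maximum specific growth rate, 1/h (assumed)
K_s = 0.5      # half-saturation constant for the limiting nutrient, g/L (assumed)
Y_xs = 0.5     # grams of biomass formed per gram of nutrient consumed (assumed)

X = 0.1        # biomass concentration, g/L
S = 20.0       # limiting nutrient concentration, g/L
dt = 0.1       # time step, h
t = 0.0

while S > 0.01 and t < 72.0:
    mu = mu_max * S / (K_s + S)     # growth slows as the nutrient is depleted
    dX = mu * X * dt                # biomass made in this time step
    X += dX
    S = max(S - dX / Y_xs, 0.0)     # nutrient consumed to make that biomass
    t += dt

print(f"after {t:.1f} h: biomass {X:.2f} g/L, nutrient {S:.2f} g/L")

In practice the same kind of balance equations are extended with terms for oxygen transfer, temperature, and pH before they are used to choose operating set points.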
Context: weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding seawards, so that in the po river in italy, for instance, pebbles and gravel are found for about 140 miles below turin, sand along the next 100 miles, and silt and mud in the last 110 miles ( 176 km ). = = channelization = = the removal of obstructions, natural or artificial, is one object of channelization. with a large fall the current presents a great impediment to up - stream navigation, there are generally variations in water level, and when the discharge becomes small in the dry season it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it.
the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river high temperature superconducting ( hts ) tape can be cut and stacked to generate large magnetic fields at cryogenic temperatures after inducing persistent currents in the superconducting layers. a field of 17. 7 t was trapped between two stacks of hts tape at 8 k with no external mechanical reinforcement. 17. 6 t could be sustained when warming the stack up to 14 k. a new type of hybrid stack was used consisting of a 12 mm square insert stack embedded inside a larger 34. 4 mm diameter stack made from different tape. the magnetic field generated is the largest for any trapped field magnet reported and 30 % greater than previously achieved in a stack of hts tapes. such stacks are being considered for superconducting motors as rotor field poles where the cryogenic penalty is justified by the increased power to weight ratio. the sample reported can be considered the strongest permanent magnet ever created. becomes quite gentle. accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. 
in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. 
another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding sea water content and the internal evolution of terrestrial planets and icy bodies are closely linked. the distribution of water in planetary systems is controlled by the temperature structure in the protoplanetary disk and dynamics and migration of planetesimals and planetary embryos. this results in the formation of planetesimals and planetary embryos with a great variety of compositions, water contents and degrees of oxidation. the internal evolution and especially the formation time of planetesimals relative to the timescale of radiogenic heating by short - lived 26al decay may govern the amount of hydrous silicates and leftover rock - ice mixtures available in the late stages of their evolution. in turn, water content may affect the early internal evolution of the planetesimals and in particular metal - silicate separation processes. moreover, water content may contribute to an increase of oxygen fugacity and thus affect the concentrations of siderophile elements within the silicate reservoirs of solar system objects. finally, the water content strongly influences the differentiation rate of the icy moons, controls their internal evolution and governs the alteration processes occurring in their deep interiors. the recent report on laser cooling of liquid may contradict the law of energy conservation. current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. 
a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current. this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. the models should be capable of furnishing valuable indications of the respective effects and comparative merits of the different schemes proposed for works. = = see also = = bridge scour flood control = = references = = = = external links = = u. s. army corps of engineers – civil works program river morphology and stream restoration references for inland navigation in the lower portion of their course, as, for instance, the rhine, the danube and the mississippi. river engineering works are only required to prevent changes in the course of the stream, to regulate its depth, and especially to fix the low - water channel and concentrate the flow in it, so as to increase as far as practicable the navigable depth at the lowest stage of the water level. engineering works to increase the navigability of rivers can only be advantageously undertaken in large rivers with a moderate fall and a fair discharge at their lowest stage, for with a large fall the current presents a great impediment to up - stream navigation, and there are generally variations in water level, and when the discharge becomes small in the dry season. it is impossible to maintain a sufficient depth of water in the low - water channel. the possibility to secure uniformity of depth in a river by lowering the shoals obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. 
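The passage above explains the standard remedy for a shallow dry-season channel: close the subsidiary channels and narrow the low-water channel with dikes so that the whole discharge is concentrated in it. The simplest way to see why this deepens the waterway is the continuity relation Q = width x depth x velocity; the figures below are illustrative assumptions, not measurements from the text, and the mean velocity is held fixed only to keep the arithmetic transparent (in a real river the concentrated flow also speeds up and scours the bed).

# continuity-of-flow illustration for narrowing a low-water channel
Q = 300.0    # dry-season discharge, cubic metres per second (assumed)
v = 0.8      # mean velocity, metres per second, held fixed for simplicity (assumed)

for width in (200.0, 100.0, 50.0):   # progressively narrowed channel width, metres
    depth = Q / (width * v)          # from Q = width * depth * velocity
    print(f"width {width:5.0f} m  ->  mean depth {depth:4.2f} m")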
the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is 2h nmr spin - lattice relaxation and line - shape analyses are performed to study the temperature - dependent dynamics of water in the hydration shells of myoglobin, elastin, and collagen. Question: Which adaptation allows a walrus to stay warm in cold water? A) reddish coat B) bristly mustache C) wrinkled skin D) thick layer of blubber
D) thick layer of blubber
Context: has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well – not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = = the atom is the basic unit of chemistry. it consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. the nucleus is made up of positively charged protons and uncharged neutrons ( together called nucleons ), while the electron cloud consists of negatively charged electrons which orbit the nucleus. in a neutral atom, the negatively charged electrons balance out the positive charge of the protons. the nucleus is dense ; the mass of a nucleon is approximately 1, 836 times that of an electron, yet the radius of an atom is about 10, 000 times that of its nucleus. the atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state ( s ), coordination number, and preferred types of bonds to form ( e. g., metallic, ionic, covalent ). = = = = element = = = = a chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol z. the mass number is the sum of the number of protons and neutrons in a nucleus. although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number ; atoms of an element which have different mass numbers are known as isotopes. for example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. the standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. the periodic table is arranged in groups, or columns, and periods, or rows. the periodic table is useful in identifying periodic trends. = = = = compound = = = = a compound is a pure chemical substance composed of more than one element. the properties of a compound bear little similarity to those of its elements. the standard nomenclature of compounds is set by the international union of pure and applied chemistry ( iupac ). organic compounds are named g. spectroscopy and chromatography. scientists engaged in chemical research are known as chemists. most chemists specialize in one or more sub - disciplines. several concepts are essential for the study of chemistry ; some of them are : = = = matter = = = in chemistry, matter is defined as anything that has rest mass and volume ( it takes up space ) and is made up of particles. the particles that make up matter have rest mass as well – not all particles have rest mass, such as the photon. matter can be a pure chemical substance or a mixture of substances. = = = = atom = = = = the atom is the basic unit of chemistry. it consists of a dense core called the atomic nucleus surrounded by a space occupied by an electron cloud. the nucleus is made up of positively charged protons and uncharged neutrons ( together called nucleons ), while the electron cloud consists of negatively charged electrons which orbit the nucleus. in a neutral atom, the negatively charged electrons balance out the positive charge of the protons. 
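The atom and element passages above supply two figures worth combining: a nucleon is roughly 1,836 times as massive as an electron, and the mass number is the count of protons plus neutrons. The sketch below uses only those statements (ignoring nuclear binding energy and the small proton-neutron mass difference) to report the neutron count and the fraction of an atom's mass that sits in the nucleus, taking the text's own carbon-12 and carbon-13 examples.

# neutron count and nuclear mass fraction from Z (protons) and A (mass number)
NUCLEON_TO_ELECTRON = 1836.0   # approximate mass ratio quoted in the text

def describe(name, Z, A):
    neutrons = A - Z                        # mass number = protons + neutrons
    nuclear_mass = A * NUCLEON_TO_ELECTRON  # nucleus, in electron masses
    total_mass = nuclear_mass + Z           # a neutral atom adds Z electrons
    fraction = nuclear_mass / total_mass
    print(f"{name}: {Z} protons, {neutrons} neutrons, "
          f"{fraction:.4%} of the mass in the nucleus")

describe("carbon-12", 6, 12)
describe("carbon-13", 6, 13)

Both isotopes come out above 99.9 percent, which is the arithmetic behind the answer to the question at the end of this context.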
the nucleus is dense ; the mass of a nucleon is approximately 1, 836 times that of an electron, yet the radius of an atom is about 10, 000 times that of its nucleus. the atom is also the smallest entity that can be envisaged to retain the chemical properties of the element, such as electronegativity, ionization potential, preferred oxidation state ( s ), coordination number, and preferred types of bonds to form ( e. g., metallic, ionic, covalent ). = = = = element = = = = a chemical element is a pure substance which is composed of a single type of atom, characterized by its particular number of protons in the nuclei of its atoms, known as the atomic number and represented by the symbol z. the mass number is the sum of the number of protons and neutrons in a nucleus. although all the nuclei of all atoms belonging to one element will have the same atomic number, they may not necessarily have the same mass number ; atoms of an element which have different mass numbers are known as isotopes. for example, all atoms with 6 protons in their nuclei are atoms of the chemical element carbon, but atoms of carbon may have mass numbers of 12 or 13. the standard presentation of the chemical elements is in the periodic table, which orders elements by atomic number. the periodic table is arranged in groups, or columns, and periods, or rows. the periodic table is useful in identifying periodic trends on earth in suitable amounts. one isotope of uranium, namely uranium - 235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium - 238. the latter accounts for more than 99 % of the weight of natural uranium. therefore, some method of isotope separation based on the weight of three neutrons must be performed to enrich ( isolate ) uranium - 235. alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. ultimately, the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. a radiological weapon has never been deployed. 
while considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. there have been over 2, 000 nuclear tests conducted since 1945. in 1963, all nuclear and many non - nuclear states signed the limited test ban treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. the treaty permitted underground nuclear testing. france continued atmospheric testing until 1974, while china continued up until 1980. the last underground test by the united states was in 1992, the soviet union ( create a critical mass ) for detonation. it also is quite difficult to ensure that such a chain reaction consumes a significant fraction of the fuel before the device flies apart. the procurement of a nuclear fuel is also more difficult than it might seem, since sufficiently unstable substances for this process do not currently occur naturally on earth in suitable amounts. one isotope of uranium, namely uranium - 235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium - 238. the latter accounts for more than 99 % of the weight of natural uranium. therefore, some method of isotope separation based on the weight of three neutrons must be performed to enrich ( isolate ) uranium - 235. alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. ultimately, the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. a radiological weapon has never been deployed. while considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. there have been over 2, 000 nuclear tests conducted since 1945. in 1963, all nuclear and many non - the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. 
the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china, on october 16, 1964. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. a radiological weapon has never been deployed. while considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. there have been over 2, 000 nuclear tests conducted since 1945. in 1963, all nuclear and many non - nuclear states signed the limited test ban treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. the treaty permitted underground nuclear testing. france continued atmospheric testing until 1974, while china continued up until 1980. the last underground test by the united states was in 1992, the soviet union in 1990, the united kingdom in 1991, and both france and china continued testing until 1996. after signing the comprehensive test ban treaty in 1996 ( which had as of 2011 not entered into force ), all of these states have pledged to discontinue all nuclear testing. non - signatories india and pakistan last tested nuclear weapons in 1998. nuclear weapons are the most destructive weapons known - the archetypal weapons of mass destruction. throughout the cold war, the opposing powers had huge nuclear arsenals, sufficient to kill hundreds of millions of people. generations of people grew up under the shadow of nuclear devastation, portrayed in films. the r - process of nucleosynthesis requires a large neutron - to - seed nucleus ratio. this does not require, however, that there be an excess of neutrons over protons. if the expansion of the material is sufficiently rapid and the entropy per nucleon is sufficiently high, the nucleosynthesis enters a heavy - element synthesis regime heretofore unexplored. in this extreme regime, characterized by a persistent disequilibrium between free nucleons and the abundant alpha particles, heavy r - process nuclei can form even in matter with more protons than neutrons. this observation bears on the issue of the site of the r - process, on the variability of abundance yields from r - process events, and on constraints on neutrino physics derived from nucleosynthesis. it also clarifies the difference between nucleosynthesis in the early universe and that in less extreme stellar explosive environments. some method of isotope separation based on the weight of three neutrons must be performed to enrich ( isolate ) uranium - 235. alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable.
terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. ultimately, the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china component to a nuclear weapon. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. a radiological weapon has never been deployed. while considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. there have been over 2, 000 nuclear tests conducted since 1945. in 1963, all nuclear and many non - nuclear states signed the limited test ban treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. the treaty permitted underground nuclear testing. france continued atmospheric testing until 1974, while china continued up until 1980. the last underground test by the united states was in 1992, the soviet union in 1990, the united kingdom in 1991, and both france and china continued testing until 1996. after signing the comprehensive test ban treaty in 1996 ( which had as of 2011 not entered into force ), all of these states have pledged to discontinue all nuclear testing. non - signatories india and pakistan last . nuclear weapons are considered weapons of mass destruction, and their use and control has been a major aspect of international policy since their debut. the design of a nuclear weapon is more complicated than it might seem. such a weapon must hold one or more subcritical fissile masses stable for deployment, then induce criticality ( create a critical mass ) for detonation. it also is quite difficult to ensure that such a chain reaction consumes a significant fraction of the fuel before the device flies apart. the procurement of a nuclear fuel is also more difficult than it might seem, since sufficiently unstable substances for this process do not currently occur naturally on earth in suitable amounts. one isotope of uranium, namely uranium - 235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium - 238. the latter accounts for more than 99 % of the weight of natural uranium. 
therefore, some method of isotope separation based on the weight of three neutrons must be performed to enrich ( isolate ) uranium - 235. alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. ultimately, the manhattan project manufactured nuclear weapons based on each of these elements. they detonated the first nuclear weapon in a test code - named " trinity ", near alamogordo, new mexico, on july 16, 1945. the test was conducted to ensure that the implosion method of detonation would work, which it did. a uranium bomb, little boy, was dropped on the japanese city hiroshima on august 6, 1945, followed three days later by the plutonium - based fat man on nagasaki. in the wake of unprecedented devastation and casualties from a single weapon, the japanese government soon surrendered, ending world war ii. since these bombings, no nuclear weapons have been deployed offensively. nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. just over four years later, on august 29, 1949, the soviet union detonated its first fission weapon. the united kingdom followed on october 2, 1952 ; france, on february 13, 1960 ; and china, on october 16, 1964. approximately half of the deaths from hiroshima and nagasaki died two to five years afterward from radiation exposure. a radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. the balkans and adjacent carpathian region were the location of major chalcolithic cultures including vinca, varna, karanovo, gumelnita and hamangia, which are often grouped together under the name of ' old europe '. with the carpatho - balkan region described as the ' earliest metallurgical province in eurasia ', its scale and technical quality of metal production in the 6th – 5th millennia bc totally overshadowed that of any other contemporary production centre. the earliest documented use of lead ( possibly native or smelted ) in the near east dates from the 6th millennium bc and is from the late neolithic settlements of yarim tepe and arpachiyah in iraq. the artifacts suggest that lead smelting may have predated copper smelting. metallurgy of lead has also been found in the balkans during the same period. copper smelting is documented at sites in anatolia and at the site of tal - i iblis in southeastern iran from c. 5000 bc. copper smelting is first documented in the delta region of northern egypt in c. 4000 bc, associated with the maadi culture. this represents the earliest evidence for smelting in africa. the varna necropolis, bulgaria, is a burial site located in the western industrial zone of varna, approximately 4 km from the city centre, internationally considered one of the key archaeological sites in world prehistory. the oldest gold treasure in the world, dating from 4, 600 bc to 4, 200 bc, was discovered at the site. the gold piece dating from 4, 500 bc, found in 2019 in durankulak, near varna, is another important example. other signs of early metals are found from the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not been clearly ascertained and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu.
in the near east, about 3, 500 bc, it was discovered that by combining copper and tin, a superior metal could be made, an alloy called bronze. this represented a major technological shift known as the bronze age. the extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. the process appears to have been invented by the hittites in about 1200 bc, beginning the iron age. the secret of extracting and working iron was a key factor in the success of the philistines. historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. the site of plocnik, in present - day serbia, has produced a smelted copper axe dating from 5, 500 bc, belonging to the vinca culture. the balkans and adjacent carpathian region were the location of major chalcolithic cultures including vinca, varna, karanovo, gumelnita and hamangia, which are often grouped together under the name of ' old europe '. with the carpatho - balkan region described as the ' earliest metallurgical province in eurasia ', the scale and technical quality of its metal production in the 6th – 5th millennia bc totally overshadowed that of any other contemporary production centre. the earliest documented use of lead ( possibly native or smelted ) in the near east dates from the 6th millennium bc and is from the late neolithic settlements of yarim tepe and arpachiyah in iraq. the artifacts suggest that lead smelting may have predated copper smelting. metallurgy of lead has also been found in the balkans during the same period. copper smelting is documented at sites in anatolia and at the site of tal - i iblis in southeastern iran from c. 5000 bc. copper smelting is first documented in the delta region of northern egypt in c. 4000 bc, associated with the maadi culture. this represents the earliest evidence for smelting in africa. the varna necropolis, bulgaria, is a burial site located in the western industrial zone of varna, approximately 4 km from the city centre, internationally considered one of the key archaeological sites in world prehistory. the oldest gold treasure in the world, dating from 4, 600 bc to 4, 200 bc, was discovered at the site. the gold piece dating from 4, 500 bc, found in 2019 in durankulak, near varna, is another important example. other signs of early metals are found from the third millennium bc in palmela, portugal, los millares, spain, and stonehenge, united kingdom. the precise beginnings, however, have not been clearly ascertained, and new discoveries are both continuous and ongoing. in approximately 1900 bc, ancient iron smelting sites existed in tamil nadu. Question: Ninety-nine percent of the mass of an atom is located in A) the outermost energy level. B) the first energy level. C) the electron clouds. D) the nucleus.
D) the nucleus.
Context: muslim engineers in the islamic world made wide use of hydropower, along with early uses of tidal power, wind power, fossil fuels such as petroleum, and large factory complexes ( tiraz in arabic ). a variety of industrial mills were employed in the islamic world, including fulling mills, gristmills, hullers, sawmills, ship mills, stamp mills, steel mills, and tide mills. by the 11th century, every province throughout the islamic world had these industrial mills in operation. muslim engineers also employed water turbines and gears in mills and water - raising machines, and pioneered the use of dams as a source of water power, used to provide additional power to watermills and water - raising machines. many of these technologies were transferred to medieval europe. wind - powered machines used to grind grain and pump water, the windmill and wind pump, first appeared in what are now iran, afghanistan and pakistan by the 9th century. they were used to grind grains and draw up water, and used in the gristmilling and sugarcane industries. sugar mills first appeared in the medieval islamic world. they were first driven by watermills, and then windmills from the 9th and 10th centuries in what are today afghanistan, pakistan and iran. crops such as almonds and citrus fruit were brought to europe through al - andalus, and sugar cultivation was gradually adopted across europe. arab merchants dominated trade in the indian ocean until the arrival of the portuguese in the 16th century. the muslim world adopted papermaking from china. the earliest paper mills appeared in abbasid - era baghdad during 794 – 795. the knowledge of gunpowder was also transmitted from china via predominantly islamic countries, where formulas for pure potassium nitrate were developed. the spinning wheel was invented in the islamic world by the early 11th century. it was later widely adopted in europe, where it was adapted into the spinning jenny, a key device during the industrial revolution. the crankshaft was invented by al - jazari in 1206, and is central to modern machinery such as the steam engine, internal combustion engine and automatic controls. the camshaft was also first described by al - jazari in 1206. early programmable machines were also invented in the muslim world. the first music sequencer, a programmable musical instrument, was an automated flute player invented by the banu musa brothers, described in their book of ingenious devices, in the 9th century. in 1206, al - jazari invented programmable automata / robots. he described four automaton musicians. this third part of the lecture series deals with the question : who will pay for your retirement? for western europe the answer may be " nobody ", but for algeria the demography looks more promising. the civilizations of mesopotamia relied on irrigation in the alluvial south, and catchment systems stretching for tens of kilometers in the hilly north. their palaces had sophisticated drainage systems. writing was invented in mesopotamia, using the cuneiform script. many records on clay tablets and stone inscriptions have survived. these civilizations were early adopters of bronze technologies which they used for tools, weapons and monumental statuary. by 1200 bc they could cast objects 5 m long in a single piece. several of the six classic simple machines were invented in mesopotamia. mesopotamians have been credited with the invention of the wheel. the wheel and axle mechanism first appeared with the potter ' s wheel, invented in mesopotamia ( modern iraq ) during the 5th millennium bc.
this led to the invention of the wheeled vehicle in mesopotamia during the early 4th millennium bc. depictions of wheeled wagons found on clay tablet pictographs at the eanna district of uruk are dated between 3700 and 3500 bc. the lever was used in the shadoof water - lifting device, the first crane machine, which appeared in mesopotamia circa 3000 bc, and then in ancient egyptian technology circa 2000 bc. the earliest evidence of pulleys dates back to mesopotamia in the early 2nd millennium bc. the screw, the last of the simple machines to be invented, first appeared in mesopotamia during the neo - assyrian period ( 911 – 609 bc ). the assyrian king sennacherib ( 704 – 681 bc ) claimed to have invented automatic sluices and to have been the first to use water screw pumps, of up to 30 tons weight, which were cast using two - part clay molds rather than by the ' lost wax ' process. the jerwan aqueduct ( c. 688 bc ) is made with stone arches and lined with waterproof concrete. the babylonian astronomical diaries spanned 800 years. they enabled meticulous astronomers to plot the motions of the planets and to predict eclipses. the earliest evidence of water wheels and watermills dates back to the ancient near east in the 4th century bc, specifically in the persian empire before 350 bc, in the regions of mesopotamia ( iraq ) and persia ( iran ). this pioneering use of water power constituted the first human - devised motive force not to rely on muscle power ( besides the sail ). = = = = egypt = = = = the egyptians, known for building pyramids centuries before the creation of modern tools, invented and used many simple machines, such as the ramp to aid construction processes. historians and archaeologists have found evidence that the pyramids were built using such simple machines. geomorphology studies the origin of landscapes. structural geology studies the deformation of rocks to produce mountains and lowlands. resource geology studies how energy resources can be obtained from minerals. environmental geology studies how pollution and contaminants affect soil and rock. mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. petrology is the study of rocks, including the formation and composition of rocks. petrography is a branch of petrology that studies the typology and classification of rocks. = = earth ' s interior = = plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the earth ' s crust. beneath the earth ' s crust lies the mantle which is heated by the radioactive decay of heavy elements. the mantle is not quite solid and consists of magma which is in a state of semi - perpetual convection. this convection process causes the lithospheric plates to move, albeit slowly. the resulting process is known as plate tectonics. areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform ( or conservative ) boundaries. earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction.
plate tectonics might be thought of as the process by which the earth is resurfaced. as the result of seafloor spreading, new crust and lithosphere is created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. through subduction, oceanic crust and lithosphere returns to the convecting mantle. volcanoes result primarily from the melting of subducted crust material. crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface, giving birth to volcanoes. = = atmospheric science = = atmospheric science initially developed in the late - 19th century as a means to forecast the weather through meteorology, the study of weather. atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s. the sumerians in mesopotamia used a complex system of canals and levees to divert water from the tigris and euphrates rivers for irrigation. archaeologists estimate that the wheel was invented independently and concurrently in mesopotamia ( in present - day iraq ), the northern caucasus ( maykop culture ), and central europe. time estimates range from 5, 500 to 3, 000 bce with most experts putting it closer to 4, 000 bce. the oldest artifacts with drawings depicting wheeled carts date from about 3, 500 bce. more recently, the oldest - known wooden wheel in the world as of 2024 was found in the ljubljana marsh of slovenia ; austrian experts have established that the wheel is between 5, 100 and 5, 350 years old. the invention of the wheel revolutionized trade and war. it did not take long to discover that wheeled wagons could be used to carry heavy loads. the ancient sumerians used a potter ' s wheel and may have invented it. a stone pottery wheel found in the city - state of ur dates to around 3, 429 bce, and even older fragments of wheel - thrown pottery have been found in the same area. fast ( rotary ) potters ' wheels enabled early mass production of pottery, but it was the use of the wheel as a transformer of energy ( through water wheels, windmills, and even treadmills ) that revolutionized the application of nonhuman power sources. the first two - wheeled carts were derived from travois and were first used in mesopotamia and iran in around 3, 000 bce. the oldest known constructed roadways are the stone - paved streets of the city - state of ur, dating to c. 4, 000 bce, and timber roads leading through the swamps of glastonbury, england, dating to around the same period. the first long - distance road, which came into use around 3, 500 bce, spanned 2, 400 km from the persian gulf to the mediterranean sea, but was not paved and was only partially maintained. in around 2, 000 bce, the minoans on the greek island of crete built a 50 km road leading from the palace of gortyn on the south side of the island, through the mountains, to the palace of knossos on the north side of the island. unlike the earlier road, the minoan road was completely paved. ancient minoan private homes had running water. a bathtub virtually identical to modern ones was unearthed at the palace of knossos.
several minoan private homes also had toilets, which could be flushed by pouring water down the drain. the ancient romans had many public flush toilets, which emptied into an extensive sewage system. the primary sewer in rome was the cloaca maxima ; construction began on it in the sixth century bce and it is still in use today. the ancient romans also had a complex system of aqueducts, which were used to transport water across long distances. the first roman aqueduct was built in 312 bce. the eleventh and final ancient roman aqueduct was built in 226 ce. put together, the roman aqueducts extended over 450 km, but less than 70 km of this was above ground and supported by arches. = = = pre - modern = = = innovations continued through the middle ages with the introduction of silk production ( in asia and later europe ), the horse collar, and horseshoes. simple machines ( such as the lever, the screw, and the pulley ) were combined into more complicated tools, such as the wheelbarrow, windmills, and clocks. a system of universities developed and spread scientific ideas and practices, including oxford and cambridge. the renaissance era produced many innovations, including the introduction of the movable type printing press to europe, which facilitated the communication of knowledge. technology became increasingly influenced by science. in leslie white ' s scheme of cultural evolution, societies advance by the energy they harness : in the third stage, they use the energy of plants ( agricultural revolution ). in the fourth, they learn to use the energy of natural resources : coal, oil, gas. in the fifth, they harness nuclear energy. white introduced the formula p = e * t, where p is the development index, e is a measure of energy consumed, and t is the measure of the efficiency of technical factors using the energy. in his own words, " culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased ". nikolai kardashev extrapolated his theory, creating the kardashev scale, which categorizes the energy use of advanced civilizations. lenski ' s approach focuses on information. the more information and knowledge ( especially allowing the shaping of natural environment ) a given society has, the more advanced it is. he identifies four stages of human development, based on advances in the history of communication. in the first stage, information is passed by genes. in the second, when humans gain sentience, they can learn and pass information through experience. in the third, the humans start using signs and develop logic. in the fourth, they can create symbols, develop language and writing.
advancements in communications technology translate into advancements in the economic system and political system, distribution of wealth, social inequality and other spheres of social life. he also differentiates societies based on their level of technology, communication, and economy : hunter - gatherer, simple agricultural, advanced agricultural, industrial, special ( such as fishing societies ). in economics, productivity is a measure of technological progress. productivity increases when fewer inputs ( classically labor and capital but some measures include energy and materials ) are used in the production of a unit of output. another indicator of technological progress is the development of new products and services, which is necessary to offset unemployment that would otherwise result as labor inputs are reduced. in developed countries productivity growth has been slowing since the late 1970s ; however, productivity growth was higher in some economic sectors, such as manufacturing. for example, employment in manufacturing in the united states declined from over 30 % in the 1940s to just over 10 % 70 years later. similar changes occurred in other developed countries. this stage is referred to as post - industrial. in the late 1970s sociologists and anthropologists like alvin toffler ( author of future shock ), daniel bell and john naisbitt approached the theories of post - industrial societies. in the medieval islamic world, agriculture was transformed by the adaptation of crops and techniques from and to regions outside it. advances were made in animal husbandry, irrigation, and farming, with the help of new technology such as the windmill. these changes made agriculture much more productive, supporting population growth, urbanisation, and increased stratification of society.
the use of wearable technology within the military ranges from educational purposes and training exercises to sustainability technology. the technology used for educational purposes within the military is mainly wearables that track a soldier ' s vitals. tracking a soldier ' s heart rate, blood pressure, emotional status, and so on helps the research and development team best help the soldiers. according to chemist matt coppock, he has started to enhance a soldier ' s lethality by collecting different biorecognition receptors. doing so will eliminate emerging environmental threats to the soldiers. with the emergence of virtual reality it is only natural to start creating simulations using vr. this will better prepare the user for whatever situation they are training for. in the military there are combat simulations that soldiers will train on. the reason the military will use vr to train its soldiers is that it is the most interactive / immersive experience the user can feel without being put in a real situation. recent simulations include a soldier wearing a shock belt during a combat simulation. each time they are shot the belt will release a certain amount of electricity directly to the user ' s skin. this is to simulate a gunshot wound in the most humane way possible. there are many sustainability technologies that military personnel wear in the field. one of these is a boot insert. this insert gauges how soldiers are carrying the weight of their equipment and how daily terrain factors impact their mission planning optimization. these sensors will not only help the military plan the best timeline but will help keep the soldiers at their best physical / mental health. = = fashion = = fashionable wearables are " designed garments and accessories that combines aesthetics and style with functional technology. " garments are the interface to the exterior mediated through digital technology. it allows endless possibilities for the dynamic customization of apparel. all clothes have social, psychological and physical functions. however, with the use of technology these functions can be amplified. there are some wearables that are called e - textiles. these are the combination of textiles ( fabric ) and electronic components to create wearable technology within clothing. they are also known as smart textiles and digital textiles. wearables are made from a functionality perspective or from an aesthetic perspective. when made from a functionality perspective, designers and engineers create wearables to provide convenience to the user. clothing and accessories are used as a tool to provide assistance to the user. designers and engineers are working together to incorporate technology in the manufacturing of garments in order to provide functionalities that can simplify the lives of the user.
Question: Which of the following is a common renewable resource found in deserts? A) biodiesel B) uranium C) natural gas D) solar energy
D) solar energy
Context: accordingly, in large basins, rivers in most cases begin as torrents with a variable flow, and end as gently flowing rivers with a comparatively regular discharge. the irregular flow of rivers throughout their course forms one of the main difficulties in devising works for mitigating inundations or for increasing the navigable capabilities of rivers. in tropical countries subject to periodical rains, the rivers are in flood during the rainy season and have hardly any flow during the rest of the year, while in temperate regions, where the rainfall is more evenly distributed throughout the year, evaporation causes the available rainfall to be much less in hot summer weather than in the winter months, so that the rivers fall to their low stage in the summer and are liable to be in flood in the winter. in fact, with a temperate climate, the year may be divided into a warm and a cold season, extending from may to october and from november to april in the northern hemisphere respectively ; the rivers are low and moderate floods are of rare occurrence during the warm period, and the rivers are high and subject to occasional heavy floods after a considerable rainfall during the cold period in most years. the only exceptions are rivers which have their sources amongst mountains clad with perpetual snow and are fed by glaciers ; their floods occur in the summer from the melting of snow and ice, as exemplified by the rhone above the lake of geneva, and the arve which joins it below. but even these rivers are liable to have their flow modified by the influx of tributaries subject to different conditions, so that the rhone below lyon has a more uniform discharge than most rivers, as the summer floods of the arve are counteracted to a great extent by the low stage of the saone flowing into the rhone at lyon, which has its floods in the winter when the arve, on the contrary, is low. another serious obstacle encountered in river engineering consists in the large quantity of detritus they bring down in flood - time, derived mainly from the disintegration of the surface layers of the hills and slopes in the upper parts of the valleys by glaciers, frost and rain. the power of a current to transport materials varies with its velocity, so that torrents with a rapid fall near the sources of rivers can carry down rocks, boulders and large stones, which are by degrees ground by attrition in their onward course into slate, gravel, sand and silt, simultaneously with the gradual reduction in fall, and, consequently, in the transporting force of the current. embankments and similar works may in time aggravate the injuries of the inundations they have been designed to prevent, as the escape of floods from the raised river must occur sooner or later. inadequate planning controls which have permitted development on floodplains have been blamed for the flooding of domestic properties. channelization was done under the auspices or overall direction of engineers employed by the local authority or the national government. one of the most heavily channelized areas in the united states is west tennessee, where every major stream with one exception ( the hatchie river ) has been partially or completely channelized. channelization of a stream may be undertaken for several reasons. one is to make a stream more suitable for navigation or for navigation by larger vessels with deep draughts.
another is to restrict water to a certain area of a stream ' s natural bottom lands so that the bulk of such lands can be made available for agriculture. a third reason is flood control, with the idea of giving a stream a sufficiently large and deep channel so that flooding beyond those limits will be minimal or nonexistent, at least on a routine basis. one major reason is to reduce natural erosion ; as a natural waterway curves back and forth, it usually deposits sand and gravel on the inside of the corners where the water flows slowly, and cuts sand, gravel, subsoil, and precious topsoil from the outside corners where it flows rapidly due to a change in direction. unlike sand and gravel, the topsoil that is eroded does not get deposited on the inside of the next corner of the river. it simply washes away. = = loss of wetlands = = channelization has several predictable and negative effects. one of them is loss of wetlands. wetlands are an excellent habitat for multiple forms of wildlife, and additionally serve as a " filter " for much of the world ' s surface fresh water. another is the fact that channelized streams are almost invariably straightened. for example, the channelization of florida ' s kissimmee river has been cited as a cause contributing to the loss of wetlands. this straightening causes the streams to flow more rapidly, which can, in some instances, vastly increase soil erosion. it can also increase flooding downstream from the channelized area, as larger volumes of water traveling more rapidly than normal can reach choke points over a shorter period of time than they otherwise would, with a net effect of flood control in one area coming at the expense of aggravated flooding in another. in addition, studies have shown that stream channelization results in declines of river fish populations.
a river basin is bounded by a watershed ( called a " divide " in north america ) over which rainfall flows down towards the river traversing the lowest part of the valley, whereas the rain falling on the far slope of the watershed flows away to another river draining an adjacent basin. river basins vary in extent according to the configuration of the country, ranging from the insignificant drainage areas of streams rising on high ground near the coast and flowing straight down into the sea, up to immense tracts of continents, where rivers rising on the slopes of mountain ranges far inland have to traverse vast stretches of valleys and plains before reaching the ocean. the size of the largest river basin of any country depends on the extent of the continent in which it is situated, its position in relation to the hilly regions in which rivers generally arise and the sea into which they flow, and the distance between the source and the outlet into the sea of the river draining it. the rate of flow of rivers depends mainly upon their fall, also known as the gradient or slope. when two rivers of different sizes have the same fall, the larger river has the quicker flow, as its retardation by friction against its bed and banks is less in proportion to its volume than is the case with the smaller river. the fall available in a section of a river approximately corresponds to the slope of the country it traverses ; as rivers rise close to the highest part of their basins, generally in hilly regions, their fall is rapid near their source and gradually diminishes, with occasional irregularities, until, in traversing plains along the latter part of their course, their fall usually becomes quite gentle.
accordingly, under ordinary conditions, most of the materials brought down from the high lands by torrential water courses are carried forward by the main river to the sea, or partially strewn over flat alluvial plains during floods ; the size of the materials forming the bed of the river or borne along by the stream is gradually reduced on proceeding seawards, so that in the po river in italy, for instance, pebbles and gravel are found for about 140 miles below turin, sand along the next 100 miles, and silt and mud in the last 110 miles ( 176 km ). = = channelization = = the removal of obstructions, natural or artificial, obstructing the channel depends on the nature of the shoals. a soft shoal in the bed of a river is due to deposit from a diminution in velocity of flow, produced by a reduction in fall and by a widening of the channel, or to a loss in concentration of the scour of the main current in passing over from one concave bank to the next on the opposite side. the lowering of such a shoal by dredging merely effects a temporary deepening, for it soon forms again from the causes which produced it. the removal, moreover, of the rocky obstructions at rapids, though increasing the depth and equalizing the flow at these places, produces a lowering of the river above the rapids by facilitating the efflux, which may result in the appearance of fresh shoals at the low stage of the river. where, however, narrow rocky reefs or other hard shoals stretch across the bottom of a river and present obstacles to the erosion by the current of the soft materials forming the bed of the river above and below, their removal may result in permanent improvement by enabling the river to deepen its bed by natural scour. the capability of a river to provide a waterway for navigation during the summer or throughout the dry season depends on the depth that can be secured in the channel at the lowest stage. the problem in the dry season is the small discharge and deficiency in scour during this period. a typical solution is to restrict the width of the low - water channel, concentrate all of the flow in it, and also to fix its position so that it is scoured out every year by the floods which follow the deepest part of the bed along the line of the strongest current.
this can be effected by closing subsidiary low - water channels with dikes across them, and narrowing the channel at the low stage by low - dipping cross dikes extending from the river banks down the slope and pointing slightly up - stream so as to direct the water flowing over them into a central channel. = = estuarine works = = the needs of navigation may also require that a stable, continuous, navigable channel is prolonged from the navigable river to deep water at the mouth of the estuary. the interaction of river flow and tide needs to be modeled by computer or using scale models, moulded to the configuration of the estuary under consideration and reproducing in miniature the tidal ebb and flow and fresh - water discharge over a bed of fine sand, in which various lines of training walls can be successively inserted. caissons are used in tunnelling, pipe jacking and other operations. a caisson is sunk by self - weight, concrete or water ballast placed on top, or by hydraulic jacks. the leading edge ( or cutting shoe ) of the caisson is sloped out at a sharp angle to aid sinking in a vertical manner ; it is usually made of steel. the shoe is generally wider than the caisson to reduce friction, and the leading edge may be supplied with pressurised bentonite slurry, which swells in water, stabilizing settlement by filling depressions and voids. an open caisson may fill with water during sinking. the material is excavated by a clamshell excavator bucket on a crane. the formation level subsoil may still not be suitable for excavation or bearing capacity. the water in the caisson ( due to a high water table ) balances the upthrust forces of the soft soils underneath. if dewatered, the base may " pipe " or " boil ", causing the caisson to sink. to combat this problem, piles may be driven from the surface to act as : load - bearing walls, in that they transmit loads to deeper soils. anchors, in that they resist flotation because of the friction at the interface between their surfaces and the surrounding earth into which they have been driven. h - beam sections ( typical column sections, due to resistance to bending in all axes ) may be driven at angles " raked " to rock or other firmer soils ; the h - beams are left extended above the base. a reinforced concrete plug may be placed under the water, a process known as tremie concrete placement. when the caisson is dewatered, this plug acts as a pile cap, resisting the upward forces of the subsoil. = = = monolithic = = = a monolithic caisson ( or simply a monolith ) is larger than the other types of caisson, but similar to open caissons. such caissons are often found in quay walls, where resistance to impact from ships is required. = = = pneumatic = = = shallow caissons may be open to the air, whereas pneumatic caissons ( sometimes called pressurized caissons ), which penetrate soft mud, are bottomless boxes sealed at the top and filled with compressed air to keep water and mud out at depth. an airlock allows access to the chamber. workers, called sandhogs in american english, move mud and rock debris ( called muck ) out of the chamber.
Question: Beaver dams can cause floods. This statement shows how A) animal growth is affected by environmental conditions B) animal behavior may affect the environment C) an animal's health depends on its environment D) an animal's development depends on its environment
B) animal behavior may affect the environment
Context: gregor mendel studied inherited traits such as shape in pisum sativum ( peas ). what mendel learned from studying plants has had far - reaching benefits outside of botany. similarly, " jumping genes " were discovered by barbara mcclintock while she was studying maize. nevertheless, there are some distinctive genetic differences between plants and other organisms. species boundaries in plants may be weaker than in animals, and cross species hybrids are often possible. a familiar example is peppermint, mentha × piperita, a sterile hybrid between mentha aquatica and spearmint, mentha spicata. the many cultivated varieties of wheat are the result of multiple inter - and intra - specific crosses between wild species and their hybrids. angiosperms with monoecious flowers often have self - incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach the stigma or fails to germinate and produce male gametes. this is one of several methods used by plants to promote outcrossing. in many land plants the male and female gametes are produced by separate individuals. these species are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes. charles darwin in his 1878 book the effects of cross and self - fertilization in the vegetable kingdom at the start of chapter xii noted " the first and most important of the conclusions which may be drawn from the observations given in this volume, is that generally cross - fertilisation is beneficial and self - fertilisation often injurious, at least with the plants on which i experimented. " an important adaptive benefit of outcrossing is that it allows the masking of deleterious mutations in the genome of progeny. this beneficial effect is also known as hybrid vigor or heterosis. once outcrossing is established, subsequent switching to inbreeding becomes disadvantageous since it allows expression of the previously masked deleterious recessive mutations, commonly referred to as inbreeding depression. unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different mechanisms. the formation of stem tubers in potato is one example. particularly in arctic or alpine habitats, where opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs may develop instead of flowers, replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical to the parent.
this is one of several types of apomixis that occur in plants. apomixis can also happen in a seed, producing a seed that contains an embryo genetically identical to the parent. most sexually reproducing organisms are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis. this can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal processes of cellular differentiation to produce some cell types that are polyploid ( endopolyploidy ), or during gamete formation. an allopolyploid plant may result from a hybridisation event between two different species. both autopolyploid and allopolyploid plants can often reproduce normally, but may be unable to cross - breed successfully with the parent population because there is a mismatch in chromosome numbers. these plants, which are reproductively isolated from the parent species but live within the same geographical area, may be sufficiently successful to form a new species. some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations of identical individuals. durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid. the commercial banana is an example of a sterile, seedless triploid hybrid. common dandelion is a triploid that produces viable seeds by apomictic seed. as in other eukaryotes, the inheritance of endosymbiotic organelles like mitochondria and chloroplasts in plants is non - mendelian.
chloroplasts are inherited through the male parent in gymnosperms but often through the female parent in flowering plants. = = = molecular genetics = = = a considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the thale cress, arabidopsis thaliana, a weedy species in the mustard family ( brassicaceae ). the genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of dna, forming one of the smallest genomes among flowering plants. arabidopsis was the first plant to have its genome sequenced, in 2000. the sequencing of some other relatively small genomes, of rice ( oryza sativa ) and brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally. model plants such as arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in c4 plants. the single celled green alga chlamydomonas reinhardtii, while not an embryophyte itself, contains a green - pigmented chloroplast related to that of land plants, making it useful for study. a red alga, cyanidioschyzon merolae, has also been used to study some basic chloroplast functions. spinach, peas, soybeans and a moss, physcomitrella patens, are commonly used to study plant cell biology. agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus - inducing ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. schell and van montagu ( 1977 ) hypothesised that the ti plasmid could be a natural vector for introducing the nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. today, genetic modification of the ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops. = = = epigenetics = = = epigenetics is the study of heritable changes in gene function that cannot be explained by changes in the underlying dna sequence but cause the organism ' s genes to behave ( or " express themselves " ) differently. one example of epigenetic change is the marking of the genes by dna methylation, which determines whether they will be expressed or not. gene expression can also be controlled by repressor proteins that attach to silencer regions of the dna and prevent that region of the dna code from being expressed. epigenetic marks may be added or removed from the dna during programmed stages of development of the plant, and are responsible, for example, for the differences between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code. epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell ' s life. some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. epigenetic changes in eukaryotic biology serve to regulate the process of cellular differentiation. during morphogenesis, totipotent stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells. methyl jasmonate, first isolated from the oil of jasminum grandiflorum, regulates wound responses in plants by unblocking the expression of genes required in the systemic acquired resistance response to pathogen attack. in addition to being the primary energy source for plants, light functions as a signalling device, providing information to the plant, such as how much sunlight the plant receives each day. this can result in adaptive changes in a process known as photomorphogenesis. phytochromes are the photoreceptors in a plant that are sensitive to light. = = plant anatomy and morphology = = plant anatomy is the study of the structure of plant cells and tissues, whereas plant morphology is the study of their external form. all plants are multicellular eukaryotes, their dna stored in nuclei. the characteristic features of plant cells that distinguish them from those of animals and fungi include a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in animal cells, and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts. other plastids contain storage products such as starch ( amyloplasts ) or lipids ( elaioplasts ). uniquely, streptophyte cells and those of the green algal order trentepohliales divide by construction of a phragmoplast as a template for building a cell plate late in cell division. the bodies of vascular plants, including clubmosses, ferns and seed plants ( gymnosperms and angiosperms ), generally have aerial and subterranean subsystems. the shoots consist of stems bearing green photosynthesising leaves and reproductive structures. the underground vascularised roots bear root hairs at their tips and generally lack chlorophyll. non - vascular plants, the liverworts, hornworts and mosses, do not produce ground - penetrating vascular roots and most of the plant participates in photosynthesis. the sporophyte generation is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses and hornworts. the root system and the shoot system are interdependent – the usually nonphotosynthetic root system depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from the root system. cells in each system are capable of creating cells of the other and producing adventitious shoots or roots.
Question: A scientist crosses a red-flowered plant with a white-flowered plant, and all offspring have red flowers. What will most likely result if these red-flowered offspring are crossed with white-flowered plants? A) All of the offspring will have red flowers. B) All of the offspring will have white flowers. C) The offspring will have either red or white flowers. D) The offspring will have neither red nor white flowers.
C) The offspring will have either red or white flowers.
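For readers who want to check the Mendelian reasoning behind this answer, the minimal Python sketch below works through the two Punnett squares. It is not part of the original passage; the allele symbols R (red, dominant) and r (white, recessive) and the assumption of complete dominance at a single locus are illustrative choices, not stated in the source.

```python
from itertools import product

def cross(parent1, parent2):
    """Return the four Punnett-square offspring genotypes of two diploid parents.

    Each parent is given as a two-character string of alleles, e.g. "Rr".
    """
    # Uppercase sorts before lowercase in ASCII, so "rR" is normalised to "Rr".
    return ["".join(sorted(pair)) for pair in product(parent1, parent2)]

def phenotype(genotype):
    # Assumption: red (R) is completely dominant over white (r).
    return "red" if "R" in genotype else "white"

# P generation: true-breeding red (RR) x white (rr) -> all Rr, all red,
# matching "all offspring have red flowers" in the question.
f1 = cross("RR", "rr")
print(f1, [phenotype(g) for g in f1])   # ['Rr', 'Rr', 'Rr', 'Rr'] -> all red

# F1 red (Rr) x white (rr): half Rr (red) and half rr (white) are expected,
# i.e. answer C, "the offspring will have either red or white flowers".
f2 = cross("Rr", "rr")
print(f2, [phenotype(g) for g in f2])   # ['Rr', 'Rr', 'rr', 'rr'] -> red, red, white, white
```

Because each offspring inherits one allele from each parent, the backcross gives an expected 1 : 1 ratio of red to white flowers, which is why option C is the most likely outcome.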