text: stringlengths, 263 to 344k
id: stringlengths, 47 to 47
dump: stringclasses, 23 values
url: stringlengths, 16 to 862
file_path: stringlengths, 125 to 155
language: stringclasses, 1 value
language_score: float64, 0.65 to 1
token_count: int64, 57 to 81.9k
score: float64, 2.52 to 4.78
int_score: int64, 3 to 5
After an Australian vessel, Ocean Shield, again detected deep-sea signals consistent with those from an airplane's black box, the official leading a multinational search expressed hope Wednesday that crews will begin to find wreckage of a missing Malaysian airliner "within a matter of days." "I believe we're searching in the right area," retired Air Chief Marshal Angus Houston said. All commercial transport aircraft are fitted with underwater locator beacons to assist in the relocation of black box flight data recorders and cockpit voice recorders. These beacons are free-running pingers that transmit signals at an acoustic frequency of 37.5 kilohertz and have an expected battery life of 30 days. The scale of the challenge in locating the black boxes is immense, as these depth comparisons show:
- 22 feet – the draft of the Australian offshore support vessel Ocean Shield, now searching for the black box. The ship is 347 feet long.
- 200 feet – the wingspan of a Boeing 777-200.
- 555 feet – the depth of an inverted Washington Monument, the tallest structure in the District of Columbia.
- 1,250 feet – the depth of an inverted Empire State Building, which was the tallest building in the world from 1931 to 1973.
- 1,600 feet – the test depth of the American Seawolf-class submarine.
- 2,600 feet – the maximum known depth at which giant squid swim.
- 2,717 feet – the depth of an inverted Burj Khalifa, the world's tallest building, located in Dubai, United Arab Emirates.
- 3,280 feet – the maximum known depth of a sperm whale dive. Sperm whales are thought to be capable of remaining submerged for 90 minutes.
- 4,600 feet – the depth to which the towed pinger locator was lowered when the Ocean Shield's crew was able to detect the signal for more than two hours Sunday, according to Cmdr. William Marks of the U.S. 7th Fleet.
- 6,000 feet – the depth an underwater pinger locator would have to reach to hear a beacon on the bottom of the ocean, depending on environmental conditions, according to Hydro International magazine.
- 9,816 feet – the maximum known depth reached by the deepest-diving mammal, the Cuvier's beaked whale.
- 12,500 feet – the depth of the wreck of the Titanic, which sank after striking an iceberg on its maiden voyage to New York in April 1912. It took 73 years to locate the wreck.
- 13,100 feet – the depth at which the flight data recorders from Air France Flight 447 were found. The flight crashed in the Atlantic Ocean en route from Rio de Janeiro to Paris in 2009; the black boxes from the Airbus A330-203 took two years to locate.
- 14,763 feet – the maximum dive depth of Alvin, the first deep-sea submersible capable of carrying passengers.
- 15,000 feet – just shy of three miles down. This is around the depth at which the signal was detected, and the maximum known depth of the ocean floor below the Ocean Shield.
The search for Flight 370 entered its fourth week Sunday with a growing group of planes and ships assisting in the search. One of aviation's greatest mysteries began when Flight 370 took off into clear skies March 8 and seemed to disappear into thin air. A black box emits high-frequency signals that can create a complex pattern of sound waves under the ocean's surface.
SOURCE: Australian Maritime Safety Authority, Hydro International magazine, National Oceanic and Atmospheric Administration Fisheries, BBC.co.uk, and Plosone.org. GRAPHIC: Richard Johnson and Ben Chartoff - The Washington Post.
<urn:uuid:ba92b462-8919-4324-9197-7edaebcbe8fb>
CC-MAIN-2017-26
http://apps.washingtonpost.com/g/page/world/the-depth-of-the-problem/931/?hpid=z1
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320263.78/warc/CC-MAIN-20170624133941-20170624153941-00546.warc.gz
en
0.919321
786
2.859375
3
Muckaway (or muck away) is a generic industry term for the process by which surplus material generated when land is developed for building or civil engineering projects is removed from the site to a facility regulated by the Environment Agency. The Duty of Care Regulations 1988 are the main legislation ensuring that surplus waste materials are correctly dealt with, providing a robust audit trail that reassures clients that the waste materials are handled properly. In short, the producer of the waste is required to engage the services of a licensed waste carrier, who in turn must deliver the material to a permitted or licensed facility approved and regulated by the Environment Agency. In general terms the surplus materials are categorised as:
- Inert – e.g. clean uncontaminated soils or clean building rubble/concrete
- Non-hazardous – e.g. wood, metals, other man-made materials, vegetation
- Hazardous – e.g. materials contaminated by oil, asbestos or fuel, or containing Japanese knotweed
Aggregates are either primary or secondary materials used in developments to provide compliant foundation or sub-base materials needed to construct buildings or roadways. Primary aggregates are typically quarried materials being used for the first time. Secondary aggregates are suitably compliant foundation or sub-base materials sourced by recycling previously used materials, remanufactured in accordance with industry standards and protocols.
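As a tiny illustration of the waste categorisation described above, the sketch below maps example materials to the three classes. The material names and the mapping are illustrative assumptions only; in practice, classification depends on sampling, testing and Environment Agency guidance, not a lookup table.

```python
# Illustrative only: real classification requires testing and regulatory guidance.
WASTE_CLASSES = {
    "inert": ["clean soil", "clean building rubble", "concrete"],
    "non-hazardous": ["wood", "metal", "man-made materials", "vegetation"],
    "hazardous": ["oil-contaminated soil", "asbestos",
                  "fuel-contaminated soil", "japanese knotweed"],
}

def classify(material: str) -> str:
    """Return the waste class for a listed material, or 'unknown' otherwise."""
    for waste_class, examples in WASTE_CLASSES.items():
        if material.lower() in examples:
            return waste_class
    return "unknown"

print(classify("Asbestos"))      # hazardous
print(classify("clean soil"))    # inert
print(classify("plasterboard"))  # unknown -> needs specialist assessment
```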
<urn:uuid:f748adb2-c0ab-49e1-b04d-ad8e91b54cf6>
CC-MAIN-2020-24
https://www.digway.co.uk/faqs/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347400101.39/warc/CC-MAIN-20200528201823-20200528231823-00497.warc.gz
en
0.926422
290
2.921875
3
Creating the Tribblix ramdisk
When you're running Tribblix off the live iso image, most of what you're using is actually just one file - the initial ramdisk loaded into memory. Putting together the ramdisk was one of the trickier areas of getting Tribblix working. It took a while to work out exactly what needed to be in there.
As part of the build, a minimalist OS is installed into a build area. The simplest approach is to put all of that into the ramdisk. That works, but can be pretty large - for a base build, you're looking at a 512M ramdisk. While this is fine for many modern systems, it's a significant constraint when installing into VirtualBox (because you can only assign a relatively small fraction of your available memory to the entire virtual instance). Besides, being efficient is a target for Tribblix. So what happens is that /usr, which is the largest part, and can get very large indeed, is handled separately. What ends up in the ramdisk is everything else, with /usr mounted later. However, there's a catch. There's a tiny amount of /usr that needs to be in the ramdisk to get /usr mounted. Part of this is intrinsic to the special mechanism that's used to mount /usr, and it took some experimentation to work out exactly what files are required.
Other than /usr, the ramdisk contains everything that would be installed. The installation routine simply copies the running OS to disk (and then optionally adds further packages). So there's no fiddling around with what's on the ramdisk. (In OpenSolaris and OpenIndiana, some of the files are parked off in solarismisc.zlib and linked to. I don't need to do that, so solarismisc.zlib doesn't exist in Tribblix.) And because the contents of the installed system are taken straight off the ramdisk, the ramdisk contains both 32 and 64-bit files. Creating separate 32 and 64-bit ramdisks might make each ramdisk smaller, but would take up more space overall (because there is duplication) and would make the install much more complex. Thus, when grub boots, it uses $ISADIR to choose the right kernel, but the boot archive is fixed.
So how is the ramdisk built? It's actually very simple (a scripted sketch of these steps follows below):
- Use mkfile to create a file of the correct size, such as 192m
- Use lofiadm to create a device containing the file
- Use newfs to create a ufs file system on the device. Because we know exactly what it's for, we can tune the free space reserve to zero and set the number of inodes
- Mount that somewhere temporarily
- Copy everything in the temporary install location to it, except /usr
- Copy the handful of files from /usr into place
- Drop an SMF repository into place. (I copy one from a booted system that's correctly imported.)
- There are a few files and directories needed by the live boot that have to be created
- Unmount the file system and remove the lofi device, then gzip the file.
- Then copy the compressed file to where you've told grub to look for the boot archive (/platform/i86pc/boot_archive)
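For concreteness, here is a minimal scripted sketch of the steps listed above, written in Python purely as glue around the same Solaris commands (mkfile, lofiadm, newfs, cpio, gzip). All paths and the 192 MB size are hypothetical placeholders, and the /usr, SMF and live-boot details are only indicated by comments; it is an illustration of the sequence, not the actual Tribblix build script.

```python
import subprocess

def run(cmd):
    """Run a shell command, echoing it first and failing loudly on error."""
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# Hypothetical placeholder paths; adjust for a real build.
RAMDISK_IMG = "/tmp/boot_archive"   # file that becomes the ramdisk
MNT = "/mnt/ramdisk"                # temporary mount point
PROTO = "/export/tribblix-proto"    # minimalist OS build area

# 1. Create a fixed-size file and attach it to a lofi device.
run(f"mkfile 192m {RAMDISK_IMG}")
device = subprocess.run(f"lofiadm -a {RAMDISK_IMG}", shell=True, check=True,
                        capture_output=True, text=True).stdout.strip()
raw_device = device.replace("/dev/lofi/", "/dev/rlofi/")

# 2. Put a UFS file system on it, with the free-space reserve tuned to zero
#    (the inode density could also be tuned here with newfs -i).
run(f"echo y | newfs -m 0 {raw_device}")

# 3. Mount it and copy the proto area across, leaving out /usr.
run(f"mkdir -p {MNT}")
run(f"mount -F ufs {device} {MNT}")
run(f"cd {PROTO} && find . -name usr -prune -o -print | cpio -pdum {MNT}")
# ...here you would also copy the handful of /usr files needed to mount /usr,
# drop in the SMF repository, and create the extra live-boot files...

# 4. Tear down and compress; the result is the grub boot archive.
run(f"umount {MNT}")
run(f"lofiadm -d {device}")
run(f"gzip {RAMDISK_IMG}")
```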
<urn:uuid:3ee948b6-2969-4f86-ac32-e003693ce590>
CC-MAIN-2013-20
http://ptribble.blogspot.com/2012/11/creating-tribblix-ramdisk.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699056351/warc/CC-MAIN-20130516101056-00097-ip-10-60-113-184.ec2.internal.warc.gz
en
0.940889
688
2.5625
3
How to Draw a Blue Throated Macaw
1. Draw a circle & an oval.
2. Draw outlines for the body & tail.
3. Draw outline for head & beak.
4. Draw eye & improve beak.
5. Draw upper wing.
6. Enhance the eyes, beak & head.
7. Enhance the wing.
8. Enhance the tail.
9. Make necessary improvements to finish.
<urn:uuid:0d57e772-992d-4025-9ca9-f4eed4213e0c>
CC-MAIN-2020-29
https://www.drawingtutorials101.com/how-to-draw-a-blue-throated-macaw
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655897168.4/warc/CC-MAIN-20200714145953-20200714175953-00190.warc.gz
en
0.790792
94
3.15625
3
Key to longevity? Sharing DNA info is necessary to extend human life, Google exec says
Maris, who aims to digitize DNA, stressed during a Wall Street Journal technology conference in California that our genomes "aren't really secret," urging those protective of their genetic information to loosen the reins a bit. Noting that genetic material is constantly left lying around in public, Maris addressed those who remain nervous about the digitization of DNA. "What are you worried about?" he said on Tuesday, adding that a person could easily gather information by fishing a used cup out of the trash and taking it to a lab for analysis. If such information could be shared without privacy and security fears getting in the way, Maris believes the lifespans of humans could be drastically improved. When asked about a previous comment he'd made that humans could eventually live to be 500 years old, Maris said that was a "conservative" estimate. Maris is rallying for humans' DNA information to become readily available so that scientists working at Google Ventures can accelerate their research and extend the quality and length of the human lifespan. "The reality is, the technology exists now to extend life and have people live healthier, happier lives," Maris said. "If we distributed the technology that we have already...without creating any new technology, we can double the lifespan of people on this planet...there's a lot of talk of the redistribution of wealth, but the redistribution of health is more interesting to me," he added. Maris also stressed that the technology should be available worldwide – not just made available to wealthy people. "If we live in a world where the technologies we're talking about are for rich white people in Silicon Valley, then we've failed," he said. The 40-year-old also stressed the need for health innovation over other areas such as transportation, saying that he values the ability to decode DNA over flying cars. Maris, who studied neuroscience, is passionate about investing in life sciences, a sector which makes up about 30 percent of Google Ventures' portfolio.
<urn:uuid:9c2c1f6e-2791-4ee6-bd25-8fe1fc5fff78>
CC-MAIN-2017-26
https://www.rt.com/usa/319313-dna-sharing-google-ventures/
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323682.21/warc/CC-MAIN-20170628134734-20170628154734-00422.warc.gz
en
0.956947
434
2.5625
3
Fox skulls. Museum quality replicas are cast from original specimens in durable polyurethane resin. We have Bat-Eared Fox, Fennec Fox, Gray Fox, Island Fox, Kit Fox, South American Fox, Cape Fox, and Sand (Pale) Fox skulls. Most foxes live 2 to 3 years, but they can survive for up to 10 years or even longer in captivity. Foxes are generally smaller than other members of the family Canidae such as wolves, jackals, and domestic dogs. Fox-like features typically include an acute muzzle (a "fox face") and bushy tail. Other physical characteristics vary according to their habitat. For example, the Fennec Fox (and other species of foxes adapted to life in the desert, such as the Kit Fox) has large ears and short fur, whereas the Arctic Fox has small ears and thick, insulating fur.
<urn:uuid:9ba20bcc-9b75-4aff-9742-8a14229fed88>
CC-MAIN-2017-26
http://www.dinosaurcorporation.com/fox.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323870.46/warc/CC-MAIN-20170629051817-20170629071817-00023.warc.gz
en
0.939479
189
3.203125
3
A Beginners Guide to Ice Hockey
Ice hockey is one of the fastest growing spectator sports in the UK. Each game consists of 3 x 20 minute periods, and there is a 15-minute break between the periods. Each team can have a maximum squad of 18 players, with 2 'apprentices'. Only 6 players from each team can be on the ice at any time; however, players can swap as often as they want (usually every few minutes). Players and the puck can go anywhere on and over the ice surface. The puck is deemed out of play if it goes over the glass or if the referee loses sight of it, usually behind a player's skate against the boards. When the referee stops play, the timekeeper in turn stops the clock. Play, and the clock, start again with a face-off; the position of the face-off depends on why and where play stopped. The clock counts down from 20:00 to 00:00 for each of the three periods of the game.
A goal is scored when the whole of the puck completely crosses the goal line. The goal judge, who sits behind the goal, puts on a red light to signal the goal, but the referee signals whether it is a goal, by pointing to the goal, or a washout, by spreading his arms wide. A goal can be scored if the puck is deflected off another player or a skate, but cannot be deliberately kicked in, or deflected off a referee or linesman. Physical contact is allowed between players contesting for the puck. Rough contact, which could cause injury, is covered by various rules.
There are several lines on an ice hockey rink, and these are either red or blue. The red line across the centre divides the ice into two halves. The blue lines divide the ice into thirds, or zones. The centre zone is the neutral zone; the others are the attacking zone and the defensive zone. The semi-circle around the goal is the crease; attacking players cannot score if they are in the crease deliberately.
HOCKEY'S TWO MAIN RULES: offside and icing, both of which are called by the linesmen (see below).
THE NETMINDER: The netminder's primary task is simple - keep the puck out of his own net. Offensively, he may start his team down the ice with a pass, but seldom does he leave the net he guards.
THE DEFENSEMEN: These players try to stop the incoming play at their own blue line. They try to break up passes, block shots, cover opposing forwards and clear the puck from in front of their own goal. Offensively, they get the puck to their forwards and follow the play into the attacking zone, positioning themselves just inside their opponents' blue line at the "points".
THE CENTRE: The striker on the ice, the centre leads the attack by carrying the puck on offence. On defence, he tries to disrupt a play before it gets onto his team's side of the ice.
THE WINGS: The wings team up with the centre on the attack to set up shots on goal. Defensively, they attempt to break up plays by their counterparts and upset shot attempts.
THE REFEREE: Black and white striped shirt with orange armbands. The referee supervises the game, calls the penalties, determines goals and handles face-offs at centre ice to start each period.
THE LINESMEN: Black and white striped shirts; two are used. They call offside, offside passing and icing, and handle all face-offs not occurring at centre ice. They do not call penalties but can recommend to the referee that a penalty be called.
THE GOAL JUDGE: One sits off-ice behind each goal and indicates whether a goal has been scored by turning on a red light just above their station. The referee can ask his advice on disputed goals, but the referee has final authority and can overrule the goal judge.
PENALTIES: Players who break the rules may be penalised: they are sent to the penalty box for 2 or more minutes, leaving their team short of a player. The other team then has a power play. If the team on the power play scores within the 2 minutes, the player in the penalty box comes out. Once play resumes, the game announcer will state the player's number and name, the length of the penalty, the name of the penalty and the game time when it occurred. A team plays shorthanded when one or more of its players is serving a penalty; the opposing team is then on the power play. However, no team is forced to play more than two players below full strength (6) at any time. When a third penalty is assessed to the same team, that penalty is suspended until the first penalty expires. When a penalty is called on the goalie, a teammate serves his time in the penalty box.
MINOR PENALTY (two minutes): Called for tripping, hooking, spearing, slashing, charging, roughing, holding, elbowing and boarding.
MAJOR PENALTY (five minutes): Called for fighting or when minor penalties are committed with the deliberate attempt to injure. Major penalties for slashing, spearing, high sticking, elbowing, butt ending and crosschecking carry automatic game misconducts.
MISCONDUCT (ten minutes): Called for various forms of unsportsmanlike behaviour or when a player incurs a second major penalty in a game. This is a penalty against an individual and not a team, so a substitute is permitted.
PENALTY SHOT: A free shot, opposed only by the goalie, given to a player who is illegally impeded from behind while he has possession of the puck with no opponent other than the goalie between him and the goal. The team which commits the offence is not penalised beyond the penalty shot, whether it succeeds or not.
DELAYED PENALTY: The whistle is delayed until the penalized team regains possession of the puck.
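The shorthanded rule above (never more than two players down, with a third penalty suspended until the first expires) is easiest to see as a small simulation. The sketch below is illustrative only: the class and variable names are invented, and it ignores details such as a minor penalty ending early when the power-play team scores.

```python
from collections import deque

FULL_STRENGTH = 6  # players on the ice, including the netminder

class PenaltyTracker:
    """Tracks how many players a team may ice under the two-players-down limit."""

    def __init__(self):
        self.serving = []      # expiry times of penalties currently being served
        self.queued = deque()  # penalties suspended because two are already running

    def penalize(self, now, minutes=2):
        if len(self.serving) < 2:
            self.serving.append(now + minutes * 60)
        else:
            # A third penalty is suspended until the first one expires.
            self.queued.append(minutes * 60)

    def tick(self, now):
        for expiry in [t for t in self.serving if t <= now]:
            self.serving.remove(expiry)
            if self.queued:
                self.serving.append(now + self.queued.popleft())

    def players_on_ice(self):
        return FULL_STRENGTH - len(self.serving)

# Example: three minor penalties taken in quick succession.
team = PenaltyTracker()
for t in (0, 10, 20):
    team.penalize(now=t)
team.tick(now=30)
print(team.players_on_ice())  # 4 -- never more than two players down
```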
<urn:uuid:98377c30-e8b4-4782-9636-92788f7b93b4>
CC-MAIN-2020-10
https://guildfordflames.com/rules
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146643.49/warc/CC-MAIN-20200227002351-20200227032351-00089.warc.gz
en
0.960999
1,173
3
3
Talking about the relationship between our Main class and the root/timeline of our FLA. OK then, so what is 'this', 'root', 'stage' and 'parent'? By the end of this video you will have a basic understanding of how scope works - it's only our first video on this topic.
Polymorphism enables us to treat many object types as if they were one type of object. The idea behind it is to treat objects based on what they do instead of what they are. In the process we will discover the glue that makes it possible - we will create an interface, integrate it into our logic and showcase how it works.
This tutorial will show you how to copy-paste motion from one object to another in Adobe Flash.
Video tutorial showing how to use Photoshop and Flash CS6 to simulate movement (a flying airplane) by creating a moving background (clouds in the sky).
Learn how to take advantage of join/split to replace content within a string. This tutorial is based on a real user question and will help you isolate the issue when join/split doesn't work.
It's time for us to take the complex math from high school that we thought had no point and turn it into an animation. Cos and sin are useful any time we need to track movement, such as when you create a googly eye that follows the mouse (that is done using cos/sin); in this example we will use them to create a moon for our globe.
In this tutorial you will learn how to create a gorgeous page roll image transition effect using masking in Adobe Flash.
In this tutorial, you will learn how to create an outline animation effect for an image, and then the image filling the outline, using advanced masking effects.
In this tutorial, you will learn how to create a realistic 3D earth rotating effect using a masking effect in Flash.
Learn how to create a unique striped box transition effect using advanced masking effects in Flash.
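As a small illustration of the cos/sin tracking idea mentioned in the animation tutorial above, here is the usual pattern sketched in Python rather than ActionScript: take the angle from the eye centre to the mouse with atan2, then project back onto a circle with cos and sin to place the pupil. The function name, coordinates and radius are invented for the example.

```python
import math

def pupil_position(eye_x, eye_y, mouse_x, mouse_y, radius=10.0):
    """Place the pupil on a circle around the eye centre, towards the mouse."""
    angle = math.atan2(mouse_y - eye_y, mouse_x - eye_x)
    return (eye_x + radius * math.cos(angle),
            eye_y + radius * math.sin(angle))

# Example: eye at (100, 100), mouse up and to the right.
print(pupil_position(100, 100, 160, 60))  # approximately (108.3, 94.5)
```

The same projection drives the rotating-moon example: advance the angle a little each frame instead of deriving it from the mouse.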
<urn:uuid:94d7424c-9f13-41ef-8e6a-abfeaa6e6000>
CC-MAIN-2014-35
http://www.good-tutorials.com/tutorials/flash?page=7
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535920849.16/warc/CC-MAIN-20140909042639-00045-ip-10-180-136-8.ec2.internal.warc.gz
en
0.907202
454
3.078125
3
The initiator of the creation of an Autonomous Colony was a prominent activist of the international labor movement, the Dutch engineer S. J. Rutgers. Sebald Justus Rutgers (1879-1961), a Dutch internationalist, was a member of the Left Social Democratic Party of Holland from 1909. He was a hydro-technical engineer. From 1915 to 1918 he lived in the USA, where he became close to immigrant Bolsheviks and took part in the activity of the international "League of Socialist Propaganda". With the mandate of the League, he went through Japan to Vladivostok. He met with V. I. Lenin and was named general inspector of waterways. He took part in the work of the first congress of the Comintern, was the secretary of the Anglo-American group of the Bolshevik party, and a member of the Communist Party from 1919 (his party term of service was set from 1899).
Rutgers drew up a project for the organization of a major industrial association comprising the Kuzbass and the Nadezhdinski factories in the Urals. The skeleton of the cadre for the project was to be the American union "Industrial Workers of the World" (IWW), which was built on anarcho-syndicalist principles. The "Industrial Workers of the World" (IWW) arose in the USA in 1905 as a counter-balance to the American Federation of Labor (AFL), which was conducting a policy of class collaboration. Traditional socialists as well as anarcho-syndicalist elements joined the IWW, but the latter soon came to predominate. The IWW considered "direct action" - sabotage, strikes, and the general strike - the basic method of struggle. The last was a particular article of faith for the union. The IWW considered that after the victory (with the help of the general strike) the working class would immediately move to the organization of a new free industrial society, in which the management of all economic life would take place in industrial unions. The union rejected traditional political struggle, including electoral politics.
Lenin took part in the decision to create the Autonomous colony. After meeting with the initiator, Rutgers, and with Bill Haywood and G. Calvert, he wrote a letter on 19 September 1921 to V. Kuibyshev in which he spoke about their intentions and plans, and turned his attention to the fact that "something on the order of an autonomous state trust of workers associations" was planned. In a 12 October memo to V. Molotov, accompanied by a draft decree of the Politburo on the question, Lenin expressed some doubts: "The question is difficult: Pro: if the Americans fulfill what they have promised, the value will be gigantic. Then we will not regret the 600,000 silver rubles. Contra: Will they complete it? Haywood is a semi-anarchist. He's more sentimental than businesslike. Rutgers has fallen into leftism. Calvert is the arch talker. We have no business guarantees. These are entertaining people. In an atmosphere of joblessness, they form a group of 'prospectors of adventure' which ends in a squabble. But then we lose part of the 600,000 silver rubles that we have provided them."
On 22 June 1921, the Council of Labor and Defense (STO) published a decree about the American industrial emigration, point 1 of which stated: "The development of individual industrial enterprises or groups of enterprises by means of turning them over to groups of American workers and to industrially developed peasants on a contractual basis, which guarantees them a certain degree of autonomy, is recognized as desirable."
In November 1921, a contract was concluded between the STO and the American workers organized by the group (Haywood, Rutgers, Bayer, Barker) concerning the utilization of a series of enterprises in Siberia (in the Kuzbass and Tomsk) and in the Urals (the Nadezhdinski factory).
Bill Haywood, "Big Bill" (1869-1928), a miner, was active in the workers' movement in the USA and in the international workers' movement. From 1901 he was a member of the Socialist Party, and later one of the leaders of its left wing. He was one of the founders and leaders of the IWW. He spoke out against militarism and war, and welcomed the October revolution. In order to escape political persecution, he left the USA. From 1921 he lived in Russia and actively participated in the creation of the Autonomous Industrial Colony (AIC) "Kuzbass". He worked in MOPR (the International Organization to Help the Revolutionary Fighters) and was active as a journalist.
In the course of establishing AIC "Kuzbass" from January 1922 to December 1923, 566 foreign citizens were brought into the workforce. The American cell of the Bolshevik party had 73 members. About 250 of the colonists who came to the Kuzbass were members of the IWW, or were non-party. Thus quite a few non-party persons found themselves under the influence of anarcho-syndicalism. Even among those colonists who were party members, many fell under the influence of anarcho-syndicalist principles. The communist leadership of the Kuzbass recognized that the anarcho-syndicalist ideology of the IWW strongly influenced even many who were declared Bolsheviks. The noted anarcho-syndicalist Vladimir Shatov was authorized by the STO to direct AIC "Kuzbass" in 1921-1922.
Vladimir Sergeivich Shatov (1887, Kiev - 1943) was active in the revolutionary movement from 1903. In 1907 he emigrated to the USA, where he was a member of the IWW in charge of the Russian section. In 1917 he returned to Russia and took active part in the revolutionary movement and in the civil war. He remained an anarcho-syndicalist and assumed responsible posts in the Red Army, in industry and in transport. He was repressed in 1937.
At the same time, a special committee was formed in New York for the transport of workers to Soviet Russia. Representatives of the Communist Party (Raize) and of the IWW (Cullen and Calvert) were members of the committee. The Americans who arrived in Russia met a warm reception from social organizations and the soviet people all along the road from Petrograd to Kemerovo. The colony received a great deal of local help from the very beginning of the organizational work. In Rutgers' words, they were able to make progress in the work "thanks to the sympathy of the local workers, but mainly, of the party and soviet organizations."
The majority of the colonist members of the IWW came to the USSR with a sincere yearning to realize their ideas and lives there. Anarcho-syndicalist principles were introduced into the Autonomous colony as well. For the first time, the anarcho-syndicalists established an egalitarian system of wages in the enterprises, "speaking out against motives of material interest." Some of them spoke constantly for equalization of wages not in money, but in kind. When, according to a decree of the STO and the Siberian Workers' and Peasants' Inspection (Rabkrin), piece-work was gradually introduced, it was strongly opposed by the members of the IWW. They saw in this a repudiation of the principles of social justice.
Another cause of dissatisfaction among the anarcho-syndicalists was the approach to "workers' democracy" in the colony. In the beginning the colonists tried to institute it. The advocates of "industrial democracy" in particular demanded that decision-making on all questions be turned over to the workers' assembly, and repudiated the principle of one-man management. In such ways the anarcho-syndicalists attempted to change the Autonomous colony into a self-managed anarcho-syndicalist association. Bauer, a member of the management of the AIC and head of the Kemerovo émigrés, said: "Thus we will demonstrate to your communists how it is possible to avoid 'dictatorship', since in our relations in the future colony we assume the principle of 'industrialism', subjecting ourselves, of course, to the communists and not attempting to violate the laws of your proletarian state."
In a letter to V. I. Lenin on the first results of the work of the AIC "Kuzbass" in October 1922, Rutgers turned his attention to the view that "great care is needed in the establishment of qualifying and keeping current the workers who arrived from America. In addition, it is necessary to direct special attention to the struggle with the conviction that direction of work in Russia can and will be realized by groups of workers through mass assemblies and commissions."
As a result of the enthusiasm for work among the colonists, who were supported by the assistance of the central and local authorities, there was a notable increase in the productivity of labor. The Commission of the STO, which was monitoring the activities of the colony, confirmed that the enterprises of the AIC achieved a higher productive yield on labor than did the mines of the Kuzbass Trust. In the Kemerovo mine the extent of fundamental work was expanded. Growth in coal output continued: from 9,000 tons in February, output rose to 12,000 tons in August of 1923. On 23 October Gosplan (the State Planning Agency) dedicated additional resources to the development of the Kemerovo mine and coke factory. The management of the Autonomous colony rebuilt the furnace of the factory and installed a new pump, coking equipment and a reservoir for benzol. In January of 1924 new electrification was prepared for launch, and a laboratory was built as well as mechanical shops. On the 2nd of March the coke factory was put into service. The collective for the factory was composed primarily of soviet workers.
In November of 1924 the STO approved a decision to provide the AIC "Kuzbass" with the Kolchuginski, Prokopevski and Kiselevski mines. Towards the end of 1924 the mines of the Kuznetski basin were recognized far beyond the borders of Siberia. Contracts were negotiated for the colony to supply the Ural factories, the Baltic Fleet and the Port of Archangel. In March of 1924 a contract was concluded with the Urals regarding the supply of the Kemerovo coke factory.
The anarcho-syndicalist colonists reacted negatively to the measures of the Soviet government providing Russian enterprises on concession to foreign capital. Thus one of the Americans, Schwartz, declared at a meeting of a Bolshevik cell that "to give concessions to private entrepreneurs is a harmful thing, since this means new chains of slavery for the workers, who will no longer interest themselves in the state and politics, will not support the Soviet power, but will go into the trade unions - the only place for them." A part of the colonists defended the organization of their own separate unions.
The Bolshevik party organization of the Kuzbass, on the other hand, took the line of bringing the American and Russian workers closer together, and of the entry of the foreigners into the Russian unions. However, both the difference in nationalities and the difference in ideas impeded this rapprochement between the communists and the anarcho-syndicalists. At that time a compromise resolution was reached: those members of the IWW who could not agree to join the Russian unions had the right, over the course of some time, to take part in the work of the unions without official membership in them, so that they could have some advance acquaintance with their work.
The application of anarcho-syndicalist principles in life, the autonomous status of the colony, and the alternative character of the anarcho-syndicalist idea of socialism - all this created a certain uneasiness in the state party apparatus. Thus, one of the members of the Central Committee of the Profintern (the Trade Union International) expressed the apprehension that "the organization of the colony on a free foundation might lead to a situation where those ideas of the American group, which protected and supported comrade Trotsky, compel us to send a military unit to suppress an uprising of the 'IWW', if Kemerovo is occupied by the lumpenproletarian members of the 'IWW'." Similar concerns were also expressed by one of the communist leaders of the Kuzbass, who thought that Kemerovo might turn into an anarcho-syndicalist stronghold of the Kuzbass.
The facts tell us that a normal struggle of ideas was conducted between the communists and the anarcho-syndicalists on the territory of the AIC "Kuzbass", alongside political and economic cooperation. The forces in this struggle, however, were unequal: behind the local communists (both Russian and American) stood the state and the complex administrative command structure, ready to help. The ideas of state socialism and their practical application eventually triumphed. Part of the colonists returned to their homeland for this reason; the practice of Stalin's industrialization and "state of emergency" methods could not fail to drive them out of the USSR. On the other hand, the principles and the special status of the AIC "Kuzbass" did not fit the arrangement of Party-State management. Under the new conditions of general industrialization in the country, the STO of the USSR on 22 December 1926 declared the contract with the AIC "Kuzbass" nullified. Having operated successfully for more than four years, the Autonomous colony was liquidated from above. Part of the colonists went to the USA; another part remained to work in the enterprises of the Kuzbass.
Sources:
1. E. M. Polyanskaya, The Autonomous Industrial Colony of the Kuzbass, in Works of the scientific conference on the history of the black metallurgy of the Kuzbass, Kemerovo, 1957.
2. Z. A. Krivosheeva, From the history of the formation of the "Autonomous Industrial Colony Kuzbass", 1921-1923, in From the History of Western Siberia, Issue 1, Kemerovo, 1956.
3. History of the Kuzbass, Parts 1-2, Kemerovo, 1967.
4. Theodore Dreiser, Ernita.
Notes:
2. History of the Second International, Moscow, 1966, v. 2, pp. 160-162, 299-300.
3. Lenin, Works (5th Russian edition), v. 53, pp. 203-204.
4. Lenin, ibid., v. 44, pp. 141-142.
5. With Lenin in Our Heart: Collection of Documents and Materials, Kemerovo, 1976, p. 40.
6. Ibid., p. 57.
7. History of the Kuzbass, Parts 1-2, Kemerovo, 1967, p. 348; Lenin, Works, 5th ed., v. 44, pp. 655-656.
8. Center for Documentation of Recent History of Tomsk Oblast (CDRHTO); History of the Kuzbass, Parts 1-2, p. 347; Outline of the history of the party organization of the Kuzbass, Kemerovo, 1973, p. 186.
9. CDRHTO. The complicated process of eliminating anarcho-syndicalist ideas from part of the colonists and the transition to communist positions is sketched in T. Dreiser's story "Ernita".
10. History of the Kuzbass, p. 347.
11. Ibid., p. 350.
12. Yu. A. Ivanov, Questions of the history of the development of the black metallurgy of the Kuzbass in the memories of contemporaries, Kemerovo, 1970, p. 206; E. A. Krivosheeva, From the history of the forming of the "Autonomous Industrial Colony of the Kuzbass", in From the History of Western Siberia, Issue 1, Kemerovo, 1966, p. 225.
14. With Lenin in Our Heart, p. 88.
15. History of the Kuzbass, Parts 1-2, p. 353.
17. History of the Kuzbass, Parts 1-2, p. 353; E. A. Krivosheeva, pp. 224-226.
<urn:uuid:8fc816c1-726b-4b7b-98f6-9f6c0796d699>
CC-MAIN-2014-35
http://flag.blackened.net/revolt/russia/kuzbass_colony.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535917663.12/warc/CC-MAIN-20140901014517-00292-ip-10-180-136-8.ec2.internal.warc.gz
en
0.957018
3,450
3.015625
3
Regular physical activity is crucial when it comes to maintaining -- and building -- muscle mass and bone density. Despite its many benefits, though, exercise is also associated with the development of some side effects, such as leg weakness. In most cases, exercisers can prevent or manage leg weakness by making changes to their pre- or post-exercise routine. Be sure to familiarize yourself with the causes of leg weakness after exercise to ensure optimal results in its treatment.
Low Glycogen Stores
Glycogen is a storage form of glucose that is held in the liver and muscles and used for fuel during exercise. As with other forms of body fuel, glycogen stores can become depleted during physical activity -- especially when it lasts longer than 60 minutes or is performed at a high intensity. As glycogen stores decrease, exercisers may experience weakness in their legs and other muscle groups. Eating a small snack that contains healthy carbohydrates before an exercise session, or consuming glucose-based substances during extended or intense physical activity, can be an effective way to prevent glycogen depletion.
Dehydration
Dehydration is also associated with leg weakness after exercise, reports the American College of Sports Medicine. In fact, muscles are composed of nearly 80 percent water, so it should come as no surprise that low fluid stores can lead to serious dysfunctions. As dehydration occurs, working muscles experience difficulty contracting in their usual pattern, thus resulting in leg weakness, cramps, and numbness or tingling. Drink at least 8 ounces of water before starting to exercise, and replace each pound of weight lost during activity with another 8 ounces, recommends the American Council on Exercise. In addition, consume at least half of your body weight in fluid ounces over the course of a day to meet minimal dietary requirements.
Over-Training
Over-training syndrome occurs as a result of excessive physical activity and limited recovery time, and may produce leg weakness in individuals who rely heavily on this muscle group, including cyclists and runners. According to Rice University, over-training may be caused by decreases in testosterone, increases in muscular breakdown and changes in immune system function. In most cases, rest is the best form of treatment for individuals who are suffering from over-training syndrome. Depending on the severity of the over-training syndrome, exercisers may need a few days or several weeks to regain leg strength and return to previous levels of performance.
Electrolytes
Along with fluid, electrolytes -- like sodium and potassium -- play an important role in muscular contraction. When electrolyte stores are low, then, exercisers may experience weakness in their legs and other muscle groups. While most exercisers get all of the electrolytes that they need with a balanced diet, individuals who engage in extended or intense bouts of activity may need to supplement body stores. Consider the use of electrolyte replacement drinks, such as Gatorade or Powerade, if you exercise more than 60 minutes, live in a very warm climate, or perform high-intensity activity on a regular basis.
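As a rough illustration of the fluid guidelines in the dehydration section above (8 ounces of replacement per pound lost during activity, and a daily baseline of half your body weight in fluid ounces), here is a small calculator. The function names and example numbers are invented; the figures simply restate the article's rules of thumb and are not medical advice.

```python
def daily_baseline_oz(body_weight_lb):
    """Daily fluid target: half of body weight, expressed in fluid ounces."""
    return body_weight_lb / 2.0

def post_exercise_replacement_oz(weight_before_lb, weight_after_lb):
    """Replace each pound lost during activity with 8 fluid ounces."""
    pounds_lost = max(0.0, weight_before_lb - weight_after_lb)
    return pounds_lost * 8.0

# Example: a 160 lb exerciser who finishes a run 1.5 lb lighter.
print(daily_baseline_oz(160))                    # 80.0 oz over the day
print(post_exercise_replacement_oz(160, 158.5))  # 12.0 oz after the run
```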
<urn:uuid:b93bc89a-3025-4c5f-ab82-b4dcb399811e>
CC-MAIN-2017-43
https://www.livestrong.com/article/278968-leg-weakness-after-exercise/
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823731.36/warc/CC-MAIN-20171020044747-20171020064747-00449.warc.gz
en
0.952623
599
3.1875
3
Jonathan Wilkins, marketing director of obsolete parts supplier EU Automation, discusses technologies that are revolutionising healthcare in the 21st century.
Doctor Crawford W. Long conducted the first surgical operation under anaesthetic in 1841 in Jefferson, Georgia. At the time, general anaesthetic had not yet been invented, so Dr Long used diethyl ether, a chemical most commonly used to start internal combustion engines, in its place. The surgeon pressed an ether-soaked towel to the patient's face to put him to sleep before removing a tumour from his neck. He billed the patient two dollars for the whole procedure.
<urn:uuid:25a638fe-d338-4b42-8ab8-f05c1f8b08aa>
CC-MAIN-2020-05
http://roboticsandautomationnews.com/2017/02/13/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250609478.50/warc/CC-MAIN-20200123071220-20200123100220-00496.warc.gz
en
0.913744
143
2.921875
3
First in, first out (FIFO): an inventory accounting system in which the costs of the items purchased earliest are the first to be assigned to the cost of goods sold, leaving the most recent costs in ending inventory.
- The cost flow mirrors the typical physical flow of inventory
- Results in higher net income in a period of rising prices when compared to the LIFO method
- Inventory value on the balance sheet is higher than under LIFO in a period of rising prices
- Acceptable method under both GAAP & IFRS
Accounting Flashcards includes a translate button for English, Chinese, and Spanish. Learn financial accounting using beautifully illustrated flashcards, coordinated lessons, and rich audio. Whether you are an aspiring CPA or IFRS expert, use this accountancy app to reach your goals. Even an aspiring chartered accountant or those reaching for the CPA Australia can benefit. Topics of accounting standards, equations, terms, ratios, and more are covered.
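A short worked example makes the FIFO cost flow above concrete. The sketch below uses invented purchase figures and a simple list of (quantity, unit cost) layers; it is illustrative only and is not an excerpt from the flashcard app.

```python
from collections import deque

def fifo_cogs(purchases, units_sold):
    """Return (cost_of_goods_sold, ending_inventory_value) under FIFO.

    purchases: list of (quantity, unit_cost) in the order they were bought.
    The oldest costs are charged to COGS first; the newest remain in inventory.
    """
    layers = deque(purchases)
    cogs = 0.0
    while units_sold > 0 and layers:
        qty, cost = layers.popleft()
        used = min(qty, units_sold)
        cogs += used * cost
        units_sold -= used
        if qty > used:  # a partially consumed layer goes back on the front
            layers.appendleft((qty - used, cost))
    ending_inventory = sum(q * c for q, c in layers)
    return cogs, ending_inventory

# Example: buy 100 units at $10, then 100 at $12; sell 150 units.
print(fifo_cogs([(100, 10.0), (100, 12.0)], 150))  # (1600.0, 600.0)
```

Because the older, cheaper $10 units flow into cost of goods sold first, COGS is lower and reported income higher in this rising-price example, which is exactly the FIFO-versus-LIFO point on the flashcard.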
<urn:uuid:32c34532-cbc2-423f-9b5b-fbcde54e03cd>
CC-MAIN-2017-39
http://accountingplay.com/glossary/fifo_inventory/
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687740.4/warc/CC-MAIN-20170921101029-20170921121029-00503.warc.gz
en
0.902775
181
2.578125
3
U.S. EPA applauds wildlife habitat project on Guam Release Date: 03/19/2012 Contact Information: Dean Higuchi, 808-541-2711, email@example.com Effort was started 25 years ago via ponds created for waste management (03/19/12) HONOLULU - The U.S. Environmental Protection Agency has received the management plan for Tristar Terminal Guam’s project to protect valuable habitat for the Mariana common moorhen, an endangered species of marsh bird. “EPA appreciates Tristar’s voluntary actions as good environmental stewards,” said Jared Blumenfeld, EPA’s Regional Administrator for the Pacific Southwest. “Their plan ensures that former oily waste ponds will be a vibrant habitat where the rare moorhens can thrive.” For over 25 years, EPA has been working with Guam EPA, the Guam Oil Refining Company (GORCO), and Shell Guam, Inc. to protect this man-made habitat. The habitat was inadvertently created by GORCO in 1979 when it constructed a series of open water surface ponds to treat petroleum wastes. The Mariana common moorhen were first attracted to the open water ponds when the treatment unit was closed in 1983, and over time they began using the ponds as a home. Tristar, as the current owner of the petroleum terminal, worked with EPA and the U.S. Fish & Wildlife Service to create the current voluntary management plan. Tristar will be maintaining the water level in the ponds and vegetation around the ponds to provide shelter and nesting material for the moorhens. They will also survey the moorhen population once every 3-5 years and control animal predators on-site, if necessary. The Mariana common moorhen (Gallinula chloropus guami), known locally as ‘pulattat,’ is one of the few endemic birds left on Guam. In 2004, it was estimated that there were approximately 90 Mariana common moorhen on Guam, 154 on Saipan, 41 on Tinian, and only two individuals on Rota. A 2007 count showed an impressive 33 moorhens at the Tristar facility alone. Follow the U.S. EPA's Pacific Southwest region on Twitter: http://twitter.com/EPAregion9 and join the LinkedIn group: http://www.linkedin.com/e/vgh/1823773/
<urn:uuid:6da9e16d-31b1-427e-89de-667edf3681c5>
CC-MAIN-2017-30
https://yosemite.epa.gov/opa/admpress.nsf/8b770facf5edf6f185257359003fb69e/b5cf000d11adc084852579c60061a604!OpenDocument&Start=1&Count=5&Expand=4
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426372.41/warc/CC-MAIN-20170726182141-20170726202141-00046.warc.gz
en
0.912183
515
2.625
3
From The Collaborative International Dictionary of English v.0.48:
Voider \Void"er\, n.
1. One who, or that which, voids, empties, vacates, or annuls. [1913 Webster]
2. A tray, or basket, formerly used to receive or convey that which is voided or cleared away from a given place; especially, one for carrying off the remains of a meal, as fragments of food; sometimes, a basket for containing household articles, as clothes, etc. [1913 Webster]
Piers Plowman laid the cloth, and Simplicity brought in the voider. --Decker. [1913 Webster]
The cloth whereon the earl dined was taken away, and the voider, wherein the plate was usually put, was set upon the cupboard's head. --Hist. of Richard Hainam. [1913 Webster]
3. A servant whose business is to void, or clear away, a table after a meal. [R.] --Decker. [1913 Webster]
4. (Her.) One of the ordinaries, much like the flanch, but less rounded and therefore smaller. [1913 Webster]
<urn:uuid:5171f56c-8155-4ea3-92ca-2c820801a05d>
CC-MAIN-2014-15
http://www.crosswordpuzzlehelp.net/old/definition/voider
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00550-ip-10-147-4-33.ec2.internal.warc.gz
en
0.893489
265
2.578125
3
Foods that Can Discolor Your Teeth Certain foods and drinks have natural dyes in them that can stain your teeth. To keep your teeth pearly white, you should learn what these products are so that you can avoid them as much as possible. A healthy diet is one of the best ways to improve your oral health. Eating less processed food and more fruits and vegetables will keep your teeth strong and healthy. Some products contain natural coloring agents, which can stain your teeth permanently. A few of the main products that can stain your teeth include berries, soy sauce, coffee, tea, soda and curry. Once the coloring agents stain your teeth, whitening them becomes more difficult. Fortunately, you can visit our expert in teeth whitening in Culver City to have your smile whitened. Consuming hot foods is another way to discolor your teeth. When you eat hot foods, it opens the pores on your teeth and erodes your tooth enamel. The same thing also happens when you eat acidic food. Eating hot or acidic food occasionally is fine, but you should never do it on a daily basis. In fact, you should always let your food cool down before taking a bite. The other main product to avoid is tobacco. Although tobacco is not a food, it can still discolor your teeth. Smoking or chewing tobacco can turn your teeth yellow quicker than food. Those who already have discolored teeth should contact our Culver City cosmetic dentist to have the problem treated. Once the discoloration becomes severe, professional whitening is the only option.
<urn:uuid:be6ef90f-f046-4363-808f-2f619422e8b5>
CC-MAIN-2017-39
http://www.culvercitydentist.com/foods-that-can-discolor-your-teeth/
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687447.54/warc/CC-MAIN-20170920194628-20170920214628-00463.warc.gz
en
0.957723
319
2.703125
3
Personalized learning is an educational approach that aims to customize learning for each student based on his or her needs, strengths, interests and skills. Under this approach, each student gets a customized learning plan. According to the ideology of personalized learning, a one-size-fits-all classroom does not suit every learner: teachers cannot effectively lead all students through exactly the same lessons. Instead, the teacher guides each student through an individual journey, and students may learn different skills in different ways and at different places. However, personalized learning plans still keep students on track to meet standards such as those required for a high school diploma.
Each student gets a personalized learning plan after an assessment of how he or she learns, what his or her skills are and what he or she already knows. Students work with their teachers to set long-term as well as short-term goals, which helps them take ownership of their learning. The program also requires teachers to make sure that learning plans meet academic standards. Such learning is not a substitute for a program like special education, as it sits within the general learning program. It helps students build self-advocacy skills and gives them a chance to explore their interests. It offers a pedagogy, curriculum and learning environment tailored to the needs of the individual student; in such a learning environment, the objectives, content, method and pace may all vary, and student progress is supported subject by subject. If you don't want to be part of a personalized learning program, availing Assignment Help Melbourne is another option for meeting academic standards.
Example of personalized learning: the Personalized Learning Program by the NSW Department of Education
A number of nations and territories guide their schools to follow personalized learning in their own distinct ways. The following is one example. Personalized Learning Pathways (PLP) is a program proposed by the NSW Department of Education and an active process in NSW. It is developed as a consultation process between teachers, parents and students for identifying, organizing and applying personal approaches to learning and engagement. A PLP can have short-term goals as well as long-term goals; the short-term goals are steps towards achieving the long-term goals. The program applies to all Aboriginal students, is tailored to the individual student, and is reviewed and updated on a regular basis. The program recommends that the schools and communities of NSW develop a PLP that suits their local needs. It is a compulsory subject at Stage 1 and applies to students under the age of 10. It helps to plan the students' future and assists them in choosing their subjects at ages 11 and 12. As per the criteria of the Education Department of NSW, a student must achieve at least a C grade to successfully complete the PLP subject. The subject provides a document that guides teachers in conducting personalized learning pathways for Aboriginal students.
It also covers planning to teach, assessment criteria, supporting materials, forms, workshops and meetings, and policies.
Planning to teach: It provides the information and materials needed to teach this subject, including the information essential for assessment and teaching, and material for the development of learning and teaching programs relevant to the subject outline. If this doesn't work, seeking 'write my essay' services from experts is another option.
Assessment of student learning: To assess the performance of students, the level of achievement is marked with a grade between A and E for Stage 1. To pass the subject, the student must score at least a C grade. The grade demonstrates the level and quality of their learning.
Supporting materials: These materials help the teachers in the development of teaching programs and resources. They include:
- Exemplars of learning and assessment plans
- Task sheets
- Annotated student work in written as well as non-written forms
Workshops and meetings: These bring teachers, students and parents together at a venue to provide the materials for personalized learning. A workshop may run for two working days, or for seven working days using video conferencing.
Policy inclusions of the NSW PLP:
- It should engage the students in the discussion of their aspirations and goals
- It should contain SMART (Specific, Measurable, Achievable, Realistic and Time-bound) goals.
- It should support the students to realize their life-long goals and aspirations.
- It should record the academic goals and aspirations, taking account of the spiritual, social, emotional and physical well-being of the students.
- It should be developed in partnership with the parents/care-takers, with the support of Aboriginal staff and special school staff.
- It should articulate the learning pathways for pursuing the student's identified goals.
- It should be easily accessible, through hard-copied plans or an online program, for the students, families and staff.
- It should be owned by the student and engage the student in the development, monitoring and review processes.
- It should include a literacy and numeracy component for all students who are lagging behind, negotiated with the student, teacher and parent/care-taker.
- It should include exceptional learning to fulfill the potential of the learners.
<urn:uuid:5a750b9a-6765-4b6b-a679-d54902491751>
CC-MAIN-2020-10
http://ewaynews.com/what-is-personalized-learning-concept-and-its-example/
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146714.29/warc/CC-MAIN-20200227125512-20200227155512-00111.warc.gz
en
0.958931
1,131
3.421875
3
James Webb Space Telescope
Organization: NASA, with significant contributions from ESA and CSA
Major contractors: Northrop Grumman
Launch date: October 2018 (planned)
Launch site: Guiana Space Centre ELA-3, Kourou, French Guiana
Launch vehicle: Ariane 5 (planned)
Mission length: 5 years (design), 10 years (goal)
Mass: 6,200 kg (13,700 lb)
Orbit period: 1 year
Location: 1.5 million km from Earth (Earth–Sun Lagrangian point L2 halo orbit)
Telescope style: Korsch (three-mirror anastigmat)
Wavelength: 0.6 µm (orange) to 28.5 µm (mid-infrared)
Diameter: 6.5 m (21 ft)
Collecting area: 25 m² (270 sq ft)
Focal length: 131.4 m (431 ft)
Instruments: NIRCam (Near IR Camera), MIRI (Mid IR Instrument), NIRISS (Near Infrared Imager and Slitless Spectrograph), FGS (Fine Guidance Sensor)
Website: NASA (United States), ESA (Europe)
The James Webb Space Telescope (JWST), previously known as the Next Generation Space Telescope (NGST), is a planned space observatory optimized for observations in the infrared, and a scientific successor to the Hubble Space Telescope and the Spitzer Space Telescope. The main technical features are a large and very cold 6.5-meter (21 ft) diameter mirror and four specialized instruments at an observing position far from Earth, orbiting the Earth–Sun L2 point. The combination of these features will give JWST unprecedented resolution and sensitivity from long-wavelength visible light to the mid-infrared, enabling its two main scientific goals: studying the birth and evolution of galaxies, and the formation of stars and planets. In planning since 1996, the project represents an international collaboration of about 17 countries led by NASA, with significant contributions from the European Space Agency and the Canadian Space Agency. It is named after James E. Webb, the second administrator of NASA, who played an integral role in the Apollo program.
JWST's capabilities will enable a broad range of investigations across many subfields of astronomy. One particular goal involves observing some of the most distant objects in the Universe, beyond the reach of current ground- and space-based instruments. This includes the very first stars, the epoch of reionization, and the formation of the first galaxies. Another goal is understanding the formation of stars and planets. This will include imaging molecular clouds and star-forming clusters, studying the debris disks around stars, direct imaging of planets, and spectroscopic examination of planetary transits.
The mission has a history of major cost overruns and was under review for cancellation by the United States Congress in 2011, after about $3 billion had been spent and more than 75 percent of its hardware was either in production or undergoing testing. In November 2011, Congress reversed plans to cancel the JWST and instead capped additional funding to complete the project at $8 billion.
Contents: 1 Overview - 2 Development - 3 Mission - 4 Orbit - 5 Optics and instruments - 6 Construction and engineering - 7 Program status - 8 Images - 9 Partnership - 10 Notes - 11 References - 12 Further reading - 13 External links
JWST originated in 1996 as the Next Generation Space Telescope (NGST) with an estimated cost of $500 million, based on generic planning for a successor to Hubble at least as early as 1993. It was renamed in 2002 after NASA's second administrator (1961-1968) James E. Webb (1906-1992), noted for playing a key role in the Apollo program and establishing scientific research as a core NASA activity.
The telescope is a project of the National Aeronautics and Space Administration, the United States space agency, with international collaboration from the European Space Agency and the Canadian Space Agency, including contributions from fifteen nations. The prime contractor is Northrop Grumman. Europe's contributions were formalized in 2007 with an ESA-NASA Memorandum of Understanding that includes the Ariane-5 ECA launcher, NIRSpec instrument, MIRI Optical Bench Assembly, and manpower support for operations. The JWST will orbit around the Earth-Sun L2 Lagrange point, approximately 1,500,000 kilometres (930,000 mi) beyond the Earth. Objects near this point can orbit the Sun in synchrony with the Earth, which allows the JWST to use one radiation shield, positioned between the telescope and the Sun, to protect it from the Sun's heat and light and the small amount of additional infrared from the Earth. The telescope will be in a very large 800,000-kilometre (500,000 mi) radius halo orbit around L2, and so will avoid any part of Earth's shadow.[Note 1] From the JWST's position, the Earth will be very close to the Sun's position but not eclipse it, while the Moon will show a tiny crescent phase during its maximum angular distance from the Sun. In contrast to other proposed observatories, most of which have already been canceled or put on hold, including Terrestrial Planet Finder (2011), Space Interferometry Mission (2010), Laser Interferometer Space Antenna (2011), and the International X-ray Observatory (2011), the JWST telescope is the last big NASA astrophysics mission of its generation to see the light of day. With the cancellation of Project Constellation (2010) and the retirement of the Space Shuttle (2011), it is one of NASA's few remaining big space projects. The telescope's delays and cost increases can be compared to the Hubble Space Telescope. When it formally started in 1972, what came to be known as Hubble had a then estimated development cost of $300 million (or about $1 billion in 2006 constant dollars), but by the time it was sent into orbit in 1990, cost about four times that. In addition, new instruments and servicing missions increased the cost to at least $9 billion by 2006. A 2006 article in the journal Nature noted a study in 1984 by the Space Science Board, which estimated that a next generation infrared observatory would cost $4 billion (about $7 billion in 2006 dollars). Other major telescope concepts that are either canceled, studied, or not approaching launch include MAXIM (Microarcsecond X-ray Imaging Mission), SAFIR (Single Aperture Far-Infrared Observatory), SUVO (Space Ultraviolet-Visible Observatory), SPECS (Submillimeter Probe of the Evolution of Cosmic Structure), and the aforementioned canceled TPF, SIM, LISA, and IXO. JWST is the maturation of the Next Generation Space Telescope (NGST) plans. Some previously floated concepts include an 8-meter (26 ft) aperture, orbit of 3 astronomical units (1 AU is roughly the mean Earth–Sun distance) and NEXUS precursor telescope mission. A focus on the near to mid-infrared was preferred for three main reasons: high-redshift objects have their visible emissions shifted into the infrared, cold objects such as debris disks and planets emit most strongly in the infrared, and this band is very hard to study from the ground, or by existing space telescopes such as Hubble. 
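The redshift argument can be made concrete with the standard relation between emitted and observed wavelengths, λ_observed = (1 + z) × λ_emitted. The rest wavelengths and redshifts in the short sketch below are arbitrary illustrations chosen for this example, not values taken from the article.

```python
def observed_wavelength_um(rest_um, z):
    """Observed wavelength (in micrometers) of light emitted at rest_um by a source at redshift z."""
    return rest_um * (1 + z)

# Lyman-alpha (0.1216 um, ultraviolet) from a z = 10 galaxy lands in the near-infrared:
print(observed_wavelength_um(0.1216, 10))  # ~1.34 um
# Rest-frame green light (0.5 um) from a z = 6 quasar is shifted to the mid-infrared edge:
print(observed_wavelength_um(0.5, 6))      # 3.5 um
# Both fall inside JWST's 0.6-28.5 um band, illustrating why an infrared telescope
# is needed to study the earliest, most redshifted objects.
```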
JWST has a planned mass about half that of Hubble, but its primary mirror (a 6.5 meter diameter gold-coated beryllium reflector) has a collecting area about five times larger (25 m2 vs. 4.5 m2). It uses about 3 grams of gold per mirror segment (18 segments × 3 grams ≈ 54 g in total). The JWST is oriented towards near-infrared astronomy, but can also see orange and red visible light as well as the mid-infrared region, depending on the instrument. Early development work for a Hubble successor between 1989 and 1994 led to the Hi-Z telescope concept, a fully baffled[Note 2] 4-meter aperture infrared telescope going out to 3 AU in its orbit. The distant orbit helped reduce light noise from zodiacal dust. In the "faster, better, cheaper" era in the mid-1990s, NASA leaders pushed for a low-cost space telescope with an 8-meter primary mirror. The result was a plan for an NGST with an 8-meter aperture, located at L2, at a cost of $500 million. By 2002, as the concept matured into more of a technical reality, the aperture was reduced to 6 meters and the cost was estimated at around $2.5 billion. Concepts for the design were fielded from:
- Goddard Space Flight Center
- Ball Aerospace
- Lockheed Martin (Phase A winner)
- TRW (Phase A winner and selected as prime contractor).
In 2002, TRW was bought by Northrop Grumman. JWST is the formal successor to the Hubble Space Telescope (HST), but since its primary emphasis is on infrared observation, it is equally fair to consider it a successor to the Spitzer Space Telescope. In fact, JWST will far surpass both those telescopes, being able to see many more and much older stars and galaxies. Observing in the infrared is a key technique for achieving this, because infrared light better penetrates obscuring dust and gas, allows observation of dim, cooler objects, and captures the light of distant sources shifted into the infrared by cosmological redshift.
Dust penetration: A pair of HST images of the Carina Nebula illustrates the point: one was photographed in the visible spectrum, the other in the infrared using the HST's WFC3 upgrade. Many more stars can be counted almost anywhere in the infrared image than in the corresponding location of the visible-light image. This demonstrates the power of infrared observations to penetrate the obscuring gas and dust that block much of the scene in visible-spectrum images, so that the stars lying behind the gas and dust become easier to see. Infrared astronomy can penetrate dusty regions of space, such as molecular clouds where stars are born, the circumstellar disks that give rise to planets, and the cores of active galaxies, which are often cloaked in gas and dust.
Cool objects: Furthermore, relatively cool objects (in this context meaning temperatures less than several thousand degrees) emit their radiation primarily in the infrared, as described by Planck's law. As a result, most objects that are cooler than stars are better studied in the infrared. This includes the clouds of the interstellar medium, the "failed stars" called brown dwarfs, planets both in our own and other solar systems, and comets and Kuiper belt objects.
The distant universe: Looking beyond our own galaxy to more distant galaxy clusters, quasars, and gamma-ray bursts, the most distant objects viewable are also the "youngest," that is, they were formed during a time period closer in time to that of the Big Bang.
We see them today because their light has taken billions of years to reach us. Because the universe is expanding, as the light travels it becomes red-shifted and these objects are therefore easier to see if viewed in the infrared. JWST's infrared capabilities are expected to let it see all the way to the very first galaxies forming just a few hundred million years after the big bang. In addition, JWST's advanced spectrographic instruments and a large mirror array will enable it to further probe Mars and the outer planets. The JWST's primary scientific mission has four main components: to search for light from the first stars and galaxies that formed in the Universe after the Big Bang, to study the formation and evolution of galaxies, to understand the formation of stars and planetary systems and to study planetary systems and the origins of life. All of these jobs can be done more effectively by analyzing near-infrared light rather than light in the visible part of the spectrum. For this reason the JWST's instruments will not measure visible or ultraviolet light like the Hubble Telescope, but will have a much greater capacity to collect infrared light. In its present design, the JWST will detect a range of wavelengths from 0.6 (orange light) to 28 micrometers (deep infrared radiation at about 100 K (−173 °C; −280 °F)). Due to a combination of redshift, dust obscuration, and the low temperatures of many of the sources to be studied, the JWST must be able to measure infrared light with a very high degree of precision. To ensure that infrared emissions coming from the telescope or its instruments do not interfere with these observations, the entire observatory must operate at a very low temperature. Moreover, it must be well shielded from radiation coming from the Sun, the Earth and the Moon. To accomplish this, the JWST incorporates a large metalized fan-fold sunshield, which will unfurl to block infrared radiation and allow the telescope to radiatively cool down to roughly 40 K (−233.2 °C; −387.7 °F). The telescope's location at the Sun-Earth L2 Lagrange point ensures that the Sun, Earth, and Moon all occupy roughly the same position relative to the telescope, and thus make the operation of this shield possible. The observatory is currently scheduled to be launched by an Ariane 5 from Guiana Space Centre Kourou, French Guiana into an L2 orbit with a launch mass of approximately 6.2 tonnes (6.1 long tons; 6.8 short tons). After a commissioning period of approximately six months the observatory will begin the science mission which is expected to last a minimum of five years. The potential for extension of the science mission beyond this period exists and the observatory is being designed accordingly. The orbit of the JWST will be an elliptical orbit (with a radius of 800,000 kilometers or 500,000 miles) around the semi-stable second Lagrange point, or L2. The Earth–Sun L2 point, about which the Webb telescope will orbit, is 1,500,000 kilometers (930,000 mi) from the Earth, around 4 times farther than the distance between the Earth to the Moon. At such a great distance, the Webb telescope would be more difficult to service after launch than the Hubble telescope. Nevertheless, a docking ring was added to the design in 2007 to facilitate this possibility, either by a robot or future crewed spacecraft such as the Orion MPCV. Normally, an object circling the Sun farther out than the Earth would take more than one year to complete its orbit. 
However, the balance of gravitational pull at the L2 point (in particular, the extra pull from Earth as well as the Sun) means that JWST will keep up with the Earth as it goes around the Sun. The combined gravitational forces of the Sun and the Earth can hold a spacecraft at this point, so that in theory it takes no rocket thrust to keep a spacecraft in orbit around L2. In reality, the stable point is comparable to that of a ball balanced upon a saddle shape. Along one direction any perturbation will drive the ball toward the stable point, while in the crossing direction the ball, if disturbed, will fall away from the stable point. Thus some station-keeping is required, but with little energy expended (only 2–4 m/s per year, from the total budget of 150 m/s). Because the JWST must be kept very cold to make accurate observations of distant astronomical objects, it has been designed with a large sunshield that blocks light and heat from the Sun. In order for such a shield to work properly, the Sun's rays must be constantly coming from the same direction. To achieve this outcome, JWST will be put into a relatively large "halo orbit" around L2. From the L2 point, the Earth constantly shades one third of the Sun's light as it periodically wobbles about the Earth-Moon barycenter; occasionally lunar eclipses will partially obscure more of the solar disk. However, the radius of the telescope's orbit around L2 will be so large that neither the Earth nor Moon will eclipse the Sun, allowing the shield to deal with a relatively constant sunlight environment. This was considered to be more important than attempting to utilize the Earth's shadow to block some of the sunlight, in an orbit nearer the exact L2 point. JWST's sunshield, made of polyimide film, has membranes coated with aluminum on one side and silicon on the other. The sunshield is designed to be folded twelve times so it will fit within the Ariane 5 rocket's 4.57 m × 16.19 m shroud. Once deployed at the L2 point, it will unfold to 12.2 m × 18 m. The sunshield was hand-assembled at Man Tech (NeXolve) in Huntsville, Alabama before it was delivered to Northrop Grumman in Redondo Beach, California for testing. Optics and instruments JWST's primary mirror is a 6.5-meter-diameter gold-coated beryllium reflector with a collecting area of 25 m2. This is too large for contemporary launch vehicles, so the mirror is composed of 18 hexagonal segments, which will unfold after the telescope is launched. Image plane wavefront sensing through phase retrieval will be used to position the mirror segments in the correct location using very precise micro-motors. Subsequent to this initial configuration they will only need occasional updates every few days to retain optimal focus. This is unlike terrestrial telescopes like the Keck which continually adjust their mirror segments using active optics to overcome the effects of gravitational and wind loading, and is made possible because of the lack of environmental disturbances to a telescope in space. JWST's optical design is a three-mirror anastigmat, which makes use of curved secondary and tertiary mirrors to deliver images that are free of optical aberrations over a wide field. In addition, there is a fast steering mirror, which can adjust its position many times a second to provide image stabilization. Ball Aerospace & Technologies Corp. 
is the principal optical subcontractor for the JWST program, led by prime contractor Northrop Grumman Aerospace Systems, under a contract from the NASA Goddard Space Flight Center in Greenbelt, Maryland. Eighteen primary mirror segments, the secondary, tertiary and fine steering mirrors, plus flight spares have been fabricated and polished by Ball Aerospace based on beryllium segment blanks manufactured by several companies including Axsys, Brush Wellman, and Tinsley Laboratories. As of June 2011, the first set of six fully completed mirror segments, including rigid supporting frames and cryogenic actuators, was undergoing final testing at NASA Marshall Space Flight Center, and testing of all the remaining mirrors was completed by December 2011, two months ahead of schedule. The Integrated Science Instrument Module (ISIM) contains four science instruments and a guide camera.
- Near InfraRed Camera (NIRCam) is an infrared imager which will have a spectral coverage ranging from the edge of the visible (0.6 micrometers) through the near infrared (5 micrometers). NIRCam will also serve as the observatory's wavefront sensor, which is required for wavefront sensing and control activities. NIRCam is being built by a team led by the University of Arizona, with Principal Investigator Marcia Rieke. The industrial partner is Lockheed-Martin's Advanced Technology Center located in Palo Alto, California.
- Near InfraRed Spectrograph (NIRSpec) will also perform spectroscopy over the same wavelength range. It is being built by the European Space Agency at ESTEC in Noordwijk, Netherlands. The leading development team is composed of people from Astrium, Ottobrunn and Friedrichshafen, Germany, and the Goddard Space Flight Center, with Pierre Ferruit as NIRSpec project scientist. The NIRSpec design provides three observing modes: a low-resolution mode using a prism, an R~1000 multi-object mode and an R~2700 integral field unit or long-slit spectroscopy mode. Switching between the modes is done by operating a wavelength preselection mechanism called the Filter Wheel Assembly and selecting a corresponding dispersive element (prism or grating) using the Grating Wheel Assembly mechanism. Both mechanisms are based on the successful ISOPHOT wheel mechanisms of the Infrared Space Observatory. The multi-object mode relies on a complex micro-shutter mechanism to allow for simultaneous observations of hundreds of individual objects anywhere in NIRSpec's field of view. The mechanisms and their optical elements are being designed, integrated and tested by Carl Zeiss Optronics GmbH of Oberkochen, Germany, under contract from Astrium.
- Mid-Infrared Instrument (MIRI) will measure the mid-infrared wavelength range from 5 to 27 micrometers. It contains both a mid-IR camera and an imaging spectrometer. MIRI is being developed as a collaboration between NASA and a consortium of European countries, and is led by George Rieke (University of Arizona) and Gillian Wright (UK Astronomy Technology Centre, Edinburgh, part of the Science and Technology Facilities Council (STFC)). MIRI features wheel mechanisms similar to those of NIRSpec, which are also developed and built by Carl Zeiss Optronics GmbH under contract from the Max Planck Institute for Astronomy, Heidelberg. The completed Optical Bench Assembly of MIRI was delivered to Goddard in mid-2012 for eventual integration into the ISIM.
- Fine Guidance Sensor (FGS), led by the Canadian Space Agency under project scientist John Hutchings (Herzberg Institute of Astrophysics, National Research Council of Canada), is used to stabilize the line-of-sight of the observatory during science observations. Measurements by the FGS are used both to control the overall orientation of the spacecraft and to drive the fine steering mirror for image stabilization. The Canadian Space Agency is also providing a Near Infrared Imager and Slitless Spectrograph (NIRISS) module for astronomical imaging and spectroscopy in the 0.8 to 5 micrometer wavelength range, led by principal investigator René Doyon at the University of Montreal. Because the NIRISS is physically mounted together with the FGS, they are often referred to as a single unit, but they serve entirely different purposes, with one being a scientific instrument and the other being a part of the observatory's support infrastructure.
The infrared detectors for the NIRCam, NIRSpec, FGS, and NIRISS modules are being provided by Teledyne Imaging Sensors (formerly Rockwell Scientific Company).
Construction and engineering
NASA's Goddard Space Flight Center in Greenbelt, Maryland, is leading the management of the observatory project. The project scientist for the James Webb Space Telescope is John C. Mather. Northrop Grumman Aerospace Systems serves as the primary contractor for the development and integration of the observatory. They are responsible for developing and building the spacecraft element, which includes both the spacecraft bus and the sunshield. Ball Aerospace has been subcontracted to develop and build the Optical Telescope Element (OTE). Goddard Space Flight Center is also responsible for providing the Integrated Science Instrument Module (ISIM). NASA is considering plans to add a grapple feature so future spacecraft might visit the observatory to fix gross deployment problems, such as a stuck solar panel or antenna. However, the telescope itself would not be serviceable, so astronauts would not be able to perform tasks such as swapping instruments, as with the Hubble Telescope. Final approval for such an addition was to be considered as part of the Preliminary Design Review in March 2008. Most of the data processing on the telescope is done by conventional single-board computers. The conversion of the analog science data to digital form is performed by the custom-built SIDECAR ASIC (System for Image Digitization, Enhancement, Control And Retrieval Application Specific Integrated Circuit). The SIDECAR ASIC is said to pack all the functions of a 20-pound (9.1 kg) instrument box into a package the size of a half-dollar coin while consuming only 11 milliwatts of power. Since this conversion must be done close to the detectors, on the cool side of the telescope, the low power use of this IC will be important for maintaining the low temperature required for optimal operation of the JWST.
Ground support and operations
The Space Telescope Science Institute (STScI) in Baltimore, Maryland has been selected as the Science and Operations Center (S&OC) for JWST. In this capacity, STScI will be responsible for the scientific operation of the telescope and delivery of data products to the astronomical community. Data will be transmitted from JWST to the ground via NASA's Deep Space Network, processed and calibrated at STScI, and then distributed online to astronomers worldwide.
Similar to how Hubble is operated, anyone, anywhere in the world, will be allowed to submit proposals for observations. Each year several committees of astronomers will peer review the submitted proposals to select the programs to observe in the coming year. The authors of the chosen proposals will typically have one year of private access to the new observations, after which the data will become publicly available for download by anyone from the online archive at STScI. A review of the program released in August 2011 said that the cost for the telescope and 5 years of operations will be $8.7 billion, with a planned launch in 2018. Of that price, about $800 million is for the five years of operations. The Webb will be launched from Arianespace's ELA-3 launch complex at the European Spaceport located near Kourou, French Guiana. The planned launch vehicle is an Ariane 5 ECA with the cryogenic upper stage. Notably, this review commended the JWST project for being in excellent technical shape, with most flight hardware making good progress to completion. The delay and cost overruns are attributed to an unrealistic original budget and insufficient program management. In response, NASA instituted significant management changes in the JWST project, but the need for increased funding has led to a substantial mission delay.
History of the program
Cost growth revealed in spring 2005 led to an August 2005 re-planning. The primary technical outcomes of the re-planning were significant changes in the integration and test plans, a 22-month launch delay (from 2011 to 2013), and elimination of system-level testing for observatory modes at wavelengths shorter than 1.7 micrometers. Other major features of the observatory were unchanged. Following the re-planning, the program was independently reviewed in April 2006. The review concluded that the program was technically sound, but that funding phasing at NASA needed to be changed. NASA re-phased its JWST budgets accordingly. In the 2005 re-plan, the life-cycle cost of the project was estimated at about US$4.5 billion. This comprised approximately US$3.5 billion for design, development, launch and commissioning, and approximately US$1.0 billion for ten years of operations. ESA is contributing about €300 million, including the launch, and the Canadian Space Agency about $39M Canadian. As of May 2007, costs were still on target. In January 2007, nine of the ten technology development items in the program successfully passed a non-advocate review. These technologies were deemed sufficiently mature to retire significant risks in the program. The remaining technology development item (the MIRI cryocooler) completed its technology maturation milestone in April 2007. This technology review represented the beginning step in the process that ultimately moved the program into its detailed design phase (Phase C). In March 2008, the project successfully completed its Preliminary Design Review (PDR). In April 2008, the project passed the Non-Advocate Review. Other passed reviews include the Integrated Science Instrument Module review in March 2009, the Optical Telescope Element review completed in October 2009, and the Sunshield review completed in January 2010. In April 2010, the telescope passed the technical portion of its Mission Critical Design Review (MCDR). Passing the MCDR signified that the integrated observatory will meet all science and engineering requirements for its mission. The MCDR encompassed all previous design reviews.
The project schedule underwent review during the months following the MCDR, in a process called the Independent Comprehensive Review Panel, which led to a re-plan of the mission aiming for a launch in 2015, but possibly as late as 2018. By 2010, cost over-runs were impacting other programs, though JWST itself remained on schedule. By 2011, the JWST program was in the final design and fabrication phase (Phase C). As is typical for a complex design that cannot be changed once launched, there are detailed reviews of every portion of design, construction, and proposed operation. New technological frontiers have been pioneered by the program, and it has passed its design reviews. In the 1990s it was unknown whether a telescope so large and light was possible. In April 2011, cryogenic testing of a six-mirror array began. This test is to ensure the mirrors perform to specifications at the temperatures they will encounter. Even with the funding for the telescope secured, the program status remains controversial while the telescope components are being completed.
Reported cost and schedule issues
Successive estimates of the project's cost and launch date illustrate the growth:
- 1997 estimate: launch in 2007, cost 0.5 billion USD
- 1999 estimate: launch in 2007 to 2008, cost 1 billion USD
- 2010 estimate: launch in 2015 to 2016, cost 6.5 billion USD
In June 2011, it was reported that the Webb telescope will cost at least four times more than originally proposed, and launch at least seven years late. Initial budget estimates were that the observatory would cost $1.6 billion and launch in 2011. NASA has now scheduled the telescope for a 2018 launch. A 2013 price estimate put the cost at $8.835 billion. Some scientists have expressed concerns about growing costs and schedule delays for the Webb telescope, which competes for scant astronomy budgets and thus threatens funding for other space science programs. A review of NASA budget records and status reports by journalists at Florida Today shows that the Webb observatory is plagued by many of the same problems that have plagued several other major NASA projects. Mistakes included underestimates of the telescope’s cost that failed to budget for expected technical glitches, and failure to act on warnings that budgets were being exceeded, which extended the schedule and increased costs further.
Proposed U.S. withdrawal
On 6 July 2011, the United States House of Representatives' appropriations committee on Commerce, Justice, and Science moved to cancel the James Webb project by proposing an FY2012 budget that removed $1.9bn from NASA's overall budget, of which roughly one quarter was for JWST. This budget proposal was approved by subcommittee vote the following day; however, in November 2011, Congress reversed plans to cancel the JWST and instead capped additional funding to complete the project at $8 billion. The committee charged that the project was "billions of dollars over budget and plagued by poor management". The telescope was originally estimated to cost $1.6bn, but the cost estimate grew throughout early development, reaching about $5bn by the time the mission was formally confirmed for construction start in 2008. In summer 2010, the mission passed its Critical Design Review with excellent grades on all technical matters, but schedule and cost slips at that time prompted US Senator Barbara Mikulski to call for an independent review of the project. The Independent Comprehensive Review Panel (ICRP) chaired by J. Casani (JPL) found that the earliest launch date was in late 2015 at an extra cost of $1.5bn (for a total of $6.5bn).
They also pointed out that this would have required extra funding in FY2011 and FY2012 and that any later launch date would lead to a higher total cost. Because the runaway budget diverted funding from other research, the science journal Nature described the James Webb as "the telescope that ate astronomy". However, the termination proposed by the House appropriations committee would not redirect funding to other missions: the JWST line would simply be terminated, with the funding leaving astrophysics (and the NASA budget) entirely. The American Astronomical Society has issued a statement in support of JWST, as did Maryland US Senator Barbara Mikulski. A number of editorials supporting JWST have appeared in the international press.
Public displays and outreach
A large telescope model has been on display at various places since 2005: in the United States at Seattle, Washington; Colorado Springs, Colorado; Greenbelt, Maryland; Rochester, New York; Manhattan, New York; and Orlando, Florida; and elsewhere at Paris, France; Dublin, Ireland; Montreal, Canada; Hatfield, United Kingdom; and Munich, Germany. The model was built by the main contractor, Northrop Grumman Aerospace Systems. In May 2007, a full-scale model of the telescope was assembled for display at the Smithsonian's National Air and Space Museum on the National Mall, Washington DC. The model was intended to give the viewing public a better understanding of the size, scale and complexity of the satellite, as well as pique the interest of viewers in science and astronomy in general. The model is significantly different from the telescope itself: because it must withstand gravity and weather, it is constructed mainly of aluminum and steel, measures approximately 24×12×12 m (79×39×39 ft), and weighs 5.5 tonnes (12,000 lb). The model was on display in New York City's Battery Park during the 2010 World Science Festival, where it served as the backdrop for a panel discussion featuring Nobel Prize laureate John C. Mather, astronaut John Grunsfeld and astronomer Heidi Hammel. In March 2013, the model was on display in Austin, Texas for SXSW 2013.
Partnership
NASA, ESA and CSA have collaborated on the telescope since 1996. ESA's participation in construction and launch was approved by its members in 2003 and an agreement was signed between ESA and NASA in 2007. In exchange for full partnership, representation and access to the observatory for its astronomers, ESA is providing the NIRSpec instrument, the Optical Bench Assembly of the MIRI instrument, an Ariane-5 ECA launcher, and manpower to support operations. The CSA will provide the Fine Guidance Sensor and the Near-Infrared Imager Slitless Spectrograph plus manpower to support operations.
Notes
- Although the Earth does not fully block the solar disc at the distance from the Earth to the Sun L2, which is just outside of the Earth's umbra, the JWST avoids even the penumbra.
- "Baffled", in this context, means enclosed in a tube in a similar manner to a conventional optical telescope, which helps to stop stray light entering the telescope from the side. For an actual example, see the following link: Freniere, E.R. (1981). "First-order design of optical baffles". Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. Radiation Scattering in Optical Systems 257. pp. 19–28.
References
- "NASA JWST FAQ "Who are the partners in the Webb project?"". NASA. Retrieved 18 November 2011.
- "JWST factsheet". ESA. 2013-09-04. Retrieved 2013-09-07.
- "ESA JWST Timeline".
Retrieved 13 January 2012. - "NASA – JWST – people". Retrieved 13 January 2012. - During, John. "The James Webb Space Telescope". The James Webb Space Telescope. National Aeronautics and Space Administration. Retrieved 2011-12-31. - John Mather (2006). "JWST Science". - Pachal, Peter (8 July 2011). "What We Could Lose if the James Webb Telescope Is Killed". PCMAG. - "Final Polishing Complete on Remaining Twelve Webb Mirrors (06.29.11)". - "NASA budget plan saves telescope, cuts space taxis". Reuters. 16 November 2011. - Berardelli, Phil (27 October 1997). "Next Generation Space Telescope will peer back to the beginning of time and space". CBS. - "HubbleSite – Webb: Past and Future". Retrieved 13 January 2012. - "About James Webb". NASA. Retrieved 15 March 2013. - "ESA – Europe's Contributions to the JWST Mission". Retrieved 13 January 2012. - "L2 Orbit". Space Telescope Science Institute. Retrieved 2012-01-16. - "About the Webb". NASA. - Reichhardt, Tony (March 2006). "US astronomy: Is the next big thing too big?". Nature 440, pp. 140–143. - "Astrononmy and Astrophysics in the New Millennium". NASA. - de Weck, Olivier L.; Miller, David W.; Mosier, Gary E. (2002). "Multidisciplinary analysis of the NEXUS precursor space telescope" (PDF). doi:10.1117/12.460079. - "James Webb: Near Infrared Telescope Has Mirrors of Gold". NASA JWST manufacturing. - "Advanced Concepts Studies – The 4 m Aperture "Hi Z" Telescope". NASA Space Optics Manufacturing Technology Center. - "STSCI JWST History 1994". - "STSCI JWST History 1996". Stsci.edu. Retrieved 2012-01-16. - ESA Science & Technology: Goddard Space Flight Center design for JWST. Sci.esa.int. Retrieved on 2013-08-21. - ESA Science & Technology: Ball Aerospace design for JWST. Sci.esa.int. Retrieved on 2013-08-21. - ESA Science & Technology: Lockheed-Martin design for JWST. Sci.esa.int. Retrieved on 2013-08-21. - ESA Science & Technology: TRW design for JWST. Sci.esa.int. Retrieved on 2013-08-21. - "TRW Selected as JWST Prime Contractor". STCI. 11 September 2003. Retrieved 13 January 2012. - James Webb vs Hubble - "IR Astronomy: Overview". NASA Infrared Astronomy and Processing Center. Retrieved 30 October 2006. - "Webb Science: The End of the Dark Ages: First Light and Reionization". NASA. Retrieved 9 June 2011. - Tiscareno, M. (2014). "James Webb Space Telescope's astounding view of the solar system". SPIE Newsroom. doi:10.1117/2.1201404.005406. - Maggie Masetti; Anita Krishnamurthi (2009). "JWST Science". NASA. Retrieved 14 April 2013. - Maggie Masetti; Anita Krishnamurthi (2009). "Why does JWST need to be at L2?". NASA. Retrieved 14 April 2013. - Gardner, p. 588. - Berger, Brian (23 May 2007). "NASA Adds Docking Capability For Next Space Observatory". Space.com. Retrieved 13 January 2012. - Michael Mesarch (31 March 1999). "STScI NGST Libration Point Introduction". NASA/GSFC Guidance Navigation and Control Center. - E.Canalias, G.Gomez, M.Marcote, J.J.Masdemont. "Assessment of Mission Design Including Utilization of Libration Points and Weak Stability Boundaries". Department de Matematica Aplicada, Universitat Politecnica de Catalunya and Department de Matematica Aplicada, Universitat de Barcellona. - Morring, Jr., Frank, Sunshield, Aviation Week and Space Technology, December 16, 2013, pp. 48-49 - "JWST Wavefront Sensing and Control". Space Telescope Science Institute. Retrieved 9 June 2011. - "JWST Mirrors". Space Telescope Science Institute. Retrieved 9 June 2011. - Gardner, table XV, p. 597 - "John Webb Telescope Recent accomplishments". 
NASA. Retrieved 13 May 2013. - Gardner, p. 560. - "James Webb Space Telescope Near Infrared Camera". STSI. Retrieved 24 Oct 2013. - "NIRCam for the James Webb Space Telescope". University of Arizona. Retrieved 24 Oct 2013. - Gardner, p. 574. - "JWST Current Status". STScI. Retrieved 5 July 2008. - Gardner, p. 578. - Atad-Ettedgui, Eli; et al (2008). High-precision cryogenic wheel mechanisms for the JWST NIRSpec instrument 7018. pp. 701821–701821–12. doi:10.1117/12.789663. ISSN 0277-786X. - Atad-Ettedgui, Eli; et al (2008). JWST NIRSpec mechanical design 7018. pp. 70181Y–70181Y–15. doi:10.1117/12.789858. ISSN 0277-786X. - Gardner, p. 580 - Oschmann, Jr., Jacobus M.; et al (2008). Design and development of MIRI, the mid-IR instrument for JWST 7010. pp. 70100T–70100T–10. doi:10.1117/12.790101. ISSN 0277-786X. - Atad-Ettedgui, Eli; et al (2008). Manufacturing and verification of ZnS and Ge prisms for the JWST MIRI imager 7018. pp. 701823–701823–14. doi:10.1117/12.789148. ISSN 0277-786X. - Oschmann, Jr., Jacobus M.; Fischer, et al (2008). The JWST MIRI double-prism: design and science drivers 7010. pp. 70103K–70103K–12. doi:10.1117/12.788672. ISSN 0277-786X. - Berger, Brian (23 May 2007). "NASA Adds Docking Capability For Next Space Observatory". Space. Retrieved 2 May 2009. - Craig Covault (21 January 2008). "Moon Stuck: Space leaders work to replace lunar base with manned asteroid missions". Aviation Week & Space Technology. p. 24. Retrieved 2 May 2009. - David Shiga (24 May 2007). "Hubble's successor could be fixed in space after all". NewScientist. Retrieved 2 May 2009. - "Possibility of future space vehicle visits to JWST". NASA. Retrieved 2 May 2009. - "FBO DAILY ISSUE OFOctober 30, 2002FBO #0332". - "Amazing Miniaturized 'SIDECAR' Drives Webb Telescope's Signal". NASA. 20 February 2008. Retrieved 22 February 2008. - Amos, Jonathan (22 August 2011). "JWST price tag now put at over $8bn". BBC. - Cowen, Ron (25 August 2011). "Webb Telescope Delayed, Costs Rise to $8 Billion". ScienceInsider. - The James Webb Space Telescope. Jwst.nasa.gov. Retrieved on 2013-08-21. - "Independent Comprehensive Review Panel Final Report". NASA. Retrieved 9 June 2011. - John Mather. "James Webb Space Telescope (JWST)" (PDF). National Academy of Science. Retrieved 5 July 2008. - "European agreement on James Webb Space Telescope’s Mid-Infrared Instrument (MIRI) signed" (Press release). ESA Media Relations Service. 9 June 2004. Retrieved 6 May 2009. - "Canadian Space Agency: Canada's Contribution to NASA's James Webb Space Telescope.". Canadian Corporate News. Retrieved 6 September 2008.[dead link] - Brian Berger. "NASA Adds Docking Capability For Next Space Observatory". Space News. Retrieved 5 July 2008. - "Nexus Space Telescope". MIT. - "JWST Passes NTAR". STScI. Retrieved 5 July 2008. - "NASA's Webb Telescope Passes Key Mission Design Review Milestone". NASA. Retrieved 2 May 2010. - Stephen Clark. "NASA says JWST cost crunch impeding new missions". Spaceflight Now. - "Next generation Space Telescope Marks Key Milestone". 18 April 2011. Retrieved 18 April 2011. - James Webb Space Telescope Status Report from Deputy Program Director Eric Smith (Feb. 6, 2012) – Planetary Radio | The Planetary Society - Simon Lilly "The Next Generation Space Telescope (NGST)". University of Toronto. 27 November 1998. - "Cosmic Ray Rejection with NGST". - "MIRI spectrometer for NGST". - "NGST Weekly Missive". 25 April 2002. - "NASA Modifies James Webb Space Telescope Contract". 12 November 2003. - "Problems for JWST". 21 May 2005. 
- "Refocusing NASA's vision". Nature 440, p.127. 9 March 2006. - Leone, Dan (7 November 2012). "NASA Acknowledges James Webb Telescope Costs Will Delay Other Science Missions". Space News. - [dead link] - McKie, Robin (9 July 2011). "Nasa fights to save the James Webb space telescope from the axe". London: The Guardian. - "Appropriations Committee Releases the Fiscal Year 2012 Commerce, Justice, Science Appropriations". US House of representatives Committee on Appropriations. 6 July 2011. - "US lawmakers vote to kill Hubble successor". SpaceDaily. 7 July 2011. - "Proposed NASA Budget Bill Would Cancel Major Space Telescope". Space.com. 6 July 2011. - "Independent Comprehensive Review Panel, Final Report". - "The telescope that ate astronomy". Nature. 27 October 2010. - "AAS Statement on the James Webb Space Telescope" - "Mikulski Statement On House Appropriations Subcommittee Termination of James Webb Telescope" - "Way Above the Shuttle Flight". The New York Times. 9 July 2011. - Harrold, Max (7 July 2011). "Bad news for Canada: U.S. could scrap new space telescope". The Vancouver Sun. - "Webb Slinger Heads To Washington". 8 May 2007. - "JWST at SXSW". - "NASA James Webb Space Telescope model lands at South by Southwest". - ESA Science & Technology: Europe's Contributions to the JWST Mission - Canadian Space Agency "Eyes" Hubble's Successor: Canada Delivers its Contribution to the World's Most Powerful Space Telescope – Canadian Space Agency - Jonathan P. Gardner; et al. (November 2006). "The James Webb Space Telescope". Space Science Reviews (Springer, Netherlands): 484–606. The formal case for the JWST science, plus some implementation. - JWST Primer (STSI, 2009) (.pdf) - JWST Glossary |Find more about James Webb Space Telescope at Wikipedia's sister projects| |Media from Commons| |News stories from Wikinews| |Textbooks from Wikibooks| - Official website (NASA) Official website (STScI), Official website (ESA) - James Web Space Telescope Mission Profile by NASA's Solar System Exploration - Progress Report (jwstsite.stsci.edu) (Mirrors, Instruments, Sunshield, Spacecraft Bus) - AIAA-2004-5986: JWST Observatory Architecture and Performance (.pdf) - AIAA-2006-5593: Development of JWST's Ground Systems Using an Open Adaptable Architecture (.pdf) - JWST gallery on flickr - JWST Mirrors overview at NASA - Ecclestone, Paul. "Space Camera (Mid Infrared Instrument)". Backstage Science. Brady Haran. - Video (86:49) - "Search for Life in the Universe" - NASA (July 14, 2014). Science instrument teams: - Steven V. W. Beckwith – "The Hubble-JWST Transition: A Policy Synopsis Papers per year" (31 July 2003) – STCI - E. P. Smith – "Infrared Astronomy and NGST" (2000) – NASA Goddard Space Flight Center - Cost overruns put squeeze on Hubble’s successor (2005) – New Scientist - About third mirror - James Webb Space Telescope: Project Meeting Commitments but Current Technical, Cost, and Schedule Challenges Could Affect Continued Progress Government Accountability Office - January 2014 - The Next Great Observatory: Assessing the James Webb Space Telescope: Hearing before the Committee on Science, Space, and Technology, House of Representatives, One Hundred Twelfth Congress, First Session, Tuesday, December 6, 2011
<urn:uuid:772edf70-1974-4b9b-b549-2e0bd50c060e>
CC-MAIN-2014-23
http://en.wikipedia.org/wiki/James_Webb_Telescope
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997880800.37/warc/CC-MAIN-20140722025800-00245-ip-10-33-131-23.ec2.internal.warc.gz
en
0.890307
10,274
3.421875
3
[man-uh-fohld] /ˈmæn əˌfoʊld/ of many kinds; numerous and varied: having numerous different parts, elements, features, forms, etc.: a manifold program for social reform. using, functioning with, or operating several similar or identical devices at the same time. (of paper business forms) made up of a number of sheets interleaved with carbon paper. being such or so designated for many reasons: a manifold enemy. something having many different parts or features. a copy or facsimile, as of something written, such as is made by manifolding. any thin, inexpensive paper for making carbon copies on a typewriter. Machinery. a chamber having several outlets through which a liquid or gas is distributed or gathered. Philosophy. (in Kantian epistemology) the totality of discrete items of experience as presented to the mind; the constituents of a sensory experience. Mathematics. a topological space that is connected and locally Euclidean. verb (used with object) to make copies of, as with carbon paper. of several different kinds; multiple: manifold reasons having many different forms, features, or elements: manifold breeds of dog something having many varied parts, forms, or features a copy of a page, book, etc a chamber or pipe with a number of inlets or outlets used to collect or distribute a fluid. In an internal-combustion engine the inlet manifold carries the vaporized fuel from the carburettor to the inlet ports and the exhaust manifold carries the exhaust gases away (in the philosophy of Kant) the totality of the separate elements of sensation which are then organized by the active mind and conceptualized as a perception of an external object (transitive) to duplicate (a page, book, etc) to make manifold; multiply Old English monigfald (Anglian), manigfeald (West Saxon), “various, varied in appearance, complicated; numerous, abundant,” from manig (see many) + -feald (see -fold). A common Germanic compound (cf. Old Frisian manichfald, Middle Dutch menichvout, German mannigfalt, Swedish mångfalt, Gothic managfalþs), perhaps a loan-translation of Latin multiplex (see multiply). Retains the original pronunciation of many. Old English also had a verbal form, manigfealdian “to multiply, abound, increase, extend.” Old English manigfealdlic “in various ways, manifoldly,” from the source of manifold (adj.). in mechanical sense, first as “pipe or chamber with several outlets,” 1884, see manifold (adj.); originally as manifold pipe (1857), with reference to a type of musical instrument mentioned in the Old Testament. A topological space or surface. [man-uh-fohl-der] /ˈmæn əˌfoʊl dər/ noun 1. a machine for making or copies, as of writing. [man-uh-fawrm] /ˈmæn əˌfɔrm/ adjective 1. shaped like a hand. [man-i-kin] /ˈmæn ɪ kɪn/ noun 1. a little man; dwarf; pygmy. 2. . 3. a model of the human body for teaching anatomy, demonstrating surgical operations, etc. [man-i-kin] /ˈmæn ɪ kɪn/ noun 1. a styled and three-dimensional representation of the human form used in window displays, as of clothing; dummy. 2. a wooden figure or […] [muh-nil-uh] /məˈnɪl ə/ noun 1. a seaport in and the capital of the Philippines, on W central Luzon. Abbreviation: Man. Compare . 2. . 3. . [loo-zon; Spanish loo-sawn] /luˈzɒn; Spanish luˈsɔn/ noun 1. the chief island of the Philippines, in the N part of the group. 40,420 sq. mi. (104,688 sq. km). Capital: Manila. […]
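To make the mathematical sense of the word concrete ("a topological space that is connected and locally Euclidean"), a standard textbook example, not taken from the entry itself, is the unit circle:

```latex
% The unit circle S^1 is a 1-dimensional manifold:
S^1 = \{ (x, y) \in \mathbb{R}^2 : x^2 + y^2 = 1 \}.
% Around any point of S^1, a map of the form
\theta \mapsto (\cos\theta, \sin\theta), \qquad \theta \in (a, b), \; b - a < 2\pi,
% is a homeomorphism from an open interval of \mathbb{R} onto a neighbourhood of that
% point, so S^1 is locally Euclidean of dimension 1; it is also clearly connected.
```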
<urn:uuid:32d818b0-743b-449b-803a-03327512731e>
CC-MAIN-2017-34
http://definithing.com/define-dictionary/manifolded/
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886126027.91/warc/CC-MAIN-20170824024147-20170824044147-00443.warc.gz
en
0.84072
940
3.390625
3
by Staff Writers, Berkeley CA (SPX) Feb 11, 2013
An automated supernova hunt is shedding new light on the death sequence of massive stars - specifically, the kind that self-destruct in Type IIn supernova explosions. Digging through the Palomar Transient Factory (PTF) data archive housed at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory (Berkeley Lab), astronomers have found the first causal evidence that these massive stars shed huge amounts of material in a "penultimate outburst" before final detonation as supernovae. A focused search for Type IIn SN precursor bursts, conducted by Eran Ofek of Israel's Weizmann Institute and the PTF team, led to this finding. Their results were published in the February 7, 2013 issue of Nature. PTF is an international collaboration that brings together researchers, universities, observatories and Berkeley Lab to hunt for supernovae and other astronomical objects.
The Causal Link
Eventually, a massive star's core collapses, releasing a tremendous amount of energy as neutrinos, magnetic fields and shock waves and destroying the star in the process. From Earth, this explosive event is observed as a supernova. If astronomers detect hydrogen, the event is classified as a Type II supernova. And if the hydrogen-emission line is narrow, the event is classified as a Type IIn (for "narrow"). In the case of Type IIn events, scientists suspected that the narrow emission line occurs as light from the event passes through a thin sphere of hydrogen that was already surrounding the star before it went supernova. Some believed that the dying star might have shed this shell of material before it self-destructed, but until recently there was no evidence to link such an outburst to an actual supernova. That's where PTF comes in. For almost four years, the PTF team has relied on a robotic telescope mounted on the Palomar Observatory's Samuel Oschin Telescope in Southern California to scan the sky nightly. As soon as observations were taken, the data traveled more than 400 miles to NERSC - via the National Science Foundation's High Performance Wireless Research and Education Network and the Department of Energy's Energy Sciences Network (ESnet) - where computers running software called the Real-Time Transient Detection Pipeline screened the data and identified events for astronomers to follow up on. NERSC also archived this data and allowed collaborators to access it over the Internet through a web-based science gateway called DeepSky. On August 25, 2010, the PTF pipeline detected a Type IIn supernova half a billion light years away in the constellation Hercules. Shortly after, Ofek led a search of previous PTF scans of the same stellar neighborhood - using a high-quality pipeline developed by Mark Sullivan of the University of Southampton - and found the supernova's likely precursor, a massive variable star that had shed a huge amount of mass only 40 days before the supernova was detected. They labeled the event SN 2010mc. "After NERSC tools found SN 2010mc, we went back through the archives and found evidence of a previous outburst in the same location and knew that it blew some material out of the star before the final supernova," says Brad Cenko, a UC Berkeley postdoctoral researcher and co-author of the paper.
"We've seen evidence of this happening before, but there have been only one or two cases where we've been able to conclusively say when the previous outburst happened." Ofek and the PTF team developed a scenario and tested it against competing theoretical ideas, using evidence from several sky surveys that were triggered to observe SN 2010mc once it was detected by the NERSC pipeline. They concluded that the "penultimate outburst" had blown off a hundredth of a solar mass in a shell expanding 2,000 kilometers per second, already 7 billion kilometers away from the supernova when it exploded. Earlier ejecta were detected 10 billion kilometers away, having slowed to a hundred kilometers per second. After the supernova explosion, high-velocity ejecta passing through shells of earlier debris left a record of varying brightness and spectral features. The observations pointed to the most likely theoretical model of what happened: turbulence-excited gravity waves drove successive episodes of mass loss, finally culminating in the collapse and explosion of the core. Because the stellar outburst occurred very shortly before the supernova, the astronomers suspected that the events were causally linked. Cenko notes that this could have important implications for what processes trigger a supernova. "I think it is a very interesting object we found, and the way we do our survey and the search at NERSC made it something we were in the unique position to find," says Peter Nugent, a Berkeley Lab senior staff scientist and member of the PTF collaboration. "Although the PTF project is no longer collecting data every night, we are still relying on NERSC resources to sift through our archival data," says Nugent. "This recent discovery shows us that there is still a lot that we can learn from the archival data at NERSC, and gives us some insights into how we may design future experiments to further investigate these events." The project is supported by DOE's Office of Science and by NASA. DOE/Lawrence Berkeley National Laboratory Stellar Chemistry, The Universe And All Within It |The content herein, unless otherwise known to be public domain, are Copyright 1995-2014 - Space Media Network. AFP, UPI and IANS news wire stories are copyright Agence France-Presse, United Press International and Indo-Asia News Service. ESA Portal Reports are copyright European Space Agency. All NASA sourced material is public domain. Additional copyrights may apply in whole or part to other bona fide parties. Advertising does not imply endorsement,agreement or approval of any opinions, statements or information provided by Space Media Network on any Web page published or hosted by Space Media Network. Privacy Statement|
<urn:uuid:c51db10c-1f46-4cad-8a7c-65a524bc0a24>
CC-MAIN-2014-15
http://www.spacedaily.com/reports/A_massive_stellar_burst_before_the_supernova_999.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00523-ip-10-147-4-33.ec2.internal.warc.gz
en
0.93215
1,249
2.546875
3
After beginning in New York City, progressive, or cool, jazz developed primarily on the West Coast in the late 1940s and early 50s. Intense yet ironically relaxed tonal sonorities are the major characteristic of this jazz form, while the melodic line is less convoluted than in bop. Lester Young's style was fundamental to the music of the cool saxophonists Lee Konitz, Warne Marsh, and Stan Getz. Miles Davis played an important part in the early stages, and the influence of virtuoso pianist Lennie Tristano was all-pervasive. The music was accepted more gracefully by the public and critics than bop, and the pianist Dave Brubeck became its most widely known performer. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
<urn:uuid:bedf472f-8364-4e14-a407-001e1e68e45a>
CC-MAIN-2014-10
http://www.factmonster.com/encyclopedia/entertainment/jazz-progressive-jazz.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011231453/warc/CC-MAIN-20140305092031-00042-ip-10-183-142-35.ec2.internal.warc.gz
en
0.952676
181
3.328125
3
To Provide Professional & Concentration Equipment Services To Worldwide Reverse osmosis (RO) produces clean, great-tasting water and is widely regarded as one of the most efficient water filtering techniques. Many uses for RO machines exist, such as faucets, aquariums, whole-house filtration, and restaurant filtration. Whatever the initial quality of your water, there is probably a reverse osmosis water treatment machine that will work for you. The definition of reverse osmosis machines, their advantages, and their applications are described here. A ranking of the top reverse osmosis devices may also be found. As water is pushed under pressure through a semipermeable membrane during reverse osmosis, pollutants from unfiltered water, or feed water, are removed. To produce clean drinking water, water flows from the more concentrated (with more pollutants) side of the RO membrane to the less concentrated (with fewer contaminants). The term "permeate" refers to the freshwater generated. The trash, or brine, is the leftover concentrated water. Water is purified using reverse osmosis (RO), a technique that involves forcing water through a semi-permeable membrane in order to remove pollutants. The operation of a reverse osmosis treatment device is as follows: Larger particles including silt, sand, and dirt are first removed from the water by means of a pre-filter. Following that, a carbon filter eliminates chlorine and other organic impurities that might alter the water's flavor and smell. After that, the water is forced under intense pressure through a semi-permeable membrane to filter out dissolved salts, minerals, and other contaminants. Only water molecules may flow across the barrier; impurities are left behind. A post-filter is subsequently used to eliminate any last-remaining contaminants and enhance the water's flavor and aroma. Whenever needed, the cleaned water is then kept in a tank and often provided by a special faucet in the sink. A part of the water used in the RO process is released as wastewater, containing the contaminants that were filtered away. The effectiveness and layout of the system determine how much wastewater is produced. Many pollutants, including dissolved salts, minerals, germs, viruses, and other contaminants can be effectively eliminated via reverse osmosis. In addition to being utilized for industrial and commercial purposes including desalination and wastewater treatment, RO systems are frequently used to purify drinking water for homes. Cleaning the reverse osmosis (RO) water treatment machine is essential to maintaining its optimal performance and prolonging its lifespan. Here are the general steps to cleaning a reverse osmosis water treatment machine: 1. Shut off the water supply to the RO Machine, and turn off the power to the Machine. 2. Open the RO faucet and let the water in the Machine drain out. 3. Remove the pre-filter, post-filter, and RO membrane from the Machine. Be sure to follow the manufacturer's instructions for proper removal. 4. Prepare a cleaning solution by mixing warm water and a recommended cleaning agent. The cleaning agent can be purchased from the manufacturer or a local hardware store. 5. Soak the pre-filter, post-filter, and RO membrane in the cleaning solution. Follow the manufacturer's instructions for the appropriate soaking time. 6. Rinse the pre-filter, post-filter, and RO membrane with clean water. 7. Reinstall the pre-filter, post-filter, and RO membrane back into the Machine. 
Be sure to follow the manufacturer's instructions for proper reinstallation. 8. Turn on the water supply to the RO Machine, and turn on the power to the Machine. 9. Allow the Machine to flush out any remaining cleaning solution and air bubbles. This may take a few minutes. 10. Test the water quality and taste to ensure the Machine is working properly. It's recommended to clean an RO water treatment machine every six to twelve months, or more frequently if the water quality is poor. Be sure to consult the manufacturer's instructions for specific cleaning guidance for your machine. Are you looking for a reliable and effective way to improve the quality of your drinking water? A reverse osmosis water treatment machine might be just what you need. With a reverse osmosis system, you can remove a wide range of impurities from your water, including dissolved minerals, salts, bacteria, and viruses. If you're interested in learning more about how a reverse osmosis water treatment machine can benefit you, get in touch and let's get you started.
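As noted above, every RO system sends part of the feed water to drain as brine, and how much depends on the system's design and feed pressure. The small sketch below shows how that split is commonly expressed as a recovery rate; the 1:3 permeate-to-brine ratio in the example is an assumed figure for a generic under-sink unit, not a specification from this article.

```python
def recovery_rate(permeate_liters, brine_liters):
    """Fraction of the feed water that ends up as purified (permeate) water."""
    feed = permeate_liters + brine_liters
    return permeate_liters / feed

# Assumed example: 1 liter of purified water for every 3 liters sent to drain.
rate = recovery_rate(1, 3)
print(f"Recovery rate: {rate:.0%}")                           # 25%
print(f"Feed water needed per liter of product: {1 / rate:.1f} L")  # 4.0 L
```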
<urn:uuid:5dc8f8bf-625e-4d2d-9e79-61e0927b85ac>
CC-MAIN-2023-23
https://www.sinopakmachinery.com/news/what-is-the-process-of-a-reverse-osmosis-water-treatment-machine.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644817.32/warc/CC-MAIN-20230529074001-20230529104001-00410.warc.gz
en
0.928446
955
2.953125
3
George Grosz (July 26, 1893 – July 6, 1959) was a German artist known especially for his savagely caricatural drawings of Berlin life in the 1920s. He was a prominent member of the Berlin Dada and New Objectivity group during the Weimar Republic before he emigrated to the United States in 1933.
George Grosz was born Georg Ehrenfried Groß in Berlin, Germany, and grew up in the Pomeranian town of Stolp, where his mother became the keeper of the local Hussar's Officers' mess after his father died in 1901. At the urging of his cousin, the young Grosz began attending a weekly drawing class taught by a local painter named Grot. Grosz developed his skills further by drawing meticulous copies of the drinking scenes of Eduard Grützner, and by drawing imaginary battle scenes. From 1909–1911, he studied at the Dresden Academy of Fine Arts, where his teachers were Richard Müller, Robert Sterl, Raphael Wehle, and Oskar Schindler. He subsequently studied at the Berlin College of Arts and Crafts under Emil Orlik.
In November 1914 Grosz volunteered for military service, in the hope that by thus preempting conscription he would avoid being sent to the front. He was given a discharge after hospitalization for sinusitis in 1915. In 1916 he changed the spelling of his name to George Grosz as a protest against German nationalism and out of a romantic enthusiasm for America that originated in his early reading of the books of James Fenimore Cooper, Bret Harte and Karl May, and which he retained for the rest of his life. (His artist friend and collaborator Helmut Herzfeld changed his name to John Heartfield at the same time.) In January 1917 he was drafted for service, but in May he was discharged as permanently unfit.
Grosz was arrested during the Spartakus uprising in January 1919, but escaped using fake identification documents; he joined the Communist Party of Germany (KPD) in the same year. In 1921 Grosz was accused of insulting the army, which resulted in a 300 German Mark fine and the destruction of the collection Gott mit uns ("God with us"), a satire on German society. Grosz left the KPD in 1922 after having spent five months in Russia and meeting Lenin and Trotsky, because of his antagonism to any form of dictatorial authority.
Bitterly anti-Nazi, Grosz left Germany shortly before Hitler came to power. In June 1932, he accepted an invitation to teach the summer semester at the Art Students League of New York. In October 1932, Grosz returned to Germany, but on January 12, 1933 he and his family emigrated to America. Grosz became a naturalized citizen of the United States in 1938, and made his home in Bayside, New York. He taught at the Art Students League intermittently until 1955.
In America, Grosz determined to make a clean break with his past, and changed his style and subject matter. He continued to exhibit regularly, and in 1946 he published his autobiography, A Little Yes and a Big No. In the 1950s he opened a private art school at his home and also worked as Artist in Residence at the Des Moines Art Center. Grosz was elected to the American Academy of Arts and Letters in 1954. Though he had US citizenship, he resolved to return to Berlin, where he died on July 6, 1959 from the effects of falling down a flight of stairs after a night of drinking.
In 1960, Grosz was the subject of the Oscar-nominated short film George Grosz' Interregnum.
In 2002, actor Kevin McKidd portrayed Grosz in a supporting role as an eager artist seeking exposure in a fictional film entitled Max, regarding Adolf Hitler's youth.
<urn:uuid:70cfd0cc-ce78-4f71-a362-58dcedcf97a7>
CC-MAIN-2017-30
http://367art.net/gallery/G/George_Grosz/
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424575.75/warc/CC-MAIN-20170723162614-20170723182614-00095.warc.gz
en
0.982928
788
2.703125
3
Many people don't understand the significance of skin cancer. They may not realize that some types of skin cancer can be lethal if not treated, while others can cause disfigurement if they grow large and need to be surgically removed. Skin cancer screening is part of a preventative program designed to detect early forms of skin cancer. People with specific types of symptoms, a family history of skin cancer, or other presenting issues are at greater risk of developing cancer, and may have more frequent skin cancer screening as part of their regular healthcare.
Skin Cancer Types
There are three common types of skin cancer, along with several additional cancers that are less commonly diagnosed. Each looks slightly different, but a doctor will complete additional tests to verify the specific type of cancer and recommend a course of treatment.
Basal cell carcinomas are often seen as a smooth bump or lump that may have a shiny surface. Blood vessels may be apparent under the skin in the area. For some patients, these raised areas become open sores that bleed frequently and easily.
Squamous cell carcinomas are typically scaly and dense looking. They usually have a pink or red color and, unlike basal cell carcinomas, can become painful and very pronounced.
While all types of carcinomas are serious and need immediate treatment, melanomas are the most worrisome. They typically appear as irregularly shaped and colored areas of the skin that look like moles.
Diagnosis and Treatment
Through regular skin cancer screening, your dermatologist or primary care physician will check any irregularities on your skin and any changes in pigmentation or moles on the body. If irregularities are noticed, the doctor will recommend treatment. For cancers on the face, a very precise surgical procedure, Mohs surgery, is used to remove a minimal amount of tissue. The doctor will test the surrounding tissue to make sure all cancer cells have been removed. With this minimal surgical procedure, known as micrographic surgery, there is often no visible scar afterward.
Prevention of Skin Cancer
During your skin cancer screening, your doctor will typically talk about prevention. Prevention is critical: avoiding direct sun exposure, using sunscreen when outdoors, and staying out of the sun when its UV rays are strongest can all help prevent this type of cancer. If you have any questions, including about the use of tanning beds and the risk of skin cancer, talk to your doctor. Also, mention any irregular areas of skin or any concerns to your doctor, as early detection and treatment are important.
<urn:uuid:3b7e3880-2d44-441f-92ce-f9200c17b0ec>
CC-MAIN-2017-47
http://greathealthguide.com/why-your-doctor-may-recommend-skin-cancer-screening/
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806856.86/warc/CC-MAIN-20171123180631-20171123200631-00064.warc.gz
en
0.961667
544
3.078125
3
A chin-up has a specific form. The movement begins with the arms extended above the head, gripping a fixed chin-up bar with a supinated grip (palms facing the exerciser). The body is pulled up until the bar approaches or touches the upper chest. The body is then lowered until the arms are straight, and the exercise is generally repeated. Chin-ups can be performed with a kip, where the legs and back impart momentum to aid the exercise, or from a dead hang, where the body is kept still.
Performing the chin-up correctly can be tricky because of the natural tendency to do most of the work with the biceps rather than the lats. Initiating the pull with the shoulder blades helps avoid this problem. The exercise is most effective when the body is lowered all the way to a full extension.
Chin-ups are often incorrectly referred to as pull-ups. The term pull-up is traditionally used when the exercise is performed with a pronated grip (palms facing away from the exerciser). In some regions, such as Scandinavia, "chins" serves as an umbrella term for both chin-ups and pull-ups.
Chin-ups target the latissimus dorsi muscle, assisted by the brachialis, brachioradialis, biceps brachii, teres major, posterior deltoid, infraspinatus, teres minor, rhomboids, levator scapulae, middle and lower trapezius, and pectoralis muscles. Chin-ups are thought to build width and thickness in the back, as well as to promote growth of the biceps, brachialis, brachioradialis and pronator teres.
Common variations include:
- Sternal chin-ups — this variant employs a full range of motion, raising the sternum to the bar. The elbows end up nearly directly below the shoulders this way.
- Towel chin-ups — a towel is looped over the bar, and the towel is gripped instead of the bar.
- Weighted chin-ups — weight is added by hanging it from a dipping belt, or via a weighted belt or vest, ankle weights, chains, a medicine ball held between the knees, a dumbbell held between the feet, or kettlebells resting on top of the feet.
- One-handed chin-ups — one hand grips the bar; the other hand holds the wrist/forearm of the gripping hand. This stresses the grip as much as a one-arm chin-up does, but lessens the amount of work the biceps and lat of the gripping arm have to do compared to it.
- One-forearmed chin-ups — one hand grips the bar; the other hand holds the upper arm of the gripping hand between the elbow and shoulder. This stresses the grip and biceps as much as a one-arm chin-up does, but lessens the amount of work the lat of the gripping arm has to do compared to it.
- One-arm chin-ups — one hand grips the bar; the other hand does not assist with the pull and cannot touch the gripping arm.
- Supine chin-ups — in the supine position (with the feet initially supported), the arms are held perpendicular to the body as they grip the bar; the chest is pulled towards the bar instead of the chin. This exercise is performed in the horizontal (transverse) plane, whereas other chin-up variations are performed in the vertical (coronal) plane. As a result, this variation recruits the trapezius and teres major muscles much more than a vertical chin-up would, and is more commonly known as the inverted row. See also: front lever.
- Mountain climber chin-ups — the bar is grasped while standing directly below it, palms facing each other but hands some distance apart. The head alternates going to the left and the right of the bar with each chin-up.
Training and performance
Chin-up performance is commonly measured by:
- number of repetitions without touching the floor
- number of repetitions in a specified time interval (1/3/30 minutes, 1/6/12/24 hours)
- number of repetitions with a given total weight (body weight plus additional weight) (see the sketch at the end of this entry)
Exercises for beginners
A useful exercise for beginners is the negative chin-up, where one is assisted to the top position and executes a slow, controlled descent. This is useful for those not yet strong enough to perform a concentric chin-up, and can also be used to keep training at the same weight when one is too exhausted to continue performing the concentric portion of the exercise.
Beginners who are not strong enough to perform a chin-up may use an assisted chin-up machine, where one stands on a bar with a counterweight that reduces the weight being pulled up. These machines frequently also include a dip bar, allowing for assisted dips. This keeps the exercise a closed-chain movement. Another machine, which is open-chain (the person remains stationary while the resistance moves) and which mimics the movement, is the lat pulldown; it is also helpful for training. Unlike the counterweight machine, the lat pulldown can provide as much resistance as a normal chin-up or pull-up, or more, through use of a weight stack. The lifter locks a pad into place above the thighs (near the hip) to prevent being lifted off the ground when the resistance provided by the weight stack (lifted through a pulley mechanism) exceeds their body weight.
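For the weighted variation and the total-weight metric above, the arithmetic is simply body weight plus external load, multiplied by repetitions for a per-set volume tally. The sketch below is only an illustration; the function name and the example numbers are made up, not taken from the original article.

```python
def weighted_chinup_totals(body_weight_kg: float, added_weight_kg: float, reps: int):
    """Return the total load per rep and the per-set volume (load x reps) for weighted chin-ups."""
    total_load = body_weight_kg + added_weight_kg  # what is actually being pulled up
    volume = total_load * reps                     # simple per-set training volume
    return total_load, volume


# Example: an 80 kg lifter adds 20 kg on a dipping belt for 5 repetitions.
load, volume = weighted_chinup_totals(80.0, 20.0, 5)
print(f"Total load per rep: {load} kg; set volume: {volume} kg")
```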
<urn:uuid:46731de3-ecda-4386-ac33-2057590b2f1c>
CC-MAIN-2017-34
http://bodybuilding.wikia.com/wiki/Chin-up
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105341.69/warc/CC-MAIN-20170819105009-20170819125009-00241.warc.gz
en
0.915579
1,209
3.09375
3
The views expressed in our content reflect individual perspectives and do not represent the official views of the Baha'i Faith. As he spake by the mouth of his holy prophets, which have been since the world began… – Luke 1:70. We did aforetime send messengers before thee: of them there are some whose story We have related to thee, and some whose story We have not related to thee… – Qur’an, 40:78. In the first six installments of this two-part series of essays, on The Five Ways of Knowing, we explored the classic human methods of recognizing truth. Now, in this continuation of that series, we’ll look at how to apply those methods in making one of the most important truth decisions anyone ever makes in life: how to identify a prophet of God. The English word prophet comes from the Ancient Greek word prophḗtēs, which means “advocate” or “one who speaks for God.” In both Hebrew and Arabic, the word navi or nabi, which literally means “spokesperson,” traditionally translates as “prophet.” In the Old Testament, God describes a prophet this way: “…and I will put My words in his mouth, and he shall speak unto them all that I shall command him.” – Deuteronomy 18:18. With this description as their guide, the early Jewish teachings said the navi represented the “mouth” or the “voice” of God. The root of the word navi means hollowness or openness. To receive and transmit transcendental wisdom, the ancients believed, a prophet had to provide an open channel, free of self. Throughout history, humans have followed the Faiths brought to them by the prophets. Every culture and society has felt their profound influence. Whether you call them sages, seers, mystics, teachers, messengers, prophets or manifestations of God, these pure souls bring the word, the voice, the vision and the inspiration of the Creator to humanity. They establish new religions that change the lives of millions upon millions of people. They create moral codes and spiritual laws that last for millennia. They provide us with examples of how to be human, and how to aspire to a higher existence, as well. Some prophets, their names now lost to us, have brought their messages to indigenous and tribal cultures with no written history. Some messengers inspired peoples so ancient that no records remain of them, their teachings or their societies. Some of these great teachers, however, left a lasting impression by creating entire civilizations and founding Faiths that have endured for thousands of years: O people! I swear by the one true God! This is the Ocean out of which all seas have proceeded, and with which every one of them will ultimately be united. From Him all the Suns have been generated, and unto Him they will all return. Through His potency the Trees of Divine Revelation have yielded their fruits, every one of which hath been sent down in the form of a Prophet, bearing a Message to God’s creatures in each of the worlds whose number God, alone, in His all-encompassing Knowledge, can reckon. – Baha’u’llah, Gleanings from the Writings of Baha’u’llah, p. 103. Most people only know about the existence of a few prophets, typically those who started major world religions, touched their own culture or influenced their friends and families. But many, many prophets have spoken to humanity across the span of our long history. 
The Jewish Talmud specifically names 48 male and seven female prophets, but a Jewish tradition says that throughout time there were twice as many prophets as the number of Jews who were banished from Israel—which would total more than a million prophets. In Christianity, the New Testament names at least a dozen prophets besides John the Baptist and Jesus Christ himself. (Several of these named prophets were women, including Anna and the daughters of Phillip.) In Islam, the Qur’an mentions 25 prophets by name; and an Islamic tradition (hadith) numbers the prophets at 124,000 throughout history. (Islam lists at least one named female prophet: Elizabeth or Alyassabat, the cousin of Mary and the mother of John the Baptist.) Baha’is also believe that the Creator has blessed humanity with many messengers and prophets: Baha’u’llah continually urges man to free himself from the superstitions and traditions of the past and become an investigator of reality, for it will then be seen that God has revealed his light many times in order to illumine mankind in the path of evolution, in various countries and through many different prophets, masters and sages. – Abdu’l-Baha, Divine Philosophy, pp. 8-9. Without question, though, a few of the prophets have had a major, permanent global influence. Demographers and pollsters estimate that somewhere between 80-90% of the world’s people now follow a Faith originated by one of these eight prophets: Krishna, Abraham, Moses, Buddha, Zoroaster, Christ, Muhammad and Baha’u’llah. Your ancestors probably came from one of these faith traditions—and if they didn’t, they most likely lived in a culture influenced and shaped by an indigenous prophet. The Baha’i Faith–founded through the teachings and the sacrifices of two new prophets, The Bab and Baha’u’llah—is the most recent of those worldwide Faiths. The Bab, whose title means “The Gate” and whose revelation lasted only six years before his death, said that his mission involved preparing the way for a universal messenger of God, in much the same way John the Baptist heralded the coming of Christ. The Faith of Baha’u’llah, whose title means “the Glory of God,” dawned soon after the death of The Bab, and has now expanded to every country on earth and become, after Christianity, the world’s second-most widespread belief system. In this series of six essays, we’ll examine the prophetic mission of Baha’u’llah, explore the reasons behind the beliefs of the Baha’is who follow this new prophet, and try to determine how to independently evaluate the truth of those beliefs. We’ll focus on these two central questions: What criteria can prove the truth of the claims of a prophet? And how do I respond to those truths if I come to believe them? Please follow along as we meet Baha’u’llah, try to understand why his followers see and revere him as a prophet of God, and find ways to determine, for ourselves, the truth of his teachings.
<urn:uuid:25605bb8-6839-41ce-8ba8-281273527bf5>
CC-MAIN-2023-50
https://bahaiteachings.org/how-many-prophets/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100427.59/warc/CC-MAIN-20231202140407-20231202170407-00283.warc.gz
en
0.960283
1,423
2.859375
3
The blessed John Duns Scotus (Iohannes Duns Scotus OFM, Jean Duns Scot, Giovanni Duns Scoto, Johannes Duns Skotus, Juan Duns Escoto, Iohannes Dunstonensis, Iohannes Scotus, John the Scot) was one of the most important theologians and philosophers of the High Middle Ages (the others being Thomas Aquinas, William of Ockham and Bonaventure). He was nicknamed Doctor Subtilis for his penetrating and subtle manner of thought.
Scotus' influence on Roman Catholic thought has been considerable. The doctrines for which he is best known are the univocity of being (that existence is the most abstract concept we have, applicable to everything that exists); the formal distinction, a way of distinguishing between different aspects of the same thing; and the idea of haecceity, a property supposed to be in each individual thing that makes it an individual. Scotus also developed a complex argument for the existence of God, and argued for the Immaculate Conception of Mary.
Little is known of Scotus' life. He was probably born at Duns, in the Borders. In 1291 he was ordained in Northampton, England. A note in Codex 66 of Merton College, Oxford, records that Scotus "flourished at Cambridge, Oxford and Paris." He began lecturing on Peter Lombard's Sentences at the prestigious University of Paris in the autumn of 1302. Later in that academic year, however, he was expelled from the University of Paris for siding with Pope Boniface VIII in his feud with Philip the Fair of France over the taxation of church property. Scotus was back in Paris before the end of 1304, probably returning in May. He continued lecturing there until, for reasons which are still mysterious, he was dispatched to the Franciscan studium at Cologne, probably in October 1307. He died there in 1308; the date of his death is traditionally given as 8 November. He is buried in the Church of the Minorites in Cologne. His sarcophagus bears the Latin inscription: Scotia me genuit. Anglia me suscepit. Gallia me docuit. Colonia me tenet. (trans. "Scotland brought me forth. England sustained me. France taught me. Cologne holds me.") He was beatified by Pope John Paul II on March 20, 1993. According to an old tradition, Scotus was buried alive following his lapse into a coma.
Scotus is generally considered to be a realist (as opposed to a nominalist) in that he treated universals as real. He attacks a position close to that later defended by Ockham, arguing that things have a common nature - for example the humanity common to both Socrates and Plato.
Univocity of Being
He followed Aristotle in asserting that the subject matter of metaphysics is "being qua being" (ens inquantum ens). Being in general (ens in communi), as a univocal notion, was for him the first object of the intellect. Metaphysics includes the study of the transcendentals, so called because they transcend the division of being into finite and infinite and the further division of finite being into the ten Aristotelian categories. Being itself is a transcendental, and so are the "attributes" of being — "one", "true", and "good" — which are coextensive with being, but which each add something to it.
The doctrine of the univocity of being implies the denial of any real distinction between essence and existence. Aquinas had argued that in all finite being (i.e. all except God), the essence of a thing is distinct from its existence. Scotus rejected the distinction. On the rival view, we can conceive of what it is to be something without conceiving it as existing; Scotus denied this.
We should not, Scotus held, make any distinction between whether a thing exists (si est) and what it is (quid est), for we never know whether something exists unless we have some concept of what we know to exist.
The study of the Aristotelian categories belongs to metaphysics insofar as the categories, or the things falling under them, are studied as beings. (If they are studied as concepts, they belong instead to the logician.) There are exactly ten categories, according to orthodox Aristotelianism. The first and most important is the category of substance. Substances are beings in a primary sense, since they have an independent existence (entia per se). Beings in any of the other nine categories, called accidents, exist in substances. The nine categories of accidents are quantity, quality, relation, action, passion, place, time, position, and state (or habitus).
Duns Scotus elaborates a distinctive view on hylomorphism, marked by three strong theses that set him apart. He held: 1) that there exists matter that has no form whatsoever, or prime matter, as the stuff underlying all change, against Aquinas (cf. his Quaestiones in Metaphysicam 7, q. 5; Lectura 2, d. 12, q. un.); 2) that not all created substances are composites of form and matter (cf. Lectura 2, d. 12, q. un., n. 55), that is, that purely spiritual substances do exist; and 3) that one and the same substance can have more than one substantial form — for instance, humans have at least two substantial forms, the soul and the form of the body (forma corporeitas) (cf. Ordinatio 4, d. 11, q. 3, n. 54).
He argued for an original principle of individuation (cf. Ordinatio 2, d. 3, pars 1, qq. 1-6): the "haecceity" as the ultimate unity of a unique individual (haecceitas, an entity's 'thisness'), as opposed to the common nature (natura communis), a feature existing in any number of individuals. For Scotus, the axiom that only the individual exists is a dominating principle of the understanding of reality. For the apprehension of individuals, an intuitive cognition is required, which gives us the present existence or non-existence of an individual, as opposed to abstract cognition. Thus the human soul, in its state of separation from the body, will be capable of knowing the spiritual intuitively.
Like other realist philosophers of the period (such as Aquinas and Henry of Ghent), Scotus recognised the need for an intermediate kind of distinction that was not merely conceptual or mind-dependent, yet not fully real either. Scotus argued for a formal distinction (distinctio formalis a parte rei), which holds between entities that are inseparable and indistinct in reality, but whose definitions are not identical. For example, the personal properties of the Trinity are formally distinct from the Divine essence. Similarly, the distinction between a thing's 'thisness' or haecceity and its common nature is intermediate between a real and a conceptual distinction. Formal distinctions also hold among the divine attributes and among the powers of the soul.
Scotus was an Augustinian theologian. He is usually associated with voluntarism, the tendency to emphasize God's will and human freedom in all philosophical issues.
The main difference between Aquinas' rational theology and Scotus' is that Scotus held that certain predicates may be applied univocally — with exactly the same meaning — to God and creatures, whereas Aquinas insisted that this is impossible and that only analogical predication can be employed, in which a word as applied to God has a meaning different from, although related to, the meaning of that same word as applied to creatures. Throughout his works Scotus laboured to demonstrate his univocity theory against Aquinas' doctrine of analogy.
Existence of God
The existence of God can be proven only a posteriori, through God's effects. The causal argument Scotus gives for the existence of God is particularly interesting and precise. It holds that an infinity of essentially ordered causes is impossible: the totality of essentially caused things is itself caused, and so must be caused by some cause that is not part of that totality, for otherwise it would be the cause of itself; the whole totality of dependent things is dependent, and not on anything belonging to that totality. The argument matters for Scotus' conception of metaphysics as an inquiry into being that proceeds by examining the ways in which beings relate to one another.
Perhaps the most influential point of Duns Scotus' theology was his defense of the Immaculate Conception of Mary. At the time, there was a great deal of argument about the subject. The general opinion was that the doctrine was appropriate, but no one could see how to resolve the problem that only with Christ's death would the stain of original sin be removed. The great philosophers and theologians of the West were divided on the subject (indeed, it appears that even Thomas Aquinas sided with those who denied the doctrine, though some Thomists dispute this). The feast day had existed in the East since the seventh century and had been introduced in several dioceses in the West as well, even though the philosophical basis was lacking. Citing Anselm of Canterbury's principle, "potuit, decuit, ergo fecit" (God could do it, it was appropriate, therefore he did it), Duns Scotus devised the following argument: Mary was in need of redemption like all other human beings, but through the merits of Jesus' crucifixion, given in advance, she was conceived without the stain of original sin. God could have brought it about (1) that she was never in original sin, (2) that she was in sin only for an instant, or (3) that she was in sin for a period of time, being purged at the last instant. Whichever of these was most excellent should probably be attributed to Mary. This apparently careful statement provoked a storm of opposition at Paris, and suggested the line 'fired France for Mary without spot' in the famous poem "Duns Scotus's Oxford" by Gerard Manley Hopkins. The argument appears in Pope Pius IX's declaration of the dogma of the Immaculate Conception. Pope John XXIII recommended the reading of Duns Scotus' theology to modern theology students.
The authenticity of Scotus' logical works has been questioned. Some of the logical and metaphysical works originally attributed to him are now known to be by other authors. There were already concerns about this within two centuries of his death, when the sixteenth-century logician Jacobus Naveros noted inconsistencies between these texts and Scotus' commentary on the Sentences, leading him to doubt whether Scotus had written any logical works at all.
The Questions on the Prior Analytics (In Librum Priorum Analyticorum Aristotelis Quaestiones) were also discovered to be mistakenly attributed . Modern editors have identified only four works as authentic: the commentaries on Porphyry's Isagoge, on Aristotle's Categories, On Interpretation (in two different versions), and on Sophistical Refutations, probably written in that order. These are called the parva logicalia. These are dated at around 1295, when Scotus would have been in his late twenties, working in Oxford. Scotus is considered one of the most important Franciscan theologians. - Pseudo Joh. Duns Scotus: Tractatus de Formalitatibus, Strasbourg, Bibl. Nat. & Univ. 292 (an. 1475) - Notabilia Scoti in Libros Topicorum:>> see article of Andrews (1998) in bibliography below - For an overview of manuscripts, see first of all the introductions to the latest edition of Scotus’ Opera Omnia. The earliest surviving manuscript of Scotus' work is Ms Bruxelles, Bibl. Royale 2908n, tentatively dated to around 1325. - Opera Omnia, ed. Lucas Wadding, 12 Vols (Lyon: Durand, 1639; Reprint by G. Olms, Hildesheim, 1968-1968) - Opera Omnia, ed. L. Vives, 26 Vols. (Paris, 1891-1895; Reprint by Westmead, Franborough and Hants: Gregg International Publishers, 1969). - Opera Omnia, studio et cura Commissionis Scotisticae ad fidem codicum edita. XXI Vols, ed. C. Balic, H. Schalück, P. Modric et al. (Vatican City, 1950- ). The following volumes of this new critical edition of Scotus’ theological works have appeared so far: - Vol. 1: De Ordinatio I. Duns Scoti disquisitio historico-critica, Ordinatio, prologus, edited by C. Balic, M. Bodewig, S. Buselic, P. Capkun-Delic, I. Juric, I. Montalverne, S. Nanni, B. Pergamo, F.Prezioso, I. Reinhold, and O. Schäfer (Città del Vaticano: Typis Polyglottis Vaticanis, 1950). - Vol. 2: Ordinatio I, dist. 1–2, edited by C. Balic, M. Bodewig, S. Buselic, P. Capkun-Delic, I. Juric, I. Montalverne, S. Nanni, B. Pergamo, F. Prezioso, I. Reinhold, and O. Schäfer (Città delVaticano: Typis Polyglottis Vaticanis, 1950). - Vol. 3: Ordinatio I, dist. 3, edited by C. Balic, M. Bodewig, S. Buselic, P. Capkun-Delic, B. Hechich, I. Juric, B. Korosak, L. Modric, I. Montalverne, S. Nanni, B. Pergamo, F. Prezioso, I. Reinhold, and O. Schäfer (Città del Vaticano: Typis Polyglottis Vaticanis,1954). - Vol. 4: Ordinatio I, dist. 4–10, edited by C. Balic, M. Bodewig, S. Buselic, P. Capkun-Delic, B. Hechich, I. Juric, B. Korosak, L. Modric, S. Nanni, I. Reinhold, and O. Schäfer (Città del Vaticano:Typis Polyglottis Vaticanis, 1956). - Vol. 5: Ordinatio I, dist. 11–25, edited by C. Balic, M. Bodewig, S. Buselic, P. Capkun-Delic, B. Hechich, I. Juric, B. Korosak, L. Modric, S. Nanni, I. Reinhold, and O. Schäfer (Città del Vaticano:Typis Polyglottis Vaticanis, 1959). - Vol. 6: Ordinatio I, dist. 26–48, edited by C. Balic, M. Bodewig, S. Buselic, P. Capkun-Delic, B. Hechich, I. Juric, B. Korosak, L. Modric, S. Nanni, I. Reinhold, and O. Schäfer (Città del Vaticano:Typis Polyglottis Vaticanis, 1963). - Vol. 7: Ordinatio II, dist. 1–3, edited by C. Balic, C. Barbaric, S. Buselic, B. Hechich, L. Modric, S. Nanni, R. Rosini, S. Ruiz de Loizaga, and C. Saco Alarcón (Città del Vaticano: Typis Polyglottis Vaticanis, 1973). - Vol. 8: Ordinatio II, dist. 4–44, edited by B. Hechich, B. Huculak, J. Percan, and S. Ruiz de Loizaga (Città del Vaticano: Typis Vaticanis, 2006). - Vol. 9: Ordinatio III, dist. 1–17, edited by B. Hechich, B. Huculak, J.Percan, and S. 
Ruiz de Loizaga (Città del Vaticano: Typis Vaticanis, 2006). - Vol. 10: Ordinatio III, dist. 26–40, edited by B. Hechich, B. Huculak, J. Percan, and S. Ruiz de Loizaga (Città del Vaticano: Typis Vaticanis, 2007). - Vol. 16: Lecturaprol. – I, dist. 1–7,edited by C. Balic, M. Bodewig, S. Buselic, P. Capkun-Delic, B. Hechich, I.Juric, B. Korosak, L. Modric, S. Nanni, I. Reinhold, and O. Schäfer (Città delVaticano: Typis Polyglottis Vaticanis, 1960). - Vol. 17: Lectura I, dist. 8–45, edited by C. Balic, C. Barbaric, S. Buselic, P. Capkun-Delic, B. Hechich, I. Juric, B.Korosak, L. Modric, S. Nanni, S. Ruiz de Loizaga, C. Saco Alarcón, and O. Schäfer (Città del Vaticano: Typis Polyglottis Vaticanis, 1966). - Vol. 18: Lectura II, dist. 1–6, edited by L. Modric, S. Buselic, B. Hechich, I. Juric, I. Percan, R. Rosini, S. Ruiz de Loizaga, and C. Saco Alarcón (Città del Vaticano: Typis Polyglottis Vaticanis, 1982). - Vol. 19: Lectura II, dist. 7–44, edited by Commissio Scotistica (Città del Vaticano: Typis Polyglottis Vaticanis,1993). - Vol. 20: Lectura III, dist. 1–17, edited by B. Hechich, B. Huculak, J. Percan, S. Ruiz de Loizaga, and C. Saco Alarcón (Città del Vaticano: Typis Vaticanis, 2003). - Vol. 21: Lectura III, dist. 18–40, edited by B. Hechich, B. Huculak, J. Percan, S. Ruiz de Loizaga, and C. Saco Alarcón (Città del Vaticano: Typis Vaticanis, 2003). - Opera Philosophica (numerous volumes) St. Bonaventure (St. Bonaventure, New York, 1997-2006). The following volumes have appeared: - B. Ioannis Duns Scotus. Quaestiones in Librum Porphyrii Isagoge; Quaestiones super Praedicamenta Aristotelis, edited by R. Andrews, G. Etzkorn, G. Gál, R. Green, T. Noone, and R. Wood, Opera Philosophica 1 (St. Bonaventure, N.Y.: The Franciscan Institute Press, 1999). - B. Ioannis Duns Scotus. Quaestiones in libros Perihermenias Aristotelis; Quaestiones Super Librum Elenchorum Aristotelis, edited by Robert R. Andrews, O. Bychkov, S. Ebbesen, G. Gál, R. Green, T. Noone, R. Plevano, A. Traver. Theoremata, edited by M. Dreyer, H. Möhle, and G. Krieger, Opera philosophica 2 (St. Bonaventure, N.Y.: Franciscan Institute Press; Washington, D.C.: The Catholic University of America Press, 2004). - B. Ioannis Duns Scotus. Quaestiones super libros Metaphysicorum Aristotelis, Libri I–V, edited by G. Etzkorn, R. Andrews, G. Gál, R. Green, F. Kelly, G. Marcil, T. Noone, and R. Wood, Opera Philosophica 3 (St. Bonaventure, N.Y.: The Franciscan Institute Press, 1997). - B. Ioannis Duns Scotus. Quaestiones super libros Metaphysicorum Aristotelis, Libri VI–IX, edited by G. Etzkorn, R. Andrews, G. Gál, R. Green, F. Kelly, G. Marcil, T. Noone, and R. Wood, Opera Philosophica 4 (St. Bonaventure, N.Y.: The Franciscan Institute Press, 1997). - B. Ioannis Duns Scotus. Quaestiones super secundum et tertium De anima, edited by C. Bazán, K. Emery, R. Green, T. Noone, R. Plevano, A. Traver, Opera philosophica 5 (Washington, D.C.: The Catholic University of America Press; St. Bonaventure, N.Y.: Franciscan Institute Press, 2006). - Opera Omnia. Editio Minor, I: Opera Philosophica, ed. Giovanni Lauriola, Centro Studi Personalisti ‘Giovanni Duns Scoto’ Quaderno 11 (Bari, 1998). - Opera Omnia. Editio Minor, II/1: Opera Theologica, ed. Giovanni Lauriola, Centro Studi Personalisti ‘Giovanni Duns Scoto’ Quaderno 12 (Bari, 1998). - Opera Omnia. Editio Minor, III/1: Opera Theologica, ed. Giovanni Lauriola, Quaderni scotistici, 16 (Alberobello: Editrice AGA, 2001). - Giovanni Scoto. Omilia sul prologo di Giovanni, ed. M. Cristiani (Vicenza, 1987). 
Cf. Rivista di storia e letteraturea religiosa 24 (1988), 595-598. - Quodlibeta, transl. as God and creatures : the quodlibetal questions with an introduction, notes and glossary by Felix Alluntis and Allan B. Wolter. Imprint Princeton ; London : Princeton University Press, 1975. xxxiv,549p ; 25cm. - Bos, E.P., (ed.). John Duns Scotus (1265-1308) Renewal of Philosophy. Acts of the Third Symposium organized by the Dutch Society for Medieval Philosophy Medium Aevum. Elementa, 72. Amsterdam: Rodopi, 1998. - Frank, W. and Wollter, A. Duns Scotus, Metaphysician, Purdue University Press, 1995. - Gracia, J.E. & Noone, T., A Companion to Philosophy in the Middle Ages, Blackwell 2003. - Grenz, Stanley J., The Named God And The Question Of Being: A Trinitarian Theo-ontology, Blackwell 2005. - "The Death of Blessed Scotus", according to Canon Joseph Bonello and Eman Bonnici. - Honderich, T., (ed.) The Oxford Companion to Philosophy, article "Duns Scotus", Oxford 1995. - Ingham, M.B., & Mechthild Dreyer, The Philosophical Vision of John Duns Scotus: An Introduction. Washington DC: Catholic University of America Press 2004. - Kretzmann,N., A. Kenny, & J. Pinborg, Cambridge History of Later Medieval Philosophy Cambridge: 1982. - Vos., A. The Philosophy of John Duns Scotus. Edinburgh: Edinburgh University Press, 2006. - Williams, Thomas, (ed.), The Cambridge Companion to Duns Scotus. Cambridge University Press 2003.
<urn:uuid:42bc0eb9-6a0c-4b00-8b09-e909502a4eaa>
CC-MAIN-2023-23
http://www.mywikibiz.com/Duns_Scotus
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652149.61/warc/CC-MAIN-20230605153700-20230605183700-00495.warc.gz
en
0.872112
5,319
3.078125
3
Political activists of the so-called "religious Right" in the United States never tire of preaching that their country was founded as "a Christian democracy." But they are wrong on both counts.
When Benjamin Franklin was leaving the Constitutional Convention of 1787, he was asked by one of the many anxious patriots waiting outside the Pennsylvania State House, "What have you given us?" Franklin replied, "A republic, if you can keep it." The difference might seem trivial or even non-existent to narrow-minded persons for whom democracy and dictatorship are the only conceivable forms of government. Yet the very word "democracy" does not occur once in the Bill of Rights, the US Constitution, or any state constitution. It was mentioned often by America's Founding Fathers, but invariably as a synonym for "mob rule" and, along with obsolescent monarchy, an evil to be avoided: "a species of demagoguery, wherein clever charlatans, making promises as enticing as they are impossible to fulfil, win for themselves unwarranted power and wealth, persuading gullible people to discard their liberties for a secret tyranny masquerading as public freedom."
Particularly in the writings of Thomas Jefferson, the historic models held up for emulation did not include Greek democracy, but the Venetian and Roman republics. The difference between these examples most important to men like Paine and Jefferson was the concept of citizenship. Anyone born in a democratic state automatically becomes a citizen with all the privileges that entails, including the right to vote. In a republic, one is not born a citizen, but may only become one upon reaching adulthood, demonstrating at least a fundamental grasp of the workings of government, and being either in school or gainfully employed. In modern America, all that remains of these basic requirements is a restriction against voting until one's eighteenth year. Foreigners must, in fact, pass tests proving their basic comprehension of the Constitution before becoming US citizens, which makes them more knowledgeable, discerning voters than native-born Americans, who are supposed to receive the same kind of rigorous Constitutional education but rarely, if ever, do. In demanding at least some qualifications for citizenship, America's Founding Fathers believed that responsible leaders could only be chosen by a competent electorate. Today, however, such notions are shunned as "elitist" in most countries described as "democratic."
Yet more shocking to bible-beating conservatives, if they were to learn the awful truth, is that the United States was not founded by Christians, at least not of the kind they would approve. Instead, that country's constitutional republic was conceived, fought for and built almost entirely by deists. While the majority of Americans, then as now, were at least nominally Christian, most of their leaders were not. George Washington, John Hancock, Patrick Henry, Paul Revere and virtually all of their intellectual compatriots were deists. The term is not generally familiar today, but it signifies a person who believes in a universal, compassionate Intelligence that made and orders Creation and manifests its will through natural law, but requires no religious dogma to be understood, only the faculty of reason with which every human is endowed.
Referring to the church of his day, Paine wrote, "The Christian theory is little else than the idolatry of the ancient mythologists, accommodated to the purposes of power and revenue… My own mind is my own church." Like his fellow deists, who made a clear distinction between church and state, he was convinced that freedom meant being able to speak one's mind on all subjects, religious as well as political. He did not "condemn those who believe otherwise. They have the same right to their belief as I have to mine."
Nor were the deists anti-Christian. They concluded that Christianity had at its theological core the same mystical truth found in every genuine spiritual conception; namely, the perennial philosophy of compassion for all sentient beings as the means by which the human soul develops. This recognition, however, deeply offended mainstream Christians, who insisted their brand of faith alone was correct, all others being heretical at best or demonic at worst. As an example of the extremes to which these defenders of the One True Religion went to demonstrate their piety, hob-nails initialled "T.P." were sold by the thousands to Londoners, who could then walk all day on the name of Thomas Paine. His treatment in the land he had done so much to free was harsher still. When he walked through the streets of his hometown of Bordentown, New Jersey, doors and window shutters were pointedly banged shut as he passed by, while cries of "Devil!" followed him everywhere.
Modern American Christian crusaders would be even more alarmed to learn that not only was their country founded by deists, but its capital was deliberately designed as a metaphor for Freemasonry. In his deeply researched book, The Secret Architecture of our Nation's Capital (London: Century Books, Ltd., 1999), author David Ovason offers abundant evidence to show that Washington, D.C. was built by Freemasons who incorporated their arcane, even heretical ideas in the White House, the Washington Monument, the Library of Congress, the Post Office, the Capitol Dome, the Federal Trade Commission Building, the Federal Reserve Building, even Pennsylvania Avenue itself.
But what is, or was, Freemasonry? Like any idea or organisation that persists over time, Freemasonry deviated from its initial purpose until, in the end, it bore only a slight, outward resemblance to its origins. By way of comparison with a group alleged, without much real foundation, to have been Freemasonry's precursor: the Knights Templar was founded in the early 12th century, ostensibly to guard pilgrim routes to Jerusalem with a few soldiers sworn to poverty and abstinence, but grew to become a virtually autonomous army, richly equipped and armed, finally blossoming into an economic entity so potent it called down on itself the murderous envy of a French king. So too, Freemasonry began in 1717 as a fraternity dedicated to humanitarian, deistic principles for Englishmen unhappy with the royal powers-that-be, and therefore forced to operate with discretion. By the time early Americans were ready to part ways with the Mother Country, Freemasonry had spread to their shores and was embraced by many revolutionaries as an expression of opposition to everything British, including the Church of England. The secret order continued to grow in membership and prestige, until it was infiltrated and perverted from its high-minded ideals by Adam Weishaupt, who styled himself "Spartacus", a demented power-freak who wanted a respectable vehicle for subversion and insurrection.
Separated by a vast ocean from the facts, even Thomas Jefferson was fooled by Weishaupt's duplicity. Henceforward, the "Free and Accepted Masons" were lumped together with Communists as the secretive enemies of Western Civilisation, and outlawed in most European countries. Even in the United States, though they were never banned, the Freemasons were under suspicion by the Federal Bureau of Investigation for many years, and condemned by several congressmen. Thus criminalised or under suspicion, their popularity went into a long decline, until today their once numerous, now largely abandoned lodge buildings, some still bearing masonic emblems, testify to an aging, dwindling following. It is no more right, therefore, to parallel the Freemason George Washington with the likes of Adam Weishaupt than it is to equate George Washington with George Bush.
"The very struggle for independence seems to have been directed by the Masonic brotherhood," Ovason writes, "and, some historians insist, had even been started by them." Indeed, the War for Independence began in a warehouse owned by a Mason, and a majority of the revolutionaries who undertook the Boston Tea Party of 1773 were Masons. The most famous American Mason was George Washington himself, although some biographers not altogether happy with Freemasonry have tried to minimise his association with it. In fact, he was the first Master of the Alexandria, Virginia lodge (Number 22) from April 1788 until December of the following year. It was this lodge number that was carried before him on a masonic standard as Washington, leading ranks of fellow Masons all wearing their emblematic aprons, walked in procession to the founding of the American capital in 1793. The event was commemorated in a pair of bronze panels designed in 1868; they portray him laying the cornerstone surrounded by masonic symbols, including the square and trowel. Washington was still Master of the lodge when he was inaugurated as the first President of the United States on 30 April 1789. After his death ten years later, he was laid to rest at his Mount Vernon estate in a masonic funeral, during which all save one of the pallbearers were members of his own lodge.
Ovason observes in a companion volume (The Secret Symbols of the Dollar Bill, CA: HarperCollins, 2004) that Washington's masonic significance was not only expressed in the city to which he gave his name: "The portrait of George Washington, at the centre of the dollar bill, is highly symbolic." The President's image is centrally framed by the last letter in the Greek alphabet, an Omega, signifying "completion", or the Ultimate, and implying that the foremost Founding Father represented the apogee of human values. Washington's framed portrait is by no means the only non-Christian symbol found on the bill. Especially cogent is the illustration of a truncated pyramid surmounted by a radiant delta enclosing a single eye beneath the words Annuit Coeptis. A motto on a scroll near the base reads Novus Ordo Seclorum. Both were derived from the great Roman writer Virgil. In his classic epic, the Aeneid, he directs a prayer for assistance to Jupiter, king of the gods: Audacibus annue coeptis, or "Favor our daring undertaking!" Novus Ordo Seclorum, "a New Order for the ages," was taken from one of his famous Eclogues – Magnus ab integro seclorum nascitur ordo, or "The great series of ages is born anew."
"The idea of a truncated pyramid was Masonic," Ovason writes.
It is certainly “pagan,” and generally understood to mean stability and virtue in the 18th century. According to President William McKinley, the twenty fifth president of the United States and himself a Mason, it also meant strength and duration. But these obvious characterisations only represent the figure’s exoteric aspect. Far less well recognised, the pyramid depicted on the one-dollar bill, unlike any in the Nile Valley, has seventy two stones. This amount is hardly circumstantial, because it has been revered by mystics as one of the most sacred of all numerals. Since Pythagorean times, in the 7th century BCE, and millennia earlier still in ancient Egypt, 72 has represented the ways of writing and pronouncing the name of the Almighty, not the Christian or even Old Testament Yahweh, but God as represented by the Sun, as it moves through space and time. Ovason explains, “Due to the phenomenon called precession, the Sun appears to fall back against the stars. This rate of precession is one degree every seventy two years.” In other words, the dollar bill’s seventy two stones signify the deist conception of the Supreme Being as rooted in the pre-Christian, non-Biblical Ancient World. The single-eyed triangle radiating energy above the truncated pyramid is another Egyptian image, the Utchat, or Udjat, the all-seeing eye of Ra, a sun-god and the divine king of heaven. Esoterically, the Utchat was identified with Maat, the moral law pervading all Creation. Its appearance hovering above the apex of the dollar bill pyramid not only reinforces the solar symbolism of that sacred structure, but embodies the principle of Maat America’s Founding Fathers sought to inculcate in the constitutional republic they designed. But the esoteric, deistic, even “pagan” Freemasonry of America’s Founding Fathers is most apparent in the arcane influence that Ovason traces throughout the design and construction of the US capital. These early Americans did not weave this occult symbolism through their country’s foremost city for clubbish reasons, but because their iconological signs were the emblems of a new civilisation they wanted to create in the New World. For New Dawn’s 92nd issue, Jason Jeffrey described in “Washington, D.C.: A Masonic Plot?” how the White House is located at the apex of a five-pointed star – the ancient geometric seal of King Solomon, with which he conjured supernatural powers – formed by the intersections of Massachusetts, Rhode Island, Vermont and Connecticut Avenues with K Street NW. But the significance of this urban pentagram is overshadowed by what Ovason has identified as the city’s chief orientation to Virgo. He writes that central Washington, D.C. has twenty public zodiacs, with Virgo prominent in each one. The founding of Federal City, as it was previously known, laying the cornerstones of the President’s House, in the wing of the Capitol and the foundation stone of the Washington Monument, all were timed to coincide with the appearance of this astrological figure. Ovason shows that the White House, Capitol building and Washington Monument form a strangely imperfect “Federal Triangle” that only makes sense when we realise it identically resembles a configuration made by the stars – Arcurtus, Spica and Regulus – that bracket Virgo. On evenings from August 10th to the 15th, as the Sun sets over Pennsylvania Avenue, the Constellation Virgo appears in the sky above the White House and the Federal Triangle. 
At that same moment, the setting Sun appears precisely above the apex of a stone pyramid in the Old Post Office tower, which is just wide enough to occlude the solar disc. According to the 19th century Freemason Ross Parsons, "The Assumption of the Virgin Mary is fixed on the 15th of August, because at that time the Sun is so entirely in the constellation of Virgo that the stars of which it is composed are rendered invisible in the bright effulgency of his rays."
Formal ground-breaking ceremonies for the National Archives Building were conducted under Virgo. Two years later, three planets were in Virgo for the official laying of the structure's cornerstone. The Federal Reserve Building is replete with a five-petaled design motif, the symbol of Virgo. The great clock at the Library of Congress is depicted with a comet in Virgo. Because of its centralised location, Ovason believes that "the Library of Congress was sited in this position and its symbolic program established precisely in order to demonstrate the profound arcane knowledge of the Masonic fraternity which designed Washington, D.C. … the city was surveyed, planned, designed and built largely by Masons." Indeed, no fewer than twenty-one memorial stones with lapidary inscriptions from various masonic lodges line the inside shaft of the Washington Monument.
But why would they incorporate so many references to Virgo in their capital? As Ovason points out, the construction of Washington, D.C. "marked one of those rare events in history when a city was planned and built for a specific purpose." He fails to mention, however, that nearly two hundred years before, the first permanent European settlement in North America foreshadowed Virgo's ceremonial centre on the Potomac River. In 1606, Sir Francis Bacon established Jamestown in Virginia, ostensibly named after Elizabeth, the Virgin Queen. But the emblem he chose, and which survives today as the state seal, is the image of Pallas Athene, Parthenos, the virgin goddess of Greek myth and the divine patroness of civilisation. Bacon, as Greg Taylor observed in the same issue of New Dawn, prefigured the Freemasons with his vision of a practical utopia based on individual liberty and social responsibility – essentially the same ideals that formed the basis of the US Constitution. When America's Founding Fathers came to compose that document, they perpetuated the same sacred virginal symbolism initiated at Jamestown.
The repeated references to Virgo, the Virgin Athene, in the architecture, astrological timing and the very layout of Washington, D.C. represent homage to the Eternal Feminine, Goethe's "Ewig-Weibliche," which, in his Faust, leads us onward: "zieht uns hinan." The Freemasons who envisioned and constructed the capital of the United States did so to put their new country in accord with that pure ("virginal") energy they believed actually to exist as the demiurge of Creation. They worshipped that energy, personified in the goddess, a concept that was anathema to the patriarchal Christians of their time (and ours?). As Ovason concludes, "A city which is laid out in such a way that it is in harmony with the heavens is a city in perpetual prayer." Given how far the present occupants of Washington, D.C. have strayed from the original intentions of its designers, the US capital needs all the prayers it can get!
By Frank Joseph, New Dawn Magazine.
<urn:uuid:0668a457-3405-42d7-adc4-d43ae6987ee6>
CC-MAIN-2020-05
https://humansarefree.com/2014/02/the-arcane-origins-of-america.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594391.21/warc/CC-MAIN-20200119093733-20200119121733-00375.warc.gz
en
0.968683
3,733
2.953125
3
January is radon awareness month Radon is 2nd leading cause of lung cancer JACKSONVILLE, Fla. – Here is a shocking fact. The second leading cause of lung cancer is radon. In the United States, the EPA estimates that about 21,000 lung cancer deaths each year are radon related and in Canada that number stands at approximately 3,000. Radon, a dangerous gas, is colorless, odorless, tasteless and radioactive. It is formed by the breakdown of uranium, a natural radioactive material found in soil, rock and groundwater. Nearly 1 out of every 15 homes in the United States and Canada is estimated to have an elevated radon level. It typically moves up through the ground to the air above and into your home through cracks and other holes in the foundation. Your home traps radon inside, where it can build up. Any home may have a radon problem - this means new and old homes, well-sealed and drafty homes, and homes with or without basements since this secret killer comes from the ground not from construction materials. How Radon Can Get Into Your Home 1. Cracks in Solid Floors 2. Construction Joints 3. Cracks in Walls 4. Gaps in Suspended Floors 5. Gaps Around Service Pipes 6. Cavities Inside Walls 7. The Water Supply Radon testing is the only way to know if you and your family are at risk from radon. Pillar To Post Home Inspectors conduct a short term test using a continuous monitor to provide a snapshot of the home to see if it has elevated levels of radon. Testing takes approximately 2-3 days and results are provided and interpreted and the report is sent directly to the client. Recommendations will then be made for a mitigation system. Even owners of condominiums, houses built on slabs, and other situations need to check on the air quality and the presence of radon in their living quarters.
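The article does not give a numeric threshold, but for context the EPA's action level for indoor radon is 4 picocuries per litre (pCi/L), roughly 148 becquerels per cubic metre. The sketch below simply converts a reading and compares it with that level; the measured value shown is made up for illustration, and the function name is arbitrary.

```python
PCI_L_TO_BQ_M3 = 37.0          # 1 pCi/L = 37 Bq/m^3
EPA_ACTION_LEVEL_PCI_L = 4.0   # EPA recommends fixing a home at or above this level

def assess_radon(measured_pci_l: float) -> str:
    """Convert a radon reading to Bq/m^3 and report whether mitigation is recommended."""
    bq_m3 = measured_pci_l * PCI_L_TO_BQ_M3
    verdict = ("at or above the EPA action level; mitigation recommended"
               if measured_pci_l >= EPA_ACTION_LEVEL_PCI_L
               else "below the EPA action level")
    return f"{measured_pci_l:.1f} pCi/L ({bq_m3:.0f} Bq/m3) is {verdict}"

# Hypothetical result from a 2-3 day short-term test:
print(assess_radon(5.2))
```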
<urn:uuid:9afa79b1-acc8-4453-9a9b-66e274c8faab>
CC-MAIN-2020-24
https://www.news4jax.com/news/2016/01/07/january-is-radon-awareness-month/
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390442.29/warc/CC-MAIN-20200526015239-20200526045239-00241.warc.gz
en
0.934776
410
3.15625
3
Neutrino physics is full of unusual characters. There was Ettore Majorana, who disappeared in 1938 without a trace, taking his savings with him. No record of him has ever been found, though there have been numerous disputed sightings of him throughout the years. Then there was Bruno Pontecorvo. Suspected of slipping nuclear secrets out of England, he vanished while on vacation in Italy in 1950 and reappeared five years later singing the praises of his new homeland: the Soviet Union. Strangest of all, though, is the neutrino itself. It is electrically neutral, making it invisible to particle detectors, and bizarrely lightweight, at most 0.0004 percent the weight of the next-lightest particle, the electron. Although it is the most numerous massive particle in the universe, it is so slippery that it can pass through a light year of lead as if it wasn’t there. And then there is the matter of the shape-shifting. Neutrinos come in three flavors: electron, muon, and tau, each named for the charged particle with which it is associated. But the flavors are not pure essences—each is made up of a different combination (or superposition) of three ingredients, or mass states. These mass states behave not as simple dumbbells of differing weights, but as waves of differing lengths. Because the waves do not line up with each other perfectly, at different points the height of one mass state will vary with respect to that of the other two. That means that sometimes the combination of mass states will most resemble the recipe for an electron neutrino, while at other times it will look like that of a muon neutrino. As a result, neutrinos appear to oscillate among the three flavors as they travel. No other fundamental particles do this. “Only the neutrinos can change from one type to another,” says André de Gouvêa of Northwestern University in Evanston, Illinois. More than a quirk of nature, this ability to mutate on the fly points to some deep questions in physics, and potentially, some important answers. Neutrino mutation would not be possible if it weren’t for the particle’s minuscule mass. Because each of the three known mass states is so small and its associated quantum wavelength is so long, the waves corresponding to each state can remain largely in sync, with only small offsets, over cosmic distances. This allows neutrinos to flicker between different flavors in an ephemeral state of multiplicity. If their masses were larger and their wavelengths shorter, the waves would quickly become so out of phase that this knife-edge balance between different flavors would collapse, forcing the neutrinos into one type or the other. “The different flavors would separate from each other,” says de Gouvêa. “They would have a very binary behavior.” The fact that neutrinos don’t, thanks to their puny mass states, makes sense according to the rules of quantum mechanics, but it is still mind-bending, says neutrino researcher Jason Koskinen of the University of Copenhagen. “I still haven’t wrapped my head around this,” he admits. There is just one snag: Neutrinos weren’t supposed to have any mass at all. “We built our standard model around the idea that neutrinos are massless,” says Janet Conrad of the Massachusetts Institute of Technology (MIT). The fact that they have mass, however small, is a big problem.
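Before turning to why that is a problem, it is worth making the wave picture above concrete. The article stays qualitative, so what follows is simply the standard textbook two-flavor oscillation formula, added here as an illustration. The probability that a neutrino produced with flavor α is detected as flavor β after travelling a distance L with energy E is

\[
P_{\alpha \to \beta} = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^{2}\,L}{4E}\right),
\]

where θ is the mixing angle between the two mass states and Δm² is the difference of their squared masses (in natural units). If neutrinos were massless, or if the two masses were exactly equal, Δm² would be zero, the second factor would vanish, and no flavor change could ever occur. The observed oscillations therefore demand nonzero, unequal masses, which is precisely the difficulty for the standard model described next.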
The standard model is physicists’ best idea of how particles and forces interact—a spectacularly strong edifice whose construction was completed in 2012 with the discovery of its last missing particle, the Higgs boson. “Neutrino oscillation is the only confirmed physics right now that can be done outside the standard model,” says Koskinen. The reason that neutrino mass is so tricky has to do with how any particle gets its mass. Other elementary particles with mass come in two mirror versions—one left- and one right-handed—that correspond to the direction of their spin. Each version can interact with a different force of nature, and both “hands” seem to be required to give particles mass, thanks to their interaction with an invisible quantum “ether” that suffuses all of space: the Higgs field, whose signature particle is the Higgs boson. The Higgs field acts a bit like a mirror, turning a particle with one spin into its mirror opposite. “The idea is that every once in a while, a left-handed particle will hit the Higgs field and convert to a right-handed particle,” says de Gouvêa. “The net effect is that it looks like a particle with mass.” Neutrinos, by contrast, interact only with the one-handed weak nuclear force (and technically gravity, but the strength of this force compared to the others is negligible). And indeed, only left-handed neutrinos have been observed. If neutrinos have no mirror reflection, they should have no mass, according to the standard model, so how they get mass is a physicist’s version of the Zen koan pondering the sound of one hand clapping. “Many particle physicists who work on the subject get confused about it,” says de Gouvêa. One possibility is that neutrinos do have a reflection, but one that only they can see. That is, there are right-handed neutrinos, but their presence has not been detected because they are even more aloof than their southpaw counterparts and have no mass. “That particle doesn’t participate in any force,” says de Gouvêa of the purported right-handed neutrino. “It literally does not interact with anything, except with the left-handed neutrino to give it a mass.” How neutrinos gain their mass is a mystery whose solution promises to spill over the boundaries of neutrino physics itself, and into one of the biggest questions of cosmology: Why is there more matter in the universe than antimatter? According to the standard model, equal amounts of matter and antimatter should have been made after the big bang. When matter and antimatter meet, they instantly and completely annihilate each other. So the big bang should have led in quick succession to a great conflagration. The fact that we are here today shows that some process tipped the scales to leave behind more matter. “How did equality evolve into inequality?” asks Boris Kayser, a neutrino theorist at Fermilab in Batavia, Illinois. “Matter and antimatter have to behave differently.” Many physicists suspect neutrinos played a role in this imbalance—but if they do, it’s unlikely that they get their mass the way other particles do (interacting with the standard Higgs field through a right-handed version of themselves). Fortunately, there’s a loophole, raised nearly 80 years ago by the enigmatic Majorana. Instead of invoking a separate right-handed matter neutrino, the neutrino anti-particle (the antineutrino) could act as a mass partner for its lefty counterpart.
After all, the antineutrino is right-handed. For this to work, though, neutrinos would need to be their own antiparticles. That means that if two neutrinos ever met each other, they would instantly annihilate. One way to test whether this is happening is to look for radioactive particle decays that should leave behind signs of two antineutrinos but don’t—presumably because the antineutrinos, being their own antiparticles, had annihilated immediately after forming. With the exception of one controversial result reported about a decade ago, this signature, known as neutrinoless double beta decay, has yet to be seen. That doesn’t mean the process (two neutrons decaying to produce two protons and two electrons) doesn’t exist: It is so rare that it is typically expected to occur on timescales much longer than the age of the universe. But not always. Statistically, the decay could occur on timescales detectable in the lab. “If somehow we were told we can only look for neutrino masses in one way and only one way, then neutrinoless double beta decay would probably be the highest priority,” de Gouvêa says. Several new hunts, including the Italian CUORE and the Canadian SNO+ experiments, aim to scrutinize the radioactive decays of elements such as tellurium for the telltale absence of antineutrinos. If neutrinos are not their own antiparticles, neutrinoless beta decay would never happen. Instead, the two neutrons would leave behind two protons, two electrons and two antineutrinos. In that case, the difference between the number of matter and antimatter leptons—that is, neutrinos, electrons, muons, and tau particles—would be zero both before and after the decay. If neutrinos are their own antiparticles, however, two leptons (the electrons) would be left standing after the decay—and no antileptons. The net result would be an increase in the quantity of matter leptons, at the expense of their antimatter counterparts. Similar processes operating in the early universe might provide just the ticket to explain the universe’s disparity between matter and antimatter. That is a promising direction for cosmologists interested in the nature of the universe. But, it also means some new physics is needed to explain how neutrinos get their mass; the usual interaction among left-handed, right-handed, and Higgs particles would not work. One idea is that neutrinos have their own Higgs field, a mirror that reflects only neutrinos and no other particles. “It’s like the neutrinos require their own Higgs boson,” says de Gouvêa. Chian-Shu Chen of the National Center for Theoretical Sciences in Hsinchu, Taiwan, and Ya-Juan Zheng of National Taiwan University in Taipei calculate that it is possible that signs of this new Higgs boson could appear at the Large Hadron Collider in Switzerland. “We expect [the] neutrino mass mechanism could have the chance to be revealed within the reach of the LHC,” says Chen. But he acknowledges that it would be “very lucky” if that happened, since physicists would ordinarily expect the new particles to be produced at much higher energies than the LHC can reach. Alexei Smirnov of the Max-Planck Institute for Nuclear Physics in Heidelberg, Germany, agrees. “I would call this activity ‘searching under the lamp,’ ” he says. “There is no other serious motivation for this construction but making something observable at the LHC.”
Another possibility is to add one or more extra types of neutrino that would be even less sociable than an ordinary one. This is similar to the idea of simply adding a right-handed neutrino, except in this case, the extra neutrino interacts with itself to provide its own mass. It is referred to as a massive “sterile” neutrino, since it can only affect other particles gravitationally. “The left-handed guys have their own mass, and the right-handed guys have their own mass,” says Rabindra Mohapatra of the University of Maryland, College Park. If a “sterile” neutrino exists, it should have a mass that is inversely proportional to that of the ordinary neutrino, as if the two neutrino types were on opposite sides of a seesaw. And that could help explain a puzzling gap in the distribution of the masses of fundamental particles, says Mohapatra. The quarks that make up protons and neutrons are about 10 times as massive as the electron, but the electron is at least 250,000 times as massive as the next lightest particle, the neutrino. “We were always worried about the fact that the neutrino mass seems to be much smaller than the electron mass,” says Mohapatra. In the seesaw mechanism, which Mohapatra helped originate 35 years ago, the extremely low mass of ordinary neutrinos can be explained if there are very heavy steriles as well. The seesaw mechanism could possibly produce exotic charged particles that would appear in the detritus of proton collisions at the LHC. Finding evidence of massive sterile neutrinos would be exciting “because it would tell us that neutrino masses are evidence for some other independent source of mass” for fundamental particles besides the ordinary Higgs field, says de Gouvêa. Such a discovery would get at the heart of the origin of mass, one of the most basic questions in physics. As for Pontecorvo, the man who first suggested that neutrinos might shape-shift, his own life was nothing if not a seesaw. He soon came to regret having defected to the Soviet Union. “After a few years, I understood what an idiot I was,” Pontecorvo told a reporter in 1992, a year before his death. But it was too late. His defection prevented him from traveling abroad for many years, contributed to his wife’s nervous breakdown, and, ironically, shut him out from the nuclear reactor research he had probably left the United Kingdom to pursue, says Pontecorvo biographer Simone Turchetti, a science historian at the University of Manchester. “This really is a story of a man living two completely different lives in two completely different worlds,” Turchetti says—rather like the particle he studied. Maggie McKee is a freelance science writer focusing mainly on astronomy and physics. Previously an editor at New Scientist and Astronomy magazines, she lives near Boston with her husband.
<urn:uuid:758c96b2-2de1-4ea2-8e6d-425f08ad8127>
CC-MAIN-2017-26
http://nautil.us/issue/14/mutation/this-shape_shifter-could-tell-us-why-matter-exists
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320386.71/warc/CC-MAIN-20170625013851-20170625033851-00402.warc.gz
en
0.953487
3,063
2.9375
3
So, Europe has a brand-new species of hitherto-undiscovered mammal, the Cypriot mouse Mus cypriacus . That’s great, but what has interested me in particular is the claim made in many articles that the Cypriot mouse is ‘the first new European mammal to be discovered in more than 100 years’ (go, for example, here or here). Sad to say, this quote wasn’t invented by journalists, but apparently comes right from the mammalogists who described the species. One internet article on the discovery states that it ‘overturns the widely held belief that every living species of mammal had been identified in Europe’, and goes on to state that ‘it was generally assumed that the European biodiversity had been entirely picked over by the natural history pioneers of the 19th century’. Well, ok, something can be a ‘widely held belief’ and still be pretty much untrue, but while one might expect that Europe is a well known place where few new species are found nowadays, these statements – like most media statements pertaining to the rarity of recently discovered species – are wildly inaccurate. Sure, there aren’t as many new mammals coming out of 21st century Europe as there are frogs coming out of Sri Lanka or whatever, but the fact remains that Europe – the most well-explored and intensively studied continent of them all – most certainly has produced new mammal species within the last 100 years, including within recent decades. What’s more, it hasn’t produced one or two new species, but 32 of them! Sorry Mus cypriacus, but you ain’t that special. You will know from previous blog posts that during the last few decades a large number of tropical rodents have been named and described (see New, obscure, and nearly extinct rodents of South America and Giant furry pets of the Incas). And so it is with Europe, and among rodents we start with mice. Remember here that we’re only interested in those species that have been named over the past 100 years. Five European mice have been named within the last 100 years, four of which are obscure, and one of which is well known and well studied. Firstly, we have the Cretan spiny mouse Acomys minous Bate, 1906, a cold-adapted island endemic (Bate’s publication is sometimes given as 1905, in which case this isn’t a ‘100 year’ mammal). The second species, the Western house mouse Mus domesticus Schwartz & Schwartz, 1943, is anything but poorly known, and though not recognized as distinct from M. musculus Linnaeus, 1758 until 1943, it can hardly be regarded as a recently discovered species. As the common name suggests, the Western house mouse is the house mouse species of western Europe (as well as northern Africa). It is replaced in Scandinavia and eastern Europe by M. musculus, and around the Mediterranean coast it is sympatric with the Algerian mouse M. spretus. The third ‘100 year’ European mouse species was first described from Allgäu in Germany: it’s the Alpine wood mouse Apodemus alpicola Heinrich, 1952, now known to occur in the Alps of Switzerland, Liechenstein, Austria and Italy as well as those of Germany. Though first named as a new species, it later became regarded as a high-altitude subspecies of the Yellow-necked mouse A. flavicollis. A 1989 study demonstrated that it should be recognised as a distinct species again. Also belonging to the genus Apodemus is the Mount Hermon field mouse A. iconicus Heptner, 1948. This species has a complex nomenclatural history that I won’t cover in full here, but of special interest is that a new species named from Israel in 1989 (A. 
hermonensis Filippucci et al., 1989) is now thought to be a junior synonym of A. iconicus: there are also a few older names (Mus sylvaticus var. tauricus Pallas, 1811, M. s. tauricus Barrett-Hamilton, 1900 and M. s. witherbyi Thomas, 1902) that some mammalogists regard as senior synonyms of A. iconicus, and if this is correct then A. iconicus is not a ‘100 year’ mammal. Though best associated with Israel and Turkey, A. iconicus has recently been added to the definitive European list as it’s now known to occur on Rhodes and Bozcaada (Kryštufek & Mozetič Francky 2005). Finally among mice, there is the recently discovered and poorly known Balkan short-tailed mouse Mus macedonicus Petrov & Ruzic, 1983. Exactly as obscure as some of these mice are various ‘100 year’ vole species. One of them is comparatively well known however, and indeed is the best known recently discovered European mammal: the Bavarian pine vole Microtus bavaricus Konig, 1962 of the Bavarian and Italian Alps. Ironically, the reason the species is ‘best known’ is because it was thought to have become extinct: there was an absence of sightings after its discovery, and in 1980 a hospital was constructed on the location where it formerly occurred. However, the species was rediscovered by Friederike Spitzenberger in 2004 at a location in Austria. Four other Microtus voles have been named within the last 100 years. Cabrera’s vole Microtus cabrerae Thomas, 1906 is a poorly known, endangered Spanish species. Far better studied is the Sibling vole Microtus rossiaemeridionalis Ognev, 1924, a species that occurs from Finland southward to Greece, and also occurs in eastern Asia (see adjacent image). Between 30 and 70 years ago it was accidentally introduced to Svalbard, and in some years large numbers of the species occur there. Though originally named in the 1920s, the name M. rossiaemeridionalis was forgotten about in the following decades. The discovery in the late 1960s that a population originally assumed to be part of the Common vole M. arvalis actually merited distinction then led to the naming of the new species M. subarvalis Meyer et al. 1972, and it was this population that then proved to be the same thing as M. rossiaemeridionalis. The third species, the Tatra pine vole Microtus tatricus Kratochvíl, 1952, was first described from Slovakia but is now known to occur in Poland, Rumania and Ukraine. Finally, the Balkan pine vole Microtus felteni Malec & Storch, 1963 is of special interest with regard to recently named European mammals in that it is endemic to the former Yugoslavian province of Macedonia, an area where there are a further two endemic mammals: the Balkan or Stankovic’s mole T. stankovici and the Balkan short-tailed mouse Mus macedonicus. Of the two, the former was only named in 1931 and the latter in 1983, so Macedonia has proved a ‘hot-spot’ for new European mammals. Another ‘100 years’ vole is the highly distinctive Balkan snow vole or Martino’s snow vole Dinaromys bogdanovi (Martino, 1922), originally named as a species of Microtus but awarded its own genus in 1955. Occurring in Croatia, Bosnia and Herzegovina, it may also be present in Albania and Greece. Fossils show that it formerly occurred more widely in Europe. Finally among voles, there is the Southern water vole Arvicola sapidus Miller, 1908, an endangered species of France, Spain and Portugal. Finally among rodents, we come to another obscure and poorly known species, Roach’s mouse-tailed dormouse Myomimus roachi (Bate, 1937). 
First described from Israel as a fossil, it was discovered in living form in Bulgaria in 1960 and in Turkey in 1991. Several other species of this genus are known, all from eastern Asia, all named during the 20th century [the adjacent image shows one of these, M. personatus of Turkmenistan, Uzbekistan and Iran]. Moving now to lipotyphlans, or insectivorans or whatever you want to call them, we find that several species have been named within the last 100 years. Europe’s shrew species belong to three genera, Sorex (the long-tailed or red-toothed shrews), Crocidura (the white-toothed shrews) and Neomys (the Old World water shrews), and what’s interesting is that the ‘100 year’ species belong to all three of these. The new Sorex species are the Spanish or Iberian shrew S. granarius Miller, 1910, the Taiga or Even-toothed shrew S. isodon Turov, 1924 and the Apennine shrew S. samniticus Altobello, 1926. Though first described as a subspecies of the Common shrew S. araneus, the Spanish shrew is strongly distinct genetically and in having a particularly unusual short skull. The Taiga shrew occurs from Norway to as far east as Siberia and Sakhalin Island; it is a large, drab species with a broad braincase and particularly narrow snout. Though named in the 1920s it was later regarded as a subspecies of the Dusky shrew S. sinalis, a Chinese species, until Hoffmann (1987) showed that it should have remained as a species. The Apennine shrew is endemic to Italy, and while formerly regarded by some as conspecific with the Common shrew, it is quite different, having a much shorter tail for example. Moving now to white-toothed shrews, we find that four European species have been named since the 1950s. Shrews have proved very good at colonizing islands, and only within recent decades have mammalogists started to properly describe and differentiate the island endemic white-toothed shrews of the European islands. Crete has its own recently-named white-toothed shrew, the Cretan white-toothed shrew C. zimmermanni Wettstein, 1953, while Pantelleria Island off Italy is home to C. cossyrensis Contoli, 1989. The Pantelleria shrew is controversial, with various studies indicating that it is a subspecies of the Greater white-toothed shrew (C. russula). During the 1980s two new white-toothed shrews were named from the Canary Islands: the Canary shrew C. canariensis Hutterer et al., 1987 of Fuerteventura, Lanzarote and Lobos, and the Osorio shrew Crocidura osorio Molina & Hutterer, 1989 of Gran Canaria. The only other extant endemic mammal of the Canary Islands, the bat Plecotus teneriffae, was named in 1907 as a subspecies and given species status in 1985, so the islands have proved an important place for the discovery of new European mammals. Incidentally, there were other endemic mammals on the Canary Islands until recently, but they are today extinct. While its discovery falls outside of the last 100 years, of interest is that Sicily’s endemic white-toothed shrew was only named in 1900: the Sicilian shrew C. sicula Miller, 1900. Though it has since been demoted to subspecific status, it’s also worth noting that the white-toothed shrew of the Isles of Scilly, Crocidura suaveolens cassiteridum, was originally named as a distinct species (C. cassiteridum) in 1924 (Hinton 1924) [see adjacent image].
This shrew isn’t unique to the Isles of Scilly, as it also occurs on Jersey and Sark, and given that it belongs to a species otherwise restricted to southern Europe it is usually thought of as an introduction from the Mediterranean region. Presumably it made the crossing in fodder or bedding for domestic animals. Incidentally, the Hinton who named the Scilly shrew is Martin Alister Campbell Hinton (1883-1961), former Keeper of Zoology at London’s Natural History Museum, and perhaps best known nowadays as possible perpetrator of the Piltdown hoax. Finally among shrews, there is the Neomys species Miller’s water shrew Neomys anomalus Cabrera, 1907, also known as the Mediterranean or Cabrera or Southern water shrew. In contrast to the better-known Neomys species, N. anomalus is less well adapted for life in water, with a less well-developed tail keel and fewer fringes on the borders of its hind feet, and it differs in the shape of its lower jaw, in penis morphology, and in other characters from the other European Neomys species. Among ‘100 year’ European lipotyphlans, it’s not all just shrews. Three new European mole species have been named since 1906: the Levant mole Talpa levantis Thomas, 1906, the Iberian mole T. occidentalis Cabrera, 1907, and the Balkan or Stankovic’s mole T. stankovici Martino & Martino, 1931. A fourth species, the Roman mole T. romana Thomas, 1902 was named 104 years ago. While all of these taxa were originally named as distinct species, they later became sunk into the synonymy of other species (yet more examples of laissez-faire lumping: see The many babirusa species: laissez-faire lumping under fire again), only to be resurrected during the 1990s. The Levant mole, an animal known from Bulgaria, Greece, Turkey and the adjacent part of the Caucasus, was mostly regarded as a subspecies of the Mediterranean mole T. caeca, until a revision of 1993, and the Iberian mole was similarly widely regarded as a Mediterranean mole subspecies until 1993. Similarly, the Balkan mole was regarded during recent decades as a subspecies of the Roman mole T. romana. Finally, we come to bats. While most European bat species were formally named in the 1800s and before, new taxa continue to be discovered, with several species named this century. Many people might immediately think of the two pipistrelle species dubbed informally the 45 and 55 kHz pipistrelles: in 1993 it was discovered that the ‘species’ Pipistrellus pipistrellus actually consisted of two distinct species, both of which differed in the echolocation frequencies of their calls, and which were later shown to differ in genetics, morphology and behaviour (Barlow et al. 1997, Davidson-Watts & Jones 2006). However, while the many differences between these two species have only recently been acknowledged, both were originally named during the 1700s and 1800s: the 45 kHz pipistrelle is P. pipistellus (Schreber, 1774) while the 55 kHz pipistrelle is P. pygmaeus Leach, 1825. Consequently, neither bat can be considered a ‘100 year’ discovery. However, vesper bats have yielded several bona fide new European species within the last 100 years, though as we shall see a few of them are of controversial status. The most recently named of them are the two long-eared bats Plecotus microdontus Spitzenberger et al. 2002 from Austria and P. sardus Mucedda et al., 2002 from Sardinia, though P. microdontus has since been regarded by some as synonymous with the Brown long-eared bat P. auritus. Also recently named is the Alpine long-eared bat P. 
alpinus Kiefer & Veith, 2001, named for a specimen collected in France in 2001 (Kiefer & Veith 2001). Additional specimens are known from Greece, Liechtenstein, Austria, Croatia and Switzerland, so there is every indication that the species is widespread. The Croatian specimen was collected in 1972 and the specimen from Liechtenstein in 1961: a reminder that the actual ‘discovery’ date of a species often doesn’t match the time when it becomes technically named and/or described. Yet another recently recognised species, P. macrobullaris Kuzjakin, 1965, was named for long-eared bats from Switzerland and Austria supposedly intermediate between the Brown long-eared bat and Grey long-eared bat P. austriacus, but shown by Spitzenberger et al. (2001) to be worthy of species status. P. macrobullaris is now known from Croatia and elsewhere. To confuse matters further, recent work (see Juste et al. 2004) indicates that both P. microdontus and P. alpinus are synonymous with P. macrobullaris [adjacent image shows a long-eared bat. And no, I have no idea what species it is]. Several new long-eared bat subspecies have also been named within the last few decades, and new data has caused some of them to be newly elevated to species level. Within P. auritus, the subspecies P. a. hispanicus (later reidentified as a subspecies of P. austriacus) was named in 1957, P. a. kolombatovici in 1980, and P. a. begognae in 1990. Genetic studies have shown that P. a. kolombatovici is distinct enough to be regarded as a full species (Mayer & von Helverson 2001, Spitzenberger et al. 2001), though the animal labelled as P. a. kolombatovici by Spitzenberger et al. (2001) later turned out to be P. alpinus. Another form first named as a subspecies of P. auritus, P. a. teneriffae Barret-Hamilton, 1907, was recognised as worthy of species status in 1985. Though they started their taxonomic histories as subspecies, both P. kolombatovici and P. teneriffae can therefore be stated to have been discovered within the last 100 years. Another new vesper bat, this time a mouse-eared bat, is in the ‘100 years’ club, but it seems unlikely to be a valid species. It’s the Nathaline bat Myotis nathalinae Tupinier, 1977, described for two specimens from Ciudad Real in Spain. However, it’s highly similar genetically and morphologically to Daubenton’s bat M. daubentonii (Tupinier 1977). Indeed Bogdanowicz (1990) found that the skull morphology of M. nathalinae fell within the range of variation exhibited by M. daubentonii populations, and therefore argued against the idea that it should be regarded as a valid species, while genetic samples of M. nathalinae have also fallen within the range of variation exhibited by M. daubentonii (Mayer & von Helverson 2001). Other studies have produced the same result, so bat workers generally regard M. nathalinae as a subspecies of M. daubentonii. A second new mouse-eared bat, M. alcathoe von Helverson et al., 2001, is morphologically and genetically distinct, and noteworthy in being Europe’s smallest mouse-eared bat, and the one with the most high-pitched echolocation calls. First reported from Greece and Hungary, in 2003 it was reported from Slovakia. So, so far it’s all been rodents, insectivores and bats: exactly those groups of mammals you’d expect to contain recently-discovered species. Indeed, that is about it. 
There is, however, a ‘100 year’ European lagomorph: the Broom hare Lepus castroviejoi Palacios, 1977 of the Cantabrian Mountains of north-west Spain, a species regarded as merely a population of the European hare L. europaeus until 1976 (Palacios 1977). This poorly known hare is obscure and has been widely overlooked, in fact it’s missing from several (post-1977!) field guides on European mammals. There does appear to be widespread acceptance of its specific status, however, even though there is some indication that the species hybridizes with the Mountain hare L. timidus (Melo-Ferreira et al. 2005). It’s pretty clear then that the Cypriot mouse is most certainly not ‘the first new mammal species to be found in Europe in over a century’, and I’m amazed that such a claim has been made. Despite the message that journalists write into their stories all the time, the discovery of new species is a routine thing, not an extraordinary one, and that goes even for mammals, and even for Europe. Don’t get me wrong: the Cypriot mouse is still a very interesting and significant discovery, but it is clearly not the major scientific event that has been implied by some. To conclude, those European mammals named within the past 100 years – excluding the Cypriot mouse – are as follows. I might have missed some, in which case please let me know [UPDATE: list amended as of 18-11-2006. Thanks to those who have provided new data]. As noted, a few species are of dubious status, and have been marked with **.
- Cretan spiny mouse Acomys minous Bate, 1906
- Western house mouse Mus domesticus Schwartz & Schwartz, 1943
- Alpine wood mouse Apodemus alpicola Heinrich, 1952
- Mount Hermon field mouse A. iconicus Heptner, 1948
- Balkan short-tailed mouse Mus macedonicus Petrov & Ruzic, 1983
- Bavarian pine vole Microtus bavaricus Konig, 1962
- Cabrera’s vole M. cabrerae Thomas, 1906
- Sibling vole M. rossiaemeridionalis Ognev, 1924
- Tatra pine vole M. tatricus Kratochvíl, 1952
- Balkan pine vole M. felteni Malec & Storch, 1963
- Balkan snow vole or Martino’s snow vole Dinaromys bogdanovi (Martino, 1922)
- Southern water vole Arvicola sapidus Miller, 1908
- Roach’s mouse-tailed dormouse Myomimus roachi (Bate, 1937)
- Spanish or Iberian shrew Sorex granarius Miller, 1910
- Taiga or Even-toothed shrew S. isodon Turov, 1924
- Apennine shrew S. samniticus Altobello, 1926
- Cretan white-toothed shrew Crocidura zimmermanni Wettstein, 1953
- Pantelleria Island shrew C. cossyrensis Contoli, 1989 **
- Canary shrew C. canariensis Hutterer et al., 1987
- Osorio shrew C. osorio Molina & Hutterer, 1989
- Miller’s water shrew Neomys anomalus Cabrera, 1907
- Levant mole Talpa levantis Thomas, 1906
- Iberian mole T. occidentalis Cabrera, 1907
- Balkan or Stankovic’s mole T. stankovici Martino & Martino, 1931
- Alpine long-eared bat Plecotus alpinus Kiefer & Veith, 2001 **
- P. microdontus Spitzenberger et al. 2002 **
- P. kolombatovici (Dulic, 1980)
- P. teneriffae Barret-Hamilton, 1907
- P. macrobullaris Kuzjakin, 1965
- Nathaline bat Myotis nathalinae Tupinier, 1977 **
- M. alcathoe von Helverson et al., 2001
- Broom hare Lepus castroviejoi Palacios, 1977
For the latest news on Tetrapod Zoology do go here Refs - Barlow, K. E., Jones, G. & Barratt, E. M. 1997. Can skull morphology be used to predict ecological relationships between bat species? A test using two cryptic species of pipistrelle. Proceedings of the Royal Society of London B 264, 1695-1700. Bogdanowicz, W. 1990.
Geographic variation and taxonomy of Daubenton’s bat, Myotis daubentoni, in Europe. Journal of Mammalogy 71, 205-218. Davidson-Watts, I. & Jones, G. 2005. Differences in foraging behaviour between Pipistrellus pipistrellus (Schreber, 1774) and Pipistrellus pygmaeus (Leach, 1825). Journal of Zoology 268, 55-62. Hinton, M. A. C. 1924. On a new species of Crocidura from Scilly. Annals and Magazine of Natural History 14, 509-510. Hoffmann, R. S. 1987. A review of the systematics and distribution of Chinese red-toothed shrews (Mammalia: Soricinae). Acta Theriologica Sinica 7, 100-139. Juste, J., Ibáñez, C., Muñoz, J., Trujillo, D., Benda, P., Karatş, A. & Ruedi, M. 2004. Mitochondrial phylogeography of the long-eared bats (Plecotus) in the Mediterranean Palaearctic and Atlantic Islands. Molecular Phylogenetics and Evolution 31, 1114-1126. Kiefer, A. & Veith, M. 2001. A new species of long-eared bat from Europe (Chiroptera: Vespertilionidae). Myotis 39, 5-16. Kryštufek, B. & Mozetič Francky, B. 2005. Mt. Hermon field mouse Apodemus iconicus is a member of the European mammal fauna. Folia Zoologica 54, 69-74. Mayer, F. & von Helversen, O. 2001. Cryptic diversity in European bats. Proceedings of the Royal Society of London B 268, 1825-1832. Melo-Ferreira, J., Boursot, P., Suchentrunk, F., Ferrand, N. & Alves, P. C. 2005. Invasion from the cold past: extensive introgression of mountain hare (Lepus timidus) mitochondrial DNA into three other hare species in northern Iberia. Molecular Ecology 14, 24-59. Palacios, F. 1977. Descripcion de una nueva especie de liebre (Lepus castroviejoi) endémica de la cordillera Cantabrica. Doñana Acta Vertebrata 3, 205-223. Spitzenberger, F., Piálek, J. & Haring, E. 2001. Systematics of the genus Plecotus (Mammalia, Vespertilionidae) in Austria based on morphometric and molecular investigations. Folia Zoologica 50, 161-172. Tupinier, Y. 1977. Description d'une Chauve-souris nouvelle: Myotis nathalinae nov. sp. (Chiroptera, Vespertilionidae). Mammalia 41, 327-340.
<urn:uuid:003f26f7-c34a-441c-a919-5938812d3b1c>
CC-MAIN-2013-48
http://darrennaish.blogspot.com/2006_10_01_archive.html
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164034375/warc/CC-MAIN-20131204133354-00086-ip-10-33-133-15.ec2.internal.warc.gz
en
0.9285
5,918
2.875
3
Some brain systems become less coordinated with age even in the absence of Alzheimer's disease, according to a new study from Harvard University. The results help to explain why advanced age is often accompanied by a loss of mental agility, even in an otherwise healthy individual. The study was led by Jessica Andrews-Hanna, a doctoral candidate in the Department of Psychology in the Faculty of Arts and Sciences at Harvard. "This research helps us to understand how and why our minds change as we get older, and why some individuals remain sharp into their 90s, while others' mental abilities decline as they age," says Andrews-Hanna. "One of the reasons for loss of mental ability may be that these systems in the brain are no longer in sync with one another." Previous studies have focused on the specific structures and functions within the brain, and how their deterioration might lead to decreased cognitive abilities. However, this study examined the way that large-scale brain systems that support higher-level cognition correlate and communicate across the brain, and found that in older adults these systems are not in sync. In particular, widely separated systems from the front to the back of the brain were less correlated. The human brain can be divided into major functional regions, each responsible for different kinds of "applications," such as memory, sensory input and processing, executive function or even one's own internal musing. The functional regions of the brain are linked by a network of white matter conduits. These communication channels help the brain coordinate and share information from the brain's different regions. White matter is the tissue through which messages pass from different regions of the brain. Scientists have known that white matter degrades with age, but they did not understand how that decline contributes to the degradation of the large-scale systems that govern cognition. "The crosstalk between the different parts of the brain is like a conference call," said Jessica Andrews-Hanna, a graduate student in Buckner's lab and the lead author of the study. "We were eavesdropping on this crosstalk and we looked at how activity in one region of the brain correlates with another." The researchers studied 55 older adults, approximately age 60 and over, and 38 younger adults, approximately age 35 and younger. They used a neuroimaging technique called fMRI to obtain a picture of activity in the brain. The results showed that among the younger people, brain systems were largely in sync with one another, while this was not the case with the older individuals. Among the older individuals, some of the subjects' brains systems were correlated, and older individuals that performed better on psychometric tests were more likely to have brain systems that were in sync. These psychometric tests, administered in addition to the fMRI scanning, measured memory ability, processing speed and executive function. Among older individuals whose brain systems did not correlate, all of the systems were not affected in the same way. Different systems process different kinds of information, including the attention system, used to pay attention, and the default system, used when the mind is wandering. The default system was most severely disrupted with age. Some systems do remain intact; for example, the visual system was very well preserved. The study also showed that the white matter of the brain, which connects the different regions of the brain, begins to lose integrity with age. 
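The release describes the analysis only in general terms: measure fMRI activity in different regions and ask how well their time courses correlate. As a purely hypothetical sketch of that general idea (not the authors' actual pipeline; the regions, signals and numbers below are simulated for illustration), the computation looks roughly like this in Python:

```python
# Hypothetical sketch of "functional connectivity": correlate the fMRI signal
# of each brain region with every other region. Not the study's real pipeline;
# the regions and signals below are simulated for illustration only.
import numpy as np

def connectivity_matrix(timeseries):
    """timeseries: array of shape (n_regions, n_timepoints) of BOLD signals.
    Returns an n_regions x n_regions matrix of Pearson correlations."""
    return np.corrcoef(timeseries)

rng = np.random.default_rng(0)
shared = rng.standard_normal(200)             # a common, coordinated fluctuation
regions = np.vstack([
    shared + 0.5 * rng.standard_normal(200),  # frontal region tracking the shared signal
    shared + 0.5 * rng.standard_normal(200),  # posterior region also tracking it ("in sync")
    rng.standard_normal(200),                 # region with independent activity
    rng.standard_normal(200),                 # another independent region
])

print(np.round(connectivity_matrix(regions), 2))
# Large off-diagonal entries mark pairs of regions whose activity rises and
# falls together; entries near zero mark systems whose crosstalk has weakened,
# the pattern the study reports in older adults.
```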
One of the challenges to studying the aging brain is that the early signs of Alzheimer's disease are very subtle, and it is difficult to distinguish between the early stages of Alzheimer's disease and normal aging. In order to ensure that the researchers were only looking at healthy aging brains, the researchers used a PET scanning process to identify the presence of amyloid, a chemical present in individuals with Alzheimer's. When the presence of this chemical was detected, individuals were not included in the study. In this way, the researchers ensured that they were looking at a healthy aging brain. "Understanding why we lose cognitive function as we age may help us to prolong our mental abilities later in life," says Buckner. "The results of this study help us to understand how the aging brain differs from the brain of a younger individual." This research was published in the Dec. 6 issue of Neuron. Other researchers involved in this study include Justin Vincent, a graduate student in the Department of Psychology at Harvard and Randy Buckner, Harvard professor of psychology and an investigator with the Howard Hughes Medical Institute. Co-authors also include Andrew Snyder, Denise Head and Marcus Raichle of Washington University in St. Louis and Cindy Lustig of the University of Michigan. The research was funded by the National Institutes of Health, the Alzheimer's Association, and the Howard Hughes Medical Institute.
<urn:uuid:0fdaf88d-facb-44aa-8318-96276a14fbcd>
CC-MAIN-2014-10
http://www.sciencedaily.com/releases/2007/12/071205122554.htm
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999651908/warc/CC-MAIN-20140305060731-00087-ip-10-183-142-35.ec2.internal.warc.gz
en
0.972755
961
3.15625
3
Talking to Your Kids About the Dangers of Smoking Talking to your children about the dangers of smoking is a discussion that needs to happen. Unfortunately, it isn’t exactly an easy conversation to start – and finish – with your children. Luckily, the dentists at Ponte Vedra Pediatric Dentistry & Orthodontics are here to help. We understand just how difficult and intimidating the “no smoking” conversation can be for parents. That is why we have gathered some tips parents find helpful when they prepare for the discussion regarding the dangers of smoking. Start the Discussion Regarding the Dangers of Smoking at an Early Age Many parents believe they should only discuss the dangers of smoking with their child once they reach an age where their child could be tempted to smoke. Unfortunately, if you wait until a child is 13, 14, or 15 to talk to them about the dangers of smoking it could be too late. On the other hand, if you start the discussion at an age when they are too young, you run the risk of overwhelming them or causing them to disengage and not discuss the topic at a later date. Many healthcare experts believe that around the age of 5 or 6 is the appropriate time to start the discussion about not smoking. Children are able to understand what is being discussed and even ask occasional questions. When you do start the discussion at this age, keep it age appropriate. There isn’t a need to go into detail about oral cancer and all the complex dangers associated with smoking. Just bring up the topic, answer any questions, and move on. When your child is older, you can start expanding upon the specific dangers of smoking. Keep the Conversation Open Discussing the dangers of smoking with your child, especially if they are between the ages of 10 and 16, can result in the discovery of interesting information. Children may reveal to their parents that they have tried smoking or been tempted to try. If this happens, it is important to remain calm and keep an open mind. If your child reveals that he or she is tempted by friends to try smoking, explore that topic further. Talk to them about ways they can respond to peer pressure, ask them if they would like to explore new opportunities that will keep them busy, and listen to them about why they might have been tempted to smoke. Should your child reveal that he or she has tried smoking, do not overreact. Explore what lead them to try it, learn more about how many times they have tried smoking, and discuss the dangers of smoking with them. It is also important to realize children make mistakes and smoking a single cigarette isn’t the end of the world, but it could lead to dangerous habits in the future. Make the Conversation Personal Children respond better when they can relate to what is being discussed. While discussing the dangers of smoking, personalize the conversation. Talk to your child about how smoking would impact their activities, health, and even put their dental health in jeopardy. Talk to them about what they find appealing about it and what they don’t find appealing about it. The more personalized the conversation, the more likely your child will be to understand the risks of smoking and not want to try it. Remember Every Child is Different Of course, every child is different in how he or she reacts to serious topics. Some children are easy-going and accept simple explanations while others may be resistant to talking to you about this topic. 
If you are having difficulty starting the discussion with your child about not smoking, the doctors at Ponte Vedra Pediatric Dentistry & Orthodontics may be able to help. Our experienced pediatric dentists have dealt with thousands of children ranging in age from infants to teens. We can help parents learn how to approach discussing the dangers of smoking with their child. In some cases, we can even get the conversation started by bringing up the topic during a child’s regular, preventative checkup. Call our office today to schedule an appointment for your child for a preventative checkup or to ask our dental staff for advice on how to start the conversation about not smoking.
<urn:uuid:0a0e23ec-daee-4214-8d98-2b9f9ddf8c6b>
CC-MAIN-2023-23
https://kidspv.com/blog/talking-to-kids-dangers-of-smoking/
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644571.22/warc/CC-MAIN-20230528214404-20230529004404-00355.warc.gz
en
0.963303
843
3.21875
3
Aquarium Releases Sharks Off Sydney Beach for Study The two-year-old wobbegongs, or carpet sharks, currently measuring up to 80 cm in length, are bottom-dwelling and regarded as harmless, although the species can grow to three metres (10 feet). The study will provide an insight into the feasibility of releasing aquarium-bred sharks to restock populations in local areas, as well as the role marine parks can play in protecting species, said Sydney Aquarium Conservation Fund Coordinator Claudette Rechtorik. By monitoring the sharks, marine scientists will learn more about their growth patterns and behaviour and how long they spend in protected waters. "Shark populations are being depleted because of practices such as over-fishing, shark-finning and the use of shark nets at beaches, so we're keen to raise awareness about the need to protect sharks, particularly those which are found mainly in Australian waters like wobbegongs," Rechtorik said. (Reporting by Michael Perry; Editing by Paul Tait)
<urn:uuid:05ca4386-0ead-4cf2-86e0-4be31421758d>
CC-MAIN-2014-15
http://www.planetark.org/dailynewsstory.cfm/newsid/50253/story.htm
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00576-ip-10-147-4-33.ec2.internal.warc.gz
en
0.949287
215
2.921875
3
BUSINESS--A PROFESSION Chapter 21 THE LIVING LAW An address delivered before the Chicago Bar Association, January 3, 1916. The history of the United States, since the adoption of the constitution, covers less than 128 years. Yet in that short period the American ideal of government has been greatly modified. At first our ideal was expressed as "A government of laws and not of men." Then it became "A government of the people, by the people, for the people." Now it is "Democracy and social justice." In the last half century our democracy has deepened. Coincidentally there has been a shifting of our longing from legal justice to social justice, and—it must be admitted—also a waning respect for law. Is there any causal connection between the shifting of our longing from legal justice to social justice and waning respect for law? If so, was that result unavoidable? Many different causes contributed to this waning respect for law. Some related specifically to the lawyer, some to the courts and some to the substantive law itself. The lessening of the lawyer's influence in the community came first. James Bryce called attention to this as a fact of great significance, already a generation ago. Later criticism of the efficiency of our judicial machinery became widespread. Finally, the law as administered was challenged—a challenge which expressed itself vehemently a few years ago in the demand for recall of judges and of judicial decisions. Many different remedies must be applied before the ground lost can be fully recovered and the domain of law extended further. The causes and the remedies have received perhaps their most helpful discussion from three lawyers whom we associate with Chicago: Professor Roscoe Pound, recently secured for Harvard, who stands preeminently in service in this connection; Professor Wigmore; and Professor Freund. Another Chicago Professor, who was not a lawyer but a sociologist, the late Charles R. Henderson, has aided much by intelligent criticism. No court in America has in the last generation done such notable pioneer work in removing the causes of criticism as your own Municipal Court under its distinguished Chief Justice, Harry Olson. And the American Judicature Society, under the efficient management of Mr. Herbert Harley, is stimulating thought and action throughout the country by its dissemination of what is being done and should be done in aid of the reform of our judicial system. The important contribution which Chicago has made in this connection makes me wish to discuss before you a small part of this large problem. The Challenge of Existing Law. The challenge of existing law is not a manifestation peculiar to our country or to our time. Sporadic dissatisfaction has doubtless existed in every country at all times. Such dissatisfaction has usually been treated by those who govern as evidencing the unreasonableness of law breakers. The line "No thief e'er felt the halter draw with good opinion of the law," expresses the traditional attitude of those who are apt to regard existing law as "the true embodiment of everything that's excellent."
It required the joint forces of Sir Samuel Romilly and Jeremy Bentham to make clear to a humane, enlightened and liberty loving England that death was not the natural and proper punishment for theft. Still another century had to elapse before social science raised the doubt whether theft was not perhaps as much the fault of the community as of the individual. Earlier Challenges. In periods of rapid transformation, challenge of existing law, instead of being sporadic, becomes general. Such was the case in Athens twenty-four centuries ago, when Euripides burst out in flaming words against "the trammelings of law which are not of the right." Such was the case also in Germany during the Reformation, when Ulrich Zasius declared that "All sciences have put off their dirty clothes, only jurisprudence remains in its rags." And after the French Revolution another period of rapid transformation, another poet-sage, Goethe, imbued with the modern scientific spirit, added to his protest a clear diagnosis of the disease: "Customs and laws, in every place Like a disease, an heirloom dread, Still trace their curse from race to race, And furtively abroad they spread. To nonsense, reasons self they turn; Beneficence becomes a pest; Woe unto thee, thou art a grandson born! As for the law, born with us, unexpressed That law, alas, none careth to discern." The Industrial Revolution. Is not Goethe's diagnosis applicable to the twentieth-century challenge of the law in the United States? Has not the recent dissatisfaction with our law as administered been due, in large measure, to the fact that it had not kept pace with the rapid development of our political, economic and social ideals? In other words, is not the challenge of legal justice due to its failure to conform to contemporary conceptions of social justice? Since the adoption of the federal constitution, and notably within the last fifty years, we have passed through an economic and social revolution which affected the life of the people more fundamentally than any political revolution known to history. Widespread substitution of machinery for hand labor (thus multiplying a hundredfold man's productivity), and the annihilation of space through steam and electricity, have wrought changes in the conditions of life which are in many respects greater than those which had occurred in civilized countries during thousands of years preceding. The end was put to legalized human slavery—an institution which had existed since the dawn of history. But of vastly greater influence upon the lives of the great majority of all civilized peoples was the possibility which invention and discovery created of emancipating women and of liberating men called free from the excessive toil theretofore required to secure food, clothing and shelter. Yet, while invention and discovery created the possibility of releasing men and women from the thraldom of drudgery, there actually came, with the introduction of the factory system and the development of the business corporation, new dangers to liberty. Large publicly owned corporations replaced small privately owned concerns. Ownership of the instruments of production passed from the work-man to the employer. Individual personal relations between the proprietor and his help ceased. The individual contract of service lost its character, because of the inequality in position between employer and employee. The group relation of employee to employer with collective bargaining became common, for it was essential to the workers' protection. 
Legal Science Static. Political as well as economic and social science noted these revolutionary changes. But legal science—the unwritten or judge-made laws as distinguished from legislation—was largely deaf and blind to them. Courts continued to ignore newly arisen social needs. They applied complacently 18th century conceptions of the liberty of the individual and of the sacredness of private property. Early 19th century scientific half-truths, like "The survival of the fittest," which translated into practice meant "The devil take the hindmost," were erected by judicial sanction into a moral law. Where statutes giving expression to the new social spirit were clearly constitutional, judges, imbued with the relentless spirit of individualism, often construed them away. Where any doubt as to the constitutionality of such statutes could find lodgment, courts all too frequently declared the acts void. Also in other countries the strain upon the law has been great during the last generation, because there also the period has been one of rapid transformation; and the law has everywhere a tendency to lag behind the facts of life. But in America the strain became dangerous, because constitutional limitations were invoked to stop the natural vent of legislation. In the course of relatively few years hundreds of statutes which embodied attempts (often very crude) to adjust legal rights to the demands of social justice were nullified by the courts, on the grounds that the statutes violated the constitutional guaranties of liberty or property. Small wonder that there arose a clamor for the recall of judges and of judicial decisions and that demand was made for amendment of the constitutions and even for their complete abolition. The assaults upon courts and constitutions culminated in 1912. They centered about two decisions: the Lochner case (Lochner v. New York, 198 U. S. 45), in which a majority of the judges of the Supreme Court of the United States had declared void a New York law limiting the hours of labor for bakers, and the Ives case (Ives v. South Buffalo Ry. Co., 201 N. Y. 271), in which the New York Court of Appeals had unanimously held void its accident compensation law. The Two Ritchie Cases. Since 1912, the fury against the courts has abated. This change in the attitude of the public toward the courts is due not to any modification in judicial tenure, not to amendment of the constitutions, but to the movement, begun some years prior to 1912, which has more recently resulted in a better appreciation by the courts of existing social needs. In 1895 your Supreme Court held in the first Ritchie case (Ritchie v. People, 155 Ill. 98) that the eight-hour law for women engaged in manufacturing was unconstitutional. In 1908 the United States Supreme Court held in Muller v. Oregon (Muller v. Oregon, 208 U. S. 412) that the Women's Ten-Hour Law was constitutional. In 1910 your Supreme Court held the same in the second Ritchie case (W. C. Ritchie & Co. v. Wagman, 244 Ill. 509.) The difference in decision in the two Ritchie cases was not due to the difference between a ten-hour day and an eight-hour day, for the Supreme Court of the United States has since held (as some state courts had held earlier) that an eight-hour law also was valid; and your Illinois Supreme Court has since sustained a nine-hour law. In the two Ritchie cases the same broad principles of constitutional law were applied. 
In each the right of a legislature to limit (in the exercise of the police power) both liberty of contract and use of property was fully recognized. But in the first Ritchie case the court, reasoning from abstract conception, held a limitation of working hours to be arbitrary and unreasonable; while in the second Ritchie case, reasoning from life, it held the limitation of hours not to be arbitrary and unreasonable. In other words,—in the second Ritchie case it took notice of those facts of general knowledge embraced in the world's experience with unrestricted working hours, which the court had in the earlier case ignored. It considered the evils which had flowed from unrestricted hours, and the social and industrial benefit which had attended curtailed working hours. It considered likewise the common belief in the advisability of so limiting working hours which the legislatures of many states and countries evidenced. In the light of this evidence as to the world's experience and beliefs, it proved impossible for reasonable judges to say that the legislature of Illinois had acted unreasonably and arbitrarily in limiting the hours of labor.

The Two Night-Work Cases. Decisions rendered by the Court of Appeals of New York show even more clearly than do those of Illinois the judicial awakening to the facts of life. In 1907, in the Williams case (People v. Williams, 189 N. Y. 131), that court held that an act prohibiting night work for women was unconstitutional. In 1915, in the Schweinler case (People v. Charles Schweinler Press, 214 N. Y. 395) it held that a similar night-work act was constitutional. And with great clearness and frankness the court set forth the reason:

"While theoretically we might [then] have been able to take judicial notice of some of the facts and of some of the legislation now called to our attention as sustaining the belief and opinion that night work in factories is widely and substantially injurious to the health of women, actually very few of these facts were called to our attention, and the argument to uphold the law on that ground was brief and inconsequential.

"Especially and necessarily was there lacking evidence of the extent to which, during the intervening years, the opinion and belief have spread and strengthened that such night work is injurious to women; of the laws as indicating such belief, since adopted by several of our own states and by large European countries, and the report made to the legislature by its own agency, the factory investigating commission, based on investigation of actual conditions and the study of scientific and medical opinion that night work by women in factories is generally injurious, and ought to be prohibited…

"So, as it seems to me, in view of the incomplete manner in which the important question underlying this statute—the danger to women of night work in factories—was presented to us in the Williams case, we ought not to regard its decision as any bar to a consideration of the present statute in the light of all the facts and arguments now presented to us and many of which are in addition to those formerly presented, not only as a matter of mere presentation, but because they have been developed by study and investigation during the years which have intervened since the Williams decision was made.
There is no reason why we should be reluctant to give effect to new and additional knowledge upon such a subject as this, even if it did lead us to take a different view of such a vastly important question as that of public health or disease than formerly prevailed. Particularly do I feel that we should give serious consideration and great weight to the fact that the present legislation is based upon and sustained by an investigation by the legislature deliberately and carefully made through an agency of its own creation, the present factory investigating commission." Eight years elapsed between the two decisions. But the change in the attitude of the court had actually come after the agitation of 1912. As late as 1911, when the court in the Ives case (Ives v. South Buffalo Ry. Co., 201 N. Y. 271) held the first accident compensation law void, it refused to consider the facts of life, saying: "The report [of the commission appointed by the legislature to consider that subject before legislating] is based upon a most voluminous array of statistical tables, extracts from the works of philosophical writers and the industrial laws of many countries, all of which are designed to show that our own system of dealing with industrial accidents is economically, morally, and legally unsound. Under our form of government, however, courts must regard all economic, philosophical, and moral theories, attractive and desirable though they may be, as subordinate to the primary question whether they can be moulded into statutes without infringing upon the letter or spirit of our written constitutions. In that respect we are unlike any of the countries whose industrial laws are referred to as models for our guidance. Practically all of these countries are so-called constitutional monarchies in which, as in England, there is no written constitution, and the Parliament or law-making body is supreme. In our country the federal and state constitutions are the charters which demark the extent and the limitations of legislative power; and while it is true that the rigidity of a written constitution may at times prove to be a hindrance to the march of progress, yet more often its stability protects the people against the frequent and violent fluctuations of that which, for want of a better name, we call ‘public opinion.’" On the other hand in July, 1915, in the Jensen case (Jensen v. Southern Pacific Co., (N. Y.), 109 N. E. R. 600), the court, holding valid the second compensation law (which was enacted after a constitutional amendment), said: "We should consider practical experiences, as well as theory, in deciding whether a given plan in fact constitutes a taking of property in violation of the constitution. A compulsory scheme of insurance to secure injured workmen in hazardous employments and their dependents from becoming objects of charity certainly promotes the public welfare as directly as does an insurance of bank depositors from loss." The Struggle Continues. The court reawakened to the truth of the old maxim of the civilians, Ex facto jus oritur. It realized that no law, written or unwritten, can be understood without a full knowledge of the facts out of which it arises and to which it is to be applied. But the struggle for the living law has not been fully won. The Lochner case has not been expressly overruled. Within six weeks the Supreme Judicial Court of Massachusetts, in supposed obedience to its authority, held invalid a nine-hour law for certain railroad employees (Commonwealth v. B. & M. R. R. 
(Mass.), 110 N. E. R. 264.) The Supreme Court of the United States, which by many decisions had made possible in other fields the harmonizing of legal rights with contemporary conceptions of social justice, showed by its recent decision in the Coppage case (Coppage v. Kansas, 236 U. S. 1) the potency of mental prepossessions. Long before it had recognized (see 219 U. S. 570) that employers "and their operatives do not stand upon an equality"; that "the legislature being familiar with local conditions is primarily the judge of the necessity of such enactments" (see 219 U. S. 569); and that unless a "prohibition is palpably unreasonable and arbitrary we are not at liberty to say that it passes beyond the limitation of a state's protective authority" (see 238 U. S. 452.) And in the application of these principles it had repeatedly upheld legislation limiting the right of free contract between employer and employee. But in the Adair case (Adair v. United States, 208 U. S. 161), and again in the Coppage case (supra), it declared unconstitutional a statute which prohibited an employer from requiring as a condition of his securing or retaining employment that the workman should not be a member of a labor union. Without considering that Congress or the Kansas legislature might have had good cause to believe that such prohibition was essential to the maintenance of trade-unionism, and that trade-unionism was essential to securing equality between employer and employee, our Supreme Court of the United States declared that the enactment of the anti-discrimination law was an arbitrary and unreasonable interference with the right of contract. The Business Men's Protest. The challenge of existing law does not, however, come only from the working classes. Criticism of the law is widespread among business men. The tone of their criticism is more courteous than that of the working classes, and the specific objections raised by business men are different. Business men do not demand recall of judges or of judicial decisions. Business men do not ordinarily seek constitutional amendments. They are more apt to desire repeal of statutes than enactment. But both business men and working men insist that courts lack understanding of contemporary industrial conditions. Both insist that the law is not "up to date." Both insist that the lack of familiarity with the facts of business life results in erroneous decisions. In proof of this, business men point to certain decisions under the Sherman Law, and certain applications of the doctrine of contracts against public policy—decisions like the Dr. Miles Medical Co. case (Dr. Miles Medical Co. v. Park & Sons Co., 220 U. S. 409), in which it is held that manufacturers of a competitive trade-marked article cannot legally contract with retailers to maintain a standard selling price for their article, and thus prevent ruinous price cutting. Both business men and working men have given further evidence of their distrust of the courts and of lawyers by their efforts to establish non-legal tribunals or commissions to exercise functions which are judicial (even where not legal) in their nature, and by their insistence that the commissions shall be manned with business and working men instead of lawyers. And business men have been active in devising other means of escape from the domain of the courts, as is evidenced by the wide-spread tendency to arbitrate controversies through committees of business organizations. An Inadequate Remedy. 
The remedy so sought is not adequate, and may prove a mischievous one. What we need is not to displace the courts, but to make them efficient instruments of justice; not to displace the lawyer, but to fit him for his official or judicial task. And indeed the task of fitting the lawyer and the judge to perform adequately the functions of harmonizing law with life is a task far easier of accomplishment than that of endowing men, who lack legal training, with the necessary qualifications. The training of the practicing lawyer is that best adapted to develop men not only for the exercise of strictly judicial functions, but also for the exercise of administrative functions quasi-judicial in character. It breeds a certain virile, compelling quality, which tends to make the possessor proof against the influence of either fear or favor. It is this quality to which the prevailing high standard of honesty among our judges is due. And it is certainly a noteworthy fact that in spite of the abundant criticism of our judicial system, the suggestion of dishonesty is rare; and instances of established dishonesty are extremely few. The All-Round Lawyer. The pursuit of the legal profession involves a happy combination of the intellectual with the practical life. The intellectual tends to breadth of view; the practical to that realization of limitations which are essential to the wise conduct of life. Formerly the lawyer secured breadth of view largely through wide professional experience. Being a general practitioner, he was brought into contact with all phases of contemporary life. His education was not legal only, because his diversified clientage brought him, by the mere practice of his profession, an economic and social education. The relative smallness of the communities tended to make his practice diversified not only in the character of matters dealt with, but also in the character or standing of his clients. For the same lawyer was apt to serve at one time or another both rich and poor, both employer and employee. Furthermore—nearly every lawyer of ability took some part in political life. Our greatest judges, Marshall, Kent, Story, Shaw, had secured this training. Oliver, in his study of Alexander Hamilton, pictured the value of such training in public affairs: "In the vigor of his youth and at the very summit of hope, he brought to the study of the law a character already trained and tested by the realities of life, formed by success, experienced in the facts and disorders with which the law has to deal. Before he began a study of the remedies he had a wide knowledge of the conditions of human society…With him…the law was…a reality, quick, human, buxom and jolly, and not a formula, pinched, stiff, banded and dusty like a royal mummy of Egypt." Hamilton was an apostle of the living law. The Specialist. The last fifty years have wrought a great change in professional life. Industrial development and the consequent growth of cities have led to a high degree of specialization—specialization not only in the nature and class of questions dealt with, but also specialization in the character of clientage. The term "corporation lawyer" is significant in this connection. The growing intensity of professional life tended also to discourage participation in public affairs, and thus the broadening of view which comes from political life was lost. The deepening of knowledge in certain subjects was purchased at the cost of vast areas of ignorance and grave danger of resultant distortion of judgment. 
The effect of this contraction of the lawyers' intimate relation to contemporary life was doubly serious, because it came at a time when the rapidity of our economic and social transformation made accurate and broad knowledge of present-day problems essential to the administration of justice. The judge came to the bench unequipped with the necessary knowledge of economic and social science, and his judgment suffered likewise through lack of equipment in the lawyers who presented the cases to him. For a judge rarely performs his functions adequately unless the case before him is adequately presented. Thus were the blind led by the blind. It is not surprising that under such conditions the laws as administered failed to meet contemporary economic and social demands.

The True Remedy. We are powerless to restore the general practitioner and general participation in public life. Intense specialization must continue. But we can correct its distorting effects by broader education—by study undertaken preparatory to practice—and continued by lawyer and judge throughout life: study of economics and sociology and politics which embody the facts and present the problems of today. "Every beneficent change in legislation," Professor Henderson said, "comes from a fresh study of social conditions, and social ends, and from such rejection of obsolete laws to make room for a rule which fits the new facts. One can hardly escape from the conclusion that a lawyer who has not studied economics and sociology is very apt to become a public enemy."

Your former townsman, Charles R. Crane, told me once the story of two men whose lives he would have cared most to have lived. One was Bogigish, a native of the ancient city of Ragusa on the coast of Dalmatia,—a deep student of law, who after gaining some distinction at the University of Vienna and in France, became Professor at the University of Odessa. When Montenegro was admitted to the family of nations, its Prince concluded that, like other civilized countries, it must have a code of law. Bogigish's fame had reached Montenegro,—for Ragusa is but a few miles distant. So the Prince begged the Czar of Russia to have the learned jurist prepare a code for Montenegro. The Czar granted the request, and Bogigish undertook the task. But instead of utilizing his great knowledge of laws to draft a code, he proceeded to Montenegro, and for two years literally made his home with the people,—studying everywhere their customs, their practices, their needs, their beliefs, their points of view. Then he embodied in law the life which the Montenegrins lived. They respected that law, because it expressed the will of the people.
The Coanda Effect was discovered in 1930 by the Romanian aerodynamicist Henri-Marie Coanda (1885-1972). He observed that a stream of air (or any other fluid) emerging from a nozzle tends to follow a nearby curved surface, provided the curvature of the surface, or the angle the surface makes with the stream, is not too sharp. The Coanda UAV, propelled by an electric engine, uses the Coanda effect to take off vertically, fly, hover and land vertically (VTOL). There is no large rotor as on a helicopter, and the flight is very stable and safe for its surroundings. More info at: jlnlabs.online.fr/gfsuav/index.htm
NMDA RECEPTOR ACTIONS ARE ESSENTIAL TO COGNITIVE AND PSYCHOLOGICAL FUNCTION

The NMDA receptor plays a critical role in cognitive and psychological functions. The NMDA (N-methyl-D-aspartate) receptor (NMDA-R) is co-activated by glutamate and glycine. Glycine acts as a co-factor to safely activate the receptor. Glutamate activity in excess can produce excitotoxicity, causing damage and death to neurons. NMDA is a receptor in the glutamate neurotransmitter system (GNS).

NMDA-Rs have a role to play in diverse and complex processes of mind and brain, including:
- Memory formation and long-term potentiation (LTP)
- Excitotoxicity and stroke outcome
- Ego and frontal lobe function
- Pain and opiate tolerance
- Alcoholism and drug abuse
- Schizophrenia and depression
- Age-related memory decline
- Dementia and Alzheimer's disease

"Long Term Potentiation (LTP) Leads to Structural and Functional Changes of the Synapse that Make Neurotransmission More Efficient." (Stahl S. Stahl's Essential Psychopharmacology: Neuroscientific Basis and Practical Applications. 3rd Ed. 2008.)

NMDA-R abnormalities contribute to cognitive and psychological deficits that are challenging to treat.
About 50 participants in the morning discussion partook in breakout sessions in the afternoon. The sessions were designed to tackle the following issues: - What are the key research efforts by region for the next 5–10 years? - What are the technologies (for example, data and models) that can be used for fire management and what are the barriers to adoption? - Integrating science into management and policy - Stakeholder engagement The participants were divided into four groups. Each group discussed three of the four session topics. At the conclusion of the rotations, participants reconvened to discuss the outcomes of the afternoon’s conversations. While discussing key research efforts, participants were asked to keep in mind the following questions: - How can fire management practices be improved through learning from regional differences in fire ecology? - How might research priorities change by region in response to climate change? - How might climate change affect management by region? The objective was to generate 10 pressing questions for fire science to pursue in the next 10 years. To provide clarity to the parameters of the discussion, regions were defined broadly based on eco-climatological principles, such as Southwest, Northwest, Boreal, Southeast, Midwest, Mountain West, and Northeast. Dar Roberts moderated this breakout topic. He provided a synthesis of the 10 questions that emerged from the conversations of the three groups that rotated through that session. - How best can the impacts and efficacy of steps taken to mitigate the risk of fire and to make the public aware of fire risk be assessed? - How can large-scale bark beetle mortality be managed and how might that management affect fire? - How can remote measurements be used to understand the behavior and effects of large, active fires? - How can incident teams obtain good intelligence on fire location and movement? - Given that succession and adaptation in wildland occur over a long period of time, how can the temporal scale of these processes be captured when funding cycles are only a few years in length and the careers of scientists are short in comparison to the ecosystem changes they study? - Is there an opportunity for a long-term ecological research network related to fire? - How can fire–vegetation–fuels–climate feedbacks be understood from model to empirical scales and in top-down and bottom-up control approaches? - How can science and research be leveraged to address changes in where fire is happening and where it has been perceived as not happening (for example, in the Southeast)? - How can risks and opportunities be bounded when planning for future changes in fire regimes and their consequences? - How can research assist managers to use wildfire and prescribed burns to achieve desirable outcomes? In addition to managing fire better and helping communities live with fire, many participants thought that finding answers to these questions would help address issues related to wildland fire and climate change, such as determining when to manage burned areas for restoration versus letting the habitat change to a new ecosystem, providing adequate habitat for endangered plant and animal species, and anticipating and planning for changes to hydrology in burned landscapes. 
The participants of this breakout session highlighted areas of research deserving attention: - At the wildland–urban interface (WUI), research on structural fuel management and on improving design or retrofits for structures to make them more resistant to embers. - Improved tools for predicting smoke dispersal and understanding the physical and mental health effects of smoke on people. - Better climate models that can project conditions 30–50 years in the future as well as models that can scale down to the level of national and regional climates, even down to the level of fire regimes. Such tools would help forecast climate variability, improve predictions of how ecosystems respond to climate change, and help researchers to understand what fire regimes may work in different areas under future climate conditions. - More research focused on the human dimensions of fire, including smoke, fuels treatment, and stakeholder engagement. - Research to better understand and to be able to compare outcomes from different fire management approaches, such as box and burn and fuel breaks. Such research would not only provide information about the cost effectiveness of fire management approaches but would also provide a better understanding of the effects of management practices post-fire on ecosystems and endangered species. Participants also mentioned that more data need to be collected on drought metrics (for example, snowpack and precipitation), global circulation metrics, large fires, and fuels to create data sets that can be used for long-term (30–50 years) predictions. Making such data widely available to researchers would help advance knowledge about fire quicker. Many participants also thought that knowledge would advance faster if there were more interactions among different disciplines involved with fire (for example, modeling, remote sensing, ecology, and management). Participants who rotated through the technology session were asked to answer these questions: - What are the technologies available for assessing fire risk and fire danger/hazard? - What are the technologies available for near real-time fire detection, fire monitoring, and short-term fire spread prediction? - What are the technologies available for mapping and managing post-fire conditions? - Where are the gaps and barriers in adopting these technologies for operational use? The goal of their conversations was to pinpoint the top five technologies that could be promoted for use in operational fire and resource management. Planning committee member Anupma Prakash served as the moderator for the session focused on technology. She presented to the afternoon participants the five top technologies to be promoted for use in operational fire and resource management that surfaced from the afternoon’s discussions. - Unmanned aerial systems (“drones”). Drones can be put to use for different purposes during fire emergencies, including data collection, communication, and firefighting. Drones can collect data on fire conditions and can use infrared sensors when smoke obscures optical instruments. They can monitor conditions for firefighter safety and monitor landscapes after fire has occurred. Unlike planes, drones can be flown at night, which would increase the amount of data gathered on a given fire. Drones could also be used to deliver fire retardant, perhaps even at night when wind speeds are often lower. Real-time information is critical to fighting fire, but cell phones often do not work in remote wildland fire locations. 
Drones could help close this communication gap. - Imaging at different spectral scales. Different technologies capture images of the land in varying levels of detail. Their effectiveness depends on the landscape; some are more useful in forested areas while others are better at piercing through cloud cover. Multiple technologies—such as multispectral and hyperspectral imaging, synthetic aperture radar, and LIDAR—could be used together to capture different spatial and temporal scales in order to characterize the landscape pre-fire and post-fire. In particular, technologies using different spectral scales would help address the challenge of assessing the subsurface effects of fire. - Long-term, field-based calibration and validation of data for quality assessment. These would reduce uncertainty in the data used by managers to make decisions and provide better data to feedback into models. - Common terminology and data-capture techniques. The development of common terms would improve the usability of data sets. Investment in low-cost data collection and analysis tools for field settings would increase the data available to fire operations. Better organized data sets would help avert the problem of too much information that sometimes confronts fire operations. - Social media and big data. Mining such data sources, including apps that allow people to identify a fire’s location, could help improve early fire detection, increase fire monitoring, and improve understanding of stakeholder needs (for communities as well as fire managers). Monitoring hashtags, for example, would convert community members into additional observers. Other areas where participants thought better technology could help were fuel characterization (in particular, fuel moisture), mapping model predictions of lightning to areas with high fuel loads, and using smoke chemistry and spread to better understand fire behavior and the fuels related to the observed behavior. The importance of mapping unburned areas following a fire to understand the habitat and vegetation differences between burned and unburned land and investigating post-fire effects to understand future fire risks also emerged from these discussions. Some participants said this mapping should be done quickly and often after fires so that post-fire changes—such as flooding, landslides, and sediment runoff—can be fed back into models to help predict conditions that may be expected in a future fire. Several participants mentioned that 3-dimensional mapping of vegetation would be particularly helpful for assessing fuel conditions on the landscape; at the present time, producing such maps is cost prohibitive. However, if those maps were available, they would only be useful if a companion tool were developed for their use in fire management decisions. To develop such a tool, how fire spreads first needs to be better understood, one participant said; gaining a better understanding of fire spread will require more experimentation with fires fed by different types of fuel. In the session about integrating science into management and policy, planning committee member Rod Linn asked participants to consider the following questions: - What is known in terms of fire science and fire and fuels management that should be used and what are the barriers to that use? - What does the fire and fuels management and policy community need from the fire science community? What does the former think the latter should be focusing on? 
What are the challenges for managers/operators in using/applying the fire science available? How can the science best be made available to the end users? - What are the differences in the science needs for prescribed fire usage versus wildfire management? - What are good examples of proactive management approaches? From the discussion generated by these questions, the participants were to highlight three best practices for integrating science with policy and three additional best practices that would be desirable. Linn summarized the points made in the afternoon when the participants reconvened from the breakout sessions. In terms of the three best management practices for integrating science into policy, the participants suggested (1) the existing Fire Science Exchange Networks (previously Fire Science Consortia), (2) integration of scientists on the fire line, which would give them credibility with fire managers, and (3) finding those fire managers who are receptive to science. The reach of the Fire Science Exchange Networks needs to expand, many participants thought, because at present they do not come into contact with enough fire managers. Another advantage of the exchange networks is that they are regionally based (15 across the United States, including Alaska and Hawaii), so they are attuned to the fire history and regime in a particular area. More engagement between the exchange networks and state-level officials and private landowners would also be useful, a participant observed. Including scientists on the fire line would provide fire managers with a resource to help them determine which ones of the many scientific and fire management tools available have the most utility in a given fire situation. Additional best practices that participants thought desirable were (1) more opportunities for in-person relationships to be built and maintained between fire scientists and fire managers, (2) better integration of managers at the start of research projects because this would improve the project design and increase the likelihood that the research results will be applied, (3) more research exploring social and institutional science to overcome the cultural barriers that prevent the translation of science into management practices, and (4) continuing education to keep fire managers receptive to science and to help different generations of fire managers maintain a common understanding of the state of the science. Many participants acknowledged that building relationships between scientists and managers and integrating managers into project design takes time and energy, but they thought such steps would have tremendous payoff in terms of developing outputs from scientific research that would be useful to fire managers at a local level. With regard to translating science into practice, multiple forms of communication (e.g., webinars, text messages, and social media) may be needed to engage managers of different ages. The suggestion for continuing education generated some discussion among the participants. One participant noted that continuing education should be a priority for all resource managers, not just fire managers. It was also suggested that there should be credits or incentives for researchers to work with managers. Someone responded that this type of interaction is increasingly being required of researchers by grant managers; however, the amount of emphasis on that interaction may vary by agency. 
A Forest Service employee shared that the Forest Service gives equal weight in terms of career accomplishments to technology transferred from laboratory to field application and a peer-reviewed published paper. Steelman noted that co-production of knowledge between researchers and managers is critical for the credibility, legitimacy, and saliency of said knowledge. She added that scientists want their work to be credible, local managers want legitimacy of the work to understand why they should use the science, and information needs to be salient for decision makers to help them take action. Unfortunately, many of the institutions and incentives that govern these three actors are poorly constructed to accomplish these objectives, Steelman concluded. Linn also reported from the breakout discussions that a number of participants noted that it is important for the fidelity of the science of prescribed fires to be high because there is more scrutiny and responsibility for prescribed fires than for wildfires. He thought that more scientific research needs to be conducted on prescribed fires because they can be most easily manipulated by researchers and managers. In their discussions about engaging with stakeholders, participants particularly focused on the following questions: - What is the role of co-management in differentiating and addressing “good fires” versus “bad fires”? - What are the critical social, political, and economic challenges associated with differentiating, labeling, and responding to “good fires” versus “bad fires” in the WUI? - What are best practices in stakeholder engagement that can facilitate more flexible fire management of “good fires”? How can the public and policymakers be brought into this conversation? Participants sought to pinpoint 3–5 key challenges to differentiating and labeling good fires versus bad fires and to suggest 3–5 best practices or strategies for working among diverse stakeholders to create conditions that allow for more flexible fire management when appropriate. Workshop presenter Toddi Steelman moderated this discussion and presented the summary findings to all the participants. She noted that breakout participants did not like the polarization of fire into the categories of good and bad. A fire that could be bad in the short term may be good in the long term. Good is often equated with favorable political outcomes or connected to ecological conditions, whereas bad is associated with the loss of businesses or structures and with loss of life. Rather than good and bad, fires need to be thought of more in terms of risk management and the tradeoffs associated with different management decisions. Those decisions need to incorporate a temporal component, considering short-term and long-term implications of management actions. Most participants agreed that communication is a key challenge; communicating the varying benefits, objectives, and tradeoffs related to fire is complex. This complexity is evident in the inability to categorize fires simply as good or bad. Other challenges to fire management mentioned by some participants include: - An insufficient understanding of the health effects of smoke. - The Endangered Species Act, the Clean Air Act, and the Healthy Forests Restoration Act have competing and unharmonized objectives. 
- Legal challenges to prescribed fire, which can prevent its use and leave land management agencies or actors open to liability and charges of gross negligence when fire is used, or when it is not and untreated fuel goes on to cause large fires.

According to some participants, co-management could be a suitable practice for working with diverse stakeholders to create conditions that allow for more flexible fire management. Co-management is a process that can create opportunities to share ideas, deliberate tradeoffs, and find common ground that is appropriate for the context and the place under discussion. Some participants noted that field trips could help with reaching agreement in communities to help diverse stakeholders better understand the risks and tradeoffs. Others said federal, state, local, and private land managers need to come together long before a fire occurs to be in a position to readily implement land and fire management plans when needed. Another practice mentioned by some participants is the creation of opportunities for sustained community engagement, which would facilitate more flexible fire management. Messages that are correctly tailored to the context and delivered by people trusted in the community (for example, prescribed fire councils or fire chiefs) would help this engagement. Prescribed fire councils could also be a conduit for air quality conversations, a few participants added. It is important that the communication strategy emphasizes hope, not fear. Several participants noted that state foresters also need to be involved because they are typically in place over a long period of time. A few participants mentioned that media and zoning commissions are other players who should be involved. Just as engagement with communities is desirable, many participants thought that similar efforts with policymakers and politicians would also be worthwhile. Opportunities on this front include:
- Resurrecting the Joint Fire Science Policy Consortium in Washington, DC, and empowering it to interact with members of Congress.
- Encouraging more interactions of researchers and managers with congressional members of the Hazard Caucus Alliance.
- Reviving the Wildland Fire Leadership Council, an intergovernmental committee that supports implementation and coordination of federal fire management policy. At the time of the workshop, the council was inactive because of the change in presidential administrations.
- Establishing a Federal Fire Science Coordinating Council, recommended in a 2015 report on wildland fire science and technology by the National Science and Technology Council in the Executive Office of the President (NSTC, 2015).

Some participants noted that engaging policymakers and politicians, particularly Congress, through these means would help to communicate the message that the costs associated with fire extend far beyond the amount spent on fire suppression. Finally, Steelman said that there needs to be more focus on using fire events as opportunities to educate stakeholders, which could help develop a common understanding of the risks and tradeoffs involved with fire. More work lies ahead on messaging after a fire.
The breakout sessions were structured to respond to wildland fire research status, needs, and challenges outlined in the statement of task, specifically: - Helping wildland fire managers and responders discriminate between “good” and “bad” fires; - Adaptive fire and forest management; - Proactive approaches to landscape level fuel management; and - Societal needs and considerations to support and implement long-term wildland fire management strategies. With regard to the first item, it was clear that many participants, particularly those who participated in the stakeholder engagement breakout session, thought that the dichotomy of “good” and “bad” fire was too strong. Instead, whether a fire is “good” or “bad” can depend on the point of view of the stakeholder and the point in time from which the aftermath of a fire is considered. Fires that cause destruction to human developments may later prove to have favorable effects on ecosystem health. Therefore, many participants emphasized the importance of taking the context of fire into consideration, including who may be affected by the fire, what kind of ecosystem a fire may burn, and what the management objectives of a fire-prone community may be. Most participants thought that managing fires with community input will help fire scientists, fire managers, and community members better understand the risks and tradeoffs involved in living with prescribed fire and wildfire and may increase all parties’ ability to understand the nuance associated with fire’s risks and benefits, which change over time. Many workshop participants said that reaching common ground through co-management of fire with communities will likely help with the three other items outlined in the statement of task. Such local engagement will be important because of the variety of fire regimes throughout the United States and the increasing number and changing demographics of people living in the WUI. With regard to the data needs that will help adaptive management of fire, forests, and fuels, it emerged in more than one breakout session that data need to be more streamlined. Data that are more uniform can be shared more easily among fire scientists, and the knowledge generated from that data can be passed on to fire managers faster if it is harmonized. Data on more metrics, such as drought and wind, would be helpful for making long-term (30–50 years) predictions. The need for improved climate and meteorological modeling tools was also voiced by several participants. Some participants advocated for more experimentation with fire, rather than just through modeling, to better understand the effects of different fuels and fuel structures on fire spread. More studies of post-fire habitats, particularly comparisons between burned and unburned land following a fire, would provide information about future fire risks; some participants thought that such research in different fire regimes would be beneficial because a better understanding of post-fire effects would help fire scientists and fire managers communicate with communities about the fire risks and tradeoffs specific to their area. Technologies such as drones and imaging tools at multiple spectral scales could help collect much of the data that would inform better adaptive management approaches for fire, forests, and fuels.
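As a concrete illustration of the "imaging at different spectral scales" and post-fire mapping ideas raised above, the sketch below computes the widely used Normalized Burn Ratio (NBR) and its pre/post-fire difference (dNBR) from near-infrared and shortwave-infrared reflectance. This example is not from the workshop itself; the tiny arrays and the burned/unburned cutoff are hypothetical placeholders added only to show the shape of the computation.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared reflectance."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir + 1e-9)   # tiny epsilon guards against divide-by-zero

# Hypothetical 2 x 2 reflectance "rasters" (values in [0, 1]) before and after a fire.
pre_nir,  pre_swir  = [[0.45, 0.48], [0.50, 0.47]], [[0.20, 0.22], [0.21, 0.23]]
post_nir, post_swir = [[0.25, 0.47], [0.28, 0.46]], [[0.35, 0.23], [0.33, 0.24]]

# dNBR = pre-fire NBR minus post-fire NBR; larger values suggest more severe burning.
dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

# The 0.1 cutoff is only a placeholder; real thresholds vary by site and sensor.
burned_mask = dnbr > 0.1

print(np.round(dnbr, 3))
print(burned_mask)
```

In practice, dNBR thresholds are calibrated against field observations for each region and fire regime, which is consistent with the participants' call for long-term, field-based calibration and validation of remotely sensed data.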
In the most difficult moments, remember to “REASSURE” kids: Reach out, hold his hand or put an arm around him, and begin the conversation. You might say, for instance, “Dad is not here anymore because he died. When a person dies, his body stops working. The heart stops beating and the body stops moving, eating, and breathing.” Explain that death is a natural part of life. Encourage him to ask any questions, and let him know that you will answer them as best as you can. But remember it’s okay to not have all the answers. Assure him that even though one parent died, it doesn’t mean the other parent will die, too. You might say, “A person can’t promise that he or she won’t die, but we will take care of ourselves as much as we can.” Or, “It’s our job to enjoy our lives, stay healthy and safe, and let people know how much we love them.” Sometimes children may not realize death is permanent. They may ask questions such as, “When is Daddy coming back?” Try to use terms such as “died” and “dead.” Although phrases like “sleeping” and “passed away” seem gentler, they may be confusing. Soothe him by giving big hugs or offering a comfort object to hold, such as a stuffed animal. Understand that you may have to repeat this conversation, especially for younger children. Have patience and know that children will come to understand over time. Remind him that you are here to listen and to help. Although no one will ever take the place of the parent who died, many people love him and are here to help. Explain that he will be cared for. Offer examples of how you and other special family members and friends will be there for him.
This blog we produced is also a useful tool from a professional point of view. At any time I can come back and refer to one of the many different e.Learning tools. But not just mine: thanks to this technology I can gather teaching ideas from my cohorts to make for a truly invaluable resource. Over these weeks I have learnt by reading other blogs, replying to comments, and talking to peers and lecturers that there is an exciting e.World out there. Take for example the first few entries in my blog. I found avatars such an engaging tool that I used one in an English Literacy presentation on Digital Gaming. The Avatar was at the start of my film to connect the topic to my overall question. To top it off I presented it digitally to tie in with my learnings. Another tool I have found invaluable is wikis. After seeing the YouTube video on wikis, I can definitely see this as a tool I will be using in the classroom. The thought of having information stored in a place for students and teachers to view, access and add is important to the learner. As stated by Kearsley and Shneiderman (1999), “all student activities involve active cognitive processes such as creating, problem-solving, reasoning, decision-making, and evaluation. In addition, students are intrinsically motivated to learn due to the meaningful nature of the learning environment and activities”. Other tools that gained my interest were PowerPoint (however I have now converted to Prezi), interactive whiteboards, videos, and animations and simulations. First of all, PowerPoint is a tool that can be used so the students can present information in an assessment format, or by a teacher to make a WebQuest through hyperlinking. This tool is beneficial to the student because they “are used to receiving information really fast” (Prensky, 2001). The same is true for interactive whiteboards, videos, and animations and simulations. Prensky states that digital natives “prefer their graphics before their text rather than the opposite. They prefer random access (like hypertext). They function best when networked. They thrive on instant gratification and frequent rewards”. As there are many more to mention, I believe the tools I have mentioned will be ever present in my class. This does not mean I will not use the other tools; it means that I feel my digital natives will get the most from these programs.
Students use a variety of rulers to measure small strips of paper. Students analyze their results and discuss which ruler was the most accurate and which digits were the most certain.

Related resources:
- Sig Figs, Scientific Notation, Conversion Factors Worksheet: In this significant figures instructional activity, students determine how many significant figures the given numbers have, round numbers to the appropriate number of significant figures, and express numbers in scientific notation. This... (7th - 9th, Math)
- AP Chemistry. Topic 1: Chemical Foundations, Review. Day 8: A comprehensive selection of questions regarding the basic principles of chemistry. Thirteen questions ask your pupils to perform calculations about density and mass, give the atomic structure of certain elements, provide formulas, and... (9th - 12th, Math)
- Addition and Subtraction with Significant Figures: After a series of videos on the use and application of significant figures, Sal gets down to business as he demonstrates the methods of adding and subtracting significant numbers. Viewers will find his easygoing manner and clear examples... (9 mins, 7th - 9th, Math)
- Multiplying and Dividing with Significant Figures: Sal takes out the big guns - or in this case, big calculator - in this video, which carries the concept of significant figures even further. Sal shows how to multiply and divide significant figures using both electronic and handwritten... (10 mins, 7th - 9th, Math)
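As a companion to the worksheets and videos above, the sketch below shows one way to round a measurement to a chosen number of significant figures and to display it in scientific notation. It is a generic Python illustration, not part of any of the listed lessons, and the ruler readings in it are made-up values.

```python
import math

def round_sig(value, sig_figs):
    """Round a number to the given count of significant figures."""
    if value == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(value)))   # position of the leading digit
    factor = 10 ** (sig_figs - 1 - exponent)
    return round(value * factor) / factor

# Hypothetical readings of the same paper strip from three rulers of increasing precision.
readings = [(12.0, 2), (12.3, 3), (12.34, 4)]        # (centimetres, certain digits)
for value, figs in readings:
    print(f"{value} cm reported to {figs} significant figures: {round_sig(value, figs)} cm")

# Scientific notation makes the number of significant figures explicit.
print(f"{round_sig(0.004567, 2):.1e}")               # -> 4.6e-03
```

The base-10 exponent computed in the helper is exactly what scientific notation exposes, which is why expressing a rounded value in that form makes its significant figures unambiguous.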
Whether you’re a policymaker, a member of the media, or just someone seeking well-researched, trusted, and non-partisan scientific information, CAST offers a wealth of publications on a wide range of agricultural science topics. These papers have been created by recognized experts in their respective fields, and they are written in a style that makes their content accessible to anyone wishing to understand the issues. Publications are listed with the most recent releases first. Use the search feature to find specific publications by series, subject, or title. Your CAST membership delivers additional educational resources, from reference publications to our weekly Friday Notes digital newsletter.

This paper examines the many economic factors and impacts of the COVID-19 pandemic, with a focus on the agriculture sector.

Technology is a key enabler of more efficient agricultural production as growers attempt to meet the cost-effective need for increased food, fiber, and bioenergy, while managing limited inputs, conserving valuable natural resources, and protecting environmental quality. Each new pest management technology (weed, insect, disease) developed brings a number of benefits and risks—environmental, health, resistance—that must be considered and managed through effective stewardship practices to ensure that benefits are fully realized while risks are minimized.

Today, the technology necessary to culture cells for human consumption in the form of cell cultivated meat is developing at a rapid pace. Milestones to bring these products to market for consumer purchase are being achieved quickly, and media attention has dramatically increased. Still, there are many questions that need to be addressed before cell cultivated meat is ready for the dinner table.

Why is it so difficult to recruit and retain food animal veterinarians in the United States? And how might this impact the future food supply?

There is a keen awareness among many consumers that pesticide chemicals frequently reach consumers in the form of food residues. This report concludes that there is no direct scientific or medical evidence indicating that typical exposure of consumers to pesticide residues poses any health risk.

The world’s population is expected to reach more than 9 billion by 2050, creating a grand societal challenge: ramping up agricultural productivity to feed the globe. Livestock and poultry products are key to the world’s supply of protein, but genetic diversity of livestock is fading. This paper addresses several important challenges regarding the effective protection of remaining genetic diversity.

This important CAST report describes the biology and agronomic practices of alfalfa […]

Although efficiency of our agricultural systems has increased, water quality remains a concern with minimal measured improvements observed nationwide. This paper provides an overview of the processes, conservation practices, and programs that influence the impact of agriculture on surface and groundwater quality.

This commentary documents the need for and anticipated benefits of developing data-sharing standards, incentivizing researchers to share data, and building a data-sharing infrastructure within agricultural research.

This issue paper reviews the causes and consequences of groundwater depletion, with a focus on impacts to agriculture as the largest sector of groundwater use.
Paul Charles Morphy

Paul Charles Morphy, (born June 22, 1837, New Orleans, Louisiana, U.S.—died July 10, 1884, New Orleans), American chess master who, during his public career of less than two years, became the world’s leading player. Acclaimed by some as the most brilliant player of all time, he was first to rely on the now-established principle of development before attack. (See chess: Development of theory.) Morphy learned chess at the age of 10. At 19 he was admitted to the Louisiana bar on condition that he not practice law until coming of age. After winning the first American chess championship tournament at New York City in 1857, he traveled to Europe, where he defeated Adolf Anderssen of Germany, the unofficial world champion, and every other master who would face him—the leading English player, Howard Staunton, avoided a match with him. In Paris Morphy played blindfolded against eight strong players, winning six games and drawing two. He returned to the United States in 1859 and issued a challenge, offering to face any player in the world at odds of pawn and move (where Morphy would play Black, thus giving up the first move, and would play minus one pawn). When there was no response, Morphy abandoned his public chess career. After an unsuccessful attempt to practice law, he gradually withdrew into a life of seclusion, marked by eccentric behaviour and delusions of persecution.
Sir Isaac Newton established that the gravitational force between two bodies is proportional to the product of their masses and inversely proportional to the square of the distance between them. All other things being equal, the planet with the strongest pull is the one with the largest mass, which is Jupiter. It is so massive and has such a strong gravitational pull that it likely prevented the formation of a planet between itself and Mars in the region known as the asteroid belt.

TL;DR (Too Long; Didn't Read)
Jupiter, the fifth planet from the Sun, has the strongest gravitational pull because it's the biggest and most massive. Jupiter is by far the largest planet in the solar system -- all the rest of the planets, put together, would easily fit inside it. It has a mass of 1.898 octillion kilograms (4.184 octillion pounds) -- more than 317 times that of the Earth. Jupiter is a gaseous planet and doesn't have a fixed surface, but if you could stand at a point in its atmosphere at which the atmospheric pressure is the same as on Earth's surface, your weight would be 2.4 times what it is on Earth.

Jupiter and the Asteroid Belt
In the late 1700s, a pair of German astronomers discovered a mathematical formula that allowed them to predict the distances of the planets from the sun with surprising accuracy. This relationship, known as the Titius-Bode Rule, is reliable enough to have contributed to the discovery of Uranus, although it fails to correctly predict the orbits of Neptune or Pluto. It is accurate as far as the first seven planets are concerned, however, and it predicts the existence of a planet in the region occupied by the asteroid belt. The intense gravity of Jupiter is the probable reason why no such planet exists.

Almost a Star
Jupiter is almost big enough to be a star, but it would have needed to be approximately 80 times more massive when it formed for its gravitational field to be strong enough to initiate hydrogen fusion at its core. As it is, it has attracted 50 moons large enough to have names and 18 smaller ones. Some of these moons were probably formed at the same time that the planet formed, but others may be captured comets and asteroids that have wandered into the solar system from interstellar space. Some, like comet Shoemaker-Levy 9, eventually orbit within Jupiter's Roche limit -- the closest a body can approach a planet without being pulled apart by the planet's gravity -- where they break apart and fall to the planet's surface.

Jupiter and Neighboring Planets
Jupiter's gravitational attraction has profound effects on the rest of the planets in the solar system. It protects the inner planets from asteroid impacts by attracting asteroids and altering their trajectories. It also causes Mars to orbit in a path around the sun that's more oval and less of a perfect circle than most other planets, which has an effect on its seasons. The gravitational pull of Jupiter also perturbs Mercury's orbit, which is already highly eccentric, and it may lead to the destruction of that planet, according to astrophysicists Jacques Laskar and Gregory Laughlin. Their computer simulations predict that Mercury could crash into the sun, Venus or Earth, or be ejected from the solar system, in about 5 to 7 billion years.
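To make the weight comparison concrete, here is a minimal sketch (not from the original article) that checks the figures above using Newton's law of gravitation, g = G*M / r^2. The article supplies Jupiter's mass; the radii and the Earth values are approximate published numbers assumed for this illustration.

```python
# Rough check of the planetary gravity figures using g = G*M / r^2.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2

M_EARTH   = 5.972e24     # kg
R_EARTH   = 6.378e6      # m, equatorial radius
M_JUPITER = 1.898e27     # kg ("1.898 octillion kilograms")
R_JUPITER = 7.1492e7     # m, equatorial radius (roughly the 1-bar pressure level)

def surface_gravity(mass_kg, radius_m):
    """Gravitational acceleration at a given distance from a body's center."""
    return G * mass_kg / radius_m ** 2

g_earth   = surface_gravity(M_EARTH, R_EARTH)      # ~9.8 m/s^2
g_jupiter = surface_gravity(M_JUPITER, R_JUPITER)  # ~24.8 m/s^2

print(f"Earth:   {g_earth:5.2f} m/s^2")
print(f"Jupiter: {g_jupiter:5.2f} m/s^2")
print(f"Weight ratio: about {g_jupiter / g_earth:.1f}x")
```

This gives a ratio of roughly 2.5; the slightly lower 2.4 figure quoted above also reflects Jupiter's rapid rotation, which partly offsets gravity at the equator. The Titius-Bode spacing mentioned in the asteroid-belt section can be sketched the same way; the formula below is the commonly cited form, included here as an illustration rather than a quotation from the article.

```python
# Titius-Bode spacing (astronomical units): a = 0.4 + 0.3 * 2**n.
# The n = 3 slot (~2.8 AU) falls in today's asteroid belt, between Mars and Jupiter.
bode_au = [0.4] + [0.4 + 0.3 * 2 ** n for n in range(7)]
print([round(a, 1) for a in bode_au])   # [0.4, 0.7, 1.0, 1.6, 2.8, 5.2, 10.0, 19.6]
```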
<urn:uuid:83f1f797-a9a2-4768-831f-cbe89c09b34f>
CC-MAIN-2020-05
https://sciencing.com/planet-strongest-pull-23583.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607596.34/warc/CC-MAIN-20200122221541-20200123010541-00170.warc.gz
en
0.958984
665
4.15625
4
Content Rating 4+ British English Sounds Game Education, Games, Word, Puzzle iOS Have difficulty hearing and distinguishing different English sounds? If so, we've got you covered! Most learners bring habits from their mother tongue into their English speech. We’ve designed the Oxford English Sound Pair game so you can have fun practising difficult sound pairs, using common words carefully selected to catch the hidden habits of your mother tongue. Over 20 linguistic origins are covered, from Spanish to Swahili, French to Farsi, Turkish to Thai. You will not only practise sound pairs but also learn the phonetic spellings as you go along, which are difficult to grasp for many language learners. The app uses the same phonetic spelling as the Oxford University Press, so it will be hugely beneficial when learning to pronounce new words from dictionaries. Learn the letter-sounds to improve your accent, fluency and confidence. The ad-free game covers all 50 sounds of English, with over 200 words for FREE! Add more words via in-app purchases or earn them by perfecting your scores. • Challenge yourself to find the correct sounds in a given word • Familiarise yourself with the phonetic spellings with virtually no effort • Record yourself saying the word and compare your pronunciation to the native speaker’s; understand better where your habits catch you out ;-) • Keep track of your progress regularly to see how well you are doing • Set Ondle reminders to test your English sound skill every day • And most importantly: have FUN! Compatibility for iOS 13 Third party library update
<urn:uuid:949ac4be-49f8-4697-a7ad-0fe215567a6e>
CC-MAIN-2020-05
https://www.apps-trader.com/english-iphone-and-ipad-apps/1335621490/oxford-english-sound-pairs.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250604849.31/warc/CC-MAIN-20200121162615-20200121191615-00208.warc.gz
en
0.898479
346
2.578125
3
Elephants have played an important part in Thailand’s history and today the Thai elephant, also known as “Chang Thai” in Thailand, remains an enduring symbol of Thailand and its culture. In the past, they were used in warfare, and as beasts of burden in the logging industry. Today, they still play an important role in agriculture in Southeast Asia. In 1900, the number of elephants in Thailand was estimated to be around 100,000, but just over a century later, that figure had been dramatically reduced to around 3,000 – 4,000, making the Asian Elephant an endangered species. Elephants were officially placed on the endangered species list in 1976 by the U.S. Department of the Interior. These days, a great deal of effort is being put into the preservation of these magnificent creatures. Elephant sanctuaries have popped up all over Thailand in order to preserve the Asian elephant population in Southeast Asia as well as to rehabilitate injured elephants and to educate those interested in helping the elephant species thrive. The Thai Elephant Conservation Center, or TECC, cares for more than 50 Asian elephants in a beautiful forest located near the well-travelled Chiang Mai. The TECC is known for its pioneering work in conservation and science. This is Thailand’s only government-owned elephant sanctuary and it prides itself on being affordable and accessible to both tourists and local Thai school children and families. The TECC is often praised for its relaxed and non-commercial atmosphere. Many of the programs offered here put an emphasis on learning how to interact with an elephant as the mahouts do. A mahout, or elephant handler, is someone trained and educated to care for elephants. Another reputable sanctuary is the Elephant Nature Park. This is a rescue and rehab center for elephants, where one can feed and bathe the animals and learn some history about each elephant’s past. The most exciting part is you can even stay overnight in one of their hut accommodations. Boon Lott’s Elephant Sanctuary is another exemplary facility to visit if interested in elephant conservation. BLES is a non-profit organization entirely dependent on funds generated by visitors and donations from foundations, agencies and individuals. BLES describes a visit to its camp as a hands-on experience where one can gather food, walk an elephant to release sites, scrub the elephants down as well as observe the animals in an indigenous setting. BLES intentionally keeps its guest numbers low for the benefit of both the elephants and the visitors. Regardless of which facility you and your group choose to visit, there are things you should keep in mind to ensure you – and, more importantly, the elephants – have an enjoyable and responsible time. When bathing the elephants, you should forget about staying dry. You might as well immerse yourself completely in the experience to fully enjoy it. You’ll have more fun playing in the water with the elephants than watching from outside of it, so come prepared. Bring swim trunks, sunscreen, an extra set of clothes, whatever it is you think you’ll need during or after making a splash with these delightful creatures. Another suggestion I have is to bring bananas. And plenty of them. They are cheap and plentiful in Thailand, and the elephants happen to be quite fond of them. A wide stance is suggested when feeding your new elephant friend as they’re known to enthusiastically search visitors for food with their powerful trunks.
It’s also recommended that you split the bananas in two, so as to make the feeding experience last longer. And don’t bother peeling the bananas, the elephants just eat them whole. Not all elephant sanctuaries offer the opportunity to ride an elephant. If you should choose to do so, make sure you take the elephant’s comfort into account. It’ll be a much more enjoyable experience to learn to ride an elephant at a sanctuary where single, bareback riding is taught. Regardless of which elephant sanctuary you should choose to visit, it’s always a good idea to research the establishments you might be visiting. Making an informed decision is one of the ways we can continue the efforts to conserve and rehabilitate these revered and majestic animals. If elephants never forget, they’ll likely remember your kindness and generosity! If you’re looking to learn more about volunteer opportunities to work with elephants in Thailand, check out the Dream Jobbing contest happening right now. One person will be chosen to win a volunteer trip to Thailand to work with these amazing creatures. They will get to visit elephant camps located in different regions of Thailand, assisting with hands on care for elephants and the development of sustainable businesses to support responsible elephant tourism. The entry period has already passed, but you can still vote on your favorites to pick the final winner, and follow along with the journey! Visit http://dreamjobbing.com/dreamjobs/Thailand
<urn:uuid:ec857180-77a8-4c55-afe6-4fdbf84fd46a>
CC-MAIN-2017-30
http://bookthailandnow.com/elephant-conservation/
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424945.18/warc/CC-MAIN-20170725002242-20170725022242-00445.warc.gz
en
0.960675
1,011
3.03125
3
Information Sources, Interest, and Involvement Television and the Internet are the primary sources Americans use for science and technology (S&T) information. The Internet is the main source of information for learning about specific scientific issues such as global climate change or biotechnology. - More Americans select television as their primary source of S&T information than any other medium. - The Internet ranks second among sources of S&T information, and its margin over other sources is large and has been growing. - Internet users do not always assume that online S&T information is accurate. About four out of five have checked on the reliability of information at least once. Continuing a long-standing pattern, Americans consistently express high levels of interest in S&T in surveys. However, other indicators, such as the types of news they follow closely, suggest a lower level of interest. - High levels of interest in S&T are part of a long-standing trend, with more than 80% of Americans reporting they were "very" or "moderately" interested in new scientific discoveries. But relative to other news topics, interest in S&T is not particularly high. - As with many news topics, the percentage of Americans who say they follow "science and technology" news "closely" has declined over the last 10 years. - Recent surveys in other countries, including South Korea, China, and much of Europe, indicate that the overall level of public interest in "new scientific discoveries" and "use of new inventions and technologies" tends to be higher in the United States. - Interest in "environmental pollution" or "the environment" is similarly high in the U.S., Europe, South Korea, and Brazil. About 9 in 10 respondents in each country expressed interest in this topic. In 2008, a majority of Americans said they had visited an informal science institution such as a zoo or a natural history museum within the past year. This proportion is generally consistent with results from surveys conducted since 1979, but slightly lower than the proportion recorded in 2001. - Americans with more formal education are much more likely to engage in informal science activities. - Compared with the United States, visits to informal science institutions tend to be less common in Europe, Japan, China, Russia, and Brazil. Public Knowledge About S&T Many Americans do not give correct answers to questions about basic factual knowledge of science or the scientific inquiry process. - Americans' factual knowledge about science is positively related to their formal education level, income level, the number of science and math courses they have taken, and their verbal ability. - People who score well on long-standing knowledge measures that test for information typically learned in school also appear to know more about new science related topics such as nanotechnology. Levels of factual knowledge of science in the United States are comparable to those in Europe and appear to be higher than in Japan, China, or Russia. - In the United States, levels of factual knowledge of science have been stable; Europe shows evidence of recent improvement in factual knowledge of science. - In European countries, China, and Korea demographic variations in factual knowledge are similar to those in the United States. Compared to the mid-1990s, Americans show a modest improvement in understanding the process of scientific inquiry in recent years. 
- Americans' understanding of scientific inquiry is strongly associated with their factual knowledge of science and level of education. - Americans' scores on questions measuring their understanding of the logic of experimentation and controlling variables do not differ by sex. In contrast, men tend to score higher than women on factual knowledge questions in the physical sciences. Public Attitudes About S&T in General Americans in all demographic groups consistently endorse the past achievements and future promise of S&T. - In 2008, 68% of Americans said that the benefits of scientific research have strongly outweighed the harmful results, and only 10% said harmful results slightly or strongly outweighed the benefits. - Nearly 9 in 10 Americans agree with the statement "because of science and technology, there will be more opportunities for the next generation." - Americans also express some reservations about science. Nearly half of Americans agree that "science makes our way of life change too fast." - Americans tend to have more favorable attitudes about the promise of S&T than Europeans, Russians, and the Japanese. Attitudes about the promise of S&T in China and South Korea are as positive as those in the United States and in some instances even more favorable. However, residents of China and Korea are more likely than Americans to think that "science makes our way of life change too fast." Support for government funding of scientific research is strong. - In 2008, 84% of Americans expressed support for government funding of basic research. - More than one-third of Americans (38%) said in 2008 that the government spends too little on scientific research and 11% said the government spends too much. Other kinds of federal spending such as health care and education generate stronger public support. The public expresses confidence in science leaders. - In 2008, more Americans expressed a "great deal" of confidence in scientific leaders than in the leaders of any other institution except the military. - Despite a general decline in confidence in institutional leaders that has spanned more than three decades, confidence in science leaders has remained relatively stable. The proportion of Americans indicating "a great deal of confidence" in the scientific community oscillated between 35% and 45% in surveys conducted since 1973. In every survey, the scientific community has ranked either second or third among institutional leaders. - On science-related public policy issues (including global climate change, stem cell research, and genetically modified foods), Americans believe that science leaders, compared with leaders in other sectors, are relatively knowledgeable and impartial and should be relatively influential. However, they also perceive a considerable lack of consensus among scientists on these issues. Over half of Americans (56%) accord scientists "very great prestige." Ratings for engineers are lower (40% indicate "very great prestige"), but nonetheless better than those of most other occupations. - In 2008, scientists ranked higher in prestige than 23 other occupations surveyed, a ranking similar to that of firefighters. - Between 2007 and 2008, engineers' rating of "very high prestige" increased from 30% of survey respondents to 40%. Public Attitudes About Specific S&T Issues Americans have recently become more concerned about environmental quality. However, concern about the environment is outranked by concern about the economy, unemployment, and the war in Iraq. 
- Between 2004 and 2008, the proportion of Americans expressing "a great deal" or "a fair amount" of worry about the quality of the environment increased from 62% to 74%. Nonetheless, when asked to name the country's top problem in early 2009, only about 2% mentioned environmental issues. - In 2008, 67% of Americans believed that the government was spending too little to reduce pollution and 7% thought it was spending too much. - The trend in support for environmental protection is less evident when Americans are asked about trade-offs between environmental protection and economic growth. In March 2009, 51% of all Americans indicated that economic growth should take precedence over the environment. Americans support the development of alternative sources of energy. - A majority of Americans favor government spending to develop alternate sources of fuel for cars (86%), to develop solar and wind power (79%), and to enforce environmentally friendly regulations such as setting higher emissions and pollution standards for business and industry (84%). - Since the mid-1990s, American public opinion on nuclear energy has been evenly divided, but the proportion of Americans favoring the use of nuclear power as one of the ways to provide electricity for the U.S. increased from 53% in 2007 to 59% in 2009. - Europeans are divided on nuclear energy, but support is on the rise. The proportion of Europeans who said they favored energy production by nuclear power stations increased from 37% in 2005 to 44% in 2008, while the proportion opposing it decreased from 54% in 2005 to 45% in 2008. Support for nuclear energy varies a great deal among countries in this region. Citizens in countries that have operational nuclear power plants are more likely to support nuclear energy than those in other countries. Despite the increased funding of nanotechnology and growing numbers of nanotechnology products in the market, Americans remain largely unfamiliar with this technology. - Even among respondents who had heard of nanotechnology, knowledge levels were not high. - When nanotechnology is defined in surveys, Americans express favorable attitudes overall. A majority of Americans favor medical research that uses stem cells from human embryos. However, Americans are overwhelmingly opposed to reproductive cloning and wary of innovations using "cloning technology." - Support for embryonic stem cell research is similar to previous years. In 2008, 57% of Americans favored embryonic stem cell research while 36% opposed it. A higher proportion (70%) favors stem cell research when it does not involve human embryos. - More than three-quarters of Americans oppose human cloning.
<urn:uuid:8cb76081-f2ba-43aa-984d-a7b496edc24d>
CC-MAIN-2014-15
http://www.nsf.gov/statistics/seind10/c7/c7h.htm
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00573-ip-10-147-4-33.ec2.internal.warc.gz
en
0.940384
1,853
3.15625
3
URI Friends of Oceanography lecture explores oceanic mapmaking Narragansett, RI -- February 2, 2004 -- The rapid expansion of global exploration from Europe in the 1400s and 1500s led to a corresponding development in the art and science of mapmaking. Since almost all voyages took place by sea, the maps put heavy emphasis on the shape of coastlines and the spatial relationship between countries and continents. But despite the importance of the oceans to trade and communications, in conflict, and to fishing, the maps give little attention to the ocean as such, with one truly remarkable exception, the subject of a Friends of Oceanography Science Lecture. "The Ocean in Maps from the Renaissance Era and the 1539 Carta marina" will take place on Thursday, February 12, at noon in the Coastal Institute Auditorium on the URI Bay Campus in Narragansett. The speaker will be Dr. Thomas Rossby, a physical oceanographer at the URI Graduate School of Oceanography. The lecture will focus on the Carta marina published by Olaus Magnus in Venice in 1539. In this map of the Nordic countries, a map which broke completely new ground in terms of size, accuracy and information content, he gives the ocean astonishing physical presence by drawing in sea monsters, some stranger than others, merchant ships, fishing boats and kayaks. He also gives the ocean itself unusual presence or "texture", including whorls, or eddies in modern terminology. Rossby will briefly review the evolution of maps in the Renaissance leading up to the publication of the 1539 Carta marina, of which there exist two copies, one in Munich, Germany, and the other in Uppsala, Sweden. After a brief survey of the many, many pictorials throughout the map, each one a mini-story, he will focus on the ocean between Scandinavia and Iceland. In particular, Rossby will consider the meaning of a band of whorls Magnus drew in the map east of Iceland. The location of these coincides almost perfectly with the Iceland-Faroes Front, a major ocean current that is part of the system of warm currents that help keep northern Europe habitable. Nowhere else in the chart do whorls appear in such a systematic fashion. It is possible that Magnus drew these to indicate the special nature of the waters east of Iceland, and as such they would appear to be the earliest known description of mesoscale eddies in the ocean. Given Olaus Magnus's life, travels and contacts, it seems likely that he got his information from mariners of the Hanseatic League operating out of northern German cities, many of which he is known to have visited and lived in both before and after he was exiled from Sweden. A resident of Saunderstown, Rossby received a B.S. in applied physics from the Royal Institute of Technology in Stockholm, Sweden, and a Ph.D. in oceanography from the Massachusetts Institute of Technology. His research interests include the dynamics and kinematics of ocean currents with special interest in the Gulf Stream and the circulation of the North Atlantic. He is also interested in ocean instrumentation. Established in 1985 to support and promote the activities of the URI Graduate School of Oceanography, Friends of Oceanography informs and educates the membership and the general public about the scientific, technological, and environmental research that takes place at GSO. The organization sponsors public lectures, open houses, marine-related mini-courses, science cruises on Narragansett Bay, and an annual auction.
For information about Friends of Oceanography, call 874-6602.
<urn:uuid:955404aa-5567-4518-93eb-da817c6a9d56>
CC-MAIN-2014-10
http://www.uri.edu/news/releases/index.php?id=2464
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394024785431/warc/CC-MAIN-20140305130625-00009-ip-10-183-142-35.ec2.internal.warc.gz
en
0.938003
741
2.84375
3
Cervical Cancer: Symptoms, Stages, Causes, Medicines & Treatments Commonly asked questions about cervical cancer: - What is cervical cancer? - What are the types of cervical cancer? - What are the risk factors of cervical cancer? - What are the symptoms of cervical cancer? - What are the stages of cervical cancer? - What is the survival rate of cervical cancer? - How do you test for/prevent cervical cancer? - Is cervical cancer curable? - How do you treat cervical cancer? - How to Prevent Cancer - What drugs are used to treat cervical cancer? What is Cervical Cancer? The cervix is covered by two main types of cells: squamous cells, which are located on the exocervix, and glandular cells, which are located on the endocervix. The two cell types meet in an area known as the transformation zone — the location of which changes based on age or whether you have given birth. Most cervical cancers begin within the cells with pre-cancerous changes located in the transformation zone. Even though a woman may have cells with pre-cancerous changes, only some women with these changes will develop cancer. It usually takes several years for cervical pre-cancer to change into cervical cancer, but it can happen in less than a year. For most women, pre-cancerous cells will go away without any treatment; however, there are cases where pre-cancer still turns into full invasive cancer. Cervical Cancer Statistics - Cervical cancer accounts for 0.8 percent of all cancer cases among people in the US. - Cervical cancer was once the most common form of cancer-related death in women. However, within the last 40 years, medical advances have lowered the death rate by more than 50 percent, according to Cancer.org. - Cervical cancer tends to occur in midlife. Most cases are found in women younger than 50, and it rarely occurs in women younger than 20. What are the Types of Cervical Cancer? There are two main types of cervical cancer: Squamous cell carcinoma and adenocarcinoma, each classified by how its cells look under a microscope. - Squamous cell carcinoma: Squamous cell carcinoma begins in the thin, flat cells that line the outer part of the cervix. This type accounts for 80-90 percent of all cervical cancer cases. - Adenocarcinoma: Adenocarcinoma develops in the glandular cells that line the upper portion of the cervix. These cancers make up 10-20 percent of all cervical cancer cases. What are the Risk Factors of Cervical Cancer? The main risk factors associated with cervical cancer are genetics, lifestyle and pregnancy, according to CancerCenter.com. There are also several smaller factors that can contribute to cervical cancer. - Pregnancy: Women who have had three or four full-term pregnancies or who have had a full-term pregnancy before the age of 17 are 2-3 times more likely to develop cervical cancer. - Smoking: A woman who smokes doubles her chance of developing cervical cancer. - Sexual history: Certain sexual behaviors are considered to increase the risk of developing cervical cancer. These include: sex before the age of 18, sex with multiple partners or having sex with someone who has had multiple partners. Studies have also shown a link between chlamydia infection and cervical cancer. - Oral contraceptive use: Women who have taken oral contraceptives for more than five years have an increased risk of cervical cancer. However, this risk goes back to normal after the pills have been stopped for a few years.
- Other Conditions - Weakened immune system: In most people with a healthy immune system, the HPV virus clears itself from the body within 12-18 months. However, people with HIV — or other health conditions that limit the bodies ability to fight off infections — are at a higher risk. - HPV: Although HPV causes cancer, having HPV does not mean you will get cancer. Most women who contract HPV will clear the virus or will have the abnormal cells removed. What are the Symptoms of Cervical Cancer? The American Cancer Society states that in the early stages of cervical cancer, many women show no physical symptoms. However, there are several common symptoms that may develop with time, such as vaginal bleeding, unusual vaginal discharge and pelvic pain. - Vaginal bleeding: This includes bleeding in between periods, after intercourse or after menopausal bleeding. - Unusual vaginal discharge: Commonly a watery pink and foul smelling discharge. - Pelvic pain: Pain during intercourse — or at other times — may be a sign of changes to the cervix or less serious conditions. Advanced stage symptoms of cervical cancer include, but are not limited to: - Weight loss - Back pain - Leg pain or swelling - Leakage of urine or feces from the vagina - Bone fractures What are the Stages of Cervical Cancer? CancerCenter.com lists five different stages of cervical cancer. The TNM (Tumor, Node, Metastases) Classification System is used to determine which stage the cancer is at. - TNM System: - Tumor (T): Describes the size of the original tumor. - Lymph Node (N): Indicates if the cancer is present in the lymph nodes. - Metastasis (M): Refers to whether or not the cancer has spread to other parts of the body. - Stage 0: The cancer cells are confined to the surface of the cervix. - Stage I: In Stage I, the cancer has grown into the cervix but has not grown out of the cervix yet. Stage I has been broken down into two sub categories: - Stage IA: There is a very small amount of cancer that can only be seen under the assistance of a microscope. It is generally less than 5 mm deep and 7 mm wide. - Stage IB: There is a small amount of cancer that can be visible or seen under the assistance of a microscope. It is generally more than 5 mm deep and 7 mm wide. - Stage II: In Stage II, the cancer has grown out of the boundaries of the cervix and uterus, however, it has not reached the walls of the pelvis or the lower part of the vagina. The disease in this stage has also not spread to the lymph nodes or distant parts of the body. It has also been broken down into two sub-categories: - Stage IIA: The cancer has not spread into the tissues next to the cervix, however, it may have spread into the upper part of the vagina. - Stage IIB: The cancer has spread into the tissues next to the cervix. - Stage III: In Stage III, the cancer has spread into the lower part of the vagina or the walls of the pelvis, but not to nearby lymph nodes. This stage of cervical cancer has also been broken down into two sub-categories: - Stage IIIA: The cancer has spread to the lower third of the vagina, but not walls of the pelvis. - Stage IIIB: The cancer has grown into the walls of the pelvis and/or has blocked both ureters, but has not spread to the lymph nodes or distant sites. Or the cancer has spread to the lymph nodes in the pelvis but not to the distant sites. - Stage IV: In Stage IV, the cancer has spread to nearby organs or other parts of the body. 
Stage IV is also separated into two sub-categories: - Stage IVA: The cancer has spread to the bladder or rectum, but has not spread into distant sites or lymph nodes. - Stage IVB: The cancer has spread into distant organs past the pelvis, such as the lungs or liver. What is the Survival Rate of Cervical Cancer? The following survival statistics were published in 2010 in the seventh edition of the AJCC Cancer Staging Manual. Cervical cancer survival rates are based on the stage of the cancer at the time of diagnosis: How Do You Test/Prevent Cervical Cancer? Cervical cancer is one of the most common cancers in women worldwide. However, it is far less common in the United States and other countries where cervical cancer screening is more routine. Doctors recommend women over the age of 21 receive routine Papanicolaou (Pap) tests and pelvic exams. Even though these tests are not 100 percent accurate, the tests and exams allow doctors to catch cervical cancer early on while it is still in its highly treatable stages. Is Cervical Cancer Curable? Yes, cervical cancer is curable, particularly when it is detected and treated early. How Do You Treat Cervical Cancer? The National Cancer Institute states that there are four main kinds of standard treatment used to treat patients diagnosed with cervical cancer: - Surgery: A number of different surgical procedures can be performed by your doctor to remove the tumor. - Radiation therapy: External radiation therapy involves shrinking the tumor using high-energy X-rays to kill cancer cells. Internal radiation therapy utilizes a radioactive substance sealed inside needles, wires or catheters that are placed directly into or near the cancer. - Chemotherapy: Chemotherapy uses drugs to kill the cancer cells. Chemotherapy may be given by pill or administered by needle into the vein or muscle. - Targeted therapy: Targeted therapy is a type of treatment that targets specific cells with specialized drugs or other substances that identify and attack cancer cells while leaving normal cells largely unharmed. How to Prevent Cancer Advice on cancer prevention can vary across different research studies and news reports. However, these simple lifestyle changes can make a difference in reducing the risk of developing cancer. - Eat healthy - Limit or stop your use of tobacco - Have a balanced lifestyle - Avoid risky behavior - Visit your doctor - Protect your skin from the sun What Drugs are Used to Treat Cervical Cancer? Some of the drugs that are used to treat cervical cancer and other types of cancer include: - Bevacizumab (Avastin): Bevacizumab (Avastin) is used to interfere with the growth and spread of cancer cells by hindering the growth of the new blood vessels that tumors need. - Blenoxane (Bleomycin): Blenoxane (Bleomycin) is used to slow and interfere with the growth of cancer cells in the body. - Hycamtin (Topotecan Hydrochloride): Hycamtin (Topotecan Hydrochloride) is used to kill cancer cells when other treatments have failed.
<urn:uuid:5ee6d3d1-d399-4ef0-9976-27f0e50c1cbf>
CC-MAIN-2017-34
https://www.druglawsuitsource.com/condition/cancer/cervical-cancer/
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886133447.78/warc/CC-MAIN-20170824082227-20170824102227-00079.warc.gz
en
0.94279
2,223
2.75
3
Wayne National Forest It is the only national forest in Ohio and is clustered in three areas along the Appalachian Mountains and the Ohio River. The southern cluster is primarily located in Lawrence County and spans from Portsmouth to Gallipolis and along the northern banks of the Ohio River. The northwestern cluster primarily spreads across Athens, Hocking and Perry Counties and includes Athens on its southern end. The northeastern cluster is located in Washington and Monroe Counties running from Marietta northeast along the Ohio River. The grounds include many areas that were strip-mined in the late 1800s. Accordingly, the forest includes areas experiencing various degrees of reforestation. The forest is nestled in rugged foothills of the Appalachian Mountain Range, north of the Ohio River Valley. Flora and fauna The Wayne National Forest boasts more than 2,000 species of plants, including hardwoods, pine and cedar as well as an endangered species, running buffalo clover. Wildlife includes bobcats, coyotes, eagles, hawks, osprey, wild turkey, turkey-vultures and songbirds as well as deer and beaver. Among Bigfoot researchers, the rugged forested hills are suspected to harbor a sizable Bigfoot population. The climate changes considerably throughout the course of the year. In the winter months it is on average around 30 degrees during the daytime and can dip into the lower teens at night. In the summer the daytime temperature ranges on average from about 77 to 90, and nighttime temperatures are in the mid-60s. The fall and spring have very mild weather. The rainiest months are normally April and May. Get in By plane By car Get around By car By foot The forest includes 300 miles of trails for hiking, biking, horseback riding and ATVs. See Do Buy Eat Drink Sleep Stay safe Much of this area is not served by cell phone towers and you will have no communications. The area is popular with hunters and all-terrain vehicle (ATV) users. Rumor has it that the area is also popular with marijuana growers, in and out of the Wayne National Forest. If you come upon a patch, change direction and move on slowly while looking for man-traps. Don't even think about getting close enough to pick any. You may be videotaped by law enforcement, or worse. - Exercise ordinary caution as in any outdoor activity. Get out
<urn:uuid:2bfd300a-f031-42a4-b313-62b06972dad8>
CC-MAIN-2013-48
http://wikitravel.org/en/Wayne_National_Forest
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164018116/warc/CC-MAIN-20131204133338-00053-ip-10-33-133-15.ec2.internal.warc.gz
en
0.941626
523
2.640625
3
Connecting the CDF and the PDF The probability density function (PDF - upper plot) is the derivative of the cumulative distribution function (CDF - lower plot). This elegant relationship is illustrated here. The default plot of the PDF answers the question, "How much of the distribution of a random variable is found in the filled area; that is, how much probability mass is there between observation values equal to or more than 64 and equal to or fewer than 70?" The CDF is more helpful. By reading the axis you can estimate the probability of a particular observation within that range: take the difference between 90.8%, the probability of values below 70, and 25.2%, the probability of values below 63, to get 65.6%. Contributed by: Roger J. Brown (November 2007) Reproduced by permission of Academic Press from Private Real Estate Investment ©2005 Open content licensed under CC BY-NC-SA The calculations here are based on the normal distribution, which is completely determined by its mean and standard deviation. Changing these values changes the result of probability estimates. In the first snapshot you can see that the chance of seeing a value at or below 70 is approximately 81%. There is no need to subtract as in the default view because the lower bound of 50 is presumed to have a zero probability. The second snapshot shows that it does not matter which side is filled. The procedure for finding the probability of particular random variables is the same. In the third snapshot the filled portion of the PDF plot is too narrow to visualize, a reminder that a single point has no probability mass for a continuous distribution. (To avoid problems in the illustration there is a tiny difference between the high and low points—65 versus 64.99—that may be ignored for purposes of the exposition.) The points in the CDF plot overlap visually, also showing that the probability of being between two values goes to zero as the values approach each other for a continuous distribution. More information is available in Chapter Five of Private Real Estate Investment and at mathestate.com. R. J. Brown, Private Real Estate Investment: Data Analysis and Decision Making, Burlington, MA: Elsevier Academic Press, 2005.
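Because the demonstration's point reduces to subtracting two CDF values, the arithmetic in the text can be reproduced in a few lines. The sketch below is not the Wolfram code behind the demonstration; it is a hedged stand-in, and the mean and standard deviation are assumptions back-solved from the quoted 25.2% and 90.8% figures rather than values stated in the text.

```python
# Minimal sketch of the CDF/PDF relationship described above, using SciPy.
# The mean and standard deviation are NOT given in the text; the values below are
# back-solved from the quoted 25.2% / 90.8% figures and should be treated as a guess.
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma = 65.3, 3.5          # inferred, not stated in the demonstration
dist = norm(loc=mu, scale=sigma)

lo, hi = 63, 70
p_lo = dist.cdf(lo)            # probability of an observation below 63
p_hi = dist.cdf(hi)            # probability of an observation below 70
print(f"P(X < {lo}) = {p_lo:.1%}")                  # ~25%
print(f"P(X < {hi}) = {p_hi:.1%}")                  # ~91%
print(f"P({lo} < X < {hi}) = {p_hi - p_lo:.1%}")    # ~66%, matching the 65.6% above

# The PDF is the derivative of the CDF, so integrating the PDF over [63, 70]
# returns the same probability mass:
area, _ = quad(dist.pdf, lo, hi)
print(f"Integral of the PDF over [{lo}, {hi}] = {area:.1%}")
```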
<urn:uuid:fc1501d9-de67-49f5-b31c-8919c3a31be5>
CC-MAIN-2023-14
https://demonstrations.wolfram.com/ConnectingTheCDFAndThePDF/
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00589.warc.gz
en
0.913904
496
2.671875
3
A month of lockdown across Africa cost the continent about 2.5 percent of its annual GDP, equivalent to about 65.7 billion U.S. dollars per month, a newly published United Nations Economic Commission for Africa (ECA) report revealed on Sunday. The newly published report entitled “COVID-19: Lockdown Exit Strategies for Africa,” proposes African nations various COVID-19 exit strategies following the imposition of lockdowns that helped curtail the virus. At least 42 African countries applied partial or full lockdowns in their quest to curtail the pandemic. But it has had devastating economic consequences. The UNECA also estimated that the COVID-19 lockdown has wider external impact on Africa in terms of lower commodity prices and investment flows. “With the lockdowns came serious challenges for Africa’s economies, including a drop in demand for products and services; lack of operational cash flow; reduction of opportunities to meet new customers; businesses were closed; issues with changing business strategies and offering alternative products and services; a decline in worker production and productivity from working at home; logistics and shipping of products; and difficulties in obtaining supplies of raw materials essential for production,” the report read. The report, among other things, proposed seven exit strategies that provide sustainable, albeit reduced, economic activity. The strategies include improving testing, lockdown until preventive or curative medicines are developed, contact tracing and mass testing, immunity permits, gradual segmented reopening, adaptive triggering, as well as mitigation. Gradual segmented reopening may be needed in countries where containment has failed with further measures to suppress the spread of the disease being required where the virus is still spreading, the report indicated. “The spread of the virus is still accelerating in many African countries on average at 30 percent every week,” the report advised. According to the report, active learning and data collection can help policymakers ascertain risks across the breadth of policy unknowns as they consider recommendations to ease lockdowns and move towards a “new normal.” It further urged African nations to learn from the experiences of other regions and their experiments in reopening; and to use the “extra time” afforded by the lockdowns to rapidly put in place testing, treatment systems, preventive measures, and carefully design lockdown exit strategies in collaboration with communities and vulnerable groups. The ECA argued that one of the most sensitive issues facing policymakers is the impact of COVID-19 lockdowns on food security. On Sunday, the Africa Center for Disease Control and Prevention (Africa CDC) said that the number of confirmed COVID-19 cases across the African continent surpassed 61,165.
<urn:uuid:f65611e0-3c97-4952-b39f-686e9f98b81a>
CC-MAIN-2023-14
https://www.africahealthtimes.com/africa-loses-65-7b-to-covid-19-in-one-month-eca.aht/
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948965.80/warc/CC-MAIN-20230329085436-20230329115436-00571.warc.gz
en
0.943671
547
2.609375
3
A character arc is related to the plot line of a story in that it affects and is affected by the plot. But it portrays the journey the character is taking inside his head as opposed to the one he is taking in the outside world. A character arc is only going to exist in a certain type of story, so before we talk any more about it let’s take a look at two different types of stories. Plot Driven vs. Character Driven Stories In plot driven fiction the events of the plot move the story forward and cause the characters to react. The characters take a back seat to the plot as they are affected by and react to the external world. These types of stories are very common: Just think of writers like Tom Clancy, Stephen King and James Patterson. The characters don’t do a lot of contemplating their navels when there are bullets to avoid, mysteries to be solved and monsters about. Character driven fiction is another beast completely. In this type of story the character is the prime mover and moves the plot along through the decisions and choices they make along the way. This type of story is much more nuanced and, consequently, harder to write. But, this type of story can also be much more insightful and truer to life than a plot driven story. While plots can be fairly easy to design (the story has to go through point B before it gets from point A to point C), it gets more complicated when you involve the thoughts of the character. In a thriller or mystery the character is compelled by the events of the plot to take action. However, in a character driven story the discovery of a dead body might cause the main character to think about the events of his childhood before he even thinks about looking for the murderer, if he even does. So, in this type of fiction you must take into consideration the character’s internal motivation for doing things before he acts in the external world or reacts to the events of that world. It’s A Bit Of A Literary Dance As the plot and the character arc play off each other they may change over time. This can be frustrating to say the least. In order to keep these two dance partners dancing to the same tune you need to keep some other things in mind. Follow These Sign Posts To Get You To Where You Want To Go If you’re having difficulty meshing the plot and the character arc, step back and think about a few things first. What kind of story are you writing? Is it a romance novel? If so, then the story needs to follow that general direction and it can’t devolve into something else. Is it a western, an adventure story, a murder mystery? Then let the expectations of readers of those genres help point you in the right direction. Where does the conflict of the story lie? There are 6 types of conflict. Is it a conflict between the main character and himself or the main character and society as a whole? This will help you find your way too. For more on conflict click here. Are you going to Albuquerque or Dublin? These are two completely different destinations and two completely different journeys. To help guide you along the way you need to know your dramatic throughline. Does the character succeed, fail, give up? These are simple questions but easily forgotten with everything else you have to do in your story. To understand the importance of the dramatic throughline click here. To understand how to develop characters people will pay money to read click here.
<urn:uuid:02be9110-a5c5-43b3-a8e6-35bd4520ccd4>
CC-MAIN-2017-26
http://www.e-novel-advisor.com/character-arc-important.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128322275.28/warc/CC-MAIN-20170628014207-20170628034207-00067.warc.gz
en
0.957337
738
2.78125
3
Growing Herbs From Seeds Is Fun And Saves Money If you like to cook with fresh herbs you more than likely need a good supply of them. Nothing is handier than to just go outside the door to harvest them fresh and full of flavor. Once you are used to that, you won't want to cook without it. Particularly if you need big quantities of certain herbs and a constant supply, it is best to plant herbs from seed. Growing herbs from seeds is also the cheapest way to produce herb plants. In general you can grow all types of herbs from seed but it might not always make sense. Growing rosemary, sage or thyme might take a long time before you can harvest from them. You also need only a few plants to have enough supply. For these types of herbs it might be easier and quicker to buy an established plant in your local nursery. Other varieties like parsley, chives, basil or coriander are easily and quickly grown from seed. The best and most successful way of growing herbs from seeds is to start them off in seed trays and pots in a greenhouse or poly-tunnel. You can also start them off on a window-sill or in a conservatory if you don't have a greenhouse. Watch the video tutorial below on how to sow seeds. DISCLAIMER: Please note that the above is an affiliate link. If you buy through this link we will earn a small commission that will help to support this website. This won't affect the price you will be paying for the product. Fill the seed tray with the seed compost, level it and firm it lightly. Don't fill the tray up to the top. Leave about 1/4 inch from the top and water over it with the rose on the watering can. Pour the seeds onto the palm of your hand. Take some between thumb and index finger and sow them thinly and evenly onto the prepared seed tray. Try not to put them too close together. Put some fine seed compost into a plastic flower pot with holes in the bottom. Shake the compost evenly over the seeds. Bigger seeds need a thicker layer of compost than small seeds. The rule of thumb is to cover them with three times the thickness of the seed. The seed packet will give you exact information on this. Some plants will need light for germination and need no covering. The same applies for very fine seed. Water the tray with the fine rose on the watering can again. Cover the tray with a sheet of glass or clear plastic. This step eliminates watering, speeds up germination and keeps pests away. If you don't cover the tray you will have to water carefully and keep the soil evenly moist (but not wet). Once the seedlings emerge, remove the glass. Place the tray in a shaded spot in the greenhouse. It is important to choose a place that is evenly warm and bright but not sunny. 5. Transplant the seedlings: Once the seedlings have produced their second sets of leaves it is time to thin them out and transplant them into individual pots. Fill the pots with seed compost and poke a hole into it that is big enough for the roots. Loosen the soil in a corner of the tray and lift the seedlings out carefully. Insert the roots into the prepared hole, fill it up with compost and firm it carefully. Water well and don't forget to label your pots! 6. Harden them off before planting them out. Grow on your new herb plants until they are big enough to go into their final positions. You can either plant them into the ground or into pots, window-boxes, etc. It is important to prepare your seedlings for life outside.
The easiest way to do that is to move the plants into a cold frame that you leave uncovered during the day and covered over night for a period of two to three weeks. If you don't have a cold frame you can just put them outside the greenhouse for the day and bring them in again at night. Don't place them into full sun in the beginning. 7. Plant the herbs into their final position in the garden or into containers. Photos: Chiot's Run Some herbs are suitable for sowing directly into the ground. These include parsley, chives, salad rocket or coriander. To do that prepare the ground well. It should be free from weeds. Choose the spot for growing herbs from seeds well. Most of them originate from the Mediterranean and need a sunny spot with good drainage. If you have heavy soil add some sand and compost to loosen it up. The soil itself can be on the poor side and does not need to be fertilized heavily. Adding some good garden compost is enough for growing herbs from seeds. The best time for growing herbs from seeds is during late spring once the soil has warmed up sufficiently and temperatures are up during the night as well. Check for best sowing times for the individual varieties on the seed packets. If you are starting the seeds indoors you can start earlier and have your plants ready much sooner. Loosen the soil with a fork and rake it until the soil is fine and crumbly. Check the seed packet sowing method, depth and distance of the rows. Mark the rows on the seed beds with some pegs and string and sow the seed into shallow drills. Cover the seeds with the appropriate amount of soil and water well after seeding. Important is to keep the seedbed moist. After a few days or weeks you will start to see the little seedlings as they stick their heads up. Make sure to control slugs! Careful hand weeding is essential during this period. You might have a lot of other unwanted seedlings appear between your herb seedlings that will compete for space, water and nutrients. Photo: mair(member no longer active) Mark your seed rows with some fine sand. That makes it easier to determine what are weeds and what are your wanted herb seedlings. Once the seedlings are big enough and have about two sets of leaves it is time to thin them out to their final distance. This is an important step that a lot of people skip! If you don't do it you will end up with plants that don't have enough room to grow to their full size or that will bolt prematurely. Remove weak plants completely. Sometimes you might be able to transplant some of the overcrowded seedlings into other spots or gaps. Keep the seeds lightly moist. Once they are mature most herbs, particularly the one of Mediterranean origin like sage, thyme, rosemary and oregano are pretty drought tolerant and require little watering. Are you looking for good vegetable and flower seeds for a reasonable price? At www.genericseeds.com you will get a great selection of flower and vegetable seeds including Heirloom varieties at great prices. The seeds are guaranteed, tested and non GMO. They use a cheaper way of packaging their seeds and can pass the saving on to you. That way you will pay for the seed and not the seed packet! GenericSeeds offers a great selection of seeds for fantastic value.
<urn:uuid:273dd327-ea4f-4597-8780-e74d30bd0ba0>
CC-MAIN-2014-23
http://www.gardening-advice.net/growing-herbs-from-seeds.html
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261958.8/warc/CC-MAIN-20140728011741-00360-ip-10-146-231-18.ec2.internal.warc.gz
en
0.959815
1,457
2.625
3
Flamenco is one of the folk music genres of Spain. In Spain the word flamenco is not just associated with the guitar but also with the people, songs and dances of Spain. The history of flamenco follows that of Spain. When the Moors ruled Southern Spain they brought with them their instruments, and the most important of these was the Ud. This eastern lute is still to be found all over the world, but in Spain it collided with European ideas, and flamenco is the product. Flamenco is woven into the life of the region of Andalucia where it originated and the people actively engage in the songs and dances. The guitar has always been the main instrument used in flamenco to support the dancers and singers due to its percussive timbre. The early flamenco guitarists very rarely played solo; their role was purely to provide music for the dancers and singers. The rise of the solo flamenco guitarist is a late development and many of the great flamenco soloists are also renowned for their ability to accompany singers and dancers. The modern classical guitar and its physical development can be traced back to the Spanish guitar-maker Torres. Alongside the classical guitar is the flamenco guitar. The flamenco guitar has the same history and the greatest luthiers in Spain have always made both types of guitar. The main structural difference the flamenco guitar has in relation to the classical guitar is a thinner body. This creates a timbre that is sharp and percussive. This is considered the ideal sound with which to accompany dancers. The flamenco guitar also has wooden tuning-pegs, which is the traditional method of construction for all early guitars. The need for the classical guitar to be heard in a concert hall and the demand for greater resonance from classical composers mean that the classical guitar has left behind the use of wooden tuning-pegs, and its body size has increased in comparison to the flamenco guitar. In many respects the flamenco guitar is similar in construction to the guitars of earlier centuries. Paco de Lucia and Sabicas are two flamenco guitarists that students of the guitar should be aware of. Both have played a major part in the changes of this evolving art form. Sabicas developed the technique of tremolo and Paco de Lucia extended the harmonic framework with his use of jazz chord voicings. Juan Martin is a flamenco guitarist who keeps the traditional forms sharply in focus and provides the clearest guide for beginners wishing to study the various flamenco forms. The flamenco forms have been developed over a long time. There is a distinction of terms when flamenco forms are described. When flamenco vocal forms are described they are called "cantes" and the guitar forms are called "toques". Each flamenco form has certain rhythmic and tonal qualities that encapsulate the form and flamenco guitarists are expected to have a knowledge of the origin and usage of these. Flamenco Chord Progression This is a very common chord progression that most guitarists who wish to learn flamenco start with. A descending one-octave Phrygian mode starting from the note "E" can be played over the above chord progression (a note-by-note sketch follows the technique list below). - Golpe - the Spanish word for "tap". This technique involves "tapping" the body of the guitar to produce a percussive sound. The third finger of the right-hand strikes the table of the guitar with the nail and flesh.
Flamenco guitars have a "golpeador" (plastic cover) which protects the wood of the guitar during the use of this technique - Rasgueo - this is the most common strumming technique for flamenco. The right-hand is formed into a closed fist with the thumb resting against the guitar or low E string for support, and each finger is flicked out one at a time (4-stroke rasgueo) to sound the strings with the nails. A common variation is to use the technique with only the index finger. The index finger is also used for the up-stroke which only strikes the treble strings after the completion of the 4-stroke rasgueo.
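The chord chart and scale diagram that the chord-progression section refers to do not appear to have survived extraction. As a hedged stand-in, the sketch below spells out a one-octave E Phrygian mode and pairs it with the Andalusian cadence (Am - G - F - E), a progression very commonly used in flamenco and consistent with the E Phrygian description; treat the specific chords as an assumption rather than the page's original example.

```python
# The chord and scale diagrams referenced above did not survive extraction.
# As a stand-in, this sketch spells out a one-octave E Phrygian mode and the
# Andalusian cadence (Am-G-F-E), a progression very commonly used in flamenco.
# Treat the specific chords as an assumption: the original page may have shown
# a different voicing or progression.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def phrygian(root: str) -> list[str]:
    """Build a one-octave Phrygian mode: semitone steps 1-2-2-2-1-2-2."""
    steps = [1, 2, 2, 2, 1, 2, 2]
    idx = NOTES.index(root)
    scale = [root]
    for step in steps:
        idx = (idx + step) % len(NOTES)
        scale.append(NOTES[idx])
    return scale

print("E Phrygian (ascending):", " ".join(phrygian("E")))
# -> E F G A B C D E ; play it descending to get the run described in the text.

andalusian_cadence = ["Am", "G", "F", "E"]
print("Assumed progression:", " - ".join(andalusian_cadence))
```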
<urn:uuid:1afa58b6-1e7e-4274-8df3-ed7544088071>
CC-MAIN-2017-26
https://en.wikibooks.org/wiki/Guitar/Flamenco
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323970.81/warc/CC-MAIN-20170629121355-20170629141355-00545.warc.gz
en
0.956532
876
3.140625
3
|Share this thread:| Sep3-13, 08:34 PM Mukul Sharma of Dartmouth claims there is very good evidence that the extremely rapid draining of Lake Agassiz about 12900ya is only a partial contributor to the cold period (Younger Dryas) that started 12900ya. There is geological evidence of a large meteor impact in Quebec at the same time. He claims that is the primary cause. The register article mentions other posited effects: start of the Megafauna extinction, an increase agriculture by Native American peoples. I think we should reserve judgement until the paper is out. But a "head's up" is in order. This will be out shortly in PNAS, which is kind of home to somewhat speculative articles sometimes. IMO. Obviously they are refereed papers. Earth sciences news on Phys.org |Register to reply| |The main Younger Dryas thread||Earth||8| |Younger Dryas Caused by Ice Dam Collapse?||Earth||4| |Extraterrestrial impact caused Younger Dryas?||Earth||68| |Can you date a younger guy?||General Discussion||134| |From one of PF's younger members...||General Discussion||33|
<urn:uuid:1aba84ee-bb64-46b4-898e-6af870f5f29e>
CC-MAIN-2014-23
http://www.physicsforums.com/showthread.php?t=708534
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268660.14/warc/CC-MAIN-20140728011748-00470-ip-10-146-231-18.ec2.internal.warc.gz
en
0.894207
263
2.625
3
One of the strangest-looking yet important planes ever created has reached its 20th birthday — the Airbus "Beluga" transport plane. The Beluga is a highly modified Airbus A300, and named for its resemblance to the Beluga Whale. Airbus has marked the occasion with a video about the plane. In its early days, Airbus used modified and obsolete 1940s-vintage transport planes which were built by its biggest competitor, Boeing. That had to be a little bit embarrassing. Boeing even joked that "every Airbus is delivered on the wings of a Boeing." The modified Boeing Stratocruisers were nicknamed "Super Guppies." Super Guppies were also used by NASA to ferry rocket parts to their final assembly prior to launch from Kennedy Space Center in Florida. Airbus' original "Super Guppy" by I wish I was flying on Flickr (CC Commercial License) Officially named the Airbus A300-600ST, (the ST for Super Transporter) this workhorse aircraft is second only to the Antonov An-225 in terms of cargo capacity by volume. Its lifting capability is only 47 tons, however. Its mission is to carry aircraft components, such as completed fuselage sections and wings from Airbus factories around Europe to final assembly factories in Toulouse and Hamburg. Five Belugas are flying for Airbus. Their size limits them from transporting pieces of the A380, which is the world's largest passenger jet. However, Flight Global says Airbus may be evaluating an A330 version of the Beluga, based on manufacturing schematics for a new Beluga line station (mentioned in the video above). Boeing 747-LCF Dreamlifter, by Cory Barnes on Flickr (CC Commercial License) Boeing has its own modified beast-of-burden aircraft, the 747-LCF (Large Cargo Freighter) also known as the Dreamlifter. Boeing's fleet of four modified 747-400s were built specifically to transport 787 Dreamliner (see what they did there?) sections. Development and manufacturing of the Dreamliner is a truly global initiative for Boeing, one that some would say led to the program's delays. But regardless, it's remarkable that manufacturers can modify planes to meet their own needs. These planes may not be aerodynamically efficient, but they get the job done a lot faster than having to ship plane pieces over land or sea.
<urn:uuid:5d4086ff-99bd-44f8-9253-6760a806e601>
CC-MAIN-2020-24
https://jalopnik.com/airbus-beluga-transport-plane-turns-20-1634756199?utm_campaign=socialflow_jalopnik_twitter&utm_source=jalopnik_twitter&utm_medium=socialflow
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347439019.86/warc/CC-MAIN-20200604032435-20200604062435-00114.warc.gz
en
0.967725
491
2.953125
3
Mashhad, Behesht-e Reza Cemetery
At the far end of the Behesht-e Reza Cemetery in Mashhad, there is a barren area of land where the corpses of political prisoners executed in the 1980s are buried. According to a mortuary worker at the cemetery, the prisoners executed during the 1988 massacre in Mashhad were buried in two different sections: some at the back of the cemetery and others in another vacant part of the grounds. This is confirmed by the testimonies of victims' families who visited the cemetery in the summer and autumn of 1988; they report seeing large areas of raised and disturbed soil with protruding body parts. 170 political prisoners are believed to be buried in mass graves in Behesht-e Reza Cemetery in Mashhad.
<urn:uuid:8c29407c-3918-4b93-9130-7c11752651d6>
CC-MAIN-2020-16
https://painscapes.com/en/cities/169
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506959.34/warc/CC-MAIN-20200402111815-20200402141815-00475.warc.gz
en
0.964617
184
2.53125
3
As a software engineer I adopted the concept of top-down design of software. I first read about this method in the early 1970s. Essentially, you take the statement of the problem and start from the top to break it down into smaller and smaller sub-problems. The description of the design looks like a tree structure where the root of the tree is the module that solves the whole problem. This module calls on various sub-modules that solve different sub-problems.
This technique can be thought of as an outgrowth of what I was taught in my early days at MIT. When given a tough exam question, first write down everything you know about the subject of the question. By the time you have finished writing down everything you know, you have either solved the problem or have found the direction to go to solve the problem.
When doing a top-down design, you make architectural decisions at the top that constrain what you must do at the lower level. Many people objected to top-down design because they felt that you could not impose such constraints on the lower level before you knew what was possible to do at the lower level. This objection comes from a misconception of how I believe a top-down design should be done in real life.
In reality, top-down design is a way of organizing the design process. At every level, you give enough thought to the next lower level to be reasonably certain that the next lower level can in fact be implemented. You may have to descend very far down the levels during the design phase to make certain that all your assumptions can be met. The top-down design method is a way of organizing that descent so that it is focused on solving the top-level problem.
In the same way, management decisions can be thought of as a top-down design process. The top manager, in consultation with others in the management team, breaks the problem down into sub-pieces. No top-level decision is made until there is reasonable certainty that the lower levels can do their part to accomplish the task. So while the top manager (or top designer) guides the process using her or his own vision, nothing is cast in concrete without consultation with sub-managers (domain experts) to ensure that the plan is feasible. This consultation process is where other ideas get raised that might lead to an even better solution than the manager originally envisioned. This does not guarantee that the original plan will never have to undergo major restructuring during implementation, but it does attempt to minimize the chances of that happening. It minimizes the risk without paralyzing the effort to move forward. If you insist on 100% guarantees, you will be too late to solve the problem (miss the market window).
A plan (or design) developed in this way also ends up being a road map for delegating tasks during operation (or implementation). This is what makes a project manageable when you shift from design to implementation. Every member of the team knows her or his responsibility in sufficient detail that the manager only has to manage by exception. As long as things are verifiably following the plan, no drastic management action needs to take place. Each manager can concentrate on the duties specifically needed to carry the plan forward.
Please think of the process described above when you think of President Obama working with the people of his administration and all the people in the country in coming up with solutions to all our problems.
Using this management style, nobody in the organization has to be a workaholic in order for the organization to succeed. See my demonstration and download of software that I have used to carry out the software design and management method described above. I should add that top-down implementation, testing, and documentation go hand-in-hand with top-down design.
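A minimal sketch of what this looks like in code (a generic illustration of the approach, not the software mentioned above): the top-level routine is written first, purely in terms of sub-problems, and each sub-function starts life as a stub that is refined only far enough to be confident it can be implemented.

```python
# Top-down sketch: the top level is written first, in terms of sub-problems.
# Lower levels begin as stubs and are refined only far enough to be
# reasonably certain they can actually be implemented.

def generate_report(raw_records):
    """Top level: solves the whole problem by delegating to sub-problems."""
    cleaned = clean_records(raw_records)
    summary = summarize(cleaned)
    return format_report(summary)

def clean_records(records):
    # Next level down: drop empty records; details can be refined later.
    return [r for r in records if r]

def summarize(records):
    # Stub for a lower-level sub-problem; enough is known here to be
    # confident it is implementable, so the top-level design can proceed.
    return {"count": len(records)}

def format_report(summary):
    return f"Processed {summary['count']} records."

print(generate_report(["a", "", "b"]))  # -> Processed 2 records.
```

Each stub then becomes the "top" of its own smaller design problem, which is what makes the same structure usable as a road map for delegating tasks.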
<urn:uuid:fdfc9135-7323-44dd-b4a2-8fa69d2fb30e>
CC-MAIN-2017-26
http://ssgreenberg.name/PoliticsBlog/2008/12/08/the-relation-between-top-down-design-and-good-management/
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320023.23/warc/CC-MAIN-20170623063716-20170623083716-00386.warc.gz
en
0.961987
772
2.671875
3
Artists of all kinds have the exceptional ability to make people think. They put you in their own shoes—whether it’s only for a brief moment observing a mural, or an entire hour watching a documentary. With art, it’s possible to captivate the masses through clothing lines, unique graffiti, architecture—the list goes on and on. However, what happens when art has a message? Let’s say 20 ft. “nature buildings” were created by famous artists using natural bio-materials, and then placed around the globe inside 5 major parks. After viewing an installation like this, do you think your perspective would change a little? This is the idea behind “sustainable art.” Any art form that includes principles of ecology, environmentalism, social equality, green economy, or pretty much anything that takes on the message “let’s not mess things up for the future,” could be considered sustainable art. An example of this much-needed form of artistic expression can be found in Eve Mosher’s work. In her 2011 project titled “Seeding the City,” 4ft-by-4ft trays containing native plants were placed on 1,000 buildings throughout Brooklyn and Manhattan. A flag was placed on the roof and at street level of each building so they could be identified. Obviously, plants as small as these could not do much to improve the air quality of NYC. But, the message behind it is sound: the participation of building owners and the visibility of the flags will make urbanites passing through aware of the natural vegetation potential rooftops have. Bloomberg Philanthropies understands that art can be a very powerful tool—one that brings communities together and helps drive economic development. It is for this reason that they started the Public Art Challenge, a new program that recently invited U.S. cities with over 30,000 residents to submit proposals for transformative projects. Three of these cities will be granted up to $1 million each for their art projects that are supposed to establish public-private partnerships, bolster the economy, and improve quality of life. Struggling to think of what a “transformative” art project may look like? You might be thinking too far outside the box. The High Line in New York City, once an old New York Central Railroad line, is now an “aerial greenway” (floating park). The 1.45-mile-long park runs along the Lower West Side of Manhattan and gets nearly 5 million visitors annually. Besides natural perks like the 210 native species growing, the High Line has also revitalized Chelsea—a once “gritty” neighborhood in the late 20th century.
<urn:uuid:6a7e48bf-23ee-4c8c-ae57-b1e73b5aa9e3>
CC-MAIN-2020-10
http://themodernape.com/2014/11/14/art-sustainability/
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145648.56/warc/CC-MAIN-20200222023815-20200222053815-00248.warc.gz
en
0.952573
564
2.734375
3
Plants alter their metabolic pathways in response to a variety of abiotic stresses, not least a lack of water. A specialised form of photosynthesis known as Crassulacean Acid Metabolism (CAM) is just one manifestation of this. Before we look at CAM, a small diversion is needed to consider how plants normally make the sugars they need for growth.
Sugars are made by photosynthesis, a two-step process which takes place in the tiny green chloroplasts inside plant cells, with water, carbon dioxide and energy from the sun as the key raw materials. The first stage, the so-called 'light reactions', unsurprisingly requires light to make it happen; energy from the sun is used to split molecules of water into hydrogen ions, free electrons and that all-important by-product, oxygen. Free electrons and hydrogen ions are too reactive to have floating around a cell and so are used immediately to produce the cell's energy currency, ATP, and other intermediates (such as NADPH, in the diagram below) which are needed to produce sugars. The second stage of the process, the Calvin or Calvin-Benson cycle, uses the ATP and NADPH to convert carbon dioxide into glucose, the simplest form of sugar. This stage doesn't require any additional light.
In most temperate regions, plants have sufficient water to allow them to open their stomata during daylight hours and take up the carbon dioxide they need, because they can replace the water which leaves at the same time by transpiration. The light and dark reactions of photosynthesis can then take place more or less simultaneously – think of the plant as a factory which runs a daytime-only operation, making sugar. Such plants are known as C3 plants, because the carbon dioxide molecules form an intermediate molecule with three carbon atoms (PGA in the diagram below) before sugars are finally produced.
C3 photosynthesis in a chloroplast. Light energy splits water molecules and the energy released is used to produce ATP and NADPH. These drive the production of sugars from CO2 in the Calvin cycle.
CAM plants, however, such as the Sempervivum species we saw growing in intense sunlight at the top of Chang La pass, cannot afford to open their stomata during the day. Water is in such short supply that any lost during the day cannot be replaced by the roots at night.
Sempervivum sp. at Chang La, 5360 m
To get around this problem, CAM plants operate a night as well as a day shift in their sugar factories. They open their stomata only in the cool of night, when water will not be lost so fast by transpiration. This forces them to add an extra step into the regular process; carbon dioxide which enters the leaf cells during the night shift has to be stored there overnight (as the C4 acid in the diagram below), then released and made available for use during the day shift when the light reactions take place.
Crassulacean acid metabolism. CO2 enters the leaves at night and is stored before being released into the Calvin cycle during the day when light is available to supply ATP and NADPH from the light reactions (with the stomata closed).
CAM metabolism means these plants are very efficient at making use of the small amounts of water available to them. They make around one gram of dry matter for each 125 g of water they use, which is three to five times better than a typical C3 plant.
What about the name? This behaviour was first recognised in plants belonging to the family Crassulaceae, which includes some less exotic plants found in particularly arid environments much closer to home.
Think about what it's like living on top of a dry stone – a tiny desert-like microhabitat, often in the full glare of the sun.
Mossy stonecrop, Crassula tillaea, growing on rocks in NW Scotland
Whilst on the subject of Crassulaceae, if anyone knows the name of this pretty little plant growing on bare rocks beside the road, particularly as we descended from Ladakh into Kashmir, I'd be very grateful. I suspect it to be some kind of Crassula or Sedum species, but can't find it in any of my books.
Why don't all plants use this clever variant of photosynthesis? Because there is no such thing as a free lunch, and the extra steps required use up some of the plant's precious energy. Plants will only resort to CAM when they need to in order to survive. One of the consequences of this is that some plants can switch in and out of CAM mode – so-called facultative CAM plants. They can make the extra enzymes required, along with the physiological changes needed to reverse the pattern of stomatal opening, just when they need to. I haven't been able to track down any Himalayan plants known to do this, perhaps because no-one has really looked, but the tropical American genus Clusia has a number of examples.
Clusia lanceolata. http://thewildpapaya.com
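As a quick sanity check on the water-use figures quoted above, here is a tiny back-of-the-envelope sketch (my own illustration; the C3 numbers are simply what the stated three- to five-fold advantage implies, not measured values):

```python
# Water-use efficiency check based only on the figures quoted above.
cam_water_per_g = 125   # g of water per g of dry matter for a CAM plant

for advantage in (3, 5):  # "three to five times better than a typical C3 plant"
    implied_c3_water = cam_water_per_g * advantage
    print(f"A {advantage}x advantage implies a C3 plant uses about "
          f"{implied_c3_water} g of water per g of dry matter")

# The quoted figures therefore imply a typical C3 plant needs roughly
# 375-625 g of water to make the gram of dry matter a CAM plant makes with 125 g.
```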
<urn:uuid:3cfceab3-921c-4ff5-afd6-1a935f94ff98>
CC-MAIN-2017-47
https://heatherkellyblog.wordpress.com/2015/02/17/cam-cam-cam/
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805923.26/warc/CC-MAIN-20171120071401-20171120091401-00160.warc.gz
en
0.95022
1,069
3.984375
4
"LA AVENTURA DE LA HISTORIA" Translation of Spanish Article “AVENTURA DE LA HISTORIA” Rare Byzantine Icon Discovered in Istanbul Church : An Icon from Two Ages In the summer of 2021, the conservator and restorer of works of art and antiquities Venizelos G. Gavrilakis, founder and director of the VENIS STUDIOS laboratories, received the request to take charge of a project for the conservation and restoration of a Byzantine icon painted on a support of wood in the Buyukada Panagia Eleoussa Church. The temple, located on the Princess Island of Istanbul and dedicated to the Dormition of the Mother of God, was founded in 1735 near the Greek cemetery on the southern slope of Isa Tepesi. Rebuilt on its present site in 1793 at the end of Fayton Meydani, it was renovated in 1871. Created with deep meaning and painted in bright colors, the icons represented images of God or Saints. These figures, which fulfilled liturgical functions, acted as a bridge between the normal and the divine, helping communication between the faithful and the deity. Venerated in churches, homes or public places, the images were made either in the form of a mosaic or painted or frescoed to cover the walls of the walls and on other supports such as wood, on which different techniques were used such as egg tempera and the encaustic. Gavrilakis and Vaia Karagianni, co-director of the laboratories, found themselves before a different and unusual icon. The fabulous work of art that the VENIS STUDIOS conservators were preparing to restore had been preserved since the Byzantine era inside a heavy bronze chest, a particular refuge that had been guarded for more than a hundred years. Covered in a silver sleeve nailed to the wood with hundreds of tiny nails, the icon was beautifully decorated on both sides of the wood with two images that, at first, did not seem to fit. The representation of the Virgin of Eleoussa with the child Jesus, dated at first glance due to its characteristics in the fourteenth century, contrasted with what had been drawn on the back and which seemed to be an image of the "Descent into Hades", something that a priori did not make much sense since such images were not usually represented in the artistic period. The stylistic features were also completely different between the two parts of the icon. The icon on the back was reminiscent of representations of the Resurrection of Christ, typical of the late Byzantine era (16th century), something extremely rare. Despite the evidence, the restorers had their doubts and could not confirm it with complete certainty until both icons had been cleaned of the thick oxide varnish and other substances that covered most of the surface of the painting and that made it difficult to recognize the details of the work. The Restoration process The wooden support on which both representations had been made was in poor condition, forcing the restorers to first repair the support and the paint on the front. Once the first process that stabilized and consolidated the wooden support and the paint layer was finished, the specialists were able to turn the icon to begin the investigation and safe restoration of the back of the icon. As the cleaning work on the icon on the back was completed, it became easier to recognize the different details of the piece that confirmed all the initial indications that it was made in the 16th century. The conservators were faced with two pieces of art corresponding to two different chronological periods. 
The Byzantinologist Athanasios Semoglou, in charge of the historical research, confirmed that the left side of the icon belonged to the end of the 14th century or even the 15th century, while the right side corresponded to the 16th century. The most plausible theory is that the two icons were united in the 16th century. Gavrilakis came to this conclusion because of a crack running from top to bottom of the icon that represents the Descent into Hades, as well as several nails on the side. The absence of damage to the Virgin was the key point that led the director of the laboratories to conclude that the icon had been broken and only later joined to another; it was only afterwards that the Virgin and Child were "repaired" and united with another icon.
"It is very likely that the icon was originally painted on both sides with the 'Virgin Eleoussa'," says Gavrilakis. "Most likely the original 14th-century icon was broken, leaving the right side quite damaged," he adds. "It would not be until the 16th century that an 'ancient restoration' was carried out with the part of the icon that had been saved, adding the other half of the missing wooden support." The Virgin Eleoussa with the child on the front was thus complemented on the back with another religious image representing one of the themes current in the 16th century.
While it was quite common to find double-sided Byzantine icons combining the Virgin and Child with the Crucifixion, with various themes of the Passion of Christ or with the Descent from the Cross, the Resurrection was not a common pairing with the Virgin and Child, known as the Theotokos, so the icon has been considered an exceptional discovery. The desire to keep icons functional, as many of them were processional, led to many renovations being carried out in later periods, renovations that can be seen almost exclusively on the front. This explains why the two faces of an icon can carry different dates, which could also be the case in the present work.
This unique double-period icon will be put back on display in its original place so that pilgrims and visitors from all over the world can see this wonderful and unique work of art.
The History and Symbolism of Icons from Byzantium to Today
Icons (from the Greek eikon, image) are an extraordinary artistic and religious testimony. These representations of Christ, the Virgin, a saint or an event from sacred history normally follow fixed canons and were frequently painted on small, portable wooden panels. The iconographers created representations in which every detail was full of special symbolism, while the colors acted as attributes and created a circle of meanings. Gold was light, the center of divine life, while white and ultramarine were assigned to the Virgin. According to Byzantine tradition, the authorship of the first icons is attributed to Saint Luke. Byzantine iconography developed from the Council of Ephesus, in 431 AD, at which the Virgin was proclaimed the Mother of God and the cult of her figure was consolidated.
<urn:uuid:e0064da4-d404-4d59-9ff0-99c253dbe60d>
CC-MAIN-2023-40
https://www.venisstudios.com/english-en/media/la-aventura-de-la-historia-en
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510520.98/warc/CC-MAIN-20230929154432-20230929184432-00047.warc.gz
en
0.981176
1,376
2.6875
3
Coding in Scratch : Projects Workbook (DK Workbooks) Get kids building exciting computer projects, such as animations, games, and mini-movies, with DK Workbooks: Coding in Scratch: Projects Workbook. Perfect for children ages 6–9 who are new to coding, this highly visual workbook is a fun introduction to Scratch, a free computer coding programming language. With easy-to-follow directions and fun pixel art, DK Workbooks: Coding in Scratch: Projects Workbook helps kids understand the basics of programming and how to create cool projects in Scratch through fun, hands-on learning experiences. All they need is a desktop or laptop with Adobe 10.2 or later, and an internet connection to download Scratch 2.0. Coding can be done without download on https://scratch.mit.edu. Kids can light up the night sky with their own colorful messages and drawings or make their own music and become the ultimate DJ. They can create a digital portrait of a pet and customize the pictures with sounds and animations, or test their knowledge with a times tables quiz. This workbook is filled with open-ended projects that use art, music, sound effects, and math and can be shared online with friends. Kids can even test their coding knowledge with written vocabulary and programming quizzes at the end of each project. Supporting STEM education initiatives, computer coding teaches kids how to think creatively, work collaboratively, and reason systematically, and is quickly becoming a necessary and sought-after skill. DK's computer coding books are full of fun exercises with step-by-step guidance, making them the perfect introductory tools for building vital skills in computer programming.
<urn:uuid:cc6179d1-66b8-470b-9a3a-f21be8b92b38>
CC-MAIN-2017-34
http://www.pandora.com.tr/urun/coding-in-scratch-projects-workbook-dk-workbooks-/511437
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886110578.17/warc/CC-MAIN-20170822104509-20170822124509-00548.warc.gz
en
0.922673
350
2.984375
3
This is the real story. On this site we have claimed many times that words and semiotics are held together in networks. We have further hypothesized that “psychological morphemes” are also held together in networks. A “psychological morpheme” is the smallest meaningful unit of a psychological response. It is the smallest unit of communication that can give rise to an emotional, psychological, or cognitive reaction. Of course word networks, semiotic networks, and emotional, psychological, and cognitive networks all intertwine with each other. FIML practice is designed to help partners untangle unwanted emotions from these intertwined networks. FIML practice focuses on psychological morphemes because they are small and thus rather easily understood and rather easily extirpated from real-time contexts (when partners are interacting in real life in real-time). The hard part about FIML practice is it is done in real life in real-time. But the easy or very effective part about FIML is that once partners learn to do it, results come quickly because the practice is happening in real life in real-time. It is not just a theory when you do it in that way. It is an experience that changes how you communicate and how you understand yourself and others. In FIML practice partners are mindful of their emotional reactions and learn that when one occurs, it is important to query their partner about it. They are mindful of psychological morphemes and as soon as one appears, but before the morpheme calls up a large network leading to a strong reaction, they query their partner about it. This practice leads, we have claimed, to a fairly smooth and effortless extirpation of unwanted psychological responses. This happens, we believe, because the data provided by the partner that “caused” the reaction shows the partner who made the FIML query that the psychological morpheme in question arose due to a misinterpretation. Seeing this repeatedly for the same sort of neurotic reaction causes that reaction and the psychological network that comprises it to become extinguished. A fascinating study from the University of Kansas by Michael Vitevitch shows that removing a key word from a linguistic network will cause that network to fracture and even be destroyed. An article about the study and a link to the study (pay wall) can be found here: Keywords hold vocabulary together in memory. Vitevitch’s study involves only words and his analysis was done only with computers because, as he says, ““Fracturing the network [in real people] could actually disrupt language processing. Even though we could remove keywords from research participants’ memories through psycholinguistic tasks, we dared not because of concern that there would be long-term or even widespread effects.” FIML is not about removing key words from linguistic networks. But it is about dismantling or removing psychological or semiotic networks that cause suffering. Psychological or semiotic networks are networks rich in emotional meaning. When those networks harbor unwanted, inappropriate, or mistaken interpretations (and thus mistaken or unwanted emotions), they can cause serious neurotic reactions, or what we usually call simply “mistaken interpretations.” We believe that these mistaken interpretations and the emotions associated with them can be efficiently extirpated by revealing to their holder the “key” psychological morphemes that set them off. 
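To make the idea of a network "fracturing" concrete, here is a toy illustration (my own sketch, not taken from Vitevitch's study) using Python and the networkx library: when a single highly connected "key" node is removed, a network that was one connected whole can fall apart into disconnected pieces, which is the analogy being drawn here between key words and key psychological morphemes.

```python
# Toy example: one hub node holds two clusters together.
import networkx as nx

G = nx.Graph()
# Two small clusters of words/morphemes...
G.add_edges_from([("cold", "ice"), ("ice", "snow"), ("snow", "cold")])
G.add_edges_from([("warm", "sun"), ("sun", "beach"), ("beach", "warm")])
# ...linked only through a single "keyword" hub.
G.add_edges_from([("keyword", "cold"), ("keyword", "warm")])

print(nx.number_connected_components(G))  # 1: the network holds together

G.remove_node("keyword")                  # remove the key node
print(nx.number_connected_components(G))  # 2: the network fractures
```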
My guess is the psychology of a semiotic network hinges on repeated reactions to key psychological morphemes and that this process is analogous to the key words described in Vitevitch’s study. Vitevitch did not remove key words from actual people because it would be unethical to do so. But it is not unethical for consenting adults to help each other find and remove key psychological morphemes that are harmfully associated with the linguistic, semiotic, cognitive, and psychological networks that make up the individual. …Multiculturalism failed in Poland-Lithuania, just as it did later on in Austria-Hungary and indeed has throughout history. The Polish case is especially interesting as it is often held up today as an example of a great multicultural state where the various disparate groups lived in peace and harmony. Reality, on the other hand, is much different, especially when it comes to the Ukrainian portions. (Source) Panpsychism means “all mind” or mind in all things, with an emphasis on cognition being a fundamental aspect or part of nature. Pansignaling means “all signaling” or signaling in all things, with an emphasis on signaling being a fundamental aspect or part of nature. I like the term pansignaling because it gets us to look at the signals, without which there is nothing. Another word that is close to these two is panexperientialism, which connotes that “the fundamental elements of the universe are ‘occasions of experience’ which can together create something as complex as a human being.” These ideas or similar can be found in the Huayan and Tiantai schools of Buddhism. Highly recommend giving these ideas some thought and reading the links provided above. I tend to favor thinking of this stuff from the signaling point of view. A signal can be found, defined, analyzed, and so on. A signal is a fairly objective thing. When we consider signals and consciousness, it is very natural to consider that signals are parts of networks and that networks can be parts of bigger networks. As I understand it, panexperientialism holds the view that atoms have experience, and that molecules have experience as do the atoms that make them up… and so on till we get to cells, organs, brains, human consciousness. Human consciousness, which is fundamentally experiential, is what humans mainly think of as experience. At all levels, the “parts” of human consciousness also are conscious or cognizant and thus capable of experience. Thus, there is no mind-body problem. Cognition or awareness is part of nature from the very bottom up. For example, a single bacterium can know to move toward something or away from it. Life is “anti-entropic signaling networks” that organize, self-organize, combine, cooperate, compete, eat, and change constantly. From this, we can see where impermanence and delusion as described in Buddhism come from.
<urn:uuid:98266463-6cf4-4844-9e93-15f48b43b487>
CC-MAIN-2017-47
https://americanbuddhist.net/2017/02/
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806760.43/warc/CC-MAIN-20171123070158-20171123090158-00541.warc.gz
en
0.953709
1,328
3.078125
3
The universe we inhabit is full of cycles. Night turns to day, the planets revolve around the sun, the seasons change, and much more. We use these cycles to measure the passage of time but, like the cycles of the universe, that was just the beginning. We've also created time zones, calculations down to the millisecond, and various ways to measure them all. Our lives revolve around time, which is why we have created a rich and varied database of tools and information that we like to call The Time Now.
The Time Now is an accurate tool that provides multiple time-related services, various detailed articles and more. You will be able to know what the current local time is in more than one hundred thousand cities around the world, as well as the UTC/GMT offset, the full name of the time zone and its abbreviation. You will know whether each location observes daylight saving time (DST), either right now or in the near future. This database is updated with each new decision of governments or astronomical institutions.
Know the local weather and forecast in most cities around the world. You have access to current conditions, the 48-hour forecast, the 2-week forecast, and an hour-by-hour temperature forecast. Most websites would stop there, but we also give you sunrise and sunset times, length of day, moon phases, and even moonrise and moonset. Enjoy multiple daily updates of this data, up to every fifteen minutes.
The Time Now also offers comprehensive local business directories with opening and closing times in many countries, including the UK, Sweden, Germany, Poland, Norway, Denmark, the Netherlands, Finland, France and Italy. The local business directory for each country is available in the translated version of the website.
In case you need a specific conversion, we provide many useful tools like:
- a time zone converter that will help you find the time difference between two cities or two time zones.
- an international meeting planner, to find the best time for a meeting with people from all over the world.
- a dialing code wizard, to help you make a phone call between two locations.
- a distance calculator, to find the distance between two cities.
The Time Now is available in 29 languages. It is used by millions of people around the world every month as a valuable resource of information and knowledge, and a means to plan and understand the weather around the world.
The scientific and philosophical concept of time
Before one can understand time zones, daylight saving time, and other methods of measuring time, it would be best to have an idea of how science defines this concept. Beyond science, this concept is also heavily researched and discussed in the realms of religion and philosophy. We can't reach out and grab it, nor can we watch it go by, and yet time exists anyway. Time is defined as "a measure by which events can be ordered from the past, through the present, and into the future. It also measures the duration of events and the intervals between them."
What we can see, feel and touch are known as spatial dimensions. These are the first, second and third dimensions that we all know. Time itself, however, is known in science as the fourth dimension. When we measure things like speed and repetition, we use standard units of measure like seconds, minutes, and hours. This is known as the "operational definition of time". It is purely scientific and does not seek to understand the concept in any philosophical way.
Of course, the lines begin to blur as scientists try to measure events in space-time and other elements of the universe around us. Attempting to truly measure time is a goal with which science continues to struggle. Proper measurement is crucial in all scientific fields, such as astronomy, navigation and many more. Currently our international measurement system is based on events that are repeated at certain intervals. The movement of the sun across the sky, the phases of the moon, the beating of a heart: these are all means of measuring the apparent flow of time.
In terms of philosophy, there are two main beliefs regarding time and its existence or absence. The earlier approach is named after Sir Isaac Newton. He believed that time was part of the universe, that it exists as a separate and independent dimension from our own in which events occur in sequence. In one of his works, Philosophiae Naturalis Principia Mathematica, he spoke of absolute time and space, describing a "true and mathematical time" which "by itself and by its own nature flows equally regardless of anything external". Things like motion and the "feel" of time were not part of this true and mathematical time; he called them "relative time", and they were the only aspects we could understand as a species.
The other side of the coin is a theory put forth by two famous philosophers, Gottfried Leibniz and Immanuel Kant. This secondary theory is more simplistic, simply holding to the belief that time is not a thing or a place. Given this, it cannot be accurately measured or traveled through.
A history of time measurement: calendars and clocks
Chronometry is the science of measuring time, and it comes in two different forms: the calendar and the clock. When seeking to measure a duration of less than one day, the clock is used; measuring something longer requires the use of a calendar. Let's examine how these two fundamental tools came to be.
1. A brief history of the calendar
The first calendars were used 6,000 years ago, based on artifacts discovered from the Paleolithic era, and depended on the phases of the moon. Known as lunar calendars, these early versions had twelve to thirteen months per year. However, these calendars were not entirely accurate because they did not take into account the fact that a year has approximately 365.24 days. Calendars count days in whole numbers, so a method called intercalation was introduced that adds a leap day, week, or month to the calendar when necessary to maintain accuracy.
Julius Caesar decreed in 45 B.C. that the Roman Empire would use a solar calendar, which became known as the Julian calendar. This version still suffered from inaccuracy, because the intercalation it used caused the dates of the solstices and equinoxes to drift by as much as 11 minutes per year. Pope Gregory XIII introduced a second type of calendar in 1582. It was known as the Gregorian calendar and is the most widely used version today.
2. A brief history of the clock
Horology is the study of devices used to measure time. This quest dates back to 1500 BC, when the Egyptians created the first sundial. This stationary device uses a shadow cast by the sun to measure the passing of the hours throughout the day; however, such devices were only accurate during the day.
A more accurate solution was something called a water clock, which was also used by the ancient Egyptians. The actual origin of these devices is unknown, but along with sundials they were the first tools used to measure time.
The water clock worked by creating a steady stream of water that could be used to measure the passage of time. However, it required constant maintenance, otherwise the water would run out. Many ancient civilizations were very focused on keeping accurate measurements of time because they used it to track their astronomical findings. Water clocks were used constantly until the Middle Ages; the use of incense, candles and hourglasses was also prevalent. Although mechanical clocks appeared as early as the 11th century, it wasn't until people like Galileo Galilei and Christiaan Huygens created new methods, like the pendulum clock, that they became reliable.
Today the most accurate tools for measuring time are atomic clocks. These amazing devices can maintain perfect accuracy for millions of years. In fact, they are so accurate that they are used to set other clocks and GPS systems. Instead of using mechanical or repetitive methods, these clocks measure atoms at incredibly low temperatures. An atomic clock in Boulder, Colorado, called NIST-F1 is used to set the standard time for the United States. It is located at the National Institute of Standards and Technology. The precision of this clock means that it will not drift by a single second for at least 100 million years. All of this is based on the internationally defined standard for what constitutes a single second: "the second is the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium 133 atom". By measuring these cesium atoms at incredibly low temperatures, atomic clocks can track time almost perfectly against this established standard.
International measures of time
Our modern society requires an established standard for how we measure time. The most basic means of doing this is known as International Atomic Time (TAI), which measures seconds, minutes, and hours by coordinating atomic clocks around the world. Since 1972 we have used Coordinated Universal Time, or UTC. It follows the TAI standard with slight adjustments known as leap seconds to ensure it stays in sync with the Earth's rotation. This standard replaced Greenwich Mean Time (GMT), but the two terms are still used interchangeably. The reason for the replacement was that the GMT method used telescopes and solar time to set the standard, instead of the more precise method of atomic clocks. Even though the time standard changed, the location of Greenwich is still used as the basis for measuring coordinates.
Although the measurement of time is standardized throughout the world, there is also a means of defining the exact time of day in various regions, known as time zones. This is another internationally observed standard that offsets UTC based on location. These zones were established for legal, business, and social reasons and are typically aligned with country boundaries or, in the US, state boundaries. For the most part, these zones offset time by a whole number of hours, but in some cases the change is as little as thirty or forty-five minutes. The concept of time zones was first suggested in 1858 in a book written by Quirico Filopanti called Miranda! That concept was not adopted, but it laid the groundwork for others to follow. Their invention is attributed to Sir Sanford Fleming, though even his concept was greatly modified into what we use today. The adoption of time zones was slow and gradual. The last country to implement the use of the current standard was Nepal in 1986.
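As a rough illustration of what a time zone converter of the kind described above does, here is a minimal sketch using Python's standard zoneinfo module (an illustrative assumption of tooling on my part, not how any particular site is implemented): one UTC instant is rendered in two zones, and the difference between their UTC offsets is the "time difference" between the two cities.

```python
# Minimal time zone conversion sketch (illustrative only).
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

now_utc = datetime.now(timezone.utc)
new_york = now_utc.astimezone(ZoneInfo("America/New_York"))
tokyo = now_utc.astimezone(ZoneInfo("Asia/Tokyo"))

print("UTC:     ", now_utc.strftime("%Y-%m-%d %H:%M %Z"))
print("New York:", new_york.strftime("%Y-%m-%d %H:%M %Z"))
print("Tokyo:   ", tokyo.strftime("%Y-%m-%d %H:%M %Z"))

# The "time difference" between two cities is the difference of their UTC
# offsets, which already reflects daylight saving time on the date in question.
difference = tokyo.utcoffset() - new_york.utcoffset()
print("Tokyo is ahead of New York by", difference)
```

Note that this sketch simply relies on the system's IANA time zone database; leap seconds, as discussed above, are applied at the UTC level and are not something it needs to handle.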
All modern countries today use time zones in some way, shape or form. The idea is the same, as is the standard measurement of time, but the implementation varies. For example, China and India each use a single time zone even though their countries are wider than the fifteen degrees of longitude that normally dictate a time zone.
A tool for history
With technology and research we have continued to grow and expand our understanding of time, but we still have many unanswered questions. What we do have, however, are very specific methods for measuring it around the world, and now is the time to bring you all that information and more. Our tools are always up to date and our database of information is constantly expanding and growing. We are the ultimate resource, now and in the future.
<urn:uuid:b3df243c-d802-4636-b4f0-cf9aa37b692c>
CC-MAIN-2023-23
https://iazeemi.com/alexa-what-time-is-it-now/
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649193.79/warc/CC-MAIN-20230603101032-20230603131032-00679.warc.gz
en
0.968452
2,342
2.796875
3
India-Mongolia relations date back to ancient times, and in modern times bilateral relations have been developing rapidly on multiple fronts. The two countries have long been spiritual neighbors, a relationship that has now transformed into a strategic partnership. Read here to know more about the bilateral ties.
The cooperation between India and Mongolia, which was previously limited to diplomatic missions, the provision of soft loans and financial aid, and joint ventures in the IT industry, is now rapidly expanding. In 2015, the two Prime Ministers announced a "strategic partnership" between the two Asian democracies. Mongolia is a landlocked country situated between Russia and China. Two of its characteristics are noteworthy: first, in foreign policy it follows a non-aligned approach, and second, though situated between two autocracies, it is a democratic country.
History of India-Mongolia relations
India and Mongolia have interacted since ancient times through the vehicle of Buddhism. Some Indian and Mongolian historians have speculated about the migration of tribes from the Kangra kingdom (Himachal Pradesh) to Mongolian territory 10,000 years ago. Mangaldev, son of the king, headed the migrants, and the majority of them returned to India after staying there for about 2,000 years. A branch of the Kangra family, the Katoch dynasty, is considered to be the oldest surviving dynasty in the world.
According to some Mongolian scholars, Buddhism traveled to the Mongolian steppes through Tibet. During the Hunnu state of the 3rd century BCE, and later during the period of the Great Mongol Empire, Buddhist monks and traders from India visited Mongolia. In 552 CE, the Lama Narendrayash from the state of Udayana (northern India), with some others, visited the Nirun state. Since, to most Mongols, India is the land of the Buddha, lamas and students from Mongolia used to travel to Nalanda, once the largest residential university in India, to study Buddhism. In modern times, Buddhism was promoted by cultural and literary contacts between the people of India and Mongolia.
The Mongol invasions of the Indian subcontinent are a major chapter in Indian history.
- The Turco-Mongol conqueror Timur attacked the Tughlaq dynasty in Delhi in 1398.
- In 1526, Babur, a descendant of Timur and Genghis Khan from the Fergana Valley (modern-day Uzbekistan), came through the Khyber Pass and established the Mughal Empire, covering modern-day Afghanistan, Pakistan, India, and Bangladesh.
- The Mughal emperors married local royalty, allied themselves with local maharajas, and attempted to fuse their Turco-Persian culture with ancient Indian styles, creating a unique Indo-Saracenic architecture.
India and Mongolia were in close, direct contact especially during the 5th-7th centuries CE. It seems that Buddhism in Mongolia accelerated the further spread of Indian culture in Mongolia. The intellectual development of Mongolia was influenced by the Mahayana school of Buddhism and its philosophy. Philosophical treatises of Nagarjuna even used simplified expressions of philosophical terminology.
India-Mongolia political and diplomatic relations
India established diplomatic relations with Mongolia on 24 December 1955. Mongolia appreciates India's support for its membership in the United Nations in 1961, which was championed by Pt. Jawaharlal Nehru. In 1991, India supported Mongolia's membership in the Non-Aligned Movement (NAM).
Mongolia, along with India and Bhutan, co-sponsored the famous UN Resolution for the recognition of Bangladesh as an independent country in 1972.
The first-ever visit by PM Shri Narendra Modi to Mongolia, in May 2015, marked the 60th anniversary of the establishment of diplomatic relations between India and Mongolia.
- This visit was part of the "Act East Policy" and proved to be a watershed moment in India-Mongolia relations.
- The declaration of a 'Strategic Partnership' and the announcement of a USD 1 billion LoC for the development of infrastructure in Mongolia set the tone for accelerated economic cooperation with Mongolia.
India and Mongolia have the 'India-Mongolia Joint Committee on Cooperation (IMJCC)', chaired at the ministerial level. Mongolia supported India for a non-permanent seat on the UN Security Council (UNSC) for 2011-2012, and India and Mongolia declared support for each other's candidatures for UNSC non-permanent seats for the terms 2021-22 and 2023-24 respectively. Mongolia voted in favor of India's proposal for Yoga's inscription into the list of UNESCO's Intangible Cultural Heritage, and India also voted to register the Mongolian heritage element "Mongolian Traditional Custom to Worship Mountain and Ovoo" on the list of Intangible Cultural Heritage. Mongolia has publicly reiterated its support for India's permanent membership of an expanded UNSC. India and Mongolia also have a declaration in place for the protection of snow leopards: the Bishkek Declaration.
Numerous agreements have been signed between the two nations over the years; the major ones are:
- the Joint Trade Sub-Committee and cooperation between the Planning Commission of India and the National Development Board of Mongolia
- co-operation in the field of geology and mineral resources
- Trade and Economic Cooperation, which provides for MFN status to each other in respect of customs, duties, and all other taxes on imports and exports
- the Investment Promotion and Protection Agreement
The main items of exports to Mongolia include medicines, mining machinery, auto parts, etc. Imports from Mongolia include raw cashmere wool.
There is an India-Mongolia Joint Working Group for Defence cooperation, which meets annually. The joint India-Mongolia exercise 'Nomadic Elephant' is held annually, and Indian Armed Forces observers regularly participate in the annual multilateral peacekeeping exercise 'Khan Quest' in Mongolia. The BSF (MHA) of India and the Mongolian General Authority for Border Protection (GABP) have been closely cooperating on border patrolling and related subjects for over eight years. Cooperation between the National Emergency Management Agency (NEMA) and the National Disaster Management Authority (NDMA) has also picked up pace in recent years.
Bilateral ties between India and Mongolia have the potential to advance Asia's regional integration process. Although trade between India and Mongolia has grown at a higher rate recently, there is still considerable room for boosting growth, and the obstacles to it need to be identified and removed. Given the importance of Mongolia in the current security context, strategic cooperation needs to be significantly upgraded. Making considerable progress will take skillful handling of bilateral trade and economic policy issues. While there is still a need for strategic engagement on the security and foreign policy fronts, it is imperative to develop an effective plan for the new, complicated international setting, so that shared objectives are attained quickly without falling victim to great-power rivalry.
-Article written by Swathi Satish
<urn:uuid:dd76841f-1fd6-409d-a1e5-81b152726d81>
CC-MAIN-2023-14
https://www.clearias.com/india-mongolia-relations/
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943749.68/warc/CC-MAIN-20230322020215-20230322050215-00184.warc.gz
en
0.927526
1,483
3.671875
4
The Port of Dundee is an important industrial city and seaport in eastern Scotland. Located about 64 kilometers north of Edinburgh, it lies on the northern bank of the Firth of Tay, an inlet of the North Sea. In 2006, over 142 thousand people called Dundee City home.
The area around the Port of Dundee was first inhabited by Iron Age Picts. The Port of Dundee was designated a royal town (burgh) in the early 12th century. For the next four to five centuries, it was the victim of many sackings and much bloodshed visited upon the town by the English. Although Edward I revoked the charter, Robert the Bruce replaced it with a new charter in 1327. By 1545, the Port of Dundee was a walled city, but in 1547 it was destroyed by English bombardments. It was besieged again in 1645 during the War of the Three Kingdoms, in 1651 during the Third English Civil War, and again during the Jacobite uprisings. Fishing was important to the town from its early days, and it was home to one of Scotland's biggest whaling fleets. The town contained few stone buildings before 1860. Modern Dundee was created in 1892 and made an autonomous county in 1894.
The Port of Dundee grew first as an export point for wool. When wool became less profitable, residents turned to importing and weaving jute. In 1820, the first 20 bales of jute from India were unloaded on the docks, and the city was never again the same. When it was learned in the 19th century that jute fiber mixed with whale oil made sturdy bags and carpet backing, the Port of Dundee's textile industry became inextricably linked to whaling. The city quickly gained a reputation as a jute manufacturing center, producing linen, rope, carpet, and canvas. While these goods are still important to the Port of Dundee's economy, new light manufacturing industries have appeared since World War II. When it became less expensive to make jute cloth on the Indian subcontinent, the last jute mill closed in the 1970s.
During the 19th century, the Port of Dundee grew into an important ship-building and maritime center. Among the two thousand ships built there between 1871 and 1881 was the RRS Discovery, used for Antarctic research by Robert Falcon Scott. Its busy whaling fleet traveled the world, and the Antarctic's Dundee Island was named after the whaling expedition from Dundee that discovered the island in 1892. The Port of Dundee's whaling industry ceased operating in 1912, and the port's shipbuilding activities ended in 1981. Today, it is known for making confections and preserves, particularly marmalade.
In the late 20th century, the Port of Dundee's traditional manufacturing industries began to decline, and the service industry has grown in importance. The city is now an important research and education center focusing on biotechnology and information technology. Little of its past survives in the modern Port of Dundee outside a handful of historic buildings and a town gate. Over 300 thousand people live within a 30-minute drive of the city center today, and many commute from nearby counties to work in the Port of Dundee. At the turn of the century, the city supported about 95 thousand jobs at four thousand companies, and investment in the city continues to increase.
<urn:uuid:4cf01fe6-0d8a-4171-a6f0-a079d12e5ee3>
CC-MAIN-2023-23
http://www.worldportsource.com/ports/review/GBR_Port_of_Dundee_2870.php
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654097.42/warc/CC-MAIN-20230608035801-20230608065801-00587.warc.gz
en
0.967268
724
3.46875
3
Tooth decay is the most prevalent disease affecting children under the age of five, yet it is almost entirely preventable. By brushing twice per day for two minutes at a time, children can significantly reduce their chances of getting a cavity and help themselves earn a healthy adult smile. However, it can be difficult for new brushers to enjoy brushing their teeth and to brush long enough to make a difference. So, how can you help your first-time brusher learn to enjoy brushing, and help them brush better?
For first-time brushers, it can be tough to brush for two minutes at a time. This is due to a number of things, but it mostly comes down to the fact that it's difficult to keep young children still and focused on brushing their teeth for two minutes. You can help your child have more fun while they brush by letting them brush while watching a tooth-brushing video. These educational videos help guide children through brushing their teeth, and each lasts at least two minutes. We suggest finding one that you deem appropriate for your child, and one that they will enjoy watching.
Buy a Fun Toothbrush
You can help convince your child to brush by purchasing them a fun toothbrush that they enjoy using. When looking for a new toothbrush, take your child with you and let them pick one that they find appealing. Also, make sure that the toothbrush handle can easily fit in their hand, and that the head of the toothbrush is small enough to fit into their mouth.
Try an Electric Toothbrush
An electric toothbrush is an appealing option for children just beginning to brush, since it requires less dexterity and physical motion to operate. Additionally, most electric toothbrushes feature brushing timers, which tell the operator how much time they have left to brush, as well as a pressure monitor, which informs the brusher when they're brushing too hard. We suggest looking for an electric toothbrush specifically made for children.
One great way to help first-time brushers is by brushing with them. This allows you to give them specific brushing tips, as well as keep an eye on how long they're brushing. It can also help you get into a fun routine with your child and have a bit more time together.
New Brushers Love Our Office
If your child is just beginning to brush, then visit our office. Our team of pediatric dentists can teach them how to properly brush and help them learn about the finer points of oral healthcare. Tooth decay is almost entirely preventable, so help your child get a healthy smile by getting into a healthy brushing routine.
<urn:uuid:b99f9bd2-75cf-4bcb-8c6b-71fda01b0223>
CC-MAIN-2017-39
http://blog.southgeorgiapediatricdentistry.com/help-your-child-brush-their-teeth-with-these-4-tips/
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818695066.99/warc/CC-MAIN-20170926051558-20170926071558-00412.warc.gz
en
0.968903
529
3.140625
3
Is Fishermans Bend a forgotten corner of the city? It has often seemed separated from the city: hard to get to, wind-swept, remote and yet so close to the central business district. Today, Fishermans Bend is a focus of attention as a significant urban renewal project.
This social history supports the Fishermans Bend Framework. It is not chronological; it explores eight themes, each designed to illuminate an aspect of the interwoven stories that meld people, place and time together. Fishermans Bend is a place of stories, people and communities. It is a place of resilience and self-determination.
Image: Detail from a plan by Sir John Coode (1879), showing the sharp northern bend in the Yarra, marked here as 'Fishermans Bend', which is to be replaced by the new canal (source: Map Collection, University of Melbourne)
Equally, it is an evocative watery landscape of swamps and sea, wind and sand, sitting right on the edge of 'Nerm' or Port Phillip Bay. Today, it reads as a landscape of industry. And yet all those past landscapes and peoples can still be imagined here; they still exist in the stories of Fishermans Bend.
This social history is deliberately concise. It is like a sketch of possibilities that might be explored by others in the future. To this end, a companion volume to this social history provides a guide to history resources that are available to researchers and interested community members. And as part of the Fishermans Bend Framework, this social history will provide a touchstone for future place-making and interpretive initiatives.
<urn:uuid:d4ea7140-b578-4460-acab-e788e6fa919e>
CC-MAIN-2023-40
https://www.fishermansbend.vic.gov.au/social-history
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506420.84/warc/CC-MAIN-20230922134342-20230922164342-00897.warc.gz
en
0.917411
411
2.71875
3
Amidst the throes of the Philippine-American War, American soldiers opened the first school in Corregidor, initiating a comprehensive system of education; following Japanese surrender, the U.S.-led occupation commenced its educational reform. Campaigns of Knowledge argues that the creation of a suitable pedagogical subject through schooling was a major technology of U.S. power. U.S. educational policies in the colonial Philippines and occupied Japan were contrasting projects of Orientalist racial management: Filipinos were "little brown brothers" to be uplifted and deemed fit for industrial education; the Japanese had to be decivilized and re-educated. Literary, filmic, and autobiographical works have registered these programs of subjectification through a complex interplay of assent and defiance, questioning the ubiquity, and yet the persistence, of US pedagogical biopolitics. Contrapuntally viewing colonial archives alongside native textbooks, novels, films, and autobiographies, Campaigns of Knowledge highlights the tension between the ideal subjects scripted by colonial pedagogy and the complex and uneven materialization of this pedagogy in cultural texts.
<urn:uuid:21b2093e-d732-4c7e-a607-b9f9fcbc7152>
CC-MAIN-2020-16
https://english.ufl.edu/campaigns-of-knowledge-us-pedagogies-of-colonialism-and-occupation-in-the-philippines-and-japan/
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500482.27/warc/CC-MAIN-20200331115844-20200331145844-00291.warc.gz
en
0.939445
222
2.75
3
A Quote by Albert Einstein on notion, perception, reality, and science The belief in an external world independent of the perceiving subject is the basis of all natural science. Since, however, sense perception only gives information of this external world or of "physical reality" indirectly, we can only grasp the latter by speculative means. It follows from this that our notions of physical reality can never be final. We must always be ready to change these notions - that is to say, the axiomatic basis of physics - in order to do justice to perceived facts in the most perfect way. Source: Systemic Intervention: Philosophy, Methodology, and Practice Contributed by: ingebrita
<urn:uuid:16041608-9a2a-488a-b230-b4fc404a8d06>
CC-MAIN-2014-23
http://blog.gaiam.com/quotes/authors/albert-einstein/63504
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267330.29/warc/CC-MAIN-20140728011747-00363-ip-10-146-231-18.ec2.internal.warc.gz
en
0.925357
141
2.546875
3
- Statistics Background - Parametric Analysis - Nonparametric Analysis - Categorical Analysis - Principal Component Analysis - Multiple Regression - Logistic Regression - An exceptionally student-focused coverage of statistics for data analytics - Traditionally-hard topics are made learnable via hundreds of animations and learning questions - Included background enables all students to succeed - Commonly combined with “Fundamentals of Data Analytics”; numerous configurations possible The zyBooks Approach Data analytics is one of the fastest growing subjects today and is useful in nearly all fields. The subject’s topics, with their underlying statistics, often pose difficulty for students. This zyBook represents entirely new material created specifically to help students master the subject. Written natively for the modern web, the zyBook teaches through hundreds of animations and learning questions in addition to concise, lucid text and figures. The zyBook introduces intermediate techniques for data analytics, including non-parametric techniques, categorical analysis, principal component analysis, multiple regression, and logistic regression. The background statistics and parametric analysis chapters help students hit the ground running, even if they haven’t taken a statistics course in years. Instructors can see student activity completion, can reconfigure the topics, and can even combine with other zyBooks. A common combination is with our “Fundamentals of Data Analytics” zyBook that covers introductory topics. Both zyBooks are appropriate for undergraduate and graduate courses.
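As a concrete illustration of one technique on this list, here is a minimal logistic-regression sketch in Python with scikit-learn on a bundled toy dataset. It is not taken from the zyBook itself, which is interactive and not tied to any particular language or library.

```python
# Minimal logistic-regression example (illustrative only; not zyBooks material).
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)            # toy binary-classification data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scale the features, then fit a plain logistic-regression classifier.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```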
<urn:uuid:9749c78e-f9fb-4c99-83ae-a3a9b9095e25>
CC-MAIN-2017-30
https://www.zybooks.com/catalog/statistics-for-data-analytics/
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549428325.70/warc/CC-MAIN-20170727162531-20170727182531-00524.warc.gz
en
0.8893
314
2.703125
3
Antigua Observer: The Pan American Health Organization (PAHO) has called on Caribbean countries to take action and make the necessary investments to make their health systems stronger and more resilient. “Preparedness requires more than emergency plans and simulation exercises,” said PAHO director, Dominican Dr Carissa F Etienne, in addressing the 4th Global Symposium on Health Systems Research here. “It means strengthening core aspects of health systems, from human resources and access to medicines, to health information systems and even legal measures to support public health action.” Etienne’s remarks were made before an audience of more than 2,000 experts at the symposium that was co-sponsored by Health Systems Global, PAHO, the World Health Organization, the Alliance for Health Policy and Systems Research, the Canadian Society for International Health, Canada’s International Development Research Centre and the Canadian Institutes of Health Research. Investing in health systems resilience is “considerably more cost-effective” than financing emergency response and is likely to better protect people’s health and wellbeing in both emergencies and normal times, said Etienne. “Fragile health systems increase the vulnerability of populations to external risks that impact health and well-being, health protection, and ultimately social and economic development,” she said. “Again and again we see this, through epidemics of H1N1 influenza, Chikungunya and Zika virus; through earthquakes in Chile and Ecuador; hurricanes in Haiti and the Bahamas; and through the effects of climate change on health.”
<urn:uuid:8ee96d7c-d74f-4a3b-ac57-0fcae056428d>
CC-MAIN-2017-26
https://stluciatimes.com/2016/11/21/paho-wants-stronger-caribbean-health-systems
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319902.52/warc/CC-MAIN-20170622201826-20170622221826-00099.warc.gz
en
0.936732
328
2.53125
3
An open economy is an economy that engages in international trade and allows for the free flow of capital, goods, services, and labor across its borders. This type of economy is in contrast with a closed economy, which does not participate in international trade and restricts the flow of capital and labor across its borders. In an open economy, governments typically pursue policies that promote the efficient use of resources and create a stable macroeconomic environment. These policies can include the use of fiscal, monetary, and trade policies to achieve the desired economic outcomes. Open economies are beneficial for both domestic and international economic growth, as they provide access to a larger market, lower costs, and improved efficiency. The increased access to resources, technology, and labor can lead to increased competition, increased productivity, and improved economic performance. Furthermore, open economies allow countries to diversify their economic activities, reducing the risk of economic shocks and providing more opportunities for growth and development. Overall, an open economy offers numerous advantages, such as increased access to resources, technology, and labor, improved efficiency, increased competition, and increased economic growth. However, it is important to note that open economies also come with risks, such as increased exposure to international economic volatility. Therefore, it is important for governments to carefully consider the benefits and risks associated with open economies before implementing them. Example of Open economy - India: India is a prime example of an open economy. In recent years, the country has implemented a number of reforms to open up its economy, such as liberalizing foreign direct investment, introducing the Goods and Services Tax, and reducing tariffs. As a result, India has seen an influx of foreign investment, increased exports and imports, and improved economic growth. When to use Open economy Open economies can be beneficial for both domestic and international economic growth. Here are a few situations in which open economies can be useful: - When a country has limited resources and needs access to external resources, an open economy can provide access to a larger market and lower costs. - When a country needs to diversify its economic activities, an open economy can provide access to additional markets and opportunities. - When a country needs to increase its productivity and efficiency, an open economy can provide access to new technologies and labor. - When a country wants to reduce the risk of economic shocks, an open economy can provide stability. Types of Open economy - Fixed exchange rate regime: This type of open economy is characterized by a fixed exchange rate between the domestic and foreign currency. In this system, the government or central bank sets a fixed rate of exchange and maintains it through a variety of policies, such as the purchase and sale of foreign currency. This system can provide a stable macroeconomic environment, but it can also be difficult to maintain and can lead to currency devaluation if it is not managed properly. - Floating exchange rate regime: This type of open economy is characterized by a freely floating exchange rate between the domestic and foreign currencies. In this system, the exchange rate is determined by the market forces of supply and demand. 
This system can provide more flexibility and is better able to adjust to changing economic conditions, but it can also lead to increased volatility and macroeconomic instability. Advantages of Open economy - Increased Access to Resources: Open economies allow countries to access resources, technology, and labor that may not otherwise be available domestically. This access can lead to increased productivity, improved efficiency, and increased economic growth. - Increased Competition: Open economies promote competition, as countries have access to a larger market. This increased competition can lead to improved quality and increased innovation, resulting in improved economic performance. - Increased Economic Growth: Open economies can lead to increased economic growth due to increased access to resources, technology, and labor, improved efficiency, and increased competition. Limitations of Open economy - Increased exposure to international economic volatility: Open economies are more exposed to international economic volatility, as they are subject to the global economic environment. This can lead to significant fluctuations in the domestic economy, due to changes in the global economy or changes in the exchange rate of the domestic currency. - Increased competition: Open economies are subject to increased competition from foreign firms. This can lead to increased prices and reduced profit margins, as domestic firms have to compete with foreign firms that may have lower production costs or access to cheaper inputs. - Increased risk of capital flight: Open economies are more vulnerable to capital flight, which occurs when investors move their capital out of the domestic economy in search of higher returns elsewhere. This can lead to a decrease in investment and economic growth, as well as a decrease in the value of the domestic currency. - Economic Liberalization: Economic liberalization is the process of reducing government intervention in the economy and allowing markets to operate on their own. This approach is often taken by countries looking to open up their economies and attract investment and foreign trade. - Privatization: Privatization is the process of transferring ownership and control of public enterprises to the private sector. This approach is often used in open economies to increase efficiency and reduce the costs of running public services. - Free Trade Agreements: Free trade agreements are agreements between two or more countries to reduce or eliminate trade barriers, such as tariffs, quotas, and subsidies. These agreements are often seen as a way to promote economic growth and development in open economies. Overall, there are various approaches that a government can take to promote an open economy. These approaches include economic liberalization, privatization, and free trade agreements. Each of these approaches has its own benefits and drawbacks, and governments must carefully consider these when deciding which approach to take.
<urn:uuid:6a545576-5c07-4b46-ad73-1e3372060c85>
CC-MAIN-2023-40
https://ceopedia.org/index.php/Open_economy
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506320.28/warc/CC-MAIN-20230922002008-20230922032008-00727.warc.gz
en
0.954299
1,156
4.1875
4
For instruments that play chordal accompaniments, this is an incredibly useful skill. - You do not have to learn to read music to be able to do this, but it is very helpful to know a little bit about music theory so that you can predict which chords are most likely to happen in a song. Try starting with Beginning Harmonic Analysis (Section 5.5). - Really listen to the chord progressions of the songs you do know. What do they sound like? Play the same progressions in different keys and listen to how that does and also does not change the sound of the progression (see the transposition sketch after this list). Change the bass notes of the chords to see how that changes the sound of the progression to your ears. Change fingerings and chord voicings, and again listen carefully to how that changes the sound to your ears. - Practice figuring out the chords to familiar songs (that you don't know the chords to). For songs that you do know the chords to, try playing them in an unfamiliar key, or see if you can change or add chords to make a new harmony that still fits the melody. - A teacher who understands harmony can help tremendously with this particular skill. Even if you don't normally take lessons, you might want to consider having a series of lessons on this. Find a teacher who is willing and able to teach you specifically about harmony and typical chord progressions.
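As a rough illustration of "playing the same progression in different keys", here is a small Python sketch that transposes a chord progression by a number of semitones using pitch classes. The chord names and the example progression are chosen here for illustration and are not part of the original lesson; flats are not handled in this minimal version.

```python
# Transpose a chord progression by semitones (sharps only, for brevity).
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose_chord(chord: str, semitones: int) -> str:
    # Split the root (e.g. "C" or "C#") from the quality (e.g. "m", "7").
    root = chord[:2] if len(chord) > 1 and chord[1] == "#" else chord[:1]
    quality = chord[len(root):]
    new_root = NOTES[(NOTES.index(root) + semitones) % 12]
    return new_root + quality

progression = ["C", "Am", "F", "G"]                     # I-vi-IV-V in C major
print([transpose_chord(c, 2) for c in progression])     # up a whole step: ['D', 'Bm', 'G', 'A']
```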
<urn:uuid:2555717a-348e-444c-84fc-a3274a461d3d>
CC-MAIN-2020-29
http://www.opentextbooks.org.hk/ditatopic/2368
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655892516.24/warc/CC-MAIN-20200707111607-20200707141607-00148.warc.gz
en
0.952815
281
3.578125
4
Inflammatory Bowel Disease (IBD) is a group of disorders that cause swelling and inflammation in the intestines, affecting as many as 3 million Americans. Often confused with Irritable Bowel Syndrome (IBS), IBD refers to two chronic diseases: Ulcerative Colitis and Crohn’s Disease. While Crohn’s Disease and Ulcerative Colitis have common features, there are important differences. Ulcerative Colitis affects the lining of the large intestine (colon), causing it to become inflamed and develop ulcers. Crohn’s Disease, on the other hand, can affect any part of the gastrointestinal tract and causes inflammation that extends deeper into the intestinal wall than with Ulcerative Colitis. Crohn’s Disease is a disease that causes inflammation or swelling, and irritation of any part of the digestive tract—also called the gastrointestinal (GI) tract. The part most commonly affected is the end part of the small intestine, called the ileum. While Ulcerative Colitis tends to affect only the lining of the bowel, Crohn’s Disease typically involves the entire bowel wall. Symptoms of IBD can include: - Mild to severe diarrhea - Abdominal pain - Rectal bleeding, sometimes leading to anemia - Weight loss, dehydration & malnutrition - Possible delayed development & stunted growth in children - Certain types of arthritis & skin disorders What causes IBD? The cause of Inflammatory Bowel Disease (IBD) is unknown, though many factors may be involved, including diet, environment and genetics. The common pathway is inflammation of the lining of the intestinal tract, but the event that activates the body’s immune response has yet to be identified. Evidence suggests that genetic defects may affect how the immune system is switched on and off in response to bacteria, a virus or certain food proteins. Diagnosis is the first step to getting relief. Even with years of damage to the bowel, some Inflammatory Bowel Disease (IBD) patients have no symptoms. When symptoms are present, they can mimic other disorders. For these reasons, diagnosing IBD is challenging. Upper and Lower Endoscopies permit direct visualization of the digestive anatomy and are the tools most commonly used for evaluating symptoms suggestive of IBD. Of course, medical history and other tests are also vital to the diagnosis. Follow-up is also important, as IBD, in particular Ulcerative Colitis, carries an increased risk of Colon Cancer. The board-certified physicians at Suburban Gastroenterology have the skill and knowledge to differentiate the symptoms and determine if your problem is IBD or something else. Whatever the diagnosis, our specialists have not only the know-how to treat your disorder, but the dedication to help you live a more normal and rewarding life. For the personalized, experienced and results-focused care you deserve, call Suburban Gastroenterology today: (630) 527-6450 or make an Appointment online.
<urn:uuid:e89dde83-404e-405e-b297-959597ef8d5b>
CC-MAIN-2023-14
https://www.mechealth.com/inflammatory-bowel-disease-ibd/
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945315.31/warc/CC-MAIN-20230325033306-20230325063306-00117.warc.gz
en
0.922563
639
3.625
4
Launch of the Soviet Union’s Sputnik satellite in 1957 guided people’s eyes toward the nighttime sky. Within a short time, many satellites were in earth’s orbit and the U.S. space agency was working with amateurs to monitor the objects visually. These Moonwatch observers used small telescopes to monitor satellites. An active and successful Moonwatch club emerged in Terre Haute operated by the Terre Haute Astronomical Society. The group operated out of a garage at Allis-Chalmers, a local manufacturing company, for its first three years. In 1960, the group approached Rose Polytechnic Institute about hosting the Moonwatch program. The Terre Haute astronomical Society Moonwatch station at Allis-Chalmers circa 1959. The Rose Poly Board of Managers responded positively to the request and, thanks to a large gift from the Estate of Lynn Reeder, a 1915 civil engineering alumnus, soon had the money needed to build what would become the Lynn H. Reeder laboratory and observatory. The facility was on the west side of campus and operated until its demolition in 2000. The Reeder Lab and the observatory as they appeared from about 1973 until they were demolished in April of 2000. A 1988 campus master plan called for the elimination of the Reeder Lab. However, thanks to the efforts of Professor Richard Ditteon, the Student Government Association, generous alumni and others, astronomy gained in popularity and strength at Rose-Hulman in the 1990s. Improved equipment made doing research much easier and soon it was hoped a new location for the observatory could be found—far from the lights of new residence halls being constructed on the west side of campus. A Fecker telescope in use at the Reeder Lab and Observatory. Photo from the 1978 Modulus. - A generous gift of $500,000 from the Oakley Foundation made the Oakley Observatory possible. Ground was broken for the new facility in 1999 far from the campus lights. Near the observatory is a new Lynn Reeder lab, which provides classroom space, a computer lab, kitchen and restroom. - Rose-Hulman is proud of the important research work taking place in the Oakley Observatory. Many previously unidentified asteroids have been documented from its telescopes. We are also pleased to host star parties for the local community and local K-12 students. - For a more detailed history by Professor Richard Ditteon, please click here.
<urn:uuid:9d507403-1c74-486f-9e56-2c469854d395>
CC-MAIN-2017-30
http://www.rose-hulman.edu/academics/learning-and-research-facilities/oakley-observatory/history.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424884.51/warc/CC-MAIN-20170724142232-20170724162232-00008.warc.gz
en
0.952056
507
3
3
Inversive geometry and involutory quandles All the points that can be constructed with straightedge and compasses can be constructed with compasses alone. That means that straightedges are only necessary for the actual drawing of lines. One would not want to dispense with straightedges, however, since the constructions with compasses alone are much more complicated. The geometry of compasses was developed independently by G. Mohr in Denmark in 1672, and by L. Mascheroni in Italy in 1797. The easiest way, however, to show that compasses are sufficient depends on circle inversion, which wasn't invented until 1828 by Jacob Steiner. - R. Courant and H.E. Robbins, What is Mathematics? Oxford Univ. Pr., New York, 1953. - H.S.M. Coxeter, Introduction to Geometry, Wiley, New York, 1961. - Euclid, Elements. - D. Pedoe, Circles, Dover, New York, 1957. April, 1998; March, 2002. David E. Joyce Department of Mathematics and Computer Science Worcester, MA 01610
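For context (this note is not part of Joyce's original page): inversion in a circle with centre O and radius r sends each point P, other than O, to the point P' on the ray from O through P satisfying OP · OP' = r². In symbols:

```latex
% Inversion in a circle with centre O and radius r (added for context).
\[
  P \;\longmapsto\; P', \qquad P' \in \overrightarrow{OP}, \qquad
  |OP|\cdot|OP'| = r^{2}.
\]
% In Cartesian coordinates with O at the origin:
\[
  (x, y) \;\longmapsto\; \frac{r^{2}}{x^{2}+y^{2}}\,(x, y), \qquad (x, y) \neq (0, 0).
\]
```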
<urn:uuid:05d0c9f3-1280-475b-9f22-8d6d6aa2e684>
CC-MAIN-2014-10
http://aleph0.clarku.edu/~djoyce/java/compass/
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999652934/warc/CC-MAIN-20140305060732-00087-ip-10-183-142-35.ec2.internal.warc.gz
en
0.872597
242
3.578125
4
Often when we have a phobia, it can't be traced to a specific event. Unless you got stuck somewhere small as a child you might not have a good reason to be claustrophobic, but try using that logic when you get in a stuffy elevator and start to feel the panic rising. Well, some research has pointed to the idea that phobias may actually be memories from our ancestors that have been passed down through our actual DNA. As far-fetched as it sounds, it makes a lot of sense. In the same way that the genes that affect our physical appearance alter over time to support us, something similar might occur in our emotional makeup as well. An example of this is how different skin colors in different areas may provide different defenses against the climate they were originally found in. Reason would then follow that if your great-grandfather almost died of a bee sting, he might want you to be wary of the insect as well. The study that has tested this concept looked to mice for some answers. They found that a group of mice exposed to the smell of cherry blossom while being given an electric shock would quickly learn to associate the smell with danger. When those mice were bred and had babies, the younger generation of mice grew up to be just as scared of the scent of cherry blossoms despite never having been exposed to the smell before or to the electric shocks while they were smelling them. Interestingly, the next generation of mice responded the same way, so it appears that the phobia can get passed down as long as it exists in the mice's bodies. Since the mice aren't physically speaking to their children to warn them of the cherry blossom dangers, the idea behind this is that something actually changes in their DNA that serves as a tool to pass on the fear. If they feel a strong enough sense of danger from something, their body wants to pass it on. In humans, this means that not only may general phobias be influenced by the DNA you were given, but anxiety and post-traumatic stress disorder could be influenced by it as well. If you feel like you inherited your father's case of anxiety, you literally could have. Paying attention to this sort of study could begin to change the way we approach things like fears and anxieties. Sometimes trying to get to the emotional root of the cause (as in therapy) might not be helpful since there might not actually be a real cause from your life. If we can figure out where an inherited phobia comes from, maybe we can learn to release the fears that don't belong to us.
<urn:uuid:e1376ab6-5493-4bfb-86dd-79bb379d625d>
CC-MAIN-2017-47
http://natureshealthwatch.com/are-phobias-genetically-passed-down-memories/
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806426.72/warc/CC-MAIN-20171121204652-20171121224652-00108.warc.gz
en
0.980418
542
3.0625
3
Belize: Coral Reef Monitoring (Snorkeling) Work side-by-side with researchers to monitor the health of Turneffe Atoll's diverse coral reefs. Volunteer on this week-long program, and work side-by-side with our researchers to collect data on Belize's outstanding coral reefs. Participants stay at our Blackbird Caye Field Station on the eastern edge of Turneffe Atoll and make daily excursions to conduct reef monitoring. Only snorkeling experience is necessary - participants are trained in research methods and use of equipment. Belize boasts the largest and most biologically diverse barrier reef in the Western Hemisphere. It is part of the larger Mesoamerican Reef ecosystem which has been classified as a Hope Spot and been the subject of recent coverage by National Geographic Magazine for its exceptional biodiversity. Belize's reefs are still relatively healthy, yet the worldwide decline of reef ecosystems is of utmost concern. Our research aims to monitor reefs around Turneffe Atoll to detect changes in reef health and inform marine management efforts. Oceanic Society, in cooperation with the Belize Coastal Zone Management Authority and Belize Fisheries, has initiated a coral reef monitoring plan to collect basic ecological data on reef and seagrass habitats. Our goal is to answer questions related to coral reef community population, structure, health and viability over time. In addition, participants will study population dynamics of ecologically important reef fish and long-spined sea urchins. Sampling techniques require no specialized equipment such as scuba, and have relatively little lasting impact on local habitats. As a volunteer in this 8-day program, you will assist the coral reef researcher in performing shallow water coral reef transects for a quantitative measure of reef resources. Following defined transect lines, volunteers swim the area to record specific fish types known as bioindicator species. Only snorkeling skills are needed to participate; you can choose from multiple tasks and will be trained in the use of equipment and in sampling techniques. Participants stay at the Oceanic Society field station on Blackbird Caye. Accommodations are in rustic beachfront cabanas that offer double occupancy rooms with private baths. Contact our office to learn more or to register for this program! U.S./Belize City/Blackbird Caye Arrive in Belize City for a 90-minute boat transfer to Blackbird Caye. Evening trip briefing and welcome dinner. Spend seven nights in comfortable beachfront cabanas. Blackbird Caye. Research training session in the morning with a snorkel check-out off the dock and a snorkel trip to nearby shallow reef sites. The reef sites are in the warm waters within the atoll. Day 3 thru 7: Each morning and/or afternoon, under the direction of the researcher, there will be boat surveys to nearby coral reefs to gather data about water quality, reef inhabitants and indicators of reef health. 
Snorkeling opportunities are excellent in the clear warm waters, and some free time will be provided for snorkeling right from the beach. There will be evening presentations on the research project, the natural history of reef inhabitants, and marine ecosystems. In addition, preparations will be made for the following days' activities. Blackbird Caye/Belize City/U.S. Early morning boat transfer back to Belize City in time to transfer to the airport for your flight back to the U.S.
<urn:uuid:e933e777-8303-482f-b1c2-30b93df4e1ba>
CC-MAIN-2014-23
http://oceanic-society.org/trip/research/belize-reefs-snorkeling
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997857714.64/warc/CC-MAIN-20140722025737-00189-ip-10-33-131-23.ec2.internal.warc.gz
en
0.885015
869
2.578125
3
Connecting Your Mind And Body For Improved Health There’s always an ongoing conversation that goes back and forth between the mind and body. Your body directly affects your thought process, and vice versa. Here’s another way to think about it: the way you move and interact with your surroundings molds how you feel, think, and behave. You might be surprised to know that this connection starts early in life. In fact, the earlier young children become mobile and reach cognitive milestones, the faster they develop and maintain their mental health. And it continues as we age. Exercise and physical activity in adults help promote healthy physical and mental aging. When you work out, you keep your body fit, and you stimulate your mind. So, it’s a win-win! What Is The Mind-Body Connection? The mind-body connection is the intertwining of both the mind and body. This connection is so powerful that your body may experience a physical response, like nausea, crying, or a stress-induced headache if you think of something. While this physical response may not be one of your favorite things, it’s basically why you and your ancestors have managed to survive up until now. In other words, the mind-body connection is why you’re alive today. How Does The Mind-Body Connection Work? There are four primary parts in your brain that have a direct effect on the mind-body connection. The first is the emotional cortex, which is responsible for dealing with your emotions. The second part is the hippocampus, which deals with how you consolidate your memories. Then, you have the prefrontal cortex that allows you to strategize and decide what to do. Finally, the amygdala is what controls your fight-or-flight response. Your brain turns on this response when your body feels there’s an external threat. As a result, it releases large doses of cortisol, the stress hormone. In times of danger or trauma, this hormone signals your lungs and heart to make you breathe faster. Not only that, but they pump your muscles full of adrenaline. This is what helps you either escape danger or fight your way to safety. The amazing thing is that each physical symptom you experience is also something you feel emotionally, and vice versa. So, for example, if you sprain your ankle, the physical pain can also be accompanied by a sense of anger or sadness. On the other hand, if you experience a panic attack, you feel a tightness in your chest, nausea, and just achy all over. What Are The Benefits Of The Mind-Body Connection? The Dalai Lama XIV once said, “If the mind is tranquil and occupied with positive thoughts, the body will not easily fall prey to disease.” Science has proven this relationship because so many of our emotions and thoughts are in constant communication. Everything from the immune, endocrine, and peripheral nervous systems, many of our organs, and all our emotional responses share common chemicals that go back and forth. Now, let’s look at why this connection is so important and how you can use it to improve your overall lifestyle. 1. Boost Attentiveness Knowing how the mind and body connect encourages you to pay more attention to your thoughts and emotions. Hence, you can use this connection to your advantage. For example, if you’re not doing well emotionally, your body will give off specific cues. If you’re in tune with them, you’ll know exactly how to react before you become too overwhelmed. One of the best ways to do this is by being mindful of your thought patterns and how you talk to yourself. 
Start to focus on when negative thoughts come into your mind and why some negative self-talk starts the way it does. By knowing that, you can give yourself a chance to stop them before they escalate and become too much to handle. Once you do that, you'll end up dealing with emotions in a healthier way, which allows for fewer physical setbacks. 2. Learn to Release Emotions Knowing how you feel and what triggers bring on certain thought patterns can go a long way in helping you release pent-up or negative feelings. For starters, you become better at finding activities that affect both your mind and body, like learning various breathing techniques, positive visualizations, yoga, and much more. As a result, you begin to know what your body is feeling and find the best ways to calm your mind. This comes in extremely handy if you're prone to stress, anxiety, and depression. 3. Develop Healthy Habits When you're able to deal with difficult emotions, you become more in control of your thought patterns. Thus, your overall well-being gets a nice boost. For example, you pay better attention to your needs. So, you don't get easily dragged into drinking, drugs, or binge eating when you're in a bad mood. Instead, you engage in different physical activities to release the pent-up emotions. You also start to eat and sleep better, hydrate more, and manage to stay consistent throughout. That's when you'll finally realize that you've made a conscious effort to develop healthy habits by living a balanced lifestyle. Over time, maintaining mental and emotional stability will become a way of life. Cheers, Helene Malmsio Related Reading: https://www.discoveryhub.net/how-to-balance-mind-body-spirit.html
<urn:uuid:9c80c49f-a72c-4042-8bb2-c185c976ad56>
CC-MAIN-2023-40
https://www.discoveryhub.net/connecting-your-mind-and-body-for-improved-health.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510214.81/warc/CC-MAIN-20230926143354-20230926173354-00819.warc.gz
en
0.929221
1,485
3.3125
3
Source: The Conversation (Au and NZ) – By David Brynn Hibbert, Emeritus Professor of Analytical Chemistry, UNSW We measure stuff all the time – how long, how heavy, how hot, and so on – because we need to for things such as trade, health and knowledge. But making sure our measurements compare apples with apples has been a challenge: how to know if my kilogram weight or metre length is the same as yours. You won’t notice anything – you will not be heavier or lighter than yesterday – because the transition has been made to be seamless. Read more: Explainer: what is mass? Just the definitions of the seven base units of the SI (Système International d’Unités, or the International System of Units) are now completely different from yesterday. How we used to measure Humans have always been able to count, but as we evolved we quickly moved to measuring lengths, weights and time. The Egyptian Pharaohs caused pyramids to be built based on the length of the royal forearm, known as the Royal Cubit. This was kept and promulgated by engineer priests who maintained the standard under pain of death. But the cubit wasn’t a fixed unit over time – it was about half a metre, plus or minus a few tens of centimetres by today’s measure. The first suggestion of a universal set of decimal measures was made by John Wilkins, in 1668, then Secretary of the Royal Society in London. The impetus for doing something practical came with the French Revolution. It was the French who defined the first standards of length and mass, with two platinum standards representing the metre and the kilogram on June 22, 1799, in the Archives de la République in Paris. Scientists backed the idea, the German mathematician Carl Friedrich Gauss being particularly keen. Representatives of 17 nations came together to create the International System of Units by signing the Metre Convention treaty on May 20, 1875. France, whose street cred had taken a battering in the Franco-Prussian war and was not the scientific power it once was, offered a beaten-up chateau in the Forest of Saint-Cloud as an international home for the new system. The Pavilion de Breteuil still houses the Bureau International de Poids et Mesures (BIPM), where resides the International Prototype of the Kilogram (henceforth the Big K) in two safes and three glass bell jars. The Big K is a polished block of platinum-iridium used to define the kilogram, against which all kilogram weights are ultimately measured. (The original has only been weighed three times against a number of near-identical copies.) The British, who had been prominent in the discussions and had provided the platinum-iridium kilogram, refused to sign the Treaty until 1884. Even then the new system was only used by scientists, with everyday life being measured in traditional Imperial units such as pounds and ounces, feet and inches. The United States signed the Treaty on the day, but then never actually implemented it, hanging on to its own version of the British Imperial system, which it still mostly uses today. The US may have rued that decision in 1999, however, when the Mars Climate Orbiter (MCO) went missing in action. The report into the incident, quaintly called a “mishap” (which cost US$193.1 million in 1999), said: […] the root cause for the loss of the MCO spacecraft was the failure to use metric units in the coding of a ground software file, “Small Forces”, used in trajectory models. Essentially the spacecraft was lost in the atmosphere of Mars as it entered orbit lower than planned. 
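To make the unit mix-up concrete, here is a hypothetical Python sketch. It is not NASA's actual code and the impulse value is invented; it only illustrates the kind of conversion, from pound-force seconds to newton-seconds, that the ground software omitted.

```python
# Hypothetical illustration of the Mars Climate Orbiter unit mismatch.
# The ground software supplied small-forces data in pound-force seconds (lbf*s)
# while the trajectory software expected newton-seconds (N*s).
LBF_TO_NEWTON = 4.4482216152605   # 1 lbf expressed in newtons

def impulse_si(impulse_lbf_s: float) -> float:
    """Convert an impulse from lbf*s to N*s (the step that was missing)."""
    return impulse_lbf_s * LBF_TO_NEWTON

raw_value = 1.0                    # made-up impulse reported in lbf*s
print("interpreted as N*s (wrong):", raw_value)
print("converted to N*s (right): ", impulse_si(raw_value))
# The two differ by a factor of about 4.45, enough to accumulate into a large
# trajectory error over many small thruster firings.
```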
The new SI definitions So why the change today? The main problems with the previous definitions were that the kilogram was not stable and that the ampere, the unit of electric current, could not be realised. And from weighings against official copies, we think the Big K was slowly losing mass. All the units are now defined in a common way using what the BIPM calls the “explicit constant” formulation. The idea is that we take a universal constant – for example, the speed of light in a vacuum – and from now on fix its numerical value at our best-measured value, without uncertainty. Reality is fixed, the number is fixed, and so the units are now defined. We therefore needed to find seven constants and make sure all measurements are consistent, within measurement uncertainty, and then start the countdown to today. (All the technical details are available here.) Australia had a hand in fashioning the roundest macroscopic object on the Earth, a silicon sphere used to measure the Avogadro constant, the number of entities in a fixed amount of substance. This now defines the SI unit, mole, used largely in chemistry. From standard to artefact What of the Big K – the standard kilogram? Today it becomes an object of great historical significance that can be weighed, and whose mass will now carry measurement uncertainty. From today the kilogram is defined using the Planck constant, a quantity from quantum physics that does not change. The challenge now though is to explain these new definitions to people – especially non-scientists – so they understand. Comparing a kilogram to a metal block is easy. Technically a kilogram (kg) is now defined: […] by taking the fixed numerical value of the Planck constant h to be 6.626 070 15 × 10⁻³⁴ when expressed in the unit J s, which is equal to kg m² s⁻¹, where the metre and the second are defined in terms of c and Δν_Cs. Try explaining that to someone! – ref. The way we define kilograms, metres and seconds changes today – http://theconversation.com/the-way-we-define-kilograms-metres-and-seconds-changes-today-117255
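One way to unpack the quoted definition: with the metre and second already fixed through c and Δν_Cs, fixing the numerical value of h pins down the kilogram. Rearranged (this is simply the quoted definition solved for the kilogram, not an additional official statement):

```latex
% The 2019 definition rearranged for the kilogram. h is exact by definition;
% the metre and second are fixed beforehand via c and the caesium frequency.
\[
  h = 6.626\,070\,15\times 10^{-34}\ \mathrm{kg\,m^{2}\,s^{-1}}
  \quad\Longrightarrow\quad
  1\ \mathrm{kg} = \frac{h}{6.626\,070\,15\times 10^{-34}\ \mathrm{m^{2}\,s^{-1}}}.
\]
```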
<urn:uuid:7154c9c1-a5b8-4ed8-a419-ed0cdf18a733>
CC-MAIN-2020-29
https://eveningreport.nz/2019/05/20/the-way-we-define-kilograms-metres-and-seconds-changes-today-117255/
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655894904.17/warc/CC-MAIN-20200707173839-20200707203839-00311.warc.gz
en
0.938614
1,275
3.515625
4
Human Beings and their bipedal primate ancestors have been walking for more than 4 million years. Although they may have used some form of foot covering, these did not evolve into the appliance that we think of as a shoe until perhaps 150 years ago. Consequently, it is easy to assume that the structure and function of the foot is a refined mechanism for connecting directly with the ground and that shoes are a compensative innovation for having to spend our days walking on cobblestones or concrete. Shoes can interfere with expressing native foot functions and therefore exercising the innate mechanisms discussed in this section is best-done barefoot1. There is a deeper discussion on this topic in the section on footwear. Native foot functions require a firm grip with the ground. Bodyweight is the opposing force to the action of these functions2, and when the foot slips they misfire to some degree. The sensory mechanisms in the foot are based on this grip, and when the sensory input to the function is diminished or absent, the function is compromised. This is even true for wearing socks, as they do not enable the prerequisite traction. Without adequate sensory input, the functions will atrophy which then requires that we replace innate function (shock absorption/structure support/energy return) with appliances such as cushioned insoles, orthotics, or even surgery. Unfortunately, our modern world does not offer many opportunities for safe and socially acceptable barefoot walking. One of the few places is on beach sand, which is suboptimal for this type of exercise. Around the house, on hardwood or another surface with good traction is perhaps the only place where you can get some practice in on a daily basis. Combined with accommodating footwear1, this may be adequate to keep the feet happy. A designed environment that supports natural foot function would include rough floors (e.g. slate) that encourage the full range of response of the feet. 1 A description of this author’s opinion of appropriate footwear can be perused in the section on FOOTWEAR 2 The sections on Tensegrity and Gravity explore how aligning bodyweight optimizes load-bearing through the feet: We need to crawl before we can walk. Very few of us mastered crawling before we progressed to figuring out how to walk, and there is a benefit in continuing to developing our crawling automaticities to improve how we crawl with “archetypal” crawling. Crawling on other floor exercises are an excellent means to improve our sense of feeling well regulated and to encourage healing. A description of archetypal crawling: The more efficient our crawling, the more fluid our bipedal gait will be. There is a consensus among some developmental physiology oriented therapists that regular crawling can cure almost any ill, as it helps to reintegrate our high-level modern neurophysiology with our more ancient, enabling bringing greater resources to any issue. To assist with the transformation of feet from compensatory to adaptive, use of this product (or similar) on a daily basis while training, and on a continuing basis as needed is strongly recommended: This tool is documented to reduce Plantar Fascitis, Bunions and Hammertoes if used in the recommended protocol. Footwear and our modern lifestyle can greatly degrade foot function. If foot function is compromised there are two options: 1) Add Orthotics or some form of a compensatory mechanism. 
The issue with doing this is that there is a likelihood of progressive degeneration, where more aggressive compensatory mechanisms become necessary. 2) Drive the foot back into better functionality. Option #2 is the purpose of this section on Feet. Driving the foot into a more functional state requires an understanding of what proper foot function is – how to properly use our feet. However, our toes may have become so misaligned that they are unable to engage with foot function as described in this section. If this is the case, it is possible to realign the toes over time using toe separators: These are usually used overnight, although some people like to walk around the house in them too. EXERCISE FOR USING THE OUTSIDE EDGE OF ONE FOOT TO CORRECT BIG TOE ALIGNMENT OF THE OTHER FOOT RUNNING ON FLAT SURFACES Living in a world made from flat surfaces is a wonderful human innovation, but not one without costs. One of the risks of running on flat surfaces is that our running gait becomes too rhythmic (we get into the groove) and the pressure waves rattling around in the body can interfere, creating momentary high-pressure gradients – commonly at articulations where the density of tissue changes. Treadmills have this same hazard of locked-in rhythm and a hard, flat surface. This issue is especially relevant when running on concrete. Because of its high-density, concrete reflects almost all of the energy directed into the ground in stride back into the foot, acting as an acoustic mirror. The high-frequency components of these pressure waves are the most damaging, as our ancestors rarely encountered them and did not need to evolve a strategy to deal with them. Running on dirt tracks or trails is almost the opposite. The rough surface and variability make for a broken rhythm, and the dirt surface absorbs much of the high-frequency component of the foot’s impact with the ground. This is what our Primate lineage evolved to run on, so it makes sense that we are optimized for these conditions. WALKING ON STAIRS Most stairs are not wide enough for all but the smallest feet to make a full connection. When going upstairs, this is not an issue because we can keep the foot in the Gait Line by just using the ball of the foot. Going downstairs if we maintained the Gait Line the ball of the foot would hang over the edge of the stair, which is hazardous. We therefore usually evert the feet in a “duck walk” pattern. Stairs are an excellent example of how our BUILT ENVIRONMENT compromises native biomechanics. The way to address this is to minimize walking down flights of stairs, and to contact the stair with the part of the ball of the foot behind the Big Toe, in essence walking downstairs on the balls of the feet. There is a reframe for the context of stairs here: IN/OUT VS. UP/DOWN This aspect is mentioned here primarily to bring attention to the issue, which in this writer’s understanding is an opportunity for further innovation. However, simply bringing awareness to the issue may help negate its negative repercussions. Discussion of tactics and strategies to aid in training improved foot usage: IMPROVING OUR FEET An overview of foot structural and functional issues and means to optimize how we use our feet:
<urn:uuid:f6420562-b666-4bde-87a8-974c8494913d>
CC-MAIN-2020-29
https://www.dimensionalmastery.us/exercises-for-the-feet/
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890092.28/warc/CC-MAIN-20200706011013-20200706041013-00087.warc.gz
en
0.935037
1,395
3.40625
3
Human teeth are made up of four layers: enamel, dentin, cementum, and pulp. The enamel is mostly naturally white and is the outermost tissue on the tooth. The dentin is beneath the enamel and is yellowish. The cementum covers the tooth root while the pulp is the sensitive soft tissue with nerves and blood vessels. Enamel thinning is a common reason for yellow teeth. There are several options for improving health for yellow teeth and reducing the yellowing. One of these is the use of a good toothpaste designed explicitly for yellow teeth. Visit reviewaz.uk for the most recommended kinds of toothpaste for yellow teeth. Yellow teeth may be caused by habits and factors such as smoking, enamel thinning, certain medical conditions or medications, aging, poor oral hygiene, grinding, and faulty diets. Here are some of the most recommended toothpastes for your yellow teeth. 1. Crest 3D White Brilliance Toothpaste If you are struggling with cavities resulting in unpleasant odors and constant toothaches, then Crest 3D is the toothpaste for you. Crest 3D whitening toothpaste is unique in whitening teeth by removing up to 80% of surface stains, which makes it Crest’s most advanced formula. It also has the capability of protecting teeth from future stains with its 3X stain Fighting Power. The toothpaste contains fluoride, which helps in the fighting of cavities hence stronger and healthier teeth by forming a protective tooth layer. It is safe on the enamel therefore suitable for all persons, including those with susceptible teeth. All you need is a moderate amount of toothpaste to eliminate the dirt and restore your healthy smile. It is a two-step process for tooth whitening; one is brushing with the anti-cavity toothpaste and then a peroxide formula. It has a refreshing mint flavor to keep your breath fresh for longer. It, however, has a thin consistency. 2. Pro Teeth Whitening Toothpaste It is an activated charcoal toothpaste, and the main ingredients are vegetarian and natural. The activated charcoal in the toothpaste facilitates polishing of the tooth surface by eliminating stubborn stains leaving your teeth whiter and your smile brighter. It has a mint flavor that neutralizes the causative bacteria of bad breath in the mouth, leaving your breath fresh all day. Pro Teeth whitening toothpaste lacks fluoride, gluten, and other chemicals and is low in abrasion therefore safe for your enamel. It has naturally derived ingredients that offer protection from stains and bacteria. However, the toothpaste lacks an artificial fragrance; hence may be unsuitable for those who prefer the flavor that remains in the mouth after brushing teeth. 3. David’s Natural Whitening Toothpaste It is an efficient toothpaste to use with dental products for porcelain veneers. The formula lacks fluoride, sulfates, and other abrasive chemicals, which may scratch away the porcelain veneer layer. Xylitol contained in the toothpaste prevents signs of damage and protects the teeth from harm. It is user-friendly. The major con of the product is the packaging tube which causes oozing.
<urn:uuid:15c8b4be-a998-4754-a9b7-397a335c20e3>
CC-MAIN-2023-50
https://blog.emergencydentalservice.com/which-toothpaste-is-best-for-yellow-teeth/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100909.82/warc/CC-MAIN-20231209103523-20231209133523-00186.warc.gz
en
0.926793
665
2.53125
3
This material must not be used for commercial purposes, or in any hospital or medical facility. Failure to comply may result in legal action. WHAT YOU NEED TO KNOW: Bronchopulmonary dysplasia (BPD) is a long-term condition that affects your baby's lungs. BPD is also called chronic lung disease. This condition usually occurs in a premature baby whose lungs are inflamed and damaged. This prevents the baby's lungs from working properly and leads to serious breathing problems. Follow up with your baby's healthcare provider as directed: Your baby may need to return for tests to check how his lungs are working. Write down your questions so you remember to ask them during your visits. - Run a cool mist humidifier. This will help increase air moisture in your baby's room. Follow the humidifier instructions carefully. - Give oxygen as directed. Your baby may need extra oxygen to help him breathe easier. It can be given through a mask over his mouth and nose. It may also be given through small tubes placed in his nose. Ask your baby's healthcare provider about how and when to give extra oxygen at home. - Use a pulse oximeter as directed. A pulse oximeter is a machine that tells how much oxygen is in your baby's blood. A cord with a clip or sticky strip is placed on his earlobe, finger, or toe. The other end of the cord is hooked to a small machine. You may need to use this machine to see if he needs more oxygen. Ask your healthcare provider for more information about a pulse oximeter. Cardiopulmonary resuscitation (CPR): Call 911 immediately, or send someone to call for help. Call 911 before you start CPR. Stay on the telephone with the 911 operator until he tells you to hang up. Begin CPR if your baby is not breathing or is gasping. Continue CPR until he responds or healthcare providers arrive. Remember that CPR on a baby is different from an adult. Ask your healthcare provider for more information on CPR for babies. A dietitian may talk to you about your baby's feeding and nutrition. A dietitian can help you increase the amount of calories your baby gets. During feeding, hold your baby so his head is higher than his stomach. Your baby may become tired easily when feeding. If needed, stop the feeding to allow him to take breaths between sucks on the bottle or breast. Always check for signs of fatigue and any skin color changes. - Do not let anyone smoke around your baby. If you smoke, it is never too late to quit. Your baby is more likely to get lung infections if he breathes in cigarette smoke. Cigarette smoke can also cause breathing problems. Do not let anyone smoke inside your home. Ask your healthcare provider for information if you need help quitting. - Keep your baby away from people who have colds and the flu. This decreases your baby's chance of getting sick or getting an infection. - Wash your hands often. This will help prevent the spread of germs. Encourage everyone in your house to wash their hands with soap and water after they use the bathroom. Wash your hands after you change diapers and before you prepare food or eat. Contact your baby's healthcare provider if: - Your baby has a fever. - Your baby has chills or a cough. - Your baby's skin is swollen or has a rash. - You have any questions or concerns about your baby's condition or care. Seek care immediately or call 911 if: - Your baby has trouble breathing. - Your baby is more sleepy, irritable, or fussy than usual. - Your baby is not able to eat or drink anything for 24 hours. 
- Your baby's skin, lips, or fingernails are pale or blue.
<urn:uuid:0c61faa9-29c6-4116-93d1-47f03ef0f8dd>
CC-MAIN-2017-43
https://www.drugs.com/cg/bronchopulmonary-dysplasia-discharge-care.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824543.20/warc/CC-MAIN-20171021024136-20171021044136-00842.warc.gz
en
0.936576
895
3.203125
3
Small but powerful magnets are becoming an increasing safety risk to children, and now, a new report published in the Lancet discusses two more cases in which U.K. children became ill after ingesting the pieces. Dr. Anil Thomas George of Queen's Medical Centre at Nottingham University Hospitals in the U.K. describes the "widespread availability" of cheap magnetic toys that contain parts that become easily detached and are swallowed by children. "While we understand that it may be impossible to prevent small children from occasionally swallowing objects, we would highlight to parents the potential harm that could arise from multiple magnet ingestion," George said in a statement. "We would advise parents to be more vigilant and take extra care when giving their children toys that may contain magnets small enough to swallow. "We would also welcome an increased awareness of this problem among toy manufacturers, who have a responsibility to alert parents to the presence of magnets in their products," he continued. Incidents of children and teenagers accidentally ingesting high-powered magnets have been on the rise in recent years, Kim Dulic, a spokesperson for the Consumer Product Safety Commission, told ABCNews.com in March. And most of the magnets are so small that it's difficult to notice if one or two go missing in a sofa or on the floor. "The popularity of these products are growing, and it's resulting in an increasing amount of incidents," said Dulic. One incident of ingesting magnets was reported in 2009, seven in 2010 and 14 through October 2011 in the U.S. The ages of these cases ranged from 18 months to 15 years old, and 11 required surgical removal of the magnets. In March, ABCNews.com reported that a 3-year-old Oregon girl consumed 37 Buckyball earth magnets, which punched holes in her stomach and intestines. She, along with most people who consume the magnets, experienced flu-like symptoms within a couple of days of ingesting magnets that have not passed through the digestive system. The availability of toys with small magnetic elements has increased, George wrote in the report. And, since magnetic tongue rings and lip piercings in which two high-powered magnets sit on both sides of the lip or tongue have also become more popular in recent years, teenagers are also at risk, the CPSC warns. Button-size batteries, found in remote controls, toys, calculators and bathroom scales, have also led to accidental ingestions. "The difference between magnets and these batteries is that you can see symptoms within two hours of swallowing them," said Dulic. "It burns the esophagus and it can start soon after." And, while the CPSC created new regulations in 2008 for children's products that contain magnets, the rules do not extend to adult products, which are also known to contain the pieces. "We've found that a lot of teens are getting these at school, so parents should be sure to notify their teens as to what's happening with these products," said Dulic. "They can just be really dangerous." "We believe that improvement in public awareness about this risk will be key in preventing such incidents," said George.
<urn:uuid:1c8e361b-a134-4895-aa9d-dd60f8c67665>
CC-MAIN-2013-48
http://abcnews.go.com/Health/magnetic-ingestion-rise/story?id=16624042
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163051590/warc/CC-MAIN-20131204131731-00042-ip-10-33-133-15.ec2.internal.warc.gz
en
0.973281
647
2.65625
3
Difference between Much and Many

To tell the difference between 'many' and 'much', ask yourself whether the sentence still makes sense if you write a number instead of 'many' or 'much'. Here are some examples:

Many = for countable objects
1. Were there (many, much) people at the party last night?
+ Can I count people, and does it make sense with a number before 'people'?
+ Were there '5' people at the party last night?
+ It makes sense to say that there were 5 people, so use 'many'.

Much = for objects you can't count
1. How (many, much) milk do you want?
+ Can I count milk, and does it make sense with a number before 'milk'?
+ Do you want 2 milk?
+ Putting '2' in front of 'milk' doesn't make sense, so use 'much'.

2. How (many, much) bottles of milk do you want?
+ Can I count bottles, and does it make sense with a number before 'bottles'?
+ Do you want 2 bottles of milk?
+ Putting '2' in front of 'bottles of milk' makes sense, so use 'many', because you can count bottles.

English exercise "Difference between Much and Many" created by anonyme with The test builder.
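If you like to think of a grammar rule as a little procedure, the lesson's "number test" can even be sketched as code. The example below is purely an illustration and is not part of the exercise; its list of uncountable nouns is a tiny made-up sample, not a real dictionary.

    # Illustrative sketch only: a tiny "number test" for choosing much vs. many.
    # The set of uncountable nouns below is a small, hypothetical sample;
    # real English has many more uncountable nouns and plenty of exceptions.
    UNCOUNTABLE = {"milk", "water", "money", "time", "information", "rice"}

    def much_or_many(noun: str) -> str:
        """Return 'much' for uncountable nouns and 'many' for countable ones."""
        # The mental test from the lesson: if "2 <noun>" makes sense, the noun
        # is countable and takes 'many'; otherwise it takes 'much'.
        return "much" if noun.lower() in UNCOUNTABLE else "many"

    print(much_or_many("milk"))     # much
    print(much_or_many("bottles"))  # many
    print(much_or_many("people"))   # many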
<urn:uuid:ec1f699a-2131-465c-9467-b7f3657e2821>
CC-MAIN-2017-30
http://www.tolearnenglish.com/exercises/exercise-english-2/exercise-english-104313.php
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426161.99/warc/CC-MAIN-20170726122153-20170726142153-00286.warc.gz
en
0.925176
411
3.40625
3
JANUARY 16, 2017 MAXIMUM CONTAMINANT LEVEL VIOLATION LETTER FOR TOTAL TRIHALOMETHANES (TTHMs) FOR 3RD AND 4TH QUARTERS OF 2016
Download File: JANUARY 2017 VIOLATION LETTER.pdf
Efforts have been made to spread the word about conserving water, one of our planet's most valuable resources. Happily, conserving water benefits consumers' wallets as well. This conservation effort, however, cannot be realized without the community's support. Here are some water-saving tips that are easy to implement in your daily life:
- Take a short shower instead of a bath. Install a low-flow showerhead, which restricts the flow to less than 3 gallons of water per minute. If a shower is unavailable, reduce the level of water in the bathtub.
- Avoid using hot water when cold water will suffice.
- When brushing teeth, turn the water off until it is time to rinse.
- When purchasing a new home or remodeling a bathroom, purchase a low-volume flush toilet that uses 3.5 gallons of water or less, in contrast to traditional toilets that use double that amount.
- Test the toilet for leaks by adding a few drops of food coloring to the water in the tank.
- Use a toilet tank displacement device, such as a gallon plastic milk jug filled with stones or water, recapped and placed in the tank, which will reduce the amount of water needed to flush.
- Always wait until there is a full load to run the dishwasher.
- Keep a container of cold drinking water in the refrigerator instead of running tap water unnecessarily.
- Water lawns and grass in the a.m. hours. Avoid watering in the heat of the day. This will limit water loss due to evaporation.
If you have discolored water, it is because of minerals in the water. If the water is red, it is iron; if the water is black, it is magnesium. It is still safe to drink. DO NOT do any laundry if your water is discolored. If you have washed, DO NOT dry; the heat from the dryer will stain your laundry. If this happens, call Bridge City City Hall at 409-735-6801 or contact them by e-mail. They have an agent that can be used in the laundry to remove the stains.
City of Bridge City Water Meter Replacement (Automatic Meter Read System)
Starting Monday, March 4, 2013, the City of Bridge City, TX will begin work to replace water meters. Work will be done by workers wearing safety yellow, green or blue shirts marked with the logo "Siemens/PVI Meter Team". Trucks marked with the same logo will be used during meter swap-outs. Meter replacement is anticipated to take approximately 30 minutes each. Commercial meter customers will be notified of water shutoffs in advance. Anyone having special needs for water should notify the Water Department at 409-735-6801. Please pardon any inconvenience during this upgrade to your service.
FREQUENTLY ASKED QUESTIONS:
- Why is my water meter being replaced? Over time, water meters become less accurate and can provide inaccurate readings. By replacing the meters, our City will be able to bill more accurately and efficiently for water usage. In addition, the new system will include automatic meter reading technology that will save labor time and prevent reading errors, thereby minimizing the need for the City to go onto residents' private property. Water line leaks can also potentially be identified earlier through analysis of the collected data.
- Does this mean my bill will be increasing? Not necessarily. In cases where rates remain consistent, the new meters will simply record consumption more accurately.
In some cases, your bill may increase, but only if your current meter is underreporting usage. Presently the majority of residents are paying for the water they are actually using, while a few residents are only paying for a fraction of the water. This condition is not fair to all residents. The City does not intend to make bills retroactive where under-billing has been noted. The new system will ensure fairness and equality for all the residents and businesses in Bridge City from this point forward.
- When will this work be performed? The work will start around Monday, March 4, 2013. The entire project will take approximately four (4) months. In most cases the transition will be completely transparent and will not affect residents. The work will be performed during normal working hours of 8:00 AM – 5:00 PM, Monday – Friday.
- How will this affect my service? A contractor will come to your residence and replace your meter. The water meter will be checked to verify that water is not presently in use. If no water is being used, the meter will be replaced. There will be an interruption of service for approximately 30 minutes during the change, but after that it will be the same great service (and even better) that you've come to expect.
- How long will it take? In most cases, it's a simple procedure that will require about 30 minutes.
- How do I know who is authorized to do the work? We have contracted with Siemens/PVI to conduct this service. They will be driving Siemens/PVI trucks, wearing bright yellow, green or blue shirts identified by "SIEMENS/PVI Meter Replacement Team" and carrying appropriate identification.
- Do they need to come inside my house? No, all meters are located outside of the homes.
- What if I'm on vacation or not available that day? Whom do I call? In most cases it will not be necessary for anyone to be home. The majority of the work will take place near the street in the meter box. If you have any questions or concerns, contact the water billing department at 409-735-6801.
- Why was I not able to turn on my water after the meter was installed? In rare instances, the main cut-off valve to your home may be left off. This will occur when the Siemens/PVI installation team is not able to pressurize your home following the installation. The normal cause of this condition is an inside spigot being opened during the installation and subsequently left open. The water is not turned back on at the meter to ensure a sink or bathtub does not overflow while the resident is not home to turn the inside water off. In these cases, your water will be left off and a door hanger will be left on your front door providing a point of contact to call to have your water turned back on immediately. For assistance, call 409-735-6801, Monday – Friday, 8:00 AM – 5:00 PM. Between 5:00 PM and 8:00 AM, call the Police Dept. at 409-735-5028.
- Is there any special care or maintenance that I need to do to my meter? No, your new meter does not require any maintenance by the homeowner. As before, the City will take care of all maintenance. However, please know that this new meter has transmitting technology on it that gives our billing office a direct connection to your water meter and allows the meter readers to read the meter with a laptop computer. The meter reader will no longer be coming by on a monthly basis.
- What if I want to turn my water off/on at the meter for repairs or if it freezes? Residents need to be aware that on the new meter there is a wire approximately 5 feet long attached to the meter box lid.
Water may be turned off/on by using the off/on valve as before. Make sure that the wire is placed back into the meter box and is not exposed where it may be cut during yard maintenance.
The Texas Commission on Environmental Quality (TCEQ) has assessed our system and determined that our water is safe to drink. To view the report, click below:
Download File: Drinking Water Quality Report.pdf
The City maintains main lines and service lines to the water meter. Any problems from the water meter to the house are the homeowner's responsibility. If you are experiencing problems, call Bridge City City Hall at 409-735-6801 or contact them by e-mail.
The City maintains the wastewater collection lines and the individual service lines for each residence and business to the property line. Each house should have a clean-out at the property line that defines this division point. From the clean-out to the house or business is the responsibility of the property owner. If you have a backup and water is standing in the clean-out at the property line, the problem is the responsibility of the City; please contact the City immediately and they will come out. If the clean-out is dry, this indicates the problem is between the house and the clean-out and is the responsibility of the property owner; the property owner needs to clear the clean-out or call a plumber. DO YOU KNOW WHERE YOUR CLEAN-OUT IS? If not, call the City at 409-735-6801 or send an e-mail and they will locate it for you.
<urn:uuid:58c57437-cff3-4852-8872-14f9c094f5b2>
CC-MAIN-2017-30
http://www.bridgecitytex.com/bc-WaterSewer-86.php
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423927.54/warc/CC-MAIN-20170722082709-20170722102709-00607.warc.gz
en
0.935268
1,928
2.828125
3
On 28th October 1647 leaders of the Parliamentary army and representatives of the rank and file, along with a number of London radicals known as the Levellers, converged on St Mary's Church in Putney. 'The Moderate Intelligencer', a newsbook of the time, reported that on the 28th 'a great assembly was this day at Putney Church, where was debated matters of high concernment…there was resolution taken to meet the next day and proceed'. With Charles I locked up, deciding what to do with the king was high on the agenda. But at a time when radical ideas were spreading across the country, enabled by the affordability of the printing press, which was proving difficult to regulate, the debates would be dominated by discussion of a recently printed Leveller pamphlet: 'An Agreement of the People'.

After a five-hour prayer meeting in the morning, the debaters convened for the second day of discussions. The day was dominated by debates about the right to vote. On one side, Henry Ireton (Cromwell's son-in-law) advocated that the right to vote should remain only for people who own property worth more than 40 shillings. The other side of the argument was summed up in the words of Colonel Thomas Rainborough, who declared: 'I think that the poorest he that is in England hath a life to live, as the greatest he'. A vote taken late in the evening found a majority for extending the vote to all men, except servants and beggars.

Believed to have been written by one of the key figures in the Leveller movement, John Wildman, An Agreement of the People was printed in October 1647, in time for the start of the debates on October 28. As the debates were taking place, the radical ideas contained within the Leveller pamphlet were spreading across London. The Agreement called for every man to be equal under the law, for freedom of religious expression, and for the present Parliament to be dissolved and future Parliaments to meet regularly.

After the first two days of the debates, the remainder of the record is fragmentary. It is likely that the army leadership considered the controversial topics being discussed too risky to keep on the record. On November 8 the army leaders decided to bring the debating to an end by proposing that the agitators should return to their regiments. Perhaps concerned by the overwhelming support for the Leveller pamphlet, An Agreement of the People, the army leaders called a halt to proceedings.
<urn:uuid:e8083b93-3f9f-49ff-93d2-34f07d4a1735>
CC-MAIN-2017-30
http://www.theputneydebates.co.uk/the-putney-debates-day-by-day/
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423269.5/warc/CC-MAIN-20170720161644-20170720181644-00606.warc.gz
en
0.973581
542
3.4375
3
Have you ever met a person who has been through hell on Earth, and yet amazes you with their total lack of anger and bitterness? I encountered such a person yesterday. Severin Fayerman is a Holocaust survivor who came to my school (York Catholic) to give a talk about his life's experiences. He held the entire school in rapt attention during yesterday's assembly.

Fayerman was born in Poland during the aftermath of World War One. His family owned a factory, and it was there he learned to make tools. This skill would later save his life.

During the German occupation, the Fayermans were sent to Auschwitz, the most notorious of all Hitler's camps. They were separated from each other. The prisoners were fed starvation rations – scraps of bread, watery soup. He recalled how he never wanted to be first in line at lunchtime, because the soup from the top of the pot was mostly, if not all, water. During the day, they were made to dig trenches in which to put the remains of the cremated bodies. Often prisoners collapsed from the strain and guards left them where they lay.

Because Fayerman knew English, he offered to teach it to his kapo. A kapo was usually a criminal from Germany who was put in charge of some of the other inmates. They were usually given better treatment. The reason this particular kapo wanted to learn English was that he was convinced that Germany would soon conquer Great Britain, and he would be sent to a newly established concentration camp there. Fayerman was glad to oblige. He was given better food and clothing for his efforts.

Later, he was shunted from camp to camp. At one point he stayed in Berlin to make tools for an electrical company. While in Berlin, he survived a bombing by Allied aircraft. He still remembers how the ground of the shelter shook under the force of the bombardment. Once, a camp he was staying in was attacked by American aircraft. The prisoners all ducked down. Amazingly, the aircraft were able to shoot the guards in the towers without killing a single prisoner by accident.

When World War Two was in its final stages, he was in a quarry with other prisoners. The guards surrounded them with machine guns. Then one morning he woke up, and most of the guards were gone. He asked one who remained what had happened. The guard told him that since the Americans were coming from one side and the Russians were coming from the other, he could leave. And leave he did.

Fayerman remembered something his uncle had told him in one of the camps – that if they ever got free, his family would all meet at Fayerman's aunt's house in Austria. Fayerman left for Austria and was able to locate his aunt's house. When he got there, he found both his parents waiting for him. They had survived the Holocaust!

Fayerman got a standing ovation. It was one of the best assemblies we've ever had, and one of the most moving. During lunch, Fayerman signed copies of his book, "A Survivor's Story." I went up to shake his hand. It was sort of like reaching across the years to another era.
<urn:uuid:9cdc6a38-0295-44fd-b608-deaefa6eca3d>
CC-MAIN-2014-10
http://www.yorkblog.com/teentakeover/2013/02/16/holocaust-survivor/
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011198589/warc/CC-MAIN-20140305091958-00053-ip-10-183-142-35.ec2.internal.warc.gz
en
0.993922
675
2.734375
3
The September edition of Scientific American went completely environmental, with topics ranging from nuclear power to renewable energy, from hydrogen transportation to sustainable building, from climate repair to carbon emissions, and from coal to advanced technology. This issue really covered the important topics in a smart, sophisticated, and thoughtful way. I wanted to relate some of the concepts that the magazine mentioned in its article by Eberhard K. Jochem, "An Efficient Solution." Generally speaking, the crux of the article is that wasting less energy is the quickest, cheapest way to curb carbon emissions.

Need for Green Building: Nearly 35% of greenhouse gas emissions come from buildings, and 66% of all energy converted into a form usable for human consumption is lost in conversion. By improving the process whereby energy becomes usable for human consumption, it is possible to reduce carbon emissions. And more efficient buildings will play a role in this process. If we assume that energy prices will continue to rise, every piece of technology that saves energy is an economic and business opportunity to be captured. Many buildings are constructed with only the first costs in mind. Maybe this is attributable to the process of bidding for projects, which seems to include only an analysis of the total build cost. The life-cycle costs of a building, which would consider the operating costs, never enter into the calculation (unless developers request bids for products with green features and the life-cycle cost is implicit in the construction).

Example – Green Renovated Apartments: The article mentions a project in Ludwigshafen, Germany, with 500 living spaces. These places were difficult to rent. So the apartments were renovated to adhere to low-energy consumption standards, which required about 30 kilowatt-hours per square meter per year. Subsequently, rental demand for the apartments soared to 3x capacity. As a business person, this should ring a bell: an automatic waiting list, pent-up demand, nominal advertising as word-of-mouth grows legs, and a healthy business conscience. Not a bad strategy.

If you're thinking about renovating, building, or replacing something, you should know about energy-efficient, green products before making the decision to purchase. Here are some practical tips from the article for using less energy.
- Stove – Convection ovens can cut energy use by roughly 20%.
- Walls – Thick cellulose insulation can prevent heat loss (winter) and heat gain (summer).
- Refrigerator – New refrigerators use 25% of the energy required for a 1974 model (just buy Energy Star electronics + appliances).
- Compact fluorescent bulbs – Use 25% of the energy required for incandescents and last 8-10 times longer.
- Computers – LCD screens use 60% less energy than conventional CRTs.
- Windows – Double panes filled with low-conductivity gas (with edge seals made of silicone foam) reduce heat flow by 50%+.

Overall, the entire magazine was pretty amazing and offered examples of how different buildings are saving money and energy. Buildings mentioned include the Swiss Re Tower (London), Menara Mesiniaga (Malaysia), Edificio Malecon (Buenos Aires), ABN-AMRO Headquarters (Amsterdam), Szencorp Building (Melbourne), Genzyme Corporation headquarters (Cambridge, Mass.), and Procter + Gamble's factory (Germany). Go out, get a copy, and read it…you'll be smarter for doing it.
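To see why the Ludwigshafen figure matters, it helps to put rough numbers on it. The short sketch below is an illustration only: the 30 kWh per square meter per year comes from the example above, but the apartment size, the "typical unrenovated building" figure, and the energy price are assumed round numbers, not data from the article.

    # Rough, illustrative comparison only. The 30 kWh/m2/yr figure is the
    # low-energy standard cited above; the other numbers are assumptions
    # chosen for illustration, not figures from the article.
    AREA_M2 = 70                 # assumed apartment size
    LOW_ENERGY_KWH_PER_M2 = 30   # low-energy standard cited above
    TYPICAL_KWH_PER_M2 = 150     # assumed figure for an unrenovated building
    PRICE_PER_KWH = 0.10         # assumed energy price in dollars per kWh

    def annual_cost(kwh_per_m2):
        """Annual energy cost for the assumed apartment size and price."""
        return kwh_per_m2 * AREA_M2 * PRICE_PER_KWH

    before = annual_cost(TYPICAL_KWH_PER_M2)
    after = annual_cost(LOW_ENERGY_KWH_PER_M2)
    print(f"Before renovation: ${before:,.0f} per year")
    print(f"After renovation:  ${after:,.0f} per year")
    print(f"Annual savings:    ${before - after:,.0f}")

Even with made-up inputs, the gap between the two annual bills is the life-cycle-cost argument the article is making: the savings recur every year, while the efficiency measures are paid for once.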
<urn:uuid:382169e2-1f0d-4f6f-967e-2a68bb8e71f7>
CC-MAIN-2014-10
http://www.jetsongreen.com/author/preston/page/536
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394021900438/warc/CC-MAIN-20140305121820-00078-ip-10-183-142-35.ec2.internal.warc.gz
en
0.934437
722
3.359375
3
Diabetic Dog Food

When it comes to diabetes in dogs, most have type 1 diabetes, which is the insulin-dependent form. This is the opposite of type 2 diabetes, which is the most common form in humans. In type 2 diabetes the main problem is not the insulin itself, but cells that become resistant to insulin's effects, resulting in a high blood glucose level.

Insulin is produced by the beta cells of the pancreas. When blood glucose levels exceed a certain set level, insulin is secreted, which signals cells of the body to take in glucose, thus maintaining blood glucose close to that set level. In type 2 diabetes, the cells are resistant to insulin, and glucose levels rise. This results in clinical signs such as weight loss and bladder infections. Most minor forms of type 2 diabetes can be controlled with diet and exercise alone.

If your dog has diabetes, your veterinarian will likely recommend a diabetic dog food. These foods have more complex carbohydrates than normal dog foods, which contain simple carbohydrates. The overall level of carbohydrates is also lower, and the level of protein is higher. It is recommended that food be fed in smaller, more frequent meals to help prevent a spike in blood glucose. More frequent exercise is also required.

The other type of diabetes (type 1) is insulin dependent. This is the more common form in dogs. With this form, insulin must be given; diet and exercise are not enough to control it. By feeding diabetic dog food you will likely be able to manage your dog's diabetes without having to give insulin injections if the condition is not too severe. If you would like to try homemade diabetic dog food diets, there are also lots of recipes online.

When dealing with a diabetic dog, it is best to give the type of diabetic dog food your vet recommends. There are many on the market, and they are different, so listen to your vet's recommendations.
<urn:uuid:0ad585f7-760d-4f55-ac1d-935a06d61c7b>
CC-MAIN-2017-43
http://www.free-online-veterinarian-advice.com/diabeticdogfood.html
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823462.26/warc/CC-MAIN-20171019194011-20171019214011-00372.warc.gz
en
0.933701
432
2.59375
3
Ideas Based on Developmental Areas: (Printable Version Here)
- Sensory: Make a sensory bin! Use rice (dye it blue/green) or blue/green shredded paper, blue and green playdoh (let students mix), finger paint, Earth Day slime, planting a tree, planting seeds, or shaving cream (dye it blue/green).
- Fine Motor: Try using a garbage truck shape sorter! Children are required to grasp a shape and work to strategically place that shape in the correct position on the garbage truck. Work on speech, language, motor, and turn-taking with this hands-on toy! Some additional ideas include using tongs with a sensory bin, picking up and placing globe mini erasers while working on an activity, and the puzzle globe.
- Gross Motor: The act of recycling, by picking up and placing items into different recycling bins, provides children with large body movements. This hands-on engagement can be great to do when working on this theme. Consider saving some of the items that you recycle and placing them all in one bin. Then use these mixed recyclables as a way for your child to learn to sort by plastic, metal, etc. This also links with cognition (classifying/sorting) while giving your child a movement component to help them with their learning.
- Play: Create centers for your children where they can engage in play in your environment. At one station they may dress up like garbage workers and sort through the recycling. At another station, they may engage in pretend gardening activities. Setting out these themed stations will get your children excited and expose them to several opportunities for language throughout.
- Cognition: Sequence the steps needed to plant a tree, sort items (trash vs. recycle), sort recyclables (paper/plastic/metal), match mat (picture-to-picture matching), adapted books (matching the pictures), puzzles (work on cognitive + fine motor), identify ways to help the earth vs. harm it, and monitor attention to task and slowly work to increase it over time.
- Social/Emotional: Many of the books are written to answer the question "why should I ____" (ex: why should I recycle), relating to children's common questions. Sharing with children that our actions have an impact on the world around us is very important. Help guide your children through making good choices about how to help the environment.
Family Activities: (Printable Family Letter)
Talk about ways that you can care for the world around you. These can be small steps that you take at home, such as turning off lights in rooms that you are no longer in, turning the water off while you are brushing your teeth, and choosing to donate used items instead of throwing them away. Together we can learn not only speech and language skills but also how to make the world a better place.
Book Recommendations: (Printable Version Here)
This is a list of books that pair well with the Earth Day theme. I highly recommend searching some of these titles on Epic! or on YouTube if you're looking for free ways to utilize these stories for your kiddos. Another option is to print the list of recommendations and take it to your local library. Check out the books you're interested in. When you know you want to add some books to your personal or school library, you can use the links below to purchase through Amazon.
Play Recommendations: (Printable Version Here)
Apps That You Can Download: (Printable Version Here)
- Bert Saves The Earth
- The Four Seasons
- The Earth: Tiny Bop
- Trash It!
- Sophie The Sweater
- My City Cleaning
- Shapes Garbage Truck
- Earth: Save the Ocean
Educational Video Links: (Printable Version Here)
Grab your list of words related to the Earth Day theme that are categorized by speech sound above. This is helpful for your mixed speech therapy groups or if you have a child who is working on their articulation skills in speech therapy.
<urn:uuid:438b2ea3-2ffa-4946-94e0-e6fd4859ce3b>
CC-MAIN-2023-40
https://shop.communicationcottagetherapy.com/blogs/news/earth-day-speech-therapy-plans
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511717.69/warc/CC-MAIN-20231005012006-20231005042006-00677.warc.gz
en
0.923465
856
4.0625
4
In the new paper, Green and colleagues studied the ban's effect, analysing data from surveys of carcasses collected across India between 2004 and 2008. They found that contamination had dropped from 10.1 percent to 5.6 percent by 2008 -- a sign that the ban is working, though not as fast as they'd hoped. Annual death rates dropped from an astronomical 80 percent before the ban to 18 percent.

"If we can get that down to 5 percent, then there's a chance" that the vultures will survive, said Green. "That's still a decline, but we could counteract it by putting out food for the birds and protecting their nest sites. We could compensate for that level of decline."

There were other encouraging signs in the data. In 2008, carcasses contaminated with meloxicam -- an alternative, vulture-friendly anti-inflammatory -- outnumbered those tainted by diclofenac. This has occurred in spite of the fact that the ban has been unevenly enforced. According to Green, the success reflects outreach efforts to veterinarians and farmers, many of whom hold vultures in high esteem.

In Hindu mythology, vultures have a god, Jatayu. Among Parsi communities, for whom religious tradition forbids burial and cremation, corpses have historically been left on platforms for vultures to consume. In the birds' absence, Parsis have turned to other methods of dealing with their dead, including solar accelerators designed to hasten decomposition, though none have proved as efficient or hygienic as vultures. Their highly acidic stomachs are lethal to bacteria, and flocks could strip a body in minutes.

The loss of vultures is also felt among people who collected leftover cattle bones and ground them into fertiliser. Now the corpses of cattle are buried -- as sacred animals, they often can't be eaten -- or dragged away by an exploding population of feral dogs, which have become a reservoir of rabies. "There's no longer a symbiosis between vultures and people. Now, instead of vultures, there are lots and lots of semi-wild dogs," said Green, who thinks that the dogs' rise to ecological prominence will prevent the vultures from ever recovering their original role.

Still, that the vultures could have any sort of future was almost inconceivable a decade ago. Even though 99 percent have died, the remaining one percent may be enough. "They breed slowly, only rearing a maximum of one chick per year," said Green. "They can increase at a rate of 3 to 5 percent per year. It's never going to be really rapid, but over time it
<urn:uuid:aaaee0cb-e417-41e2-98e7-3fd0aafe4940>
CC-MAIN-2014-41
http://www.wired.co.uk/news/archive/2011-05/18/indian-vulture-recovery/page/2
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663007.13/warc/CC-MAIN-20140930004103-00437-ip-10-234-18-248.ec2.internal.warc.gz
en
0.961832
598
3.265625
3
How did we get here? This documentary series provides insight into the origins and milestones of several financial aid programs as we ponder future direction and navigate foreseeable roadblocks to policy innovation. Learn from key experts about the history of financial aid in American higher education. Let's look back to move forward.
- In 2014-2015, the federal government gave $128 billion in financial aid to students.
- Approximately 2 million college students eligible for financial aid – specifically a Pell Grant – never apply.
- The total amount of outstanding federal student loan debt has more than doubled since 2007.
- There are currently eight different repayment options for federal student loans.
- The Pell Grant Program currently serves 9 million students at an annual cost of over $30 billion, up from $11 billion and 4.8 million students a decade ago.
- Funding for work-study and other campus-based aid programs has stagnated in recent years.
To learn more about the film participants, please view their biographies here.
What do you think about the films' reflection of student aid history? Visit Lumina Foundation's page to join the conversation.
<urn:uuid:a14b1756-c60d-4d15-9151-970c340d305b>
CC-MAIN-2020-29
http://ihep.net/research/initiatives/looking-back-move-forward-history-federal-student-aid
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655883439.15/warc/CC-MAIN-20200703215640-20200704005640-00282.warc.gz
en
0.941401
236
2.78125
3
Imported from USA
From the Publisher
Author and illustrator Tom Newton is a school psychologist. How Cars Work was developed as a high-interest mini-textbook for teens, but is also used by automotive service managers and mechanics to help customers understand repairs. This book can be found in adult literacy programs, high schools, and middle schools. How Cars Work makes it fun and easy to learn how cars work.
From the Author
"When I opened my tutoring center I could not find enough interesting reading material for my teenage students, especially the boys. So I started writing short descriptions about car parts aimed at improving reading comprehension. I used simple drawings to help students visualize and associate the car parts with the words. Eventually I had enough material for a complete book and, well, this is it!"
<urn:uuid:829ad68c-ff36-4e1f-8173-5d6e64e9e8b0>
CC-MAIN-2020-10
https://www.desertcart.ae/products/221178-how-cars-work
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143963.79/warc/CC-MAIN-20200219000604-20200219030604-00424.warc.gz
en
0.918906
179
3.015625
3
Riveted four-part snap buttons are known to many people as snap fasteners. They have more than one hundred years of tradition, and their appearance and function have remained practically unchanged during this time. Despite their great age, they still perform their job perfectly today: they join securely and can fasten a wide variety of materials.

Snap fasteners consist of four individual parts. Combining the two pairs of matching components gives the head part and the base part of the snap fastener. The button head is convex on the right side and has a cavity on the rear side for the head of the bottom part. The bottom component of the snap fastener has a conical shape into which the head snaps.

Surface treatment of snap fasteners
Both types of snap fasteners are suitable for materials 1-3 mm thick. They are made of steel and their surface is then finished.

Use of snap fasteners in practice
Snap fasteners are used to connect fabrics of varying strength and solidity, terry fabrics, knitted fabrics, leather and canvas cloth. They can also be used to connect fabric to solid materials (wood, cardboard, or plastic), or to join more than two layers of material in one place. Snap fasteners are used not just in the clothing industry, but particularly in luggage, footwear, saddlery and paper goods. You can use them for ready-to-wear clothing (jackets, vests…), haberdashery, saddlery and luggage goods (bags, cases, packaging…) or canine aids (harnesses, leashes, clothing for dogs, cats…), etc.

Rules for the proper application of snap fasteners
- Select the correct size of the perforation.
- Apply enough compressive force when riveting.
- Always strike the hammer or mallet perpendicularly to the riveter.
- Use the correct size of tool for the snap fastener.
- With textile material, back the snap fastener with a round reinforcement, ideally an iron one.
- With eco-leather and artificial leather, back the snap fastener, ideally with a round clamp.

Instructions for applying metal snap fasteners
To apply snap fasteners we need:
- a 4-part rivet snap button
- perforation pliers or a round drive punch
- a rubber pad
- a set for the manual riveting of buttons
- a hammer or mallet
Note: In saddlery and luggage hand production, special mallets with wood, plastic, or leather heads are used as hand tools. A hammer is used in footwear and haberdashery production.

First, it is necessary to cut a hole in the material corresponding to the diameter of the shaft on both parts of the button. See the "Manual Riveting" article for similar information and instructions.

Application of snap fastener heads
- We insert the circular part of the button into the application fixture for rivets. The head fits into the hole so that the snap fastener does not shift during riveting.
- We lay the material over the head with its front side down and its rear side facing us.
- Onto the material we place the second part of the button to be pressed.
- We insert the riveter into the hole in the lower part of the snap fastener. The hole and the riveter have an oval shape. Make sure the entire underside of the riveter fits correctly into the hole. The slot for the riveter must fit into the snap fastener.
- We join the riveted button with the tool. A single strike is enough: it rivets the internal part of the snap fastener, and both parts will hold together. The principle is the same as in riveting.
- Both parts are now connected.

Application of the lower part of the snap fastener
The application procedure for the lower part is identical; only the order of the parts is reversed.
- We insert the lower cone of the snap fastener into the application fixture for rivets. Its flat face sits in the hole so that the snap fastener does not shift during riveting.
- Over the cone we lay the material with its reverse side down and its front side facing us.
- Onto the material we place the second part of the button to be pressed – the cap.
- We insert the round riveter for the cap and join the riveted button with the tool. A single strike is enough: it rivets the internal part of the snap fastener, and both parts will hold together.
<urn:uuid:b32e57c1-95fe-48e2-9eb6-caeaeb995562>
CC-MAIN-2020-05
https://blog.pethardware.com/en/rivet-snap-fasteners-snap-buttons/
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592636.25/warc/CC-MAIN-20200118135205-20200118163205-00518.warc.gz
en
0.908123
967
2.578125
3
Versions of "Ring around the Rosie" can be found in many languages throughout Europe and the English-speaking world. The German version is called "Ringe, Ringe, Reihe!" It can be found in print in an antiquarian magazine from 1796. That seems to be the earliest version in print. It's sung to the same tune as the English version. The American version is called "Ring around the Rosie". There's reference to it being known in Massachusetts in 1790. Yet it can't be found in print before 1855, in a novel called, "The Old Homestead" by Ann S. Stephens. That version is somewhat different from the current versions. The British name of the song is "Ring a Ring o' Roses". The song is first found in print in its current form in English in Kate Greenaway's book from 1881 called, "Mother Goose or the Old Nursery Rhymes". Here you'll find some versions of the song in different languages from around the world... Ring a Ring o' Roses (Ring around the Rosie) Ring around the Rosie & The Plague Many people believe this song is about The Great Plague of London. That the roses refer to a rash. That the posies are kept in the pocket due to a superstition that it prevented the plague. In the British version they say "a-tishoo" which is the sound of a sneeze. Then they all fall down dead. This idea that the song refers to a plague is not believed by folklorists. Firstly, the song is not seen in print until two centuries after the plague. Why would it not be found in print with other literature that exists from the time? Secondly, the first time the theory is mentioned in print is in 1951. Why wouldn't any of the early folklorists of children's music have mentioned it when they published the song? The best explanation I've seen is that it's folklore about folklore. It's what folklorists call "metafolklore". Let's just admit it's a really good yarn! There's an interesting article about it by Stephen Winick called, Ring Around the Rosie: Metafolklore, Rhyme and Reason. Contribute a Version! If you know a version that you don't see here, let us know! EMAIL US and write "Contribute" as the subject of the e-mail.
<urn:uuid:db3ba60a-3e9d-470b-91cd-2d5342757a93>
CC-MAIN-2020-16
https://www.mamalisa.com/?t=e_family&c=206
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371799447.70/warc/CC-MAIN-20200407121105-20200407151605-00332.warc.gz
en
0.964152
508
3.34375
3
Coal-fired power stations are a major contributor to South African emissions
South Africa has committed to reducing its carbon emissions by 34% by 2020, but says it will need financial aid from developed countries to do so. The announcement was made as the world climate talks started in Copenhagen. Environmental group Greenpeace said the announcement had made South Africa "one of the stars of the negotiations". The country's greenhouse gases come mostly from the coal-burning power stations. The government says it is looking at other energy sources.
US-LED COPENHAGEN DEAL
- No reference to legally binding agreement
- Recognises the need to limit global temperatures rising no more than 2C above pre-industrial levels
- Developed countries to "set a goal of mobilising jointly $100bn a year by 2020 to address the needs of developing countries"
- On transparency: Emerging nations monitor own efforts and report to UN every two years. Some international checks
- No detailed framework on carbon markets - "various approaches" will be pursued
South Africa said it would lower its carbon emissions to 34% below current expected levels by 2020 and about 42% below current trends by 2025. "This undertaking is conditional on firstly a fair, ambitious and effective agreement," a South African government statement said. "And secondly, the provision of support from the international community, and in particular finance, technology and support." The government said developing countries such as South Africa would need financial help from developed economies, with some of the aid being used to acquire the technology needed to reach its target. The country's chief climate negotiator Alf Wills told Reuters the offer was the first time South Africa had given a specific target for reducing its carbon footprint.
The government said international finance has helped South Africa to build new solar and wind-powered plants. But construction has already begun on what is believed to be the biggest coal-burning power plant in the country. The plant is aimed at meeting the country's increasing energy needs and avoiding a repeat of last year's rolling blackouts, which cost the country millions of dollars.
<urn:uuid:12be153e-314c-4012-8f5c-18fd94dbb8fa>
CC-MAIN-2013-48
http://news.bbc.co.uk/2/hi/africa/8398775.stm
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164032593/warc/CC-MAIN-20131204133352-00001-ip-10-33-133-15.ec2.internal.warc.gz
en
0.957917
465
3.234375
3
What Is an Anti-Inflammatory Diet?
Health experts say that eating anti-inflammatory foods helps fight aging as well as relieve certain health conditions like arthritis and cancer. Find out which foods are on this diet and how it can benefit your health.

Good and Bad of Inflammation
Why Eat an Anti-Inflammatory Diet?
Inflammation is the natural process that occurs when the immune system fights off foreign substances like germs, pollen or chemicals "threatening" the body. So inflammation is not bad at all. It is the normal immune system response when you are injured or sick.

What "Good" Inflammation Does for You
When you've got a fever or swollen glands due to a sore throat, or when an injury or infected cut swells, turns red and becomes painful, it simply means that inflammation is jumpstarting the healing process. Your white blood cells are going to the site to protect your health. When you're emotionally stressed, you get a rush of C-reactive proteins, which are inflammatory markers, in your bloodstream as an immune response.

When Inflammation Goes Bad
Inflammation should be temporary. When you're no longer ill or injured, it should go away. If it persists even without foreign invaders in the body, it becomes an enemy. When chronic inflammation gets out of hand, instead of healing it destroys. Thus, it has been linked to many diseases including cancer, arthritis, heart disease, diabetes and Alzheimer's disease. When your joints are inflamed, as in the case of rheumatoid arthritis, it can cause serious pain and damage. When fatty plaque forms in the arteries due to chronic inflammation, white blood cells can go to the site and form blood clots that can possibly lead to a heart attack. Having inflammatory bowel disease (in the gastrointestinal tract) can affect bone health, because it hampers the absorption of calcium and vitamin D – essential nutrients for bone health. Experts also believe that chronic inflammation is linked to faster cell aging, as observed in visible signs of aging like wrinkle formation.

Anti-Inflammatory Foods and Arthritis
What Can You Do to Avoid Inflammation?
Aside from foreign substances invading the body, autoimmune disorders, stress, exposure to UV rays and pollution, lack of sleep and poor nutrition contribute to inflammation and aging. If you want to do something about it, make efforts to address these factors.
- Get enough sleep. For example, you need to sleep at least 7 hours every night because, according to a study, people who sleep less than 7 hours have more inflammation-related proteins in their blood.
- Eat anti-inflammatory foods. You are also better off if you make changes to your diet and include anti-inflammatory foods. An anti-inflammatory diet is beneficial for every person. Whether you have any inflammation-related health issue or not, you will be healthier with this diet. If you suffer from an illness like rheumatoid arthritis, however, don't expect it to miraculously cure your symptoms. With the dietary changes, you will most likely notice toned-down pain or a lesser number of flare-ups.

Foods on the Anti-Inflammatory Diet
A popular diet that includes lots of anti-inflammatory foods is the Mediterranean diet. That means eating lots of high-fiber fruits and vegetables, whole grains, beans or plant-based proteins, fish high in omega-3 fatty acids, and herbs and spices with anti-inflammatory substances.
- If you love to eat soy products like tofu, soy milk, tempeh and edamame, you're already eating some anti-inflammatory foods.
Beans are loaded with fiber, antioxidants and anti-inflammatory substances.
- Healthy fats present in olive oil are good for stopping inflammation. You can also get these healthy fats from avocados, nuts and seeds, but limit your intake if you need to watch your calories.
- Omega-3 fatty acids are number one fighters of inflammation. So eat fish like salmon, tuna and sardines at least two times every week.
- Brightly colored fruits and vegetables contain lots of substances that fight inflammation. An example is vitamin K, which is present in spinach, kale, broccoli and cabbage. The pigments responsible for the color of fruits like blackberries, raspberries and cherries are also inflammation fighters.
- Oatmeal, brown rice and other whole grains or related products are high in fiber, which curbs inflammation.
- Cook with anti-inflammatory spices like garlic, turmeric, ginger and cinnamon. Most people are familiar with garlic and ginger and use them regularly in cooking. Turmeric is found in curry powder and so is common in Indian cuisine.

Foods to Avoid on an Anti-Inflammatory Diet
It's not enough to eat anti-inflammatory foods. You also need to avoid foods that cause inflammation. These include foods that are high in sugar, refined carbohydrates, saturated fat, trans fat and omega-6 fatty acids. These foods cause the immune system to be overactive, leading to inflammation and symptoms such as fatigue, joint pain and damaged blood vessels.
- Consuming sugary drinks such as soda and foods with refined carbohydrates like white bread releases inflammatory messengers called cytokines in the body.
- Corn oil, safflower oil, sunflower oil and other vegetable oils are high in omega-6s. The body needs omega-6s, but too much can cause an imbalance between omega-3 and omega-6, resulting in more inflammation.
- Completely avoid products with trans fat. These include margarine, vegetable shortening and coffee creamers. If you see "partially hydrogenated oils" in the list of ingredients on the product's label, it contains trans fat.
- Red meat and processed meat, such as hot dogs, contain saturated fat. Saturated fat in the body triggers an inflammatory response from the immune system.

For people with celiac disease or gluten intolerance, it is necessary to go on a gluten-free diet because gluten triggers gut inflammation. It's a problem with the immune system that directs an attack on the small intestine in the presence of gluten. The main foods to avoid include wheat, barley and rye.

If you find it hard to tell which foods cause inflammation and which do not, your best bet is to eat fresh, unprocessed foods and cook them yourself. Whether or not this diet dispels your inflammation problem, an anti-inflammatory diet followed regularly can improve overall health and lower the risk for many diseases, especially those related to aging.
- Anti-Inflammatory Diet: What to Eat to Feel Better Changing your diet might reduce your pain by squashing inflammation. WebMD reveals what to eat and avoid. - Foods that fight inflammation - Harvard Health Pro-inflammatory foods include fried foods, sodas, refined carbohydrates, and red meat. Green vegetables, berries, whole grains, and fatty fish are thought… This content is accurate and true to the best of the author’s knowledge and does not substitute for diagnosis, prognosis, treatment, prescription, and/or dietary advice from a licensed health professional. Drugs, supplements, and natural remedies may have dangerous side effects. If pregnant or nursing, consult with a qualified provider on an individual basis. Seek immediate help if you are experiencing a medical emergency.
<urn:uuid:b2ca45e0-9315-4c2c-a09b-f39984e9429c>
CC-MAIN-2020-29
https://caloriebee.com/diets/What-is-an-Anti-Inflammatory-Diet
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657134758.80/warc/CC-MAIN-20200712082512-20200712112512-00560.warc.gz
en
0.926001
1,637
2.9375
3
The Gantt chart, commonly used for tracking project schedules, is one of the most popular and useful ways of showing activities, tasks or events displayed against time. A Gantt chart allows you to see at a glance what the various activities are, when each activity begins and ends, how long each activity is scheduled to last, where activities overlap with other activities and by how much, and the start and end date of the whole project. The first Gantt chart was devised in the mid-1890s by a Polish engineer named Karol Adamiecki. Today, Gantt charts are also used to show additional information about the various tasks or phases of the project, for example how the tasks relate to each other, how far each task has progressed, and what resources are being used for each task.
Gantt Chart Software
Edraw Gantt Chart Software is an easy-to-use project management and business diagramming program used by a variety of companies. Through the Edraw Gantt template, users can easily create great-looking Gantt charts and project schedules in minutes. Through the Gantt data import wizard, you can directly import a data file, and Edraw will generate a Gantt chart automatically for you. This is the easiest way to create a Gantt chart, and it greatly saves your time and energy.
Download Gantt Chart Software
Video Tutorial - Edraw Project Introduction
Gantt Chart Symbols
Edraw Gantt chart templates offer you plenty of special Gantt shapes. You will find these shapes of great help when drawing Gantt diagrams.
Gantt Chart Examples
Enjoy a fun and fast drawing experience with this handy and flexible Gantt chart template.
How to Create a Gantt Chart
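Whatever tool you use, the data behind a Gantt chart is simple: each task needs a name, a start, and a duration. The Python sketch below is an illustration only (it is not part of Edraw, and the task list is invented); it renders that data as a rough text-based Gantt chart, which can help clarify what a dedicated tool is drawing for you.

    # Minimal illustration of the data behind a Gantt chart.
    # The tasks below are made-up examples; a real schedule would come from
    # your own project plan or an imported data file.
    tasks = [
        # (task name, start day, duration in days)
        ("Plan",   0, 3),
        ("Design", 2, 4),
        ("Build",  5, 6),
        ("Test",   9, 3),
    ]

    total_days = max(start + length for _, start, length in tasks)

    for name, start, length in tasks:
        # One row per task: leading spaces show the start offset,
        # '#' characters show the scheduled duration.
        bar = " " * start + "#" * length
        print(f"{name:<8}{bar.ljust(total_days)}|")

Rows whose bars share the same columns are tasks that run at the same time, which is exactly the overlap described above; a dedicated tool simply draws the same information with dates, dependencies, and progress added.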
<urn:uuid:6abc6c49-e436-4c47-8880-ae6c4606a8ce>
CC-MAIN-2020-29
https://www.edrawsoft.com/gantt-chart.html
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896374.33/warc/CC-MAIN-20200708031342-20200708061342-00260.warc.gz
en
0.892716
365
2.8125
3