Sustainability is a global imperative and a scientific challenge like no other. This concise guide provides students and practitioners with a strategic framework for linking knowledge with action in the pursuit of sustainable development, and serves as an invaluable companion to more narrowly focused courses dealing with sustainability in particular sectors such as energy, food, water, and housing, or in particular regions of the world. Written by leading experts, Pursuing Sustainability shows how more inclusive and interdisciplinary approaches and systems perspectives can help you achieve your sustainability objectives. It stresses the need for understanding how capital assets are linked to sustainability goals through the complex adaptive dynamics of social-environmental systems, how committed people can use governance processes to alter those dynamics, and how successful interventions can be shaped through collaborations among researchers and practitioners on the ground. The ideal textbook for undergraduate and graduate students and an invaluable resource for anyone working in this fast-growing field, Pursuing Sustainability also features case studies, a glossary, and suggestions for further reading.

- Provides a strategic framework for linking knowledge with action
- Draws on the latest cutting-edge science and practices
- Serves as the ideal companion text to more narrowly focused courses
- Utilizes interdisciplinary approaches and systems perspectives
- Illustrates concepts with a core set of case studies used throughout the book
- Written by world authorities on sustainability
- An online illustration package is available to professors

Pamela Matson is dean of the School of Earth, Energy & Environmental Sciences and the Goldman Professor of Environmental Studies at Stanford University. William C. Clark is the Harvey Brooks Professor of International Science, Public Policy, and Human Development at Harvard University's Kennedy School of Government. Krister Andersson is professor of political science at the University of Colorado at Boulder.

"If we are to make peace with nature, the effort will have to come from us all. This very moving book is the finest introduction to the subject I have seen. It does not avoid technicalities, but can be read with equal benefit by the young and the old with no prior knowledge of the complexities we face."--Sir Partha Dasgupta, Frank Ramsey Professor Emeritus of Economics, University of Cambridge

"Sustainability can seem like a faraway, nebulous dream. Through a clear framework, iconic case studies, and a beautiful, accessible style, Pursuing Sustainability brings this dream to life. A must-read for anyone concerned about the future health of our planet."--Gretchen C. Daily, cofounder of the Natural Capital Project and author of The Power of Trees

"Finally, a beautiful small book bringing together the thinking and practice behind sustainability science in an easily accessible and comprehensive manner, making it clear that this critical field of study for humanity provides an overarching framework for many different areas and competencies dealing with the sustainability challenge. Strongly recommended."--Carl Folke, founder of the Stockholm Resilience Centre, Stockholm University, and director of the Beijer Institute of Ecological Economics, Royal Swedish Academy of Sciences

"This is a beautiful, lucid, and desperately needed book about the sustainability challenge. The authors accomplish a mission impossible: providing deep analyses of complex adaptive social-environmental systems while using simple terms and compelling metaphors to expose the crucial steps we need to take for long-term inclusive well-being. A must-read for practitioners and scholars alike."--Hans Joachim Schellnhuber, founder and director of the Potsdam Institute for Climate Impact Research

Table of Contents:
CHAPTER 1 Pursuing Sustainability: An Introduction
CHAPTER 2 A Framework for Sustainability Analysis: Linking Ultimate Goals with Their Underlying Determinants
CHAPTER 3 Dynamics of Social-Environmental Systems
CHAPTER 4 Governance in Social-Environmental Systems
CHAPTER 5 Linking Knowledge with Action
CHAPTER 6 Next Steps: Contributing to a Sustainability Transition
Appendix A Case Studies in Sustainability
Appendix B Glossary of Terms, Acronyms, and Additional Resources
<urn:uuid:7a1b5c31-44cd-4302-9309-4f439763f1c0>
CC-MAIN-2016-26
http://press.princeton.edu/titles/10777.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00158-ip-10-164-35-72.ec2.internal.warc.gz
en
0.876712
826
2.5625
3
A resource is any aspect of the computing system that can be manipulated with the intent to change application behavior. Thus, a resource is a capability that an application implicitly or explicitly requests which, if denied or constrained, causes the execution of a robustly written application to proceed more slowly. Classification of resources (as opposed to identification of resources) can be made along a number of axes. The axes could be implicitly requested versus explicitly requested, time-based (such as CPU time) versus time-independent (such as CPU shares assigned), and so forth. Generally, scheduler-based resource management is applied to resources that the application can implicitly request. For example, to continue execution, an application implicitly requests additional CPU time, while to write data to a network socket, an application implicitly requests bandwidth. Constraints can be placed on the aggregate total use of an implicitly requested resource. Additional interfaces can be presented so that bandwidth or CPU service levels can be explicitly negotiated. Resources that are explicitly requested, such as a request for an additional thread, can be managed by constraint.
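To make the distinction concrete, here is a minimal Python sketch using the generic POSIX `resource` and `signal` modules. It is an illustration only, not the resource-management interfaces this page documents: it places an aggregate constraint on an implicitly requested, time-based resource (CPU time).

```python
import resource   # Unix-only; a generic POSIX illustration, not the
import signal     # resource-management framework described on this page

def cap_cpu_seconds(soft, hard):
    """Constrain aggregate use of an implicitly requested, time-based
    resource (CPU time). Exceeding the soft limit delivers SIGXCPU; the
    hard limit is enforced unconditionally by the kernel."""
    resource.setrlimit(resource.RLIMIT_CPU, (soft, hard))

def on_cpu_exceeded(signum, frame):
    raise RuntimeError("CPU-time constraint reached")

signal.signal(signal.SIGXCPU, on_cpu_exceeded)
cap_cpu_seconds(soft=2, hard=4)

try:
    while True:          # implicitly requests ever more CPU time
        pass
except RuntimeError as err:
    print(err)           # the constrained application stops here
```

The soft limit gives a robustly written application a chance to degrade gracefully, while the hard limit caps the resource outright.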
<urn:uuid:451dff64-16e6-4b6d-8bfc-aed376cb778b>
CC-MAIN-2016-26
http://docs.oracle.com/cd/E19683-01/806-4076/rmintro-27/index.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00095-ip-10-164-35-72.ec2.internal.warc.gz
en
0.929655
218
2.921875
3
The ground is not necessary and has no significance in reality. If the ground is not present, the lower-voltage end of the resistor will float with respect to the AC supply, which is fine for most cases. The waveform for each half cycle will be offset by the voltage across the diodes, as current will only start to flow when the voltage at the low-voltage side of the transformer exceeds two diode drops. There will be a small gap of zero potential (V = IR across the resistor) when the AC crosses between positive and negative. At this point, the voltage of the more positive wire of the low-voltage side of the transformer will be increasing from zero to, say, 1.4V with respect to the other wire of that side of the transformer. The voltage dropped across the diodes increases until they start to conduct, then remains constant (to a first approximation); they don't drop 0.7V if you only apply 0.1V to them. So the half cycle starts at 0V, stays there until 1.4V is applied to the bridge, then tracks the sine wave less the 1.4V of diode drops until it hits zero again, then stays at zero until the next AC crossing.
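A quick numeric sketch makes the described waveform concrete. It assumes an idealized bridge with a fixed 1.4V total drop (0.7V per conducting diode) and an arbitrary 12V-peak, 50 Hz secondary; neither value comes from the question.

```python
import numpy as np

# Idealized full-wave bridge: output sits at zero until the input's
# magnitude exceeds two diode drops (~1.4 V), then tracks |v_in| - 1.4.
# The 12 V peak and 50 Hz are assumed values, not from the question.
f = 50.0
v_peak = 12.0
diode_drop = 1.4

t = np.linspace(0.0, 2 / f, 2000)           # two full AC cycles
v_in = v_peak * np.sin(2 * np.pi * f * t)   # low-voltage secondary
v_out = np.maximum(np.abs(v_in) - diode_drop, 0.0)

# The flat zero-potential "gap" around each zero crossing:
gap_fraction = np.mean(v_out == 0.0)
print(f"output sits at 0 V for {gap_fraction:.1%} of the time")
```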
<urn:uuid:87ab89b0-b7e8-4edb-a738-e1ba5353d3d1>
CC-MAIN-2016-26
http://electronics.stackexchange.com/questions/107035/significance-of-gnd-in-bridge-rectifier
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00147-ip-10-164-35-72.ec2.internal.warc.gz
en
0.929513
257
2.953125
3
I have heard enough from my students that learning Greek is tough - really tough. Is there any better way to help our students in the seminary? In this respect, Zondervan announces a new addition to its line of biblical resources: Sing and Learn New Testament Greek: The Easiest Way to Learn Greek Grammar. Authored by Kenneth Berding, this resource includes "everything a professor or a student will need: a CD (containing eleven songs and a PowerPoint with paradigm charts for classroom use) and a booklet with the same paradigm charts for students' personal use."

According to Zondervan, Sing and Learn New Testament Greek "provides a way for learning (and remembering!) New Testament Greek grammar forms through simple songs. It is not designed to compete with existing Greek grammar books, but to serve as a required supplemental resource for elementary Greek classes. Indeed, it has been designed to be used alongside of any introductory grammar. A professor can simply assign to his or her students any (or some) of the songs for the paradigms a particular elementary grammar employs. In this way, students will actually remember what they have learned. (As we are all aware, people do not easily forget something learned via song.)"

"The entire project includes songs for indicative verb endings, participles, infinitives, imperatives, contract forms, and prepositions, among others. All but the last song can be sung in 15 seconds or less. Parsing is enormously easier through this method. And it is a lot more fun than traditional methods. (Are we allowed to even use the word "fun" in reference to elementary Greek? Absolutely!)"

"Beginning Greek students can listen to the CD as they drive to and from school or work, or put it on their iPod."

"These songs are so simple that students who have used them complain about waking up in the middle of the night with the songs running through their heads. You'll never hear that complaint from students who have had to use rote memory to learn grammar forms."

I am convinced - I am recommending all Greek-ers to get hold of this resource when it is available in May 2008. You will love learning the articles by singing to the tune of Three Blind Mice; the participles by singing Old McDonald Had a Farm; the imperatives by singing Row, Row, Row Your Boat; and many more. Have fun in learning, or better still, singing, Greek!
<urn:uuid:189e33c2-b8eb-42eb-a7ea-2ad345752900>
CC-MAIN-2016-26
http://myhomilia.blogspot.com/2008/04/learning-greek-by-singing.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399428.8/warc/CC-MAIN-20160624154959-00098-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949961
509
2.625
3
Horseradish and pigeon droppings. That's the magic hair-growth potion prescribed by Hippocrates. Alas, there are so many myths about hair loss that folks today are almost as clueless as the father of medicine. Keep reading as hair loss expert Dr. Robert Bernstein, clinical professor of dermatology at Columbia University, explodes 10 all-too-common follicle fallacies...

Myth: Genes for Hair Loss Come from Mom
Genes for hair loss can be inherited from either side of the family, or both sides.

Myth: Bald Guys Have Lots of Testosterone
An elevated testosterone level isn't the cause of hair loss. Hair loss results when certain hair follicles on the scalp are highly sensitive to another hormone called dihydrotestosterone (DHT). As a result of this extreme sensitivity, hair follicles shrink and eventually disappear.

Myth: Hair Loss Happens in "Patches"
Men generally don't go bald as a result of losing hair in patches or clumps. Rather, baldness occurs when ordinarily thick (large-diameter) hairs are gradually replaced by fine, thin hairs. Doctors call this process miniaturization. If your hair starts falling out in clumps, it's time to see a doctor.

Myth: Poor Circulation Is to Blame
Growing hair does require healthy circulation in the scalp. As hair loss occurs, scalp circulation declines - because less blood is needed. But decreased blood flow to the scalp isn't the cause of hair loss. It's the result.

Myth: Hats Make You Go Bald
There's simply no support for the notion that hats cause hair loss by keeping the scalp from "breathing." Hat or no hat, hair follicles get oxygen from the bloodstream - not the air.

Myth: Clogged Pores Cause Hair Loss
Clogged pores are associated with acne but not hair loss. If baldness were the result of pore problems, vigorous shampooing could maintain a full head of hair. That's not the case.

Myth: Frequent Shampooing Makes Hair Fall Out
Men sometimes see hair in the tub and think that shampooing is to blame. Baldness isn't about hair falling out, but about normal hair gradually giving way to fine hair.

Myth: Hair Loss Is a Man's Problem
Women generally don't go bald, but more than 40 percent experience significant thinning of the hair.

Myth: Hair Loss Drugs Affect Only the Crown
The drugs Rogaine (minoxidil) and Propecia (finasteride) don't cause regrowth of hair that has already been lost, but they can slow the pace of hair loss. And while initial studies on these drugs involved hair on the crown, they can work anywhere on the scalp where there is thinning (as long as the area is not completely bald).

Myth: Hair Loss Stops as Men Age
Once hair loss begins, it continues over a person's lifetime. But it's hard to estimate the rate of hair loss. In general, the younger you are when you start to lose your hair, the more likely you are to become completely bald. However, the rate at which hair will continue to fall out is hard to guess.
<urn:uuid:cbecc86e-8ac6-462e-b2b6-4de72cde3842>
CC-MAIN-2016-26
http://www.cbsnews.com/pictures/hey-baldy-10-things-you-need-to-know-about-hair-loss/4/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00172-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947006
662
2.734375
3
Three years ago, Kirk deFord stopped driving to work and began biking the six miles to his office at an educational research center in Portland, Ore. "It was primarily environmental concerns that got me started," said deFord. "But I also was sick of paying $12.50 a day to park and buying all that gasoline." DeFord, who is 59, also has seen his auto insurance premiums drop because he drives less. And there's an added bonus, he points out - "a lot more exercise."

DeFord is among a growing number of Americans who have learned that energy conservation is not just good for the planet. It's good for the pocketbook, too.

Betsy Taylor, executive director of the Center for a New American Dream in Takoma Park, Md., says that even small steps by families - using a bit less air conditioning, eliminating a single car trip a week - can collectively have a big impact on the environment. "If you want to, you can go and live the simple life in a log cabin in the woods," Taylor said. "But most people want to take smaller steps that also matter and have an impact." Since last fall, her nonprofit group has been promoting "Turn the Tide - Nine Actions for the Planet," which recommends such steps as installing efficient showerheads and replacing four standard light bulbs with energy-efficient compact fluorescent lights. Changing the light bulbs alone, she said, can save a family $100 over the lives of the bulbs (a figure sanity-checked in the sketch after this article).

Rozanne Weissman of the Washington-based Alliance to Save Energy suggests a number of steps that will produce big environmental bang for the buck:

- Improve heating and cooling systems. "Heating and cooling generally account for half of the average home energy bill, so there's a big payoff for improving efficiency," Weissman said. Annual maintenance and cleaning the filters can make the equipment run better. Changing the thermostat just a few degrees will cut energy consumption. "And if you need to replace your old system, the new ones are much more energy efficient," she said.
- Update the thermostat. Modern, programmable thermostats let you heat and cool a house to suit your lifestyle. "You lower the heat during the day when you're not home, and have the thermostat set to start warming it up a half hour before you get back," Weissman said. "Or you can turn the heat down while you sleep, and have it come up a half hour before you wake." The thermostats often sell for less than $100. She added that some families use timers on window air conditioners to turn them off when they aren't needed.
- Insulate yourself. Basic caulking and weather-stripping can reduce the amount of cooling and heating lost to the outdoors. Good insulation, especially in the attic, can cut heat and cooling losses even more but requires a bigger investment. "With so many people refinancing at pretty decent (interest) rates, it's a good time to think about energy-saving projects," Weissman said.
- Look for the star. Major appliances - from air conditioners to washing machines and computers - that carry the government's Energy Star seal of approval are the most efficient on the market, and switching to them often can cut a third off your energy bill, Weissman said. Consumers can get information at www.energystar.gov. Some states have special "bounty" programs to encourage the purchase of such products. In New York state, which is eager to reduce peak power demand, consumers who bring in an old but working air conditioner and purchase an Energy Star replacement can collect $75. The www.GetEnergySmart.org site has details.

Weissman said that studies done by the alliance indicate that "consumers are sharp and they shop smart" with the aim of finding good products for good prices. "They care a lot about pocketbook issues and comfort in the home," she said. "If those are met, helping the planet at the same time is a great additional benefit."

DeFord, who works at the Northwest Regional Educational Laboratory, is among those trying to put the environment first. He keeps his thermostat below 65 in winter and wears a fleece vest to stay warm. He doesn't use air conditioning, saying: "It's a lot easier just to open the windows." Over the past two years, he's been working to increase the insulation in his home, adding more to the walls and the attic. "I can't measure the savings in dollars exactly, but I know it adds up," he said.

DeFord likes to share what he has learned, too, by volunteering to lead discussion groups for the Northwest Earth Institute. "I really feel that something has to happen different for the planet or it will fall down around our ears," deFord said.
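Taylor's $100 light-bulb figure is easy to sanity-check with rough arithmetic. The sketch below uses illustrative assumptions; the bulb wattages, lifetime, and electricity price are not from the article.

```python
# Rough check of the "$100 over the lives of the bulbs" claim.
# Every figure here is an assumption for illustration, not article data.
INCANDESCENT_W = 60        # bulb being replaced
CFL_W = 15                 # compact fluorescent replacement
BULBS = 4                  # "replacing four standard light bulbs"
LIFETIME_HOURS = 8000      # typical CFL rating of that era
PRICE_PER_KWH = 0.10       # assumed electricity price, USD

saved_kw = (INCANDESCENT_W - CFL_W) * BULBS / 1000
savings = saved_kw * LIFETIME_HOURS * PRICE_PER_KWH
print(f"Estimated saving: ${savings:.0f}")   # ~$144, same order as $100
```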
<urn:uuid:5390e00b-4a50-486d-aca8-6c8ad0a46c96>
CC-MAIN-2016-26
http://chronicle.augusta.com/stories/2002/05/13/bus_344518.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00066-ip-10-164-35-72.ec2.internal.warc.gz
en
0.964757
1,021
2.53125
3
Q: A friend tells me that you might help identify this infestation on the back of my snapdragon leaves. It appeared as the plants were about to send up their flowering shoots and wilted them before most were able to flower.

A: You have an excellent example of snapdragon rust! New infections and disease development are favored by cool nights and warm days combined with abundant dews, light rains, or irrigation. If temperatures remain between 45°F and 65°F when plants are wet for six to eight hours, infection by air- or water-borne spores is almost certain. A couple of sprays with chlorothalonil will control it on new leaves, but diseased plants and the soil under your plants will harbor the spores for two years. Remove and destroy infected plants when seen, and plant other cool-season annuals in place of your snapdragons for the next couple of years.
<urn:uuid:842e6093-5bef-4872-8545-435b5570cc7f>
CC-MAIN-2016-26
http://www.walterreeves.com/gardening-q-and-a/snapdragon-rust-identification/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00039-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959686
196
2.734375
3
Like China doesn't already have a monopoly on adorable baby panda pictures, they've decided to dress caretakers as giant pandas so that the cubs get used to the idea that they're pandas. It's part of a larger scheme to reintroduce them to the wild. They've been breeding pandas in captivity since 2003. About 300 live in captivity and 1,500 in the wild. In May they announced they'd set up a site near Dujiangyan in Sichuan.

Is all this panda puppetry really necessary? Well, it couldn't hurt. But imprinting is more of a serious problem with birds, especially precocial birds, that is, those that hatch eyes open and ready to walk. The most notorious imprinters are Whooping Cranes, which can't even be raised by Sandhills, lest they refuse to mate with their own kind.

Wildlife rehabilitators do occasionally dress up as other animals. The Wildlife Education and Rehabilitation Center in Morgan Hill, CA, designed a plan to have a team of scentless volunteers dress as a mother bobcat to a solo cub, who was otherwise shown mirrors and had only negative experiences with humans. Generally rehabbers don't have to dress up as their deer, squirrel or raccoon charges; they just put them in with babies of their own kind.

If Chinese researchers were more serious about preventing imprinting, they might look first to the programs where foreign tourists pay to handle panda cubs or stay for a week and "volunteer" at the panda center. For $45-$60 you can go on the 4-hour "China Tour to Chengdu Giant Panda Breeding & Research Base and Hold Panda Baby in Your Lap for Photo." You can also pay to stay for up to a month. Or you can become "Volunteer Nurse Assistant to Bath Wash Panda and Watch Newborn Infant Baby in Incubator Nursery," which costs $225 for "5-7 minutes" or $1,275 for a whole day.

I doubt those programs do any harm. They probably offer up only pandas that aren't going to be released - though it would be hard to tell at that age which would be the best candidates. Even the new center would only have room to train three to five pandas. The "panda nurse" program certainly provides funding for the pandas and possibly an economic incentive to run the program. Plus, just like the dress-like-a-panda surrogate program, it really produces some world-class adorable pictures.

Where to See Wildlife in Asia
Where to See Bears (Even though pandas aren't really bears)
<urn:uuid:ad321acd-8fda-4152-afc4-d5c1836d6bb0>
CC-MAIN-2016-26
http://animaltourism.com/news/2010/12/08/china-panda
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00041-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956212
581
2.578125
3
A lever (or lever tumbler) is a lock design that uses flat pieces of metal (also known as levers) and a bolt as locking components. In this article, 'lever lock' does not mean a locking lever handle incorporating a cylinder locking device. In most designs, the position of the levers prevents the bolt from retracting. When positioned properly, a gate in the lever allows the bolt to move (shot or withdrawn). Lever locks are historically one of the most popular lock designs, but use has declined as less expensive pin-tumbler locks have gained popularity. Lever locks are popular in Europe (particularly the UK), eastern Europe, and some parts of South America, as residential and commercial door locks and on safes. Safe-deposit boxes in banks around the world use lever designs heavily.

A single locking tumbler was used on many Roman metal locks, often in association with wards. Many early door locks had no case, with a bolt and locking tumbler mounted on a backplate. From at least the 13th century, some locks had these components mounted in a wood stock without a backplate — this lock design is the Banbury lock (the reason for this name is unknown). These designs did not use fences and gates, but rather a simple pivoted tumbler or lever that the key had to move (typically, lift) out of the way in order for the bolt to move. Security was provided by warding. Other locks had a backplate mounted in a wood stock - the [plate] stock lock.

In 1778, Robert Barron patented [BP1200] the principle of all modern mechanical security locks — the double-acting movable detainer. His patent describes 'the gating or racking to allow a stump on the tumbler to pass through the bolt, or an opening in the tumbler to allow a stump on the bolt to pass through.' These two (of several possible) realisations of the double-acting movable detainer principle are now usually described as 'lever locks'. Barron's was the first lever lock that used a stump and gates. This technique requires each lever be moved to a precise distance (typically, height) at which the stump can pass through the gate. Overlifting or underlifting a lever leaves it blocking the stump - hence "double acting"; older locks' levers only needed to be moved upwards to clear — more than that had no effect, as they had already cleared the obstacle. Barron, and after him his son, and others, used only the arrangement of stumps on the tumblers with gates in the bolt tail. This arrangement would prove in the long run less successful than Barron's other suggestion of a stump on the bolt tail and gates in the tumblers. The realisation Barron used is practically limited to 4 tumblers, and most locks had only 2. The other arrangement allows an unlimited number of levers to be stacked on the same pivot, blocking the same stump. The double-acting movable detainer principle is still in use to this day in lever locks, and also in the pin-tumbler locks of Linus Yale, Jr.

In 1818, Charles and Jeremiah Chubb patented [BP4219] a lock design based on Barron's work. Their version used the placement of the stump on the bolt tail and gates in the levers. These levers have 2 pockets, with the bolt stump moving through the gate in the lever fence (or bar) from one pocket to the other as the bolt moves. This design is commonly associated with the name of Chubb, and is still in use today in many locks. They also added a device called the detector, an extra lever that was triggered by overlifting of the main levers. When triggered, the detector would lock the bolt until it was reset [regulated, in Chubbs' word] with a special key. To make it more convenient to use, the Chubb detector lock was modified slightly in 1824 so that it could be reset by the working key, instead of a separate 'regulating' key. The concept of the 'detector' was that the lock not only responded to the true key, it also recognised a wrong key or picking attempt, and signalled this to the proper keyholder by a change of state. The concept was invented by Ruxton in 1816 [BP4027] but his realisation was not a practical success. The Chubb lock was the first to have a practical detector, combined with lever tumblers. Chubb later added false notches or serrations on the fences of the levers which prematurely bound components if tension were applied when the component was in the incorrect position. This anti-picking idea was originally introduced on Bramah locks from 1817, and also used on Anthony Strutt's lever lock of 1819 — the first to use end-gated levers. It was later included in security pins and many other lock designs.

In 1820, Mallet patented a rotating barrel and curtain that closed off the keyhole when the key was turned and hindered independent movement of picking instruments. This addition helped to prevent decoding. De La Fons would later be granted a patent for this same idea, in 1846. Although not widely used before 1851, the combined barrel-and-curtain is now a commonly used security feature of high-security lever locks - the name usually simply abbreviated to 'curtain'.

Tucker and Reeves
In 1851, a new design surfaced with a bolt that was not rigidly fixed but could shift on one end. Patented by Tucker and Reeves, this design aimed to thwart picking attempts involving pressure on the bolt. The shifting bolt made it harder to feel the gates inside the lock as it shifted. In 1853, the design was refined to include a rotating barrel that prevented movement of the bolt until a key was inserted.

Another form of 'lever' lock was Thomas Parsons' balance lock [BP8350] of 1832. This originally had a plurality of levers pivoted around their midpoint (earlier levers were pivoted at one endpoint) below the bolt tail, each lever having a hook (of differing lengths) at both ends. Spring pressure pressed the hooks at one end into a notch in the bolt tail (locking the bolt against movement). The key steps pushed on the other ends of the levers. The key bit pressed those ends towards the bolt, which had notches for these hooks also. (There are two notches at each end of the bolt tail, for the shot and withdrawn positions of the bolt.) The correct key balanced every lever with neither end hooking into the bolt. Because the balance levers take little strain, they can be thin, so that using 7 was common, and up to 20 in some safe locks. This linear lock enjoyed considerable success in the 19th century. A cylinder locking device version made by CAWI appeared in 1951, using essentially the same idea, differently realised.

Numerous detail variations in the lever mechanism have been invented. Levers may be arranged to slide rather than pivot. The bolt tail may be within the lever stack (typically, in the middle). Levers all on one pivot may be arranged to pivot in opposite directions (typically, alternate levers). Or there may be a plurality of lever stacks, and a plurality of stumps. Such locks are mainly used for high grade safes. Several anti-pressure devices, and other pick-resisting features, have been invented.
There have also appeared several lever cylinder locking devices, of which the Ingersoll Impregnable is notable. It has been made under licence in the USA by Sargent & Greenleaf.

'Simple' should not be equated to 'insecure'. Designs using a double-bitted key with unsprung levers having closed bellies, cheaply made in zinc alloy castings, are widely used on medium-grade safes in Europe. Lever steps on one bit move the levers; the corresponding steps on the other bit stop the levers moving too far. The levers are end-gated (allowing a strongly fixed and well-supported bolt stump), with numerous serrated false notches. Locks of this type are practically impossible to pick tentatively, and inside a safe door are well protected against force. Usually, re-lockers are also connected, to frustrate disrupting the lock by force or explosive. Many lever locks are less demanding of production precision than cylinder locks, and this has increased the popularity of physically robust lever locks in eastern Europe in the past few decades.

Principles of Operation
Although this describes the typical arrangement, several other realisations also occur. A stack of levers is placed in the lock. Every lever must be properly moved (typically raised) by the key to allow the bolt stump fixed to the bolt tail to pass through the gates of the levers, retracting or extending the bolt. Each lever may have a different sized belly, or a different gate position, to provide differs. See also: Detainers

- Levers - The primary locking component of a lever lock. Each lever is a flat piece of metal with a gate which must be moved to the proper position to allow the stump to pass through and retract or extend the bolt. Each lever is normally impelled by a spring, usually fixed to the lever. Some levers use a thinned belly section referred to as "conning" to ensure the lever interfaces with the correct bitting area on the key.
- Stump - The stump is a protrusion usually fixed to the bolt. The stump prevents the bolt from being extended or retracted until the levers are properly positioned. Traditional designs have the stump and levers interconnected (pockets are closed, with the stump sitting inside each lever).
- Washers - Washers are flat (often metal) plates placed between each lever to ensure that each lever is properly raised by each bitting cut. They are not universal, but are common in outdoor-facing lever locks that require a high degree of reliability, especially in harsh conditions. On some cheaper locks they are replaced by stamped bumps which maintain the spacings without increasing the parts count.
- Barrel and Curtain (now combined and usually referred to simply as 'curtain') - This is a component used in the keyhole to help prevent direct access to the levers after the key or pick is rotated in the lock. When the key turns, the curtain blocks the keyhole. The barrel hampers the independent movements of a 2-in-1 pick of the design originally used by A C Hobbs. This protects against casual manipulation of the levers, but does not preclude lockpicking attacks completely.

Lever locks (in common with other locks) are vulnerable to a variety of attacks, depending on their design. Tentative picking is increasingly difficult as the number of levers increases. Many security locks also incorporate features which hamper manipulation, and additionally, warding is also sometimes used to this end (just as in pin tumbler locks).
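The gating principle lends itself to a toy model. The sketch below uses invented gate heights, key bittings, and a zero tolerance to show the double-acting idea (overlifting blocks the bolt stump just as underlifting does); it is an illustration, not a model of any real lock.

```python
# Toy model of the double-acting gating principle. Gate heights, key
# bittings and the zero tolerance are invented for illustration only.
GATE_HEIGHTS = [3, 1, 4, 2, 5]   # lift each lever's gate must reach
TOLERANCE = 0                    # a real lock allows small mechanical slop

def bolt_can_move(bitting):
    """True only if every lever sits exactly at its gate.

    Double-acting: overlifting blocks the bolt stump just as surely
    as underlifting, hence the equality test rather than >=."""
    return (len(bitting) == len(GATE_HEIGHTS) and
            all(abs(cut - gate) <= TOLERANCE
                for cut, gate in zip(bitting, GATE_HEIGHTS)))

print(bolt_can_move([3, 1, 4, 2, 5]))   # True: correct key
print(bolt_can_move([3, 1, 5, 2, 5]))   # False: third lever overlifted
```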
Tools to pick and decode lever locks are not as widely available as their pin-tumbler counterparts, largely because the tools required are more laborious to make, and expensive, and are more likely to be specialized to each lock, unlike pin tumbler and wafer tumbler picks. However, devices do exist and can be effective. Often, more specialised tools are made to suit the size and design of an individual model of lever lock.

In general, well-made lever locks incorporating several pick-resisting features are likely to be physically stronger and more resistant to manipulation than comparable pin tumbler cylinder locks. They are likely to be larger, and typically have slightly larger keys. Lever locks in widespread use tend to have fewer differs than comparable pin tumbler cylinder locks, although trial of keys is hindered by the greater weight of keys needed and the slower rate at which they can be tested. Keys for different models of lever locks have a considerable variety of sizes, further impeding trial of keys.

One- and two-level masterkeying is used for small suites, and has been much used in institutions in the past. Most lever locks are not well-suited to complex large-scale or multi-level masterkeying. Those with detainers are generally better for this than those with H gate levers.

Many security lever locks are well-protected against drilling, so that this attack usually needs more work than most pin tumbler cylinders. Drill points vary from one lock model to another and are not visible externally. Lever locks, especially those with a protective curtain, are highly resistant to severe weather conditions.

PULFORD, Graham (2007). High Security Mechanical Locks: An Encyclopedic Reference. ISBN 0750684372.
<urn:uuid:14ed2577-ece4-4ace-910d-12874c44e5f8>
CC-MAIN-2016-26
http://www.lockwiki.com/index.php/Lever
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00036-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959643
2,601
3.390625
3
Walk-Through Metal Detectors for Personnel (Chapter 3 Metal Detection, Continued)

Instructions for the scannee

The instructions provided to students, employees, and visitors need to be as short and simple as possible. The following example instruction set could be provided to students and employees in the student handbook and should be posted at the entry to the weapon detection area.

- Remove any metal items from your body or pockets and put them in your purse or bookbag.
- Place hats, carried jackets, purses, bookbags, and briefcases on the conveyer belt for the x-ray machine (or on the table to be searched by an officer).
- Stay back from the portal until signaled by the operator to proceed.
- Walk at a moderate pace through the portal, one person at a time, being sure to momentarily place your feet on the footprints at the base of the portal before proceeding.
- If an audible alarm sounds as you go through the portal, follow the directions of the security officer for further scanning or search.

Research Report: The Appropriate and Effective Use of Security Technologies in U.S. Schools
<urn:uuid:a8d7868b-d14f-4622-8629-b2dde06b429f>
CC-MAIN-2016-26
https://www.ncjrs.gov/school/ch3a_7.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00187-ip-10-164-35-72.ec2.internal.warc.gz
en
0.913105
234
2.953125
3
Catholic Activity: July 4: Independence Day

July 4th, Independence Day, is an American national holiday. This holiday gives us time to thank God for the birth of our nation, give thanks for the gift of life, and petition God to preserve our country and rid it of its evils. This is a great day for picnics and spending time with family and friends. Feast Day Cookbook gives many American traditions for this day.

This month holds for Americans the celebration of our glorious Fourth, Independence Day, a great national holiday not connected with the feast of a saint (as is Saint Andrew's Day in Scotland and Saint George's in England), or with a festival of the Church. And yet can it be said that the anniversary of the birth of a nation is ever an entirely secular affair? In this case we do not believe it is so. In man's aspirations for freedom, there is always a spiritual element, and this was especially true in the thinking of the American signers of the Declaration of Independence at Philadelphia on July 4, 1776. On Thanksgiving Day we give thanks to God that He has provided our citizens with food for the body; at this other particularly American celebration we give thanks that He has allowed our spirit to live.

For years the Fourth of July has been marked in every city and town of the United States by patriotic gatherings, parades, and speechmaking in the principal square; the national anthem and other songs are sung (which sound especially well when shrilled by young and untrained voices); and martial airs are played by the local band. But the firecrackers of our childhood are no more, a pity and a blessing too. The slogan of a safe and sane Fourth is now becoming a fixed rule everywhere, and in these days the fireworks are set off at night by competent and careful manipulators. Last Independence Day we attended such a display — one of many thousands throughout the country — and sat on a hilltop watching the fireworks. Around us children chattered and lighted sparklers; when some particularly dazzling skyrocket burst red and blue and white against the night sky, there was clapping from the crowd. Last of all appeared the usual "set piece" — the American flag with Roman candles clustered about it. All stood up as a voice in the crowd began "The Star Spangled Banner"; the singing grew louder and louder as more people joined in. The peaceful evening and the rockets' harmless glare, the voices of free people singing a free song, the knowledge that that freedom had been defended in the past and might have to be defended again on nights far from peaceful and with weapons far from harmless — all produced an emotion that could perhaps be called sentimental. But devotion to the truth that made us free, and alone will keep us free, was still there, right in the midst of the sentiment.

Independence Day food is often of the picnic variety, as is right for a holiday usually spent in the open. But there are traditional dishes originating in George Washington's Virginia. One such is a breakfast specialty, Rice Waffles. Another dish of the day is poached salmon with egg and caper sauce, served with green peas and mashed potatoes. Not only is this the traditional time for serving the first salmon of the season, but we learn that this menu of soft foods was prepared for the Father of our country because of the discomfort caused him by his ill-fitting set of false teeth! And of course the day's dessert everywhere has long been a triangle or a circle of watermelon.
Never, never, we hope, will it become the small new variety just developed, we hear with a sense of shock, with no seeds at all. The color combination surely should all be kept in the true watermelon — the black seeds, with the red, the white, and the green. Further, we hear, the experts are working not only to produce a seedless watermelon, but one with a very thin green rind. When that happens, what will happen to one of the nation's delicacies, the watermelon pickle? Another dessert in favor on the Fourth of July from the very beginning of these United States is the Independence Day Cake. This very properly had its origin in Philadelphia, and every heirloom cookery book has its recipe. Tall and frosted in white, it is surrounded with a wreath of gilded leaves, made in early days of the boxwood so popular in colonial hedges. It is a cake of victory, of snowy purity, its wreath reflecting the gold of the seal of the Declaration, well suited to a day which made this a free land for free men. Activity Source: Feast Day Cookbook by Katherine Burton and Helmut Ripperger, David McKay Company, Inc., New York, 1951
<urn:uuid:2ccf461c-c45e-4ccf-acdb-d96cf2b4d39f>
CC-MAIN-2016-26
http://www.catholicculture.org/culture/liturgicalyear/activities/view.cfm?id=1116
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396147.66/warc/CC-MAIN-20160624154956-00166-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962434
977
3.03125
3
In his days as a Ph.D. student at Penn State, Professor of Energy and Environmental Engineering Semih Eser took to snacking on red fruits he would pluck off of the trees on the east side of the University Park campus. Although the fruits were rumored to be poisonous, Eser had known about Cornelian cherries from a young age, because he grew up with them in Turkey. Not only did he consider the tart fruits to be great for nibbling, but he also developed them into research, making activated carbons out of the cherry stones for purifying water.

"This is nature's wonder... If you can open these up, create porosity on the surface that runs all the way across, then these are very nice granules that you can use for cleaning water," says Eser, who published a paper detailing how carbon can be activated in one step by sending steam through the fruit stone as it is heated. "I was actually making the feedstock while eating the cherries and keeping these and washing them," he says, with a laugh.

A research affiliate of the Penn State Energy Institute, Eser has since continued investigating desirable carbons as well as undesirable carbons. "Carbon is an incredible element," he says. "You can have graphite and diamond — two extremes of the spectrum."

Eser received his bachelor's and master's degrees in chemical engineering at the Middle East Technical University in Ankara, Turkey. He moved to the United States in 1981 and earned his Ph.D. in fuel science from Penn State in 1987. He served as head of the department of energy and environmental engineering for five years.

In the area of undesirable carbons, Eser and his research team are making advancements to address the problem of carbon deposition from jet fuel in engine components. When jet fuels are heated to elevated temperatures and come into contact with the metal surfaces of the engine, a chemical reaction occurs that produces carbon deposits or filamentous carbon. The metal surfaces of the engine components are corroded and long carbon fibers start to form. "It's almost like grass growing within the internal surfaces of a pipe and that's dangerous," he says. "Not only is it providing a physical block, but it acts like a catalyst surface that starts reacting with fuel and then we have a secondary growth on these filaments or fibers." As the fibers of carbon thicken, the jet's fuel lines are at risk of being blocked and the system could potentially fail.

One unique contribution that Eser's research team has made to this area is a technique called temperature-programmed oxidation, which has led them to prove that different kinds of carbons are depositing on the surface. Ultimately, the technique characterizes what kinds of deposits are taking place by looking at how readily carbonaceous deposits will burn and by monitoring the evolution of the carbon dioxide. "What we have also found in our work is that... all jet fuel samples, whether civilian or military, Jet A or JP-8, contain sulfur in various concentrations," he says. "Sulfur in essence acts as an initiator of surface degradation where you have fuel interacting with the surface." Therefore, it's important to find metals or design alloys that do not readily react with sulfur. Eser and his research group have tested and found one superalloy called Inconel 718 to perform very well due to stabilizing elements found in the composition, such as aluminum, titanium, and niobium.

In addition, there are techniques to reduce deposition that relate to the fuels, including stabilizing the fuel with additives, altering the engine conditions so the temperature of the fuel stays lower, and developing different fuel formulas. Coatings or thin films less than a micron thick have also been developed to prevent the fuel from coming into contact with the active surface metals, and Eser says that a provisional invention disclosure for one such coating is in the works.

While some types of carbons are cause for concern, Eser is also looking at desirable carbons and how to maximize the benefits of their applications. One such application is making needle coke to develop graphite electrodes, which serve as conductors in melting down scrap iron and steel. Iron and steel are among the most widely recycled materials in the world, and it is much less expensive to reuse steel and iron than to mine new iron ore. Scraps are commonly melted in an electric arc furnace. But according to Eser, making the electrodes is no easy task. Baking and graphitizing them is an incredibly involved process and can take weeks, sometimes months. "The key research I've been involved in is to make them last as long as possible," he says. "All they're doing there is conducting electricity, but of course since the temperatures are so high, you tend to lose part of it and then you need to replace the electrodes."

To increase the strength and efficiency of the electrodes, Eser is assessing the chemistry of a specific phase in the electrode production process called carbonaceous mesophase. Another obstacle is finding ways to successfully remove sulfur before needle coke is made, since sulfur can damage electrodes as they bake. Two publications have been submitted by Eser and his colleague, G. Wang, to Energy and Fuels and are currently under review. The first focuses on determining the molecular composition and how it affects the formation of the liquid crystalline phase, or carbonaceous mesophase. The second focuses on the effect of removing sulfur on mesophase development during carbonization.

Eventually, Eser says he hopes to "retool his research activity to more effectively address the critical urgency of dramatic increases in the efficiency of fossil fuel use and developing new means of renewable energy conversion." For now, he continues to investigate the desirable and undesirable varieties of this fascinating element — one that plays both hero and villain.
<urn:uuid:b9fbb0f4-adaa-4b90-ab07-5ef8105e4511>
CC-MAIN-2016-26
http://www.energy.psu.edu/print/news/archives/2007/Faculty-Spotlight-Semih-Eser.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393463.1/warc/CC-MAIN-20160624154953-00196-ip-10-164-35-72.ec2.internal.warc.gz
en
0.964906
1,280
2.796875
3
In 1492 Columbus sailed the ocean blue. On Oct. 11, 518 years later, banks are closed and there's no mail. To many, Christopher Columbus Day is nothing more than the cap on a three-day weekend in October. But to the Italian-American community, this holiday has been a 500-year fight to get recognized.

Everyone knows that in 1492 Columbus first felt the soil of the New World, but not nearly as many people realize the history of the holiday since that fateful landing. It is believed that the anniversary (Oct. 12) of Columbus reaching San Salvador in the Bahamas was first formally celebrated by the Society of St. Tammany (also known as the Colombian Order) in New York City in 1792, when they held a dinner in his honor. At that time, New York was the first place to erect a statue of Columbus.

It was about this time that the name of Columbus was becoming heralded throughout America. Many institutions began changing their names out of respect for the explorer. New York's King's College changed its name to Columbia, and the nation's capital was named the District of Columbia.

In 1866, the Italian population of New York organized the first mass celebration. Growing efforts were being made by groups to have Columbus Day declared a national holiday, most notably by the first Catholic fraternal order, the Knights of Columbus, which was organized in 1882. The Knights of Columbus, who have been involved in Jersey City's Columbus Day parade since its beginnings in 1950, is now an international society with 1.5 million members and more than 10,000 councils. Their motto is to uphold the ideals of Columbianism: "charity, unity, fraternity and patriotism."

The quadricentennial of Columbus' arrival did the most to raise awareness of Columbus Day around the country. President Benjamin Harrison issued a proclamation appointing that day as "... a holiday for the people of the United States ... to express honor to the discoverer, and their appreciation of the great achievements of four completed centuries of American life." This anniversary also brought about the building of Columbus Circle at the southwest corner of Central Park in New York, accompanied by a statue.

The Knights of Columbus kept lobbying states to make it a legal holiday, and in 1909 New York was the first state to sign it into law. The first government-supported Columbus Day was celebrated with a massive parade in Columbus Circle, and had replicas of Columbus' ships sailing in New York Harbor. It was that year that New Jersey joined in by legalizing the holiday.

In 1934, President Franklin Roosevelt urged a nationwide observance of Columbus Day, and in 1937, he proclaimed every Oct. 12 as Columbus Day. In 1971, President Nixon declared it a federal public holiday on the second Monday in October. Columbus Day is now observed in all but nine states. In three states it is known as Discovery Day, and in Michigan it is known as Landing Day. Oct. 12 is also celebrated as Columbus Day in some parts of Canada, in Puerto Rico, in Central and South American countries, and in Italy and Spain.

Back in 2010, we are going to make Veal Piccata to celebrate.
½ cup all-purpose flour
2 teaspoons salt
½ teaspoon freshly ground black pepper
4 veal scallops, about 3/4 pound, pounded to a thickness of 1/8 inch
1 ½ tablespoons vegetable oil
5 tablespoons butter
1 cup dry white wine
½ cup chicken stock
1 garlic clove, chopped
1 lemon, juiced, or more to taste (about 2 tablespoons)
2 tablespoons capers, drained
1 tablespoon chopped parsley leaves, optional, plus sprigs for garnish

In a shallow bowl or plate, combine the flour, 1 ½ teaspoons of the salt, and the pepper and stir to combine thoroughly. Quickly dredge the veal scallops in the seasoned flour mixture, shaking to remove any excess flour.

Heat the oil in a large skillet over medium-high heat until very hot but not smoking. Add 1 ½ tablespoons of the butter and, working quickly and in batches if necessary, cook the veal until golden brown on both sides, about 1 minute per side. Transfer to a plate and set aside.

Deglaze the pan with the wine and bring to a boil, scraping to remove any browned bits from the bottom of the pan. When the wine has reduced by half, add the chicken stock, chopped garlic, lemon juice, and capers and cook for 5 minutes, or until the sauce has thickened slightly. Whisk in the remaining ½ teaspoon of salt, the remaining 3 ½ tablespoons of butter, and the chopped parsley. When the butter has melted, return the veal scallops to the pan and cook until heated through and the sauce has thickened, about 1 minute. Garnish with parsley sprigs and serve immediately.
<urn:uuid:ba6eb684-d70a-45dc-9bef-2d2cd28da6b8>
CC-MAIN-2016-26
http://www.irishcentral.com/culture/food-drink/giligans-gourmet-veal-piccata-recipe-to-celebrate-columbus-day-173103151-237754391.html?page=1
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00121-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961509
1,028
3.578125
4
Air Quality Resources

You will need Adobe Reader to view many of these publications.

- Nowak, D.J., D.E. Crane, and J.C. Stevens. 2006. Air Pollution Removal by Urban Trees and Shrubs in the United States. Urban Forestry and Urban Greening 4: 115-123. [PDF] Presents the factors behind pollution removal by trees, estimates the amount of air pollution removal for three Florida cities, and presents strategies for managing urban trees for air quality improvement.
- Escobedo, F. 2007. Urban Forests in Florida: Do They Reduce Air Pollution? FOR 128. School of Forest Resources and Conservation, Florida Cooperative Extension Service, Institute of Food and Agricultural Sciences, University of Florida. [PDF]
<urn:uuid:7ad4fff7-25c4-448e-b4d4-d02f4e4d945c>
CC-MAIN-2016-26
http://www.sfrc.ufl.edu/urbanforestry/Resources/resources_air_quality.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00110-ip-10-164-35-72.ec2.internal.warc.gz
en
0.830228
157
2.625
3
The use of carbon fibre composite materials is spreading wider than its traditional motoring base, to aerospace and beyond. As a result, new, powerful methods of non-destructive inspection (NDI) or non-destructive testing (NDT) are required to ensure that materials have the necessary strength to perform effectively in their expanding roles. Applied Computing & Engineering Ltd (AC&E) is at the forefront of new developments in robot simulation software that ensure composite materials are safe. Carbon fibre composite is an effective high strength, low weight material. It has been used in smaller aircraft for years but never larger passenger planes. The drive to reduce the cost of air travel, meet environmental responsibilities and cope with the rising cost of fuel has led the aircraft industry to demand lighter passenger aircraft with improved performance. Carbon fibre composite is the obvious material of choice, but ensuring the production process eliminates manufacturing defects in the material is a challenge – and not one that can be overcome by conventional techniques. The porosity problem The process of manufacturing carbon fibre composite (baking many layers of carbon fibre coated tape in an autoclave) hardens the material, but can lead to gaps or bubbles opening up between the tape layers leaving the structure vulnerable. Such imperfections determine the porosity of the material and can affect mechanical performance, which is why porosity values must typically be lower than 2.5%. Spotting bubbles and cracks in a black, opaque material, however, isn’t easy. Scanning at the limits Ultrasound scanning, of the sort used in pre-natal care, is traditionally used to determine porosity in carbon fibre materials. Yet the size and shape of, for example, an aircraft panel demands new capabilities of the scanning software. Additionally, ultrasound scanning requires the use of water as a sound conducting medium meaning inspection of the part has to be fast, but at a suitably high resolution to pick up any flaws or Manually operated devices can’t deliver the required speed and accuracy, and the traditional choice of a Cartesian axes machine no longer offers the accuracy required for scanning more complex shapes like engine nacelles and structural stiffeners. Computer simulation based off-line programming (OLP) methods for robots have been available for some time but early OLP techniques were developed predominantly for the automotive industry and are not suitable for programming NDI robots. Simulation software specialist AC&E has a reputation for delivering more from scanning robots. Its experts are enhancing robot off-line programming software and the results it delivers, in order to make NDI faster and more accurate than has previously been possible. These developments are a natural progression for a company long experienced in tailoring its simulation software to clients’ robot systems. From its base at Sci-Tech Daresbury in North West England, AC&E works with leaders in robot manufacturing such as Fanuc, Motoman, Kuka, ABB and Natchi, to apply its technology to organisations across Europe. Nissan uses AC&E software in its painting and spot welding and AC&E has been helping EADS develop an NDI robot programming methodology for its factories in France Now its new software is helping manufacturers working with carbon fibre composite achieve the greater scanning accuracy they need. 
New software, new standards

To create simulation software suitable for passenger aircraft, AC&E knew the resolution of the scan would need to increase dramatically. At the same time, the speed of the scan would need to increase to meet throughput requirements. And since the scanned components are irregular shapes, AC&E knew that advanced collision anticipation and avoidance also had to be a key part of the software. The solution was to avoid contact with the subject of the scan.

AC&E Technical Director Yash Khandhia explains: "Typically, in robot programming, you 'teach' the robot to operate in simple planes. Where there is an obstacle there can be a collision, causing costly damage to both robot and component. In contrast, AC&E's software automatically programmes the robot to scan the structure without making contact with it. This allows complex structures, as well as flat panels, to be scanned without risk."

This new generation of automated scanning software does more than avoid collisions. It achieves the faster scan times at greater resolutions that the new carbon fibre composite applications demand. Yash Khandhia: "AC&E encounters different NDI requirements and procedures depending on who we are working with. Currently the basic minimum sizes of scanning for defects in composites are equivalent to 6mm x 6mm or a flat-bottom hole with a 6mm diameter. However, we are beginning to see projects where a 3mm diameter test is required. When this happens, the number of points to be checked in areas such as aircraft wings will number in the hundreds of thousands. Our customized software automatically programmes a robot for this level of inspection in a way that is unique in the industry."

Aerospace and beyond

AC&E expects its new software system to find applications far beyond its current uses in automotive and aerospace manufacturing. "We expect this form of NDI software will appeal to the shipbuilding industry, particularly pleasure boats," says Yash Khandhia. "We also expect it to play a pioneering role in defence-related manufacturing, for example in the construction of unmanned drones, which are extensively carbon fibre composite."

Faster. Deeper. Smarter

What began as a challenge to meet the evolving requirements of manufacturing became something far more involved. AC&E has applied its expertise in NDI programming to ensure the carbon composite materials used in tomorrow's aircraft, boats and defence projects have the strength to perform safely and effectively. In doing so, they have transformed NDI with software that scans faster, without collisions, at higher resolutions than ever before.
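As a footnote to Khandhia's numbers, the rough sketch below shows why shrinking the detectable defect size drives inspection point counts so quickly. This is not AC&E's algorithm, just an illustration: the one-measurement-per-defect-sized-cell grid rule and the 5 m² panel area are assumptions chosen only to show the scaling.

```python
# Rough illustration (not AC&E's software): how the detectable defect size
# drives the number of ultrasound inspection points on a composite panel.
# Assumes one measurement per defect-sized grid cell; the 5 m^2 panel area
# is a made-up figure chosen only to demonstrate the scaling.

def inspection_points(area_m2: float, defect_mm: float) -> int:
    points_per_metre = 1000.0 / defect_mm  # assumed grid spacing = defect size
    return round(area_m2 * points_per_metre ** 2)

for defect_mm in (6.0, 3.0):
    n = inspection_points(5.0, defect_mm)
    print(f"{defect_mm:.0f} mm defect resolution -> ~{n:,} points")

# Halving the defect size quadruples the point count, which is why a 3 mm
# requirement pushes wing-sized scans into the hundreds of thousands of points.
```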
In this week's "Action Comics" #14, world-renowned astrophysicist Neil deGrasse Tyson pinpointed Krypton, Superman's home planet, within the universe. And we're not just talking about the fictional DCU: we're talking the actual known universe.

The red dwarf star identified as able to support a Krypton-like planet is located in the constellation Corvus, 27.1 light years from Earth. The star, designated LHS 2520, possesses a red, highly turbulent surface, somewhat cooler and smaller than the Sun. For amateur astronomers, the exact coordinates are:

Right Ascension: 12 hours 10 minutes 5.77 seconds
Declination: -15 degrees 4 minutes 17.9 seconds
Proper Motion: 0.76 arcseconds per year, along 172.94 degrees from due north

How's that for accuracy? Well, thank Dr. Tyson and Sholly Fisch, the writer of the latest "Action" co-feature "Star Light, Star Bright."

Dr. Tyson has a well-documented history of popularizing science, making the subject accessible and exciting to the public. The Director of the Hayden Planetarium at the American Museum of Natural History, where "Star Light, Star Bright" takes place, Dr. Tyson recently made headlines by convincing film director James Cameron to alter the night sky as seen in "Titanic" due to astronomical inaccuracies. The adjustment was made for this year's 3-D re-release of the all-time highest-grossing film.

Fisch, President and Founder of MediaKidz Research & Consulting, a consulting firm that provides educational content development, hands-on testing and writing for children's media, was hand-picked by Grant Morrison to write the co-features in "Action Comics" after his run on "The All-New Batman: The Brave and the Bold." Prior to founding MediaKidz in 2001, Dr. Fisch was Vice President for Program Research at Sesame Workshop, where he oversaw curriculum development, formative research, and summative research for a broad range of multimedia endeavors.

Fisch and "Action Comics" editor Wil Moss collaborated with Dr. Tyson on the creation of the story, illustrated by Chris Sprouse and Karl Story, and the real-world science which made it all happen. To celebrate this momentous discovery, Fisch spoke with CBR News about the project's secret origin and how the validity of science, when possible, is crucial in telling superhero stories to readers young and old.

CBR News: You've done it. You've found Krypton. How did you enlist the services of Dr. Tyson to assist in your pursuit?

Sholly Fisch: It started with the story idea, which is Superman making trips to an observatory. Over the course of the story, we learn that part of it -- and this is only a minor spoiler -- is that he is looking for Krypton. In talking about it with Wil Moss, my editor, he suggested that maybe we should approach Dr. Tyson because he's very much into the popularization of science, as am I, because I've done a lot of that stuff, too. We got in touch with him to basically fill out some of the background and make things a little more accurate. And also, to see if he would be interested in actually appearing in the story.

We spoke to him on the phone, and not only was he interested, but through the course of the conversation he said, "Well, you know, if you'd like, we could probably find you some stars that would be in about the right place and that meet the specifications. Would you want us to do that?" And Wil and I, not being idiots, said "Sure."
And he went off, and within a week or two he came back with a list of five or six red stars that were about the right distance from Earth and the right size, and basically said, take your pick. We went through and decided which would be the best fit for Superman, and the rest is soon to be history.

You mentioned that you also have a long history of popularizing science for young children, working with DC Comics, Sesame Workshop or your own company, MediaKidz. How important is it to be true to real science, whenever possible, when writing comics and/or fiction in general?

The bottom-line answer is that I do think it's important to try and be as true to life as you can, regardless. On one level, from the standpoint of just telling the stories, and telling the stories well, if you can have a solid grounding in the real world, then it makes the fantastic stuff pop all the more. It's one thing when you have a hero that can move mountains with his mind and it's set in a world where anything can happen, as opposed to having a hero that can do that in a world that follows pretty much the same rules that the real world does. It becomes much more striking in that case. And it helps with the suspension of disbelief. If you are in a world where anything can happen at any time, then if something happens, you go, "Oh. Okay." But if it's in a world that you know is following some rules, then it has that much more impact.

From the other side, as you said, I've spent an awful lot of years trying to help kids learn all sorts of things through media. And I've worked on a lot of science stuff. At the moment, actually, one of my many jobs is as an education consultant on the PBS "Cat in the Hat" show, which is a science show for preschoolers. I feel strongly -- and I know the same thing holds true for Dr. Tyson, which is why he got involved with this in the first place -- that anything we can do to help bring a little bit of science into people's lives, and do it in a way that makes them care about it, is just good for everybody. It's good for education and it's good for stimulating people's interest to try and find out more about this stuff. It's good all around.

Is this like President Obama helping Spider-Man capture the Chameleon, or does Dr. Tyson's role in "Star Light, Star Bright" fit within ongoing continuity?

No, this is very much in continuity. The Justice League is in the story. Superman is in the story. But now we've introduced the cast and crew of the Hayden Planetarium as another little piece of the DC Universe. Again, trying to balance what I should say and what I shouldn't say, I guess the best way to capture it is to say this is one of those things that's been happening between the panels for a long time. It's established in the story that Superman has been coming to the Hayden Planetarium roughly once a year. It's essentially when all of the conditions are right -- all the orbital patterns and all that -- to be able to get a straight shot between Earth and Krypton. This is something that he's been doing for a while, and it's just never been mentioned before because it's happening between the adventures. Now, we're getting a little bit more of a peek into that side of his life. There is also some character development and some emotional hook. And at the same time, we're inserting the real-life science, so that anyone who wants to pull out their telescope and look at the right corner of the sky can certainly do it.
You probably won't be able to look 27 light years from here and actually make out Krypton's sun or anything like that, unless you have an incredible telescope, but you can do it. It provides a little bit more insight into Superman, and it's a little bit more of a link to the real world.

You mentioned these visits have been going on for years. Now that Dr. Tyson and Superman have found Krypton, will the storyline continue in future issues?

I would say, read the story! [Laughs] That will help to explain why you will or won't be seeing more of this. I can also say that it won't be popping up again in the immediate future. What long-term implications there might be of the relationship and of the things established in the story... we can leave open for now.
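For readers who do want to pull out a telescope, the coordinates given above are easy to plug into astronomy software. Here is a minimal sketch using the astropy library; treating the published position as an ICRS coordinate and ignoring proper motion (the article gives no reference epoch) are my simplifying assumptions.

```python
# Minimal sketch: represent LHS 2520's position, as published in the article,
# with astropy. Assumes the coordinates are ICRS; proper motion is ignored
# here because no reference epoch is stated.
from astropy.coordinates import SkyCoord
import astropy.units as u

lhs_2520 = SkyCoord(ra="12h10m05.77s", dec="-15d04m17.9s",
                    distance=27.1 * u.lyr, frame="icrs")

print(lhs_2520.to_string("hmsdms"))  # RA/Dec in sexagesimal form
print(lhs_2520.galactic)             # the same position in galactic coordinates
```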
FORT MERRILL. Fort Merrill, located on the right bank of the Nueces River where the Corpus Christi to San Antonio road crossed the river, fifty miles above its mouth, was founded on March 1, 1850, by Capt. Samuel M. Plummer and companies H and K of the First United States Infantry. Lumber and logs used in the construction of the fort were shipped in from New Orleans, and the soldiers of the garrison erected the buildings. The fort probably was named in honor of Capt. Moses E. Merrill, who was killed in the Mexican War battle of Molino del Rey on September 8, 1847.

Companies I and E of the Rifle Regiment were the regular garrison until April 26, 1853, when they were transferred to Fort Ewell, leaving only two noncommissioned officers and thirteen men at Fort Merrill. After 1853 the fort was garrisoned only intermittently. When W. G. Freeman inspected it on June 21, 1853, Lt. Alexander McRae was in command, but the garrison was so small that it could do no more than night sentinel duty. The fort was abandoned on December 1, 1855. Fort Merrill is off U.S. Highway 281, three miles northwest of Dinero in Live Oak County.

Handbook of Texas Online, Thomas W. Cutrer, "Fort Merrill." Published by the Texas State Historical Association.
Scientific Name: Nanger dama
Species Authority: (Pallas, 1766)
Synonym: Gazella dama (Pallas, 1766)

Taxonomic Notes: Usually included in the genus Gazella, the Dama Gazelle is here included in the genus Nanger, along with Soemmerring's Gazelle N. soemmerringi and Grant's Gazelle N. granti, following Groves (2000, in press) and Grubb (2005). Cano (1984) recognized three subspecies (provisionally retained by Scholte, in press).

Red List Category & Criteria: Critically Endangered A2cd; C2a(i) ver 3.1
Assessor(s): Newby, J., Wacher, T., Lamarque, F., Cuzin, F. & de Smet, K.
Reviewer(s): Mallon, D.P. (Antelope Red List Authority) & Hoffmann, M. (Global Mammal Assessment)

Justification: The sustained decline due to uncontrolled hunting and habitat loss has continued and is now estimated to have exceeded 80% over 10 years. Extensive field surveys have been made since 2001, but all subpopulations encountered are very small, all are at risk from unmanaged large-scale hunting, and the total population certainly numbers well under 500 individuals. Decline is expected to continue, based on ongoing hunting and the unpredictable arrival of large hunting parties with high destructive potential from the Gulf states. The Dama Gazelle is following the same trail into extinction in the wild as the Scimitar-horned Oryx.

Range Description: Formerly widespread in the Sahara and Sahel zones, but its range and numbers have been extremely reduced. In North Africa, Dama Gazelle are now probably extinct, although they may survive in the Drâa, where observations were made by nomads in 1993 (Cuzin 1996; Aulagnier et al. 2001). It is also possible, though increasingly unlikely, that they survive in very small numbers along the border between southern Morocco and Mauritania (Cuzin et al. in press). They may also survive in the Tassili de Tin Rehror in southern Algeria (K. De Smet pers. comm.). In Tunisia, they are believed to have occurred in the south and to have disappeared before the 20th century (Smith et al. 2001). South of the Sahara, Dama Gazelle are still present in eastern Mali, Aïr and Termit/Tin Toumma in Niger, and in the Chadian Manga and Ouadi Rimé Ouadi Achim Nature Reserve in Chad (Scholte in press, and references therein); however, aerial and ground surveys of Termit/Tin Toumma in 2007 failed to record any Dama Gazelles (Wacher et al. 2007). They are now thought to be extinct in Mauritania, and are probably extinct in Nigeria, Burkina Faso, and Libya (see Scholte in press for summary, and references therein). There are no recent confirmed records from the Sudan, although East (1999) mentioned it could still occur at low densities in Northern Darfur and Northern Kordofan.

Native: Chad; Mali; Niger
Regionally extinct: Libya; Mauritania; Morocco; Nigeria; Tunisia

Population: Numbers of Dama Gazelle have declined drastically since the 1950s and 1960s. The early 1970s population in the Ouadi Rimé - Ouadi Achim Faunal Reserve in Chad, one of the former strongholds of the species, was estimated at 10,000-12,000 individuals, but today the species is very rare in this reserve (J. Newby, in Scholte in press). Known remnant populations are all very small and extremely fragmented; the only known populations of any size are in Manga (Chad), eastern Aïr (Niger), and the Mali/Niger border area. In all areas surveyed, numbers have been very low and the size of observed gazelle groups very small (range = 1-5 individuals) (Lamarque et al. 2007). Subpopulations probably number around 20 individuals in all cases and are separated by hundreds of kilometers, and the total current wild population is certainly less than 500 individuals (J. Newby pers. comm.).

Current Population Trend: Decreasing

Habitat and Ecology: Inhabits Sahelian grasslands, sparsely wooded savanna and sub-desert steppes with Acacia and Panicum vegetation; usually avoids truly sandy areas, but will frequent low mountains and mountain plateaus, probably as refugia. In southern Morocco, it was found in areas without any Acacia, but with dense shrub cover (Cuzin 2003).

Major Threat(s): The main threats to this species are uncontrolled hunting (by nomads, the military, and Arab hunting parties) and habitat loss and degradation due to overgrazing by domestic livestock (including the impact of expanded livestock rearing following well construction in preferred habitats). Prolonged drought is also having an impact on pasture quality (Lafontaine et al. 2005; Scholte in press).

Conservation: Listed on CMS Appendix I, and included in the CMS Sahelo-Saharan Antelopes Action Plan (Lafontaine et al. 2005). It is listed on CITES Appendix I. The Réserve partielle de faune du Bahr-el-Ghazal (Chad), west of the present Ouadi Rimé Ouadi Achim N.R., and the Aïr-Ténéré N.P. harbour the remaining viable Dama Gazelle populations. Both reserves have suffered from military unrest resulting in the collapse of conservation infrastructure (Scholte in press; K. de Smet pers. comm. 2007). Dama Gazelle are present in captivity, but the number of founders is limited (Sausman 1998; Thuesen 1998). Animals from the Almeria breeding facility in Spain were introduced to an enclosure (R'mila Royal Reserve) in Morocco (130 present in 2007; Cuzin et al. in press), and gazelles from München Zoo (originally bred at Almeria) were released into an enclosure in Souss-Massa N.P. (12 animals in 2006); these semi-captives are intended to form part of a reintroduction programme in Morocco. All of the animals from Almeria stock originate from Western Sahara. Elsewhere, Dama Gazelle were released into the 2,000-ha Bou-Hedma N.P. in Tunisia in the early 1990s (Abaigar et al. 1997), where around 17 were present in 2006 (T. Wacher pers. comm.); gazelles have also been reintroduced to Guembeul Faunal Reserve in Senegal (Cano et al. 1993), and a reintroduction programme in Ferlo North Reserve is underway (7 animals).

Citation: Newby, J., Wacher, T., Lamarque, F., Cuzin, F. & de Smet, K. 2008. Nanger dama. The IUCN Red List of Threatened Species 2008: e.T8968A12941085. Downloaded on 30 June 2016.
Johns Hopkins Researchers Slow Progression of Huntington's Disease in Mouse Models - 12/18/2011

Working with genetically engineered mice, Johns Hopkins researchers have discovered that a gene (SIRT1) linked to slowing the aging process in cells also appears to dramatically delay the onset of Huntington's disease (HD) and slow the progression of the relentless neurodegenerative disorder.

HD in humans is a rare, fatal disorder caused by a mutation in a single gene and marked by progressive brain damage. Symptoms, which typically first appear in midlife, include jerky, twitch-like movements, coordination troubles, psychiatric disorders and dementia. Although the gene responsible for HD was identified in 1993, much is still unknown about the biology of the disease. There is no cure, and there are no effective treatments.

In studying two separate mouse models of HD, the Johns Hopkins team found that mice bred with Huntington's disease and a greater-than-usual amount of the enzyme whose blueprint is carried by the SIRT1 gene had improved motor function and reduced brain atrophy. Other studies have suggested SIRT1 has anti-aging and anti-inflammatory properties that scientists are only beginning to understand.

"Our research opens new avenues in the fight against HD, suggesting that if we target SIRT1, we may be able to find drugs that offer help to patients for whom we currently have really nothing that works," says Wenzhen Duan, M.D., Ph.D., an associate professor of psychiatry and behavioral sciences at the Johns Hopkins University School of Medicine. A report on the findings by Duan and her international team will be published online in Nature Medicine.

In previous work with HD mice, Duan and her colleagues found that calorie restriction (reducing calories by about 30 percent through alternate-day feeding) slowed the disease progression and extended lifespan. SIRT1 activity was associated with the increased longevity, owing to its ability to reduce hyperglycemia and improve glucose tolerance while mitigating metabolic problems in the animals.

That experience with SIRT1 and HD mice led Duan to look more closely at the possible connection between the enzyme and the mutation in the huntingtin gene (HTT), which causes HD. The mutation results in the production of an abnormal and toxic version of the huntingtin protein. Although HTT is expressed all over the body, the disease does its characteristic damage in the part of the brain that controls movement, most notably in the medium spiny neurons. Duan and her colleagues have determined that SIRT1 preserves the function of these medium spiny neurons, and that extra SIRT1 seems to prevent a decline in levels of brain-derived neurotrophic factor, or BDNF, which acts as nutrition for brain cells. People with HD tend to have low levels of BDNF.

People with a family history of HD can be tested for the gene that causes it long before the onset of symptoms, but many choose not to be tested, Duan says, because nothing can be done to prevent or treat the symptoms. The research was supported by the Hereditary Disease Foundation, CHDI, the National Institutes of Health, and the National Institute on Aging Intramural Research Program.
Other Johns Hopkins researchers involved in the study include Mali Jiang, M.D., Ph.D.; Jiawei Wang, M.D.; Jinrong Fu, Ph.D.; Lan Xiang, Ph.D.; Qi Peng; Zhipeng Hou; Nicolas Arbez, Ph.D.; Shanshan Zhu, Ph.D.; Katherine Sommers; Jennifer Qian; Jiangyang Zhang, Ph.D.; Susumu Mori, Ph.D.; Kellie L.K. Tamashiro, Ph.D.; Susan Aja, Ph.D.; Timothy H. Moran, Ph.D.; and Christopher A. Ross, M.D., Ph.D.

Media Contact: Stephanie Desmon
Blackbeard was the most notorious and feared pirate in the history of piracy, and his reign of terror throughout the colonies and the Caribbean is the stuff of legend. Standing well over six feet tall and built like a damned tree, Blackbeard was a fierce and determined pirate who was more than capable of beating the living shit out of anyone who pissed him off, and pretty much anyone else who was stupid enough to get in his way.

Little is known about Blackbeard's past. It is believed that his real name was Edward Teach and that he was born in Bristol, England around 1680, but most of that is merely speculation. What is known about him is that he got his start serving as a British privateer under the command of the pirate Benjamin Hornigold, battling the French in Queen Anne's War (also known as the War of Spanish Succession). During this time he made a reputation for himself as being super awesome.

While scouring the Caribbean waters looking for asses to kick, Hornigold's ships came upon a 300-ton French slave ship, Le Concorde. Being the awesome pirates that they were, Hornigold's crew stormed the ship and after a brief battle managed to take control of it. Hornigold was totally pumped about taking command of such a huge ship, but thought that it was probably in his best interest to take advantage of the pirate amnesty that European countries were now offering so he wouldn't get hanged from the neck like a chump, so he retired from piracy and left Le Concorde under the command of Blackbeard, who was of course the biggest badass in his crew.

Blackbeard decided that the 300-ton ship was not as totally X-treme as it could be, so he outfitted it with forty cannon, recruited a crew of three hundred badass toothless, hook-handed, peg-legged, face-punching pirates to run it, and renamed it Queen Anne's Revenge, which was a way more awesome name than Le Concorde. Eventually Blackbeard managed to add three more shallow-bottomed sloops to his fleet, and he was ready to sail the seas and cut some throats.

Blackbeard and his crew earned notoriety by plundering any vessels that they came across while sailing the seas of the Caribbean. If he raised his jolly roger and the other ship had the good sense to surrender without a fight, Blackbeard would just sack the ship and let everyone go free. However, if they were dumb enough to fire a broadside at him, Blackbeard would raid the ship, loot it, sink it and kill everyone on board. In 1717 he became famous among pirate circles for defeating the British 30-gun man-of-war H.M.S. Scarborough in a naval duel, sinking the H.M.S. Great Allen and capturing the British vessel Adventure to serve in his pirate fleet.

Blackbeard's badassitude extended far beyond just his pirate profession. As I mentioned before, he was freaking huge, plus he had a huge-ass black beard that he took his pirate name from. Whenever he would go into battle he would place slow-burning hemp ropes under his hat and woven into his beard, and would light the ends on fire so that he looked like an insano-bot madman whose head was on fire. He was heavily armed, carrying six fully-loaded pistols on three bandoliers across his chest, several knives at various locations, and his huge-ass cutlass, which was enough to bust heads on its own. Basically, he ruled.

He was also quite the ladies' man. Over the course of his tenure as pirate captain he married over fourteen different women throughout various locations in the Caribbean and fathered forty children.
It is believed that his only "official" wedding was to 16-year-old Mary Ormond, since that wedding was conducted by the Governor of North Carolina and didn't take place on the deck of the Queen Anne's Revenge like the previous thirteen had. He was fiercely loyal to his wives, though, and did not take well to being dissed. When one of his wives divorced him and gave a ring symbolizing her love to some punkass sailor bitch, Blackbeard hunted his vessel down, sacked it, cut off the guy's hand (with ring still attached) and mailed it to his ex-wife in a box.

Blackbeard was also freaking crazy. When one of his men doubted his meanness, Blackbeard shot his first mate just to prove how badass he was. One time he took his entire crew below deck and lit a huge brimstone fire to see who could take being cooped up with all the smoke the longest. Blackbeard won.

Blackbeard was pretty much allowed to do whatever he wanted, since he would give the Governor of North Carolina, Charles Eden, a portion of his treasure in exchange for amnesty. When British ships-of-the-line would seek him out, he would use his shallow-bottomed ships to retreat to small coves where large-drafted warships couldn't follow. However, after Blackbeard and his crew blockaded and sacked Charleston, the largest port city in the Southern colonies, people finally started to have enough of his bullshit. The Governor of Virginia put a bounty on his head and contracted British Lieutenant Robert M. Maynard to go and kick some pirate ass.

Maynard caught up with Blackbeard in the small cove of Ocracoke, or "Teach's Hole," on November 22, 1718. Blackbeard was on board his sloop Adventure when two British sloops sailed in towards him. Blackbeard got pissed and waited until the enemy ships were almost on top of him before firing a broadside right into their faces. Both of the British sloops were heavily damaged, and Blackbeard decided to take the advantage. He boarded the H.M.S. Ranger but quickly realized that he had been lured into a trap as limeys started jumping out from everywhere and attacking his men. Blackbeard met Lt. Maynard on the deck of the Ranger and they engaged in an extended sword duel. During the battle, some chump Brit slashed Blackbeard in the neck, but that didn't even faze him. He just kept fighting it out and finally died from loss of blood as he was pulling the hammer back on one of his pistols. Later examinations of his body revealed that he had five bullets lodged in his body and that he had been stabbed twenty times before he finally went down. Lt. Maynard decapitated the dead Blackbeard and put his head on the bow of his boat, which is pretty cool I guess.

To this day, few pirates have ever come close to the reputation of Blackbeard. He was the best of the best: the most badass guy in the most badass profession this side of ninjas, vikings or space marines.
The Testing Center has gathered some useful resources for students taking the math and English (COMPASS) placement tests. It is important to prepare for the tests: reviewing material you already know can help you place into a higher math or English class, saving you time and money. Over your college career, that could mean a savings of several quarters of study and several thousand dollars!

The links at left are a brief selection of possible study guides intended to provide help. Take some time to look at different sources and find what works for you.

There is not an official study guide for the ESL test. You can prepare for the test by reading, writing, speaking, and listening to English as often as you can before you come to test. If you have just arrived from another country, you may want to spend some time hearing and using English on a daily basis before coming to test. Practice reading in English, using idioms, grammar exercises, English crossword puzzles and more.

GED preparation books are available in most bookstores and libraries. Books by the Steck-Vaughn Company are widely used. North also provides GED preparation classes; contact the ABE/ESL advisor for information. Online preparation help is also available at various websites: www.GEDonline.org/ and www.gedpractice.com/
Ask an Expert: The Causes of Depression

- What are the differences between depression and "the blues"? [1:32]
- What causes depression? [0:54]
- Why is the age of onset (the first depressive episode) dropping? [1:09]
- Why do women have higher rates of depression? [1:03]

Learn more about the causes of depression:

Why is the age of onset (the first depressive episode) dropping?

DR. CHARLES NEMEROFF: So, whether or not the rate of depression is increasing might be a matter of debate. What isn't a matter of debate is that the age of onset of depression is clearly dropping: thirty years ago, the average person had their first depressive episode in their late forties or early fifties, while the average age of onset right now in the United States is 24 years of age. And you know what? We don't know why that is. There are a lot of theories about it. They relate to increased levels of stress, the sort of 24/7 society we live in, more concerns about terrorism, more concerns about global warming, and more concerns about just the family structure, concerns that the nuclear family isn't really there to provide support. We know that social support is a tremendous buffer against these stressors. So it's probably a combination of factors.
Husky Stadium Has Long History Beyond Football

Early September means college football. And down along Montlake Boulevard, the University of Washington Huskies are getting ready to play in their remodeled and expanded stadium. Though most of the structure is new, there's been a stadium on this same spot since 1920. And in nearly a century, it's played host to a lot more than football games.

In 1923, President Warren G. Harding gave a speech at the stadium about the future of Alaska. The speech would barely be remembered, if at all, other than for the fact that it was Harding's last public address. The President was seriously ill, and he died in San Francisco a few days later.

Twenty years after that, FDR was president and the nation was at war. Civilians on the home front all over the US were pressed into service, to be ready for an enemy invasion. In Seattle, local civil defense authorities created a remarkably elaborate simulated attack, using the Husky gridiron as the stage. The public was invited to come and watch. Bombers flew overhead, mock buildings burst into flame, and make-believe chemical weapons rained down on pretend victims played by students. Medics and firefighters rushed to the aid of people and property, with fire trucks and ambulances, and an army of volunteers, pitching in to help.

It would be decades before anything quite so dramatic would take place at Husky Stadium again. But one reliable source for edge-of-the-seat suspense has always been the annual matchup with cross-state rivals the Washington State Cougars. In 1962, the game was officially renamed "The Apple Cup," and it was played that year in Spokane. A year later, the Apple Cup came to Seattle. It was the weekend before Thanksgiving, and the game was set for Saturday. And then the news came: President Kennedy had been assassinated. In the wake of JFK's death that Friday in Dallas, the 1963 Apple Cup was postponed. A week later, the Huskies beat the Cougs at Montlake, 16-0.

Husky Football has been something of a rollercoaster throughout the history of the stadium, from the highs of Rose Bowl appearances and national championships to the lows of losing streaks and off-field troubles for coaches and players. But Husky Stadium also created a little history of its own, though it was in the stands and not on the field. During a game against Stanford on Halloween 1981, many credit former Husky Yell King Robb Weller with the invention of the loud and synchronized cheer known as "The Wave." It wasn't long before the Montlake-born Wave reached the shores and sports fans of every continent.

And back home, Husky fans have kept cheering, even through the tough times, and kept coming back to Husky Stadium for football, soccer, and track and field. In the mid-1980s, the University of Washington decided to capitalize on the demand and add more seating capacity with a major addition on the north side of the field. The bigger capacity was a boon to ticket revenue, and also helped Seattle land the 1990 Goodwill Games. This Cold War-era, Olympics-like event brought athletes from around the world and thousands of spectators to the Pacific Northwest. Husky Stadium hosted the opening ceremonies, and Ronald Reagan was the keynote speaker. And though the former President and actor got a lot of applause that day, the crowd did not break out into a spontaneous "Wave."
James Petersson uses customized amino acids to track the movements of proteins.

April 28, 2011

Ever wondered what exactly is going on inside a cooking egg to change it from its clear, goopy consistency to an edible white? As with the majority of cellular activity, a protein is front and center. In this case, however, the protein is actually behaving erroneously, misfolding, in order to go through its metamorphosis.

"DNA is the permanent record for everything in our body. Proteins, made up of amino acids, are doing the real work," says James Petersson, Assistant Professor of Chemistry. "You can think of these amino acids as beads on a string. These beads have properties that cause them to interact in certain ways, and therefore fold into specific shapes. It is the protein's shape which ultimately defines its function."

While pursuing his doctorate, Petersson studied the proteins that are responsible for the communication between neurons. After earning his Ph.D., he continued along this research path with synthetic amino acids, testing whether proteins would still hold up if their natural backbone had been almost completely replaced by man-made materials. "Following my doctoral and post-doctoral research, the Penn Chemistry department was a natural choice given its renown in organic synthesis. It was also important to me to have a research hospital nearby that was very involved in biophysics and other potential applications of the work we do."

When Petersson came to Penn, he decided to head in a new direction with the exploration of not just synthesized protein building blocks, but entire synthesized proteins. Petersson's lab has developed a unique way of tracking protein movement, one that involves a subtle change to an amino acid. All amino acids are connected by an amide bond. Petersson's lab takes this bond and substitutes sulfur in for oxygen, a single-atom substitution that is a very subtle change to the protein overall. These modified amide bonds, called thioamides, don't emit light, but they can quench fluorescence from other probes and thus be used to chart the movement of the protein. The key, Petersson says, is to cram as many probes into as many locations as possible.

"In motion capture for CGI movies, you have a guy in a suit with a bunch of labels, and the camera tracks those and tries to reconstruct movement. This is essentially what we want to do with proteins: to be able to track the labels and reconstruct the motions. Say you have a small protein with multiple helices; if you place probes in certain ones and track the distance between them using thioamides, it reveals how they're moving and communicating. And just like with motion capture, the more labels you have, the better the tracking."

Protein tracking has very real implications, one of which involves the synthesizing of misfolding proteins in diseases such as Alzheimer's. Amyloid-β, for instance, is normally a monomer, but with Alzheimer's it starts to cluster, creating fibers that stack. The fibers then form long strands, choking neurons off.
By introducing the thioamide and fluorophore into a synthesized Alzheimer's protein, Petersson's lab is tracking the toxic misfolding of these proteins. "We see similar misfolding in prion diseases like mad cow. Once one misfold occurs, it propagates other misfolds -- a domino effect, if you will. Using our methods, it's possible we can recreate models of these diseases as well, in hopes of gaining insight into how the proteins behave."

Petersson says the future is wide open when it comes to the use of protein-tracking techniques. Eventually, labs like his may be able to use similar fluorescence-finding methods to set off reactions in antibody samples that could help diagnose diseases like HIV/AIDS. "The mystery surrounding the folding of amino acids into specific proteins is a hugely important question in biochemistry, one our lab, along with many others around the world, is working every day to solve."
Despite a more health-conscious public, the obesity epidemic is still in full swing throughout the U.S., and young Hispanics may be at a higher risk of both obesity and the health problems that come along with it, according to a new study from the American Heart Association. It's the first-ever large-scale study on body mass index (BMI) and heart disease risk factors among the U.S. Hispanic adult population.

Published in the Journal of the American Heart Association, the study found that among young Hispanics, 18 percent of women and 12 percent of men had BMIs over 35, indicating that they were severely obese (a BMI of 30 and up signifies obesity). What's concerning about all of this is that the average age of the women involved was 41, while it was 40 for men.

The study involved more than 16,300 Hispanics from diverse backgrounds who lived in the Bronx, N.Y., Chicago, Miami, or San Diego. The researchers looked at all the risk factors associated with obesity and its related diseases, including BMI, cholesterol, blood pressure, and diabetes. They found that younger participants, ages 25 to 34, were the most likely to have a BMI over 40, which translates into a weight of over 240 pounds for someone who's five-foot-five. Of those who were severely obese, more than half had unhealthy levels of high-density lipoprotein, the "good" cholesterol, while also exhibiting inflammation. Forty percent of all participants had high blood pressure, and over 25 percent had diabetes.

Falling in line with other statistics regarding Mexican obesity, Mexicans involved with the current study were among the largest groups, with 37 percent of them classified as obese. Cubans and Puerto Ricans followed, with 20 percent and 16 percent obese, respectively.

"This is a heavy burden being carried by young people who should be in the prime of their life," said lead author Dr. Robert Kaplan, a professor of epidemiology and population health at Albert Einstein College of Medicine in New York City, in a press release. "Young people, and especially men -- who had the highest degree of future cardiovascular disease risk factors in our study -- are the very individuals who tend to neglect the need to get regular checkups, adopt healthy lifestyle behaviors, and seek the help of health care providers."

Kaplan said that, because they're already obese at such a young age, they're likely to experience even worse effects as they age. "We should be investing heavily in obesity research and prevention, as if our nation's future depended on it," he said in the release. With health care costs already on the rise, obesity-related costs currently total around $147 billion annually. If those costs rise even more, our nation's economic future may very well depend on cutting down on obesity.

Prevention is critical, but it must be done early. Just today, the CDC released new info on the number of kids, ages 12 to 15, who spend their time watching TV or on the computer. As past studies have found, kids who spent more of their time watching TV (over two hours) were the most likely to be obese, with risk increasing as screen time increased. Teaching kids to partake in more stimulating activities is far better, and combined with a healthy diet and physical activity will surely lower future obesity rates.
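As a footnote to the figures above, the BMI thresholds in the study convert to everyday weights with the standard formula for US units (BMI = 703 × weight in pounds / height in inches squared). The short sketch below reproduces the article's five-foot-five example; the function names are mine, chosen only for illustration.

```python
# Converts between BMI thresholds and body weight using the standard
# US-units formula: BMI = 703 * weight_lb / height_in^2.

def bmi(weight_lb: float, height_in: float) -> float:
    return 703.0 * weight_lb / height_in ** 2

def weight_for_bmi(target_bmi: float, height_in: float) -> float:
    """Weight (lb) at which a person of the given height reaches target_bmi."""
    return target_bmi * height_in ** 2 / 703.0

height = 65  # five-foot-five, as in the article's example
for threshold, label in [(30, "obese"), (35, "severely obese"), (40, "BMI over 40")]:
    print(f"{label}: ~{weight_for_bmi(threshold, height):.0f} lb at this height")
# BMI 40 at 5'5" works out to roughly 240 lb, matching the article.
```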
by Geri Walton ~ March 22nd, 2009

Today, researchers are discovering more and more information that indicates there is a link between poor nutrition and health problems. If you think about it, if poor nutrition contributes to things such as cancer, diabetes, and cardiovascular disease, then why wouldn't proper nutrition contribute to good health? You may also wonder what good nutrition means, because what is touted as beneficial one day seems to be bad the next. To help, here are five nutrition tips often linked to good health:

- Choose Good Oils. Americans tend to eat too many foods that are full of trans fats, saturated fats, or polyunsaturated omega-6 fatty acids. In fact, most people can receive health benefits by simply replacing or substituting omega-3s (found in fish) and omega-9s (found in olive oil and avocados) for omega-6s (found in most plant oils). Choosing the right omega oils can result in better physical and mental health, as well as reduce your risk for health problems such as cardiovascular disease, diabetes, and obesity. To learn more about omega oils, read Omega Oils and Their Health Benefits, and to learn more about oils in general, read The Skinny on Fats.

- Consume a Variety of Foods. You may have heard you're supposed to eat a variety of foods, and it's true. You gain two things by doing so. First, you consume a wider variety of vitamins, minerals, and antioxidants by eating different foods, and, second, you reduce your risk of becoming allergic to the foods you eat, because eating the same foods day in and day out contributes to food allergies. Elson Haas, M.D., is a holistic proponent and author of Staying Healthy With Nutrition, 21st Century Edition: The Complete Guide to Diet & Nutritional Medicine. Haas suggests you rotate foods so that you do not eat the same food more than once every four days (a minimal scheduling sketch of this rule appears at the end of this article). So, for example, if you eat broccoli on Tuesday, Haas suggests you not eat it again until Saturday; this means you would skip it Wednesday, Thursday, and Friday. Rotating foods is good practice because it also encourages you to eat a greater variety of foods, and as vitamins, minerals, and antioxidants are synergistic (their combined effect is greater than the sum of their individual effects), it may further increase your chances of good health.

- Drink Healthy Water. We are made up of at least 60 percent water, and, as it is a basic for life, you need to make sure the water you drink is healthy. Many holistic practitioners object to water that comes directly from your tap. They claim it is full of toxins and additives, such as chlorine and fluoride, both of which may be detrimental to your health. They suggest you drink wholesome spring water or water that is filtered through a reverse-osmosis procedure. To learn more about water, read Pros and Cons: Tap Water Versus Bottled Water and My Water Decision.

- Eat Numerous Small Meals. One reason people may be overweight is that they fail to keep their metabolism high enough to burn the calories they eat. One way to achieve a high metabolism is to eat several small meals throughout the day, rather than three large ones. The reason this works is that if your body knows it's going to get food regularly, it remains in high gear, but if you have two or three large meals and are starving between them, it's a signal to your body to slow down and conserve energy, which reduces your metabolism. Additionally, if you eat small meals throughout the day, you're less likely to be hungry and less likely to overeat.
Another problem with eating large meals is that it causes sluggish digestion, and sluggish digestion robs your body of proper nutrients and encourages vitamin and mineral deficiencies that create illness. So, as a rule of thumb, you should avoid going more than about three hours without eating, and then when you eat, eat small meals.

- Take a Multi-vitamin and Multi-mineral Daily. Research sometimes indicates vitamins or minerals are not beneficial. However, many times the research is based on the effects of a single vitamin or mineral, and because vitamin and mineral benefits are synergistic, it is not clear that a study based on a single vitamin or mineral is valid. Moreover, critics of these same vitamin and mineral studies often point out that the amounts used in the studies are often well below levels known to be beneficial. For these reasons, holistic practitioners usually suggest you take a multi-vitamin and multi-mineral daily. Additionally, even if you eat a balanced diet, it is hard to acquire optimum levels of vitamins and minerals, and, in case you didn't know it, the established recommended daily values are not based on optimal nutrition levels; they're based on avoiding deficiencies, not on providing optimum nutrition. Therefore, many holistic practitioners maintain you need greater amounts of vitamins and minerals than the quantities recommended. One way to acquire them is to eat a balanced diet and take a multi-vitamin and multi-mineral daily. Another reason to take a multi-vitamin/multi-mineral is that people sometimes think they just need a single vitamin or mineral, when they may be doing themselves more harm than good: if you get too much of any single vitamin or mineral, it may eventually cause a deficiency in another, because vitamins and minerals work in balance. So, it's always best to make sure you take a multi-vitamin and multi-mineral and then add more vitamin C, calcium, or whatever vitamin or mineral you require, because you will be less likely to create a vitamin or mineral deficiency.

In most cases, good nutrition is a matter of choice, just as poor nutrition is a matter of choice. By choosing good oils, consuming a variety of foods, drinking healthy water, eating numerous small meals, and taking a multi-vitamin/multi-mineral daily, you increase your chances of good health. So, decide to incorporate these five nutrition tips today and increase your chances of enjoying a long life and total optimum health.
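Here is the minimal scheduling sketch of Haas's four-day rotation rule mentioned in tip 2. It assumes a simple log of when each food was last eaten; the function and variable names are illustrative only, not from any published tool.

```python
# Minimal sketch of Haas's four-day food-rotation rule: a food is OK to eat
# again only once at least four days have passed since you last ate it
# (broccoli on Tuesday means no broccoli again until Saturday).
from datetime import date, timedelta

ROTATION_DAYS = 4

def ok_to_eat(food: str, last_eaten: dict[str, date], today: date) -> bool:
    last = last_eaten.get(food)
    return last is None or (today - last).days >= ROTATION_DAYS

log = {"broccoli": date(2009, 3, 17)}  # a Tuesday
for offset in range(1, 5):
    day = log["broccoli"] + timedelta(days=offset)
    print(day.strftime("%A"), ok_to_eat("broccoli", log, day))
# Wednesday through Friday -> False; Saturday -> True
```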
Energy & Water Efficiency Make Solar & Wind Energy a Better Investment

The best first step toward renewable energy is to make sure your home is energy and water efficient. A big energy bill probably means you will need a big solar system to supply all that power with renewable energy, so investments to improve your home's energy and water efficiency will make an investment in renewable energy even smarter. Saving energy and water also improves your home by making it more comfortable: insulation keeps a home cool in the summer and warm in winter, and energy-efficient appliances can make a home quieter. Skylights and windows also help bring in that beautiful natural light.

An energy audit is the first step to assess how much energy your home consumes and to evaluate what measures you can take to make your home more energy efficient. An audit will show you problems that may, when corrected, save you significant amounts of money over time. During the audit, you can pinpoint where your house is losing energy. Audits also determine the efficiency of your home's heating and cooling systems, and may show you ways to conserve hot water and electricity. You can perform a simple energy audit yourself, or have a professional energy auditor carry out a more thorough audit. » more info

Energy Audit Resources:
» Home Energy Saver Energy Audit/Calculator
» Do it Yourself Home Energy Audit Checklist
» EnergyStar Calculators

White/Reflective Roofs: » Learn How. Cool Roofs (a white roof or roof coating) will reflect more of the sun's heat so that your attic and your house stay cooler. Flat roofs are especially good candidates, because you can't see them from ground level.

Weatherization Assistance: The Weatherization Assistance Program enables low-income families to permanently reduce their energy bills by making their homes more energy efficient. During the last 30 years, the U.S. Department of Energy's (DOE) Weatherization Assistance Program has provided weatherization services to more than 5.6 million low-income families. To learn more, please visit the DOE's Weatherization Information page.

One of the best ways of reducing the high cost of your energy bills for heating and cooling is to weatherize your home or apartment. The US government's Weatherization Assistance Program is available to conduct home energy audits and to weatherize your home. Weatherization typically costs $2,500, and the government foots the bill. As a result, energy bills are cut, on average, by one-third, which can mean savings of hundreds of dollars a year. Most assistance is provided to low-income families, but some is available for higher-earning families as well.

Weatherization Assistance: Instructions

Step 1: Contact your state or local agency (see the link to the list in the Resources, below). You may want to read a bit about the program first, to see if you're likely to be eligible (again, use the links in Resources).
Step 2: Submit an application. Applications are pretty simple, and usually take only about 20 minutes to fill out. You'll have to include proof of income with your submission. In many states, submissions are handled in person, at a local office.
Step 3: If you are eligible, your weatherization agency puts you on a waiting list.
Step 4: Schedule a professional energy consultation for an energy audit and analysis of your energy bills.
Step 5: Schedule the actual weatherization work.
Step 6: Enjoy a more comfortable home with significantly lower energy bills.

Weatherization Links & Resources
- State Weatherization Contacts
- Overview of the Weatherization Assistance Program
- By State: How to Apply for Weatherization Assistance
- Weatherization Training Centers

The Profitability of Energy Efficiency Upgrades

According to a study by Lawrence Berkeley National Labs, application of the energy-efficiency measures below in a typical home yields an impressive overall return on investment of nearly 16%. The table provides a representative view of the high profitability of energy-efficiency upgrades. Note that the home evaluated here is located in an average U.S. climate and has a heat pump, electric water heater, clothes washer, clothes dryer, and dishwasher. The example cost-effectively surpasses the 30% savings target for existing homes under PATH (The Partnership for Advancing Technology in Housing). In fact, all of these measures yield a higher return on investment than an ordinary bank account, and most are as profitable as or more profitable than the stock market has been in recent history.

The efficiency savings shown below include the effect of income taxes. This makes the savings even more attractive, because you can keep all the money you save on your energy bills, but have to pay hefty taxes on most ordinary investment income. (Source: Lawrence Berkeley National Labs. Values shown are in 1997 dollars, and actual costs may have changed; however, return-on-investment percentages should have remained roughly the same over time, and may even have improved as utility rates have increased while the costs of many energy-saving measures have fallen.)

| Energy Efficiency Upgrade | Purchase Price | Annual Bill Savings | Simple Payback (yrs) | Rate of Return |
| --- | --- | --- | --- | --- |
| Fluorescent lamps & fixtures | $200 | $80 | 2.5 | 41% |
| ENERGY STAR clothes washer | $194 | $66 | 2.9 | 37% |
| ENERGY STAR programmable thermostat | $107 | $29 | 3.7 | 30% |
| Water heater tank wrap (R-12) | $85 | $23 | 3.7 | 28% |
| ENERGY STAR refrigerator | $97 | $23 | 4.2 | 27% |
| ENERGY STAR heat pump | $692 | $126 | 5.5 | 19% |
| ENERGY STAR dishwasher | $29 | $5 | 5.5 | 18% |
| Air sealing to 0.5 air changes per hour | $522 | $38 | 13.7 | 9% |
| Increase wall and attic insulation | $1,784 | $111 | 16.1 | 8% |
| Total bill savings as % of baseline bill | | | | 36% |

To learn more, try the » Home Energy Saver Energy Audit Online Calculator.
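The "simple payback" and "rate of return" columns in the table above follow directly from purchase price and annual savings (payback = price / savings; simple ROI = savings / price). The small sketch below shows the arithmetic; it won't reproduce the study's percentages exactly, since those also fold in tax effects.

```python
# Simple payback and rate-of-return arithmetic behind efficiency tables like
# the one above. Real studies (e.g. LBNL's) also adjust for taxes, so these
# unadjusted numbers differ slightly from the published ones.

def simple_payback_years(price: float, annual_savings: float) -> float:
    return price / annual_savings

def simple_roi(price: float, annual_savings: float) -> float:
    return annual_savings / price  # fraction per year

upgrades = [
    ("Fluorescent lamps & fixtures", 200, 80),
    ("ENERGY STAR clothes washer", 194, 66),
    ("Water heater tank wrap (R-12)", 85, 23),
]

for name, price, savings in upgrades:
    print(f"{name}: payback {simple_payback_years(price, savings):.1f} yrs, "
          f"ROI {simple_roi(price, savings):.0%}/yr")
```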
Return on Investment (ROI) Estimates for Household Energy Efficiency Improvements

| Months | Improvement | ROI |
|---|---|---|
| 3 | High efficiency showerhead | 400% |
| 13 | Fireplace pillow (stops air leakage up chimney) | 91% |
| 14 | Bathroom faucet aerator | 84% |
| 17 | Attic insulation (R-0 to R-38) | 69% |
| 23 | Compact fluorescent bulb | 53% |
| 23 | Kitchen faucet aerator | 51% |
| 25 | Wrap 15' hot and cold water heater pipes | 48% |
| 38 | Replace incandescent porch light fixture with CFL bulb | 32% |
| 43 | Attic insulation (average) | 28% |
| 44 | Duct insulation and sealing | 27% |
| 68 | Wall insulation (R-0 to R-25) | 18% |
| 88 | Floor insulation (R-0 to R-13) | 14% |
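The payback and ROI figures in both tables follow from two simple formulas: simple payback is purchase price divided by annual savings, and a simple rate of return is annual savings divided by purchase price. The sketch below reproduces a few rows of the LBNL table to show the arithmetic; it is an illustration only, and the published rates of return differ slightly from this naive ratio, presumably because the studies account for measure lifetimes and other factors. The `upgrades` data layout and function names are ours, not part of either study.

```python
# Simple payback and rate-of-return arithmetic behind the tables above.
# Illustrative sketch; the data below are rows from the LBNL table.

upgrades = [
    # (name, purchase price $, annual bill savings $)
    ("Fluorescent lamps & fixtures", 200, 80),
    ("ENERGY STAR clothes washer", 194, 66),
    ("Water heater tank wrap (R-12)", 85, 23),
]

def simple_payback_years(price, annual_savings):
    """Years of savings needed to recoup the purchase price."""
    return price / annual_savings

def simple_roi(price, annual_savings):
    """Annual savings as a fraction of the purchase price."""
    return annual_savings / price

for name, price, savings in upgrades:
    print(f"{name}: payback {simple_payback_years(price, savings):.1f} yrs, "
          f"ROI {simple_roi(price, savings):.0%}")
# First line printed: "Fluorescent lamps & fixtures: payback 2.5 yrs, ROI 40%"
```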
October 11, 2010

Many Americans Do Not Eat Enough Grains

New research shows that people who eat sufficient amounts of whole grains have higher quality diets overall, but it also shows that, at least in the United States, few people actually eat as much whole grain as they should. The study, published in the October issue of the Journal of the American Dietetic Association, found that less than 5 percent of American adults between 19 and 50 surveyed between 1999 and 2004 said they eat at least three whole grain servings every day. During this period, there were no exact guidelines for how much whole grain should be eaten daily, noted study author Dr. Carol E. O'Neil of Louisiana State University in Baton Rouge. New dietary guidelines implemented in 2005 suggest Americans should eat three servings of whole grains daily.

There is strong evidence that consuming whole grains is associated with a lower risk of heart disease, stroke, type 2 diabetes, obesity and possibly some types of cancer, although it remains unclear what mechanism lies behind the beneficial effects of whole grain. Whole grain is grain with the outer portion of the kernel still intact.

The researchers looked at the data to assess the link between whole grain consumption and diet quality. The study included 7,039 men and women between 19 and 50 years old and another 6,237 people over the age of 50. They found that the younger group ate less than two-thirds of a serving of whole grains daily, on average, while the older people ate just over three-quarters of a serving. Those who ate the most whole grain also consumed more fiber, healthy fats, and vitamins and minerals, while taking in less sugar, unhealthy fat, and cholesterol, the researchers found. But because the study only looked at a single point in time, it could not assess the health effects of the subjects' eating habits. "We can only say that consumption of whole grain is associated with improved nutrient intake or diet quality," O'Neil told Reuters Health. "We know from previous studies that consumption of whole grains is associated with a generally healthier lifestyle."

Even with more specific guidelines in place telling Americans how much whole grain they should be consuming, it is unlikely that the percentage of people eating more whole grain has changed much since the survey was completed, O'Neil said, noting: "People just don't eat whole grains, although an increasing number of whole grain foods are available." Many people do not have a clue what whole grains are, what types of foods contain them, or why they are good for them. O'Neil recommends that people check the MyPyramid website (http://www.mypyramid.gov/) to educate themselves on whole grains. She also recommends the Whole Grain Council site (http://www.wholegrainscouncil.org/find-whole-grains).
In part, philosophers have no one but themselves to blame for the low state to which their discipline has fallen—thanks especially to the logical positivist and analytic strain that has been dominant for about a century in the English-speaking world. For example, the influential twentieth-century American philosopher W.V.O. Quine spoke modestly of a "philosophy continuous with science" and vowed to eschew philosophy's traditional concern with metaphysical questions that might claim to sit in judgment on the natural sciences. Science, Quine and many of his contemporaries seemed to say, is where the real action is, while philosophers ought to celebrate science from the sidelines.

Note from KBJ: Quine was wrong. Philosophy is not "continuous with science." Science is a first-order discipline, the aim of which is to discover how things are in the natural world. It has its own concepts, methods, and argumentative standards. Philosophy is a second-order discipline that takes first-order disciplines such as science as its subject matter. It, too, has its own concepts, methods, and argumentative standards. (Any overlap is accidental.) Many philosophers are in love with science to the point where they think philosophy is continuous with, or even a part of, science. What they don't realize is that scientists don't see it that way. Many scientists view philosophy as a useless (and even counterproductive) enterprise. A true philosopher is critical of science, not enamored of it. A true philosopher seeks to show just where and why science ends and other fields, such as theology and philosophy, begin. A true philosopher is as skeptical about the claims of science as he or she is about the claims of law, morality, or religion. A true philosopher would be wary of linkages between science and state.
GHANAIAN YOUTH OVERVIEW

- Total population: 19,894,014 (ranked 50th in the world by the US Census Bureau).
- Population density: 212 per square mile.
- Children 0-14: 41.2% (8,192,103).
- Teenagers 10-19: 24.4% (4,854,667).
- Youth 15-24: 11% (2,187,123).
- Seniors over 70: 2% (405,861).
- Male to female ratio: 99.2 males per 100 females.
- Birth rate: 29.81 per 1,000 people.
- Life expectancy at birth: 55.38 for males and 59.62 for females.
- Infant mortality rate: 74.77 per 1,000 live births.

School levels:
- Beginning age 4, duration 2 years.
- Beginning age 6, duration 6 years.
- Beginning age 12, duration 7 years.

The Ministry of Education is responsible for the administration of education in Ghana. It has two agencies, the Ghana Education Service and the National Council on Tertiary Education (NCTE), which are responsible for all levels of education. The goals of the Ministry of Education are to:
- Provide basic education for all students.
- Educate and train students in the areas of science, technology and creativity.
- Develop middle and top level management through higher education.
- Ensure that all citizens are literate and self-reliant.

The Ministry of Education follows a program called fCUBE, or Free, Compulsory, Universal, Basic Education. Since 1987, Ghana has been working to increase the number of students in primary schools. Additionally, there have been improvements in technical and professional training in secondary and higher education. Illiteracy is an issue, as 3.4 million people over 15, out of a total of 11.7 million over 15, are considered illiterate (almost 30 percent).

Ghana is working hard to improve the quality of and access to education for women. To this end, it created the Girls' Education Unit (GEU), a division of the Ghana Education Service. The GEU seeks to educate girls in order to:
- Ensure equality of access to education and educational opportunities.
- Enable girls to contribute effectively to the development of Ghana as a nation.
- Improve the status of girls and women.
- Develop social capital for girls and women.

In accordance with the fCUBE program, the GEU is to achieve the following goals by 2005:
- Increase enrollment of girls in basic (primary) education to equal the enrollment of boys, and ensure that girls continue into secondary education.
- Reduce female dropouts from 30% to 10% in primary schools, and from 21% to 15% in secondary schools.
- Increase the transition rate of girls from junior to senior secondary schools by 10% by the end of the fCUBE program.
- Increase the participation of girls in science, math and technology.

The vast majority (99.8%) of the Ghanaian population is black African, divided among a number of tribal groups. The remaining 0.2% of the population is European.

The total population in Ghana is 19.8 million (9.9 million males and 9.99 million females, or a ratio of 99.2 men per 100 women). The under-15 population is 8.2 million (41% of the total population). The birth rate is 29.81 per 1,000 people, and the infant mortality rate is 57.43 deaths per 1,000 live births. The death rate is 10.22 deaths per 1,000 people, so the overall growth rate is about 1.87%. Life expectancy is 56 years for men and almost 59 for women (an overall life expectancy of 57.4 years). Estimates place literacy among those over 15 at 64.5% (76% for men, 53.5% for women).

The Gross Domestic Product (GDP) of Ghana is $35.5 billion (a per capita GDP of $1,900). The GDP growth rate, as of 1999, was 4.3%.
Estimates place 31.4% of the population under the poverty line, with household income or consumption by percentage share at 3.4% for the poorest 10% of the population and 27.3% for the richest 10%. The inflation rate is 12.8%. The labor force is estimated at 4 million (60% in agriculture, 15% in industry, 25% in services). Twenty percent of the population is thought to be unemployed.

Officially, Ghana is a presidential/parliamentary democracy, but it is not truly democratic because of the influence the president has. All citizens over 18 have the right to vote. Each member of parliament has the right to introduce bills, but all bills since 1996 have been introduced by the attorney general's office, which is strongly supportive of the president. The courts enjoy a good deal of autonomy but are under the sway of the government, particularly in cases involving freedom of the press. Freedom of the press is guaranteed by the constitution but is not always enforced; the president and the government are fond of using the country's libel laws to suppress information in the media. Freedom of assembly is guaranteed under the constitution as well, and is usually enforced.

Unemployment is a major social issue in Ghana. Estimates put the unemployment rate at 20% (compared to a national average of about 5% in the US). Other social issues revolve around the willingness of the government to actually enforce the rights afforded by the constitution. External groups feel that the government has too much authority and sway over the life of its people, and that it is abusing that power. Domestic violence is also thought to be an issue, but it is largely unreported. Women are not equal in practice, though afforded many of the same rights as men.

AIDS is also an issue in Ghana, as in many other sub-Saharan African nations. Estimates place the number of people infected with AIDS at 340,000, or 1.7% of the overall population (3.6% of the adult population). The number of children (those under 15) with AIDS is 14,000, or 0.17% of the child population.

There are three major religious groups in Ghana: 38% of the population is thought to adhere to indigenous beliefs, 30% are Muslims, and 24% are Christians.

Jonathan Ketcham
This section provides an introduction to the analysis of data obtained from using small extracellular electrodes to record neural activity. We begin this section by assuming that the electrophysiology has already been done -- i.e., you've manufactured or purchased your electrodes (and by some unknown and miraculous set of circumstances they work), and you've gotten a nervous system to cooperate by permitting you to record neural activity without introducing too much noise. What now?

Basically what you have after your electrophysiological recordings is "raw" data, which consists of a large set of waveforms from each electrode. Since each electrode samples an area -- not necessarily small with respect to the size of neurons -- the waveforms from any one electrode may consist of spikes (i.e., "action potentials") from numerous cells. (See Figure 1.) What you'd really like to do (if you're fussy, like us) is to distinguish between spikes of different neurons. In other words, it's time to do some sorting. (Intracellular electrodes, on the other hand, record the membrane potential of a single cell and thus have no need for sorting procedures. But for various reasons, intracellular recording in vivo from mammalian nervous systems is extremely difficult and won't be discussed here.)

It is generally accepted that the action potentials of most neurons have basically the same shape. However, the action potential of different cells, as seen by the recording electrode, will be distinguishable due to differences in the size and shape of the neurons as well as their varying orientation with respect to the electrode. This means that different neurons will produce at least slightly different recorded waveforms when they fire an action potential. Unfortunately (there's always an unfortunately somewhere), since we're recording very small signals, noise can be a problem and can make the sorting process rather difficult -- noise introduces variability, which means that even waveforms from the same cell are not identical each time they're recorded.

We won't explain the sorting procedure here in any detail, but basically it involves finding some sort of measurements, or variables, that can help you distinguish between different waveforms. For instance, you can do principal component analysis on the waveforms, or perhaps something simpler, like finding peak-to-peak values. After you have identified these variables, you plot each waveform using the variables as axes; what you hope (and sometimes it actually happens!) is to see clusters of points, representing waveforms that are similar with respect to the variables. You are then justified in taking a cluster as representing a single cell, if the cluster is obviously distinct from the other waveforms in the sample. Thus, each waveform in the cluster is labeled as coming from the same cell.

Now what you have is some waveforms grouped into bunches based on their shape, and you also know at what time each waveform occurred. You don't really care much about the waveform shape in subsequent analysis; that was important only in distinguishing between the spikes of different cells (and between a spike and noise). So what you do next is simply plot the spikes of each neuron as a function of time, thus creating what's known in the business as spike trains. An example is shown in Figure 2, which consists of a segment of two simultaneously recorded spike trains, labeled NEURON 1 and NEURON 2.
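To make the sorting idea concrete, here is a minimal sketch of the feature-extraction step just described: it computes peak-to-peak values and the first two principal components for a batch of waveforms and then clusters them. It assumes NumPy and scikit-learn are available, and it runs on synthetic waveforms standing in for real recordings; the array shapes, the two-cluster assumption, and all names here are our own illustrative choices, not part of any particular lab's pipeline.

```python
# Sketch of spike-sorting feature extraction and clustering (illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in for recorded data: 200 waveforms, 32 samples each,
# drawn from two "cells" with different template shapes plus noise.
t = np.linspace(0, 1, 32)
template_a = np.exp(-((t - 0.3) ** 2) / 0.005)          # narrow spike
template_b = 0.6 * np.exp(-((t - 0.45) ** 2) / 0.02)    # broader, smaller spike
labels_true = rng.integers(0, 2, size=200)
waveforms = np.where(labels_true[:, None] == 0, template_a, template_b)
waveforms = waveforms + 0.05 * rng.standard_normal((200, 32))  # recording noise

# Feature 1: peak-to-peak amplitude of each waveform.
peak_to_peak = waveforms.max(axis=1) - waveforms.min(axis=1)

# Features 2-3: first two principal components of the waveform shapes.
pcs = PCA(n_components=2).fit_transform(waveforms)

# Cluster in feature space; each well-separated cluster is provisionally "one cell".
features = np.column_stack([peak_to_peak, pcs])
assignments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(assignments))  # roughly equal cluster sizes expected here
```

In real data the number of clusters is not known in advance, and deciding whether a cluster is "obviously distinct" is exactly the judgment call the text describes.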
The vertical bars represent a spike. Such plots are also called rasterplots. Rasterplots are important in giving you a detailed look at the data, but there are other ways to more compactly present your results. One very popular method is to make histograms. Before describing histograms, however, we need to make a slight digression to discuss two basic types of electrophysiological experiments, since data is analyzed a little bit differently depending on which type was performed.

In some experiments, neurons are recorded without stimulation, i.e., their spontaneous activity is recorded; in these experiments, you are basically recording neural activity while the nervous system is "resting". In other experiments, you may wish to record while the nervous system is doing something, and in such cases you generally provide some sort of stimulation which you can control (the important variables to control are things like the intensity of the stimulation and its time duration). In these experiments, the neurons you are recording from are "activated" or stimulated, if you use a stimulus which will excite the neuron (in other words, if you use a stimulus to which the neuron is tuned). For cells in the visual system, for example, presentation of a bar or spot of light in the appropriate part of the animal's visual field will excite the cell, elevating its firing rate.

Now to describe a histogram: it's just a plot of the binned data as a function of time. For spontaneous recording, you simply break the long spike train into small time segments, which are called bins, and add the spikes in each bin. Figure 3 shows an example of this process. The ellipsis (three dots) on each side indicates that the figure has been abbreviated for display purposes.

For stimulated recordings, something a bit different happens. In most cases, the stimulus in such recordings is repeated numerous times, producing identical "trials" in the experiment. This is done because there is quite a bit of variability in most neurons' responses, even to the same stimulus; in other words, even when given identical stimuli a neuron's spike train can be vastly different across trials. You therefore need to get a sufficiently large sample of a neuron's response in order to make statistical inferences about its average response to that stimulus. In order to make a histogram under these circumstances, you cut out a segment of the spike train corresponding to each trial, and line them up based on the time of the "stimulus marker" (which is a time stamp that you perspicaciously placed during your recording to mark when you began delivering a certain stimulus). Figure 4 shows an example of this process. The time that the stimulus ended is also known to you, of course, and this influences your choice of the length of a trial segment. When so constructed, the histogram is called a peri-stimulus time histogram or PSTH for short. (If you only analyze the data following the stimulus, the histogram is called a post-stimulus time histogram -- but this distinction isn't really important. Besides, the acronym is the same in both cases.)

So you now know about rasterplots and histograms, which are basically ways to display single unit data. Is there other information that can be extracted from single unit data? There is, in fact, and for experimenters who are so impoverished that they can only record from one cell at a time, it's the only sort of information they can squeeze out of their data.
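The binning logic behind Figures 3 and 4 is easy to state in code. The sketch below builds a PSTH from an array of spike times and an array of stimulus-marker times using NumPy; the bin width, window length, and all variable names are illustrative choices of ours, not prescriptions, and the spike and marker data are fake.

```python
# Building a peri-stimulus time histogram (PSTH) from spike times (sketch).
import numpy as np

def psth(spike_times, stimulus_times, window=1.0, binsize=0.010):
    """Histogram spike times aligned to each stimulus marker.

    spike_times, stimulus_times: 1-D arrays of times in seconds.
    window: length of the trial segment after each marker, in seconds.
    binsize: bin width in seconds.
    Returns (bin_edges, counts summed over all trials).
    """
    edges = np.arange(0.0, window + binsize, binsize)
    counts = np.zeros(len(edges) - 1)
    for t0 in stimulus_times:
        # Cut out the trial segment and align it to the marker at t0.
        aligned = spike_times[(spike_times >= t0) & (spike_times < t0 + window)] - t0
        counts += np.histogram(aligned, bins=edges)[0]
    return edges, counts

# For spontaneous activity there are no markers: bin the whole train once instead.
spikes = np.sort(np.random.default_rng(1).uniform(0, 10, 500))   # fake spike train
markers = np.array([1.0, 3.0, 5.0, 7.0])                          # fake stimulus markers
edges, counts = psth(spikes, markers)
rate = counts / (len(markers) * 0.010)   # normalize: spikes per second per bin
```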
(Note: Some experimenters lump all the data from their single electrode recordings into "one neuron", which is actually a combination of the neurons which that electrode sampled. In other words, they fail to sort their data. Such "single unit" data is occasionally referred to as dirty data.)

We won't spend much time discussing some of the single unit analytical methods, but some of the more important can be briefly described:

One of the simplest things to do is to normalize the PSTH; you do this by dividing the bin counts by the binsize (in whatever units of time you wish to use), and by the number of trials if you used a repeated stimulus. This results in a display of the average firing rates of the neurons -- i.e., a graph of the number of spikes per unit time over the course of the experiment or stimulus response.

You can also make a bar plot of the interspike intervals, which are simply the times between spikes of the spike train. For example, if there were spikes at a time of 240 milliseconds (with respect to some reference marker), 249 ms, and 262 ms, then you would increment by one count the bin containing 9 ms intervals and the bin containing 13 ms intervals. Note that for a spike train of n spikes, you would have n-1 interval values. Different types of spike trains will have different distributions of interspike intervals.

One further analytical tool is the autocorrelogram (which is related to the crosscorrelogram described below); this function lets you discern the fine time structure, if any, in the spike train of a single neuron.

But there are much more interesting kinds of data analysis that you can do with multiple unit data...
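The interspike-interval and autocorrelogram computations just mentioned reduce to a few lines as well. Below is a hedged sketch: `np.diff` gives the n-1 intervals of an n-spike train, and the autocorrelogram is approximated by histogramming all pairwise spike-time differences within a maximum lag. The binning choices, lag window, and names are ours, and the brute-force loop is for clarity, not speed.

```python
# Interspike intervals and a brute-force autocorrelogram (illustrative sketch).
import numpy as np

spikes = np.sort(np.random.default_rng(2).uniform(0, 10, 300))  # fake spike train

# Interspike intervals: n spikes yield n-1 intervals.
isis = np.diff(spikes)
isi_hist, isi_edges = np.histogram(isis, bins=np.arange(0, 0.2, 0.005))

def autocorrelogram(spike_times, max_lag=0.1, binsize=0.002):
    """Histogram of pairwise spike-time differences within +/- max_lag,
    excluding the zero-lag self-pairs."""
    edges = np.arange(-max_lag, max_lag + binsize, binsize)
    counts = np.zeros(len(edges) - 1)
    for t in spike_times:
        diffs = spike_times - t
        diffs = diffs[(diffs != 0) & (np.abs(diffs) <= max_lag)]
        counts += np.histogram(diffs, bins=edges)[0]
    return edges, counts

edges, acg = autocorrelogram(spikes)
# A flat autocorrelogram suggests Poisson-like firing; peaks or dips reveal
# fine time structure such as bursting or a refractory period.
```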
Central Excise is a levy (tax) imposed on a commodity (manufactured within the country) by the Union Government through an Act of Parliament (usually in the Finance Bill, presented with the Budget in Parliament, generally on the last working day of February every year) by notifying it under a Tariff. It is an indirect tax paid by the manufacturer, who passes its incidence on to the customers. Excise duty is levied the moment the process of manufacture is complete.

Objectives of Central Excise Act, 1944
1. To collect excise duty on manufactured goods more conveniently
2. To reduce collection costs
3. To control wasteful expenditures
4. To avoid tax evasion by appropriate control measures
5. To promote industrial growth in backward areas
6. To support local industries
7. To collect high revenues

Nature of Excise Duty
- The Govt. has constitutional powers to levy Excise Duty.
- The power to impose excise on alcoholic liquors, opium and narcotics is granted to the State Govts.
- The power to impose excise on other items is granted to the Central Govt.

Basic Conditions for Excise Liability
- The following four conditions must be satisfied to levy Excise Duty on any article:
  - Duty is on goods (movable and marketable)
  - Goods must be excisable (included in CETA, 1985)
  - Goods must be manufactured or produced
  - Manufacture or production must be in India
- Levy means imposition and assessment but does not include collection of tax. Thus, duty is levied as soon as the taxable event occurs, but collection can take place anytime - before, at the time of, or even after the taxable event.
- The taxable event is manufacture or production in India.
- Duty is payable by the manufacturer or producer of excisable goods. Where goods are allowed to be stored in a warehouse without payment of duty, the duty liability is of the person who stores the goods.
- The rate of duty is as applicable on the date of removal, i.e. clearance from the factory.
- Goods have to be classified and valued in the state in which they are removed from the factory. Any further processing done afterwards is not relevant.
- Duty liability arises even when goods are not sold or free replacements are given during the warranty period.
- Duty is payable even when not collected from consumers.
- Duty is payable even if duty was paid on raw materials.
- Duty can be levied on Govt. undertakings.
- Duty is considered a manufacturing expense and is included as an element of cost for inventory valuation, like other manufacturing expenses.

Types of Excise Duty
- Basic Excise Duty (BED) or CENVAT
- Special Excise Duty (SED)
- Excise Duty on clearances by EOU / SEZ into the Domestic Tariff Area
- National Calamity Contingent Duty (NCCD)
- Duties under other Acts
- Cess under other Acts

Goods

The word "goods" has not been defined under the Central Excise Act. Article 366(12) of the Constitution defines "goods" as "all materials, commodities, and articles." This definition is quite wide for the purpose of the Central Excise Act. As per judicial interpretation, for the purpose of levy of Excise duty, an article must satisfy two requirements to be "goods", i.e.
- Goods must be movable - immovable property or property attached to earth is not "goods" and hence duty cannot be levied on it.
- Goods must be marketable - the item must be such that it is capable of being bought or sold and must be known in the market. This is the test of "Marketability".

Manufacture

The word "manufacture" is not defined completely in the Act. The definition in section 2(f) is inclusive.
"Manufacture" includes any process -
- incidental or ancillary to the completion of a manufactured product, or
- which is specified in relation to any goods in the Section or Chapter notes of the First Schedule of CETA, 1985 as amounting to manufacture, or
- which, in relation to goods specified in the Third Schedule to the CEA, involves packing or repacking of such goods in a unit container, or labeling or re-labeling of containers, or declaration or alteration of retail sale price, or any other treatment to render the product marketable to the consumer.

Thus, manufacture means
- manufacture as specified in various Court decisions, i.e. a new and identifiable product having a distinctive name, character or use must emerge, or
- deemed manufacture.

E.g. manufacture of a table from wood, conversion of pulp into base paper, conversion of sugarcane to sugar, etc.

The word "Manufacturer" shall be understood accordingly and shall include not only a person who employs hired labor in the production or manufacture of excisable goods, but also any person who engages in their production or manufacture on his own account.

Excisable Goods

Section 2(d) of the Central Excise Act defines Excisable Goods as "goods specified in the Schedule to the Central Excise Tariff Act, 1985 as being subject to a duty of excise and includes salt." Thus, unless the item is specified in the CETA as subject to duty, no duty is levied.

Job Work

Job work means processing or working upon raw materials or semi-finished goods supplied to the job worker, so as to complete a part or whole of the process resulting in the manufacture or finishing of an article, or any operation which is essential for the aforesaid process.
- The job worker need not register with the Department of Central Excise.
- He need not maintain records as required by the Act.
- The job worker is not required to pay duty.
- However, if the process amounts to manufacture, he can pay duty, and this duty paid by the job worker will be available as a credit to the manufacturer who sent the material for job work.

Classification of Goods

There are thousands of varieties of manufactured goods, and all goods cannot carry the same rate or amount of duty. It is also not possible to identify all products individually. It is therefore necessary to identify the numerous products through groups and sub-groups and then to decide the rate of duty. This is called "classification" of products, which means determining the heading or sub-heading under which the particular product is covered.
Ÿ Each chapter is further divided into various headings depending on different types of goods belonging to the same class of products. E.g. Chapter 50 relating to Silk is further divided into 5 headings - 50.01 relates to silkworm cocoons, 50.02 relates to raw silk, 50.03 relates to silk waste, 50.04 relates to silk yarn and 50.05 relates to woven fabric of silk. The headings are sometimes divided into further sub-headings. E.g. 5004.11 means silk yarn containing 85% or more by weight of silk or silk waste while 5004.19 means containing less than 85% by weight of silk or silk waste. Ÿ All excisable goods are classified using 4 digit system and 2 more digits are added for further sub-classification whenever required. In above example, first two digits i.e. 50 indicates the Chapter number, next 2 digits i.e. 01 or 02 relate to heading of goods in that Chapter and the last 2 digits indicate sub-heading. Determination of Tariff Headings Central Excise Tariff has four columns - Ÿ Heading number Ÿ Sub-heading number Ÿ Description of goods Ÿ Rate of Duty Rules for Interpretation of Schedule are given in the Tariff itself. These are termed as “General Interpretative Rules” (GIR). These rules are briefly explained below - Ÿ Rule 1: The titles of Sections and Chapters are provided for ease of reference only; for legal purposes, classification shall be determined according to the terms of the headings and any relative Section or Chapter Notes and, provided such headings or Notes do not otherwise require, according to the provisions hereinafter contained. Ÿ Rule 2(a): Any reference in a heading to goods shall be taken to include a reference to those goods incomplete or unfinished, provided that the incomplete or unfinished goods have the essential character of the complete or finished goods. Ÿ Rule 2(b): Any reference in a heading to a material or a substance shall be taken to include a reference to mixtures or combinations of that material or substance with other materials or substances. Any reference to goods of a given material or substance shall be taken to include a reference to goods consisting wholly or partly of such material or substance. Ÿ Rule 3: When by application of sub-rule (b) of rule 2 or for any other reason, goods are prima facie classifiable under two or more headings, classification shall be affected as given in rule 3(a), 3(b) or 3(c). Ÿ Rule 3(a): The heading which provides the most specific description shall be preferred to headings providing a more general description. However, when two or more headings each refer to part only of materials or substances contained in mixed or composite goods or to part only of items in a set, those headings are to be regarded as equally specific in relation to those goods, even if one of them gives a more complete or precise description of the goods. Ÿ Rule 3(b): Mixtures, composite goods consisting of different materials or made up of different components, and goods put up in sets, which cannot be classified by reference to rule 3(a), shall be classified as if they consisted of the material or component which gives them their essential character, insofar as this criterion is applicable. Ÿ Rule 3(c): When goods cannot be classified by reference to (a) or (b), they shall be classified under the heading which occurs last in the numerical order among those which equally merit consideration. 
- Rule 4: Goods which cannot be classified in accordance with the above rules shall be classified under the heading appropriate to the goods to which they are most akin.
- Rule 5: For legal purposes, the classification of goods in the sub-headings of a heading shall be determined according to the terms of those sub-headings and any related sub-heading notes and, mutatis mutandis, to the above rules, on the understanding that only sub-headings at the same level are comparable. For the purpose of this rule, the relative Chapter and Section Notes also apply, unless the context otherwise requires.

Valuation of Goods

Excise duty is payable on one of the following bases -
- Specific duty, based on some measure like weight, volume, length, etc.
- Duty as a % of Tariff Value fixed u/s 3(2)
- Duty based on the Maximum Retail Price printed on the carton, after allowing deductions
- Compounded Levy Scheme
- Duty as a % of Assessable Value fixed u/s 4 (ad valorem duty)

Specific Duty

This is a duty payable on the basis of some unit like weight, length, volume, thickness, etc. Calculation of the duty payable is comparatively easy. In view of this simplicity, many goods were earlier covered under specific duty. However, the disadvantage is that even if the selling price of the product increases, the revenue earned by the Govt. does not increase correspondingly. Hence, most goods are now covered under ad valorem duty. Presently, specific rates have been announced for -
- Cigarettes (length basis)
- Matches (per 100 boxes / packs)
- Sugar (per quintal basis)
- Marble slabs and tiles (square meter basis)
- Color TV when MRP is not marked on the package or when MRP is not the sole consideration (based on screen size in cm)
- Cement clinkers (per ton basis)
- Molasses resulting from extraction of sugar (per ton basis)

Tariff Value

In some cases, a tariff value is fixed by the Govt. from time to time. This is a notional value for the purpose of calculating the duty payable. The tariff value may be fixed on the basis of the wholesale price or the average price of various manufacturers, as the Govt. may consider appropriate. The provision for fixing tariff value is used very rarely, as frequent changes become necessary when prices rise. Presently, tariff values are fixed for -
- Pan masala packed in retail packs of less than 10 gm per pack
- Readymade garments falling under heading 6101.11 or 6201.00, whose tariff value has been prescribed as 60% of the retail sale price as specified on the package

Compounded Levy Scheme

The Central Govt. may, by notification, specify the goods in respect of which an assessee shall have the option to pay excise duty on the basis of specified factors relevant to the production of such goods and at specified rates. This is termed the "Compounded Levy Scheme". It is devised for administrative convenience as a simplified scheme. It is an optional scheme, i.e. the manufacturer can opt to pay duty as per the normal rules and procedures instead. Under this scheme, the manufacturer has to pay a prescribed duty for a specified period on the basis of certain factors relevant to production, like the size of equipment, etc. After making the lump-sum periodic payment, the manufacturer does not have to follow any excise procedure regarding storage and clearance of goods. Presently, this scheme is applicable to stainless steel pattas / pattis and aluminum circles. These articles are not eligible for SSI exemption. In case of cold rolled stainless steel pattas / pattis, the manufacturer has to pay Rs. 15,000 per cold rolling machine per month.
In case of aluminum circles, duty is payable @ Rs. 7,500 per month if the length of the roller is 30 inches or less, and @ Rs. 10,000 per month where the length of the roller is more than 30 inches.

Value based on Retail Sale Price

The provisions for valuation on MRP basis are as follows -
- The goods should be covered under the provisions of the Standards of Weights and Measures Act or Rules.
- The Central Govt. has to issue a notification in the Official Gazette specifying the commodities to which the provision is applicable and the abatements permissible. The Central Govt. can permit reasonable abatement (deductions) from the "retail sale price".
- While allowing such abatement, the Central Govt. shall take into account the excise duty, sales tax and other taxes payable on the goods.
- The "retail sale price" should be the maximum price at which excisable goods in packaged form are sold to the ultimate consumer. It includes all taxes, freight, transport charges, commission payable to dealers, and all charges towards advertisement, delivery, packaging, forwarding, etc. If under some law the MRP is required to be stated without taxes and duties, that price can be the "retail sale price".
- If more than one "retail sale price" is printed on the same packing, the maximum of such retail prices will be considered. If different MRPs are printed on different packages for different areas, each such price will be the "retail sale price" for the purpose of valuation.
- Tampering with, altering or removing the MRP is an offense, and the goods are liable to confiscation. If the price is altered, such increased price will be the "retail sale price" for the purpose of valuation.

Duty based on Value

Excise duty is payable on the basis of value, called "ad valorem duty". This "assessable value" is arrived at on the basis of Section 4 of the CEA. The basic provisions of the new Section 4(1)(a) state that the "assessable value", when duty of excise is chargeable on excisable goods with reference to value, will be the "transaction value" on each removal of goods, if the following conditions are satisfied -
- The goods should be sold at the time and place of removal.
- Buyer and assessee should not be related.
- Price should be the sole consideration for the sale.
- Each removal will be treated as a separate transaction, and the "value" for each removal will be separately fixed.

Transaction Value as Assessable Value

Following are the main requirements for transaction value -
- Price actually paid or payable.
- Price is for the goods.
- Price includes any amount that the buyer is liable to pay to, or on behalf of, the assessee. Thus, a payment made by the buyer to another person on behalf of the assessee will be includible.
- The payment should be "by reason of, or in connection with, the sale". These terms have always been construed strictly in judicial interpretation.
- The amount may be payable at the time of sale or at any other time. Such time may be before or after the sale.
- Any amount charged for, or to make provision for, advertising or publicity, marketing and selling organization expenses, storage, outward handling, servicing, warranty, commission or any other matter is includible. However, these expenses are includible only when the aforesaid conditions are satisfied, i.e. (a) the amount should be paid or payable to the assessee or on behalf of the assessee, and (b) the payment should be by reason of the sale or in connection with the sale.
- The amount of duty of excise, sales tax and other taxes, if any, actually paid or actually payable on such goods is to be excluded while calculating the "transaction value".
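As a worked illustration of the MRP-based valuation described above: if a notified product carries an MRP of Rs. 100 and the notified abatement is 40%, the assessable value is Rs. 60, and duty is charged on that. The sketch below shows the arithmetic only; the MRP, abatement and duty rate are invented for illustration and are not taken from any actual notification.

```python
# MRP-based valuation (illustrative numbers only).

def assessable_value_from_mrp(mrp, abatement_pct):
    """Assessable value = MRP less the notified abatement."""
    return mrp * (1 - abatement_pct / 100)

mrp = 100.0            # maximum retail price printed on the package (assumed)
abatement = 40.0       # notified abatement, in percent (assumed)
duty_rate = 16.0       # ad valorem duty rate, in percent (assumed)

value = assessable_value_from_mrp(mrp, abatement)
duty = value * duty_rate / 100
print(f"Assessable value: Rs. {value:.2f}, duty payable: Rs. {duty:.2f}")
# Assessable value: Rs. 60.00, duty payable: Rs. 9.60
```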
The amount may be payable any time in the future.

Inclusions in Transaction Value
- Packing charges
- Design and engineering charges
- Consultancy charges relating to manufacturing
- Compulsory after-sales service / service during the warranty period
- Pre-delivery inspection charges for vehicles
- Loading and handling charges within the factory
- Royalty charged in a franchise agreement

Exclusions from Transaction Value
- Trade discounts
- Outward handling, freight and transit insurance charges
- Notional interest on security deposits / advances
- Installation and erection expenses
- Interest on receivables
- Bank charges for collection of sale proceeds

If the "assessable value" cannot be determined u/s 4(1)(a), it shall be determined in such manner as may be prescribed by the rules discussed below.

Value nearest to time of removal if goods are not sold - If goods are not sold at the time of removal, then value will be based on the value of such goods sold by the assessee at any other time nearest to the time of removal, subject to reasonable adjustments. Thus, this rule is applicable in case of removal of free samples or supply under warranty claims.

Goods sold at a different place - Sometimes goods may be sold at a place other than the place of removal, e.g. in case of a FOR delivery contract. In such cases, the actual cost of transportation from the place of removal up to the place of delivery of the excisable goods will be allowable as a deduction. The cost of transportation can be either on actual basis or on equalized basis.

Valuation when price is not the sole consideration - If price is not the sole consideration for the sale, the "assessable value" will be the price charged by the assessee, plus the money value of the additional consideration received. The buyer may supply any of the following directly or indirectly, free or at reduced cost -
- materials, components, parts and similar items
- tools, dies, moulds, drawings, blueprints, technical maps, charts and similar items used
- materials consumed, including packaging materials
- engineering, development, artwork, design work, and plans and sketches undertaken elsewhere than in the factory of production and necessary for the production of the goods.
In such cases, the value of such additional consideration will be added to the price charged by the assessee to arrive at the "transaction value".

Sale at depot / through consignment agent - When goods are sold through a depot, there is no sale at the time of removal from the factory. In such cases, the price prevailing at the depot (but at the time of removal from the factory) shall be the basis of the assessable value. The value should be the "normal transaction value" of such goods sold from the depot at the time of removal, or at the time nearest to the removal from the factory.

Valuation in case of captive consumption - Captive consumption means the goods are not sold but consumed within the same factory or another factory of the same manufacturer (i.e. inter-unit transfers).
In case of captive consumption, valuation shall be done on the basis of cost of production plus 10%:
- Direct material cost + Direct labor cost + Direct expenses = Prime Cost
- Prime Cost + Production Overheads + Administration Overheads + R&D Cost (apportioned) = Cost of Production
- Cost of Production + Selling Cost + Distribution Cost = Cost of Sales
- Cost of Sales + Profit = Selling Price

Administrative Structure of the Excise Department

The hierarchy, from top to bottom, is:
- Ministry of Finance (Government of India)
- Central Board of Excise and Customs (CBE&C - Board)
- Chief Commissioner of Central Excise
- Commissioner of Central Excise (for each Commissionerate of Central Excise)
- Additional Commissioner of Central Excise
- Joint Commissioner of Central Excise
- Deputy / Assistant Commissioner of Central Excise (for each division)
- Superintendent (for each range; the lowest rank of Gazetted Officer)
- Inspector (non-Gazetted officer)

Board - CBE&C: It has its headquarters in New Delhi. The Board, consisting of six or seven members and headed by a Chairman, has powers to administer the Excise Act. The Chairman of the Board is empowered to distribute work among himself and the other members, and to specify cases which will be considered jointly by the Board.

Chief Commissioner of Central Excise: India is divided into 34 zones. Each zone is under the supervision of a Chief Commissioner of Central Excise, who has administrative powers to control the Commissioners and Commissioners (Appeals) within his zone. In the interiors, i.e. non-coastal areas, the Chief Commissioner of Central Excise looks after customs work also.

Commissioner of Central Excise: Each zone covers various Commissionerates, and the Commissioner of Central Excise is the administrative in-charge of the Commissionerate. Presently, there are 92 Commissioners and 71 Commissioners (Appeals). The Commissioner has unlimited powers of adjudication.

Additional Commissioner of Central Excise: There may be one or more Additional Commissioners in a Commissionerate. Restrictions on the powers of the Additional Commissioner have been placed through administrative instructions; the Additional Commissioner thus has restricted powers of adjudication.

Joint Commissioner of Central Excise: This post was created in May 1999, subsequent to implementation of the report of the Fifth Pay Commission. (This post is equivalent to the earlier Deputy Commissioner.)

Deputy / Assistant Commissioner of Central Excise: Each Commissionerate of Central Excise is divided into divisions, and each division is under the administrative control of a Deputy / Assistant Commissioner of Central Excise. An Assistant Commissioner (Senior Scale) is designated as Deputy Commissioner; however, both have the same powers.

Superintendent and Inspector: The division under each Deputy / Assistant Commissioner of Central Excise is further divided into various ranges, and each range is under the control of a Superintendent of Central Excise, who is of the rank of a Gazetted Officer. Inspectors work under the Superintendent and have some powers. An Inspector is not a Gazetted Officer.

Summary of Procedures
- Every person who produces or manufactures excisable goods is required to get registered, unless exempted. If there is any change in the information supplied in form A-1, the same should be supplied in form A-1.
- The manufacturer is required to maintain a Daily Stock Account (DSA) of goods manufactured, cleared and in stock.
- Goods must be cleared under an Invoice of the assessee, duly authenticated by the owner or his authorized agent. In case of cigarettes, the invoice should be countersigned by an Excise officer.
- Duty is payable on a monthly basis through TR-6 challan / Cenvat credit by the 5th of the following month, except in March. SSI units have to pay duty on a monthly basis by the 15th of the following month.
- Cenvat records and return are due by the 10th of the following month.
- A monthly return in form ER-1 should be filed by the 10th of the following month. SSI units have to file a quarterly return in form ER-3. EOU / STP units are to file a monthly return in form ER-2.
- Assessees paying duty of Rs. 1 Cr or more per annum through PLA are required to submit an Annual Financial Information Statement for each financial year by 30th November of the succeeding year, in prescribed form ER-4.
- Every assessee is required to submit information relating to Principal Inputs every year before 30th April, in form ER-5, to the Superintendent of Central Excise. Any alteration in principal inputs is also required to be submitted to the Superintendent of Central Excise in form ER-5 within 15 days. Only assessees manufacturing goods under specified tariff headings are required to submit this return, and even among them, only assessees paying duty of Rs. 1 Cr or more through PLA.
- Every assessee who is required to submit ER-5 is also required to submit a monthly return of receipt and consumption of each Principal Input, in form ER-6, to the Superintendent of Central Excise by the 10th of the following month.
- Every assessee is required to submit a list, in duplicate, of records maintained in respect of transactions of receipt, purchase, sale or delivery of goods, including inputs and capital goods.
- Inform any change in the boundary of premises, address, name of authorized person, or change in name of partners, directors or Managing Director in form A-1.

These are the core procedures which each assessee has to follow. There are other procedures which are not routine -
- Export without payment of duty or under claim of rebate
- Receipt of goods for repairs / reconditioning
- Receipt of goods at concessional rate of duty for manufacture of excisable goods
- Payment of duty under the Compounded Levy Scheme
- Provisional assessment
- Warehousing of goods
- Appeals and settlement

Registration

Registration is compulsory for every manufacturer or producer of excisable goods, and for every warehouse where goods are stored without payment of duty. The application for registration in form A-1 should be submitted to the office of the jurisdictional Assistant / Deputy Commissioner in duplicate. The requirements of registration are as follows -
- Separate registration is required for each premises, if a person has more than one premises.
- Registration is not transferable. If the business is transferred, fresh registration has to be obtained by the transferee.
- The registration certificate shall be granted within 7 days of receipt of a duly completed application. The registration certificate will be issued in the prescribed form RC.
- A change in the constitution of a partnership firm or company shall be intimated within 30 days of the change. In case of such a change, fresh registration is not required.
- If the manufacturer ceases to carry on the operations for which he is registered, he should apply for de-registration.
- Registration can be revoked or suspended if the holder of the registration, or any person in his employment, commits a breach of any provision of the CEA or Rules, or has been convicted u/s 161 of the Indian Penal Code.
- If there is any change in the information given in the form, it should be intimated in the form itself.
Daily Stock Account of Stored Goods (DSA)

A daily stock account has to be maintained by every assessee in a legible manner, indicating particulars regarding
- description of goods manufactured or produced
- opening balance
- quantity manufactured or produced
- inventory, i.e. stock of goods
- quantity removed
- assessable value
- amount of duty payable
- particulars regarding duty actually paid.

The first page and last page of such an account book shall be duly authenticated by the producer or manufacturer or his authorized agent. All such records shall be preserved for 5 years. The quantity should be in the same unit quantity code in which the rate is expressed.

Goods which are fully manufactured and entered in the DSA are liable for duty. However, if goods entered in the DSA are lost or destroyed in storage by natural causes or by unavoidable accident, or are unfit for consumption or marketing, remission of duty can be given by the Commissioner on application. Goods can be confiscated and a penalty can be imposed if the DSA is not maintained up to date, or if there is overwriting and cutting in the accounts.

Removal of Goods

Goods have to be cleared from the factory under an Invoice. The invoice shall contain
- registration number
- name of consignee
- description and classification of goods
- time and date of removal
- mode of transport and vehicle registration number
- rate of duty
- quantity and value of goods
- duty payable on the goods
- other details like name and address of assessee and consignee.

Invoices should be serially numbered. The serial number can be given either by printing or by franking machines; hand-written serial numbers shall not be accepted. The serial number should start from 1st April and continue for the whole financial year. The invoice shall be in triplicate and should be marked as follows -
- Original for Buyer
- Duplicate for Transporter
- Triplicate for Assessee

Before making use of an invoice book, the serial numbers should be intimated to the Range Superintendent. There should be only one invoice book in use at a time. Separate sets of invoices can be maintained with different serial numbers with the permission of the Assistant / Deputy Commissioner. General permission has been granted to use two different invoice books - one for removals for home consumption and the other for removals for export. Each foil of the invoice shall be duly pre-authenticated by the assessee or any duly authorized person. In case a dispatch is cancelled, the assessee should keep the cancelled copies for record purposes, as these are serially numbered and should be accounted for. Intimation of cancellation of an invoice should be sent to the Range Superintendent on the same day, if possible. If excisable goods are used within the factory (captive consumption), the date of removal will be the date on which the goods are issued for use within the factory. In case of goods consumed captively in a continuous process, one invoice may be made per day.

Payment of Duty

Duty is payable on a monthly basis by the 5th of the following month, except in March, where duty is payable on 31st March. Duty can be paid through the Personal Ledger Account (PLA) and / or Cenvat credit. Any assessee who has obtained a 15-digit ECC number from the Superintendent can operate a current account. The PLA is credited when duty is deposited in a bank by TR-6 challan, and duty is required to be paid by making a debit entry in the PLA on a monthly basis.
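Mechanically, the month-end settlement described above nets the duty payable for the month against the available Cenvat credit, with the balance debited to the PLA. The sketch below illustrates that arithmetic only; all amounts and names are invented, and a real settlement involves separate duty sub-heads and statutory records that this toy ignores.

```python
# Month-end duty settlement: use Cenvat credit first, debit PLA for the rest.
# Illustrative sketch; all amounts are invented.

def settle_month(duty_payable, cenvat_balance, pla_balance):
    """Return (cenvat_used, pla_debit, new_cenvat_balance, new_pla_balance)."""
    cenvat_used = min(duty_payable, cenvat_balance)
    pla_debit = duty_payable - cenvat_used
    if pla_debit > pla_balance:
        raise ValueError("PLA balance insufficient; deposit via TR-6 challan first")
    return (cenvat_used, pla_debit,
            cenvat_balance - cenvat_used, pla_balance - pla_debit)

# Duty of Rs. 5,00,000 for the month; Rs. 3,20,000 of Cenvat credit on hand,
# and Rs. 2,50,000 already deposited in the PLA.
used, debit, cenvat_left, pla_left = settle_month(500_000, 320_000, 250_000)
print(used, debit, cenvat_left, pla_left)   # 320000 180000 0 70000
```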
The PLA contains the following details -
- serial number and date
- details of credit, like TR-6 challan number, date and amount - separately for each sub-head of excise duty, like basic duty, special duty, additional duty, etc.
- details of debit.

The PLA has to be maintained in triplicate using indelible pencil and double-sided carbon. Each entry should be serially numbered and should start on a separate line - a separate line for each debit and credit entry - beginning from 1st April every financial year. Mutilation or erasure of entries is not allowed. If any correction is necessary, the original entry should be neatly scored out and attested by the assessee.

Four copies of the TR-6 challan are submitted to the authorized bank, marked Original, Duplicate, Triplicate and Quadruplicate. Two copies are returned by the bank duly stamped, and two are retained by the bank, of which one is sent to the Excise authorities directly for their accounting and cross-verification of the credit entries made by the assessee. The TR-6 challan requires details like -
- serial number
- name, address and code number of the assessee
- Excise Commissionerate, Division and Range
- PLA number, name of commodity
- account head of duty (0037 for Customs duties, 0038 for Central Excise and 0044 for Service Tax)
- amount deposited in cash / cheque / demand draft.

Cenvat Credit

CENVAT credit is a credit of the duty paid on raw materials, capital goods and services used in relation to the manufacture of excisable goods, or in relation to services provided on which Service Tax is payable. This credit is available on input goods, input services and capital goods.

Input goods eligible for Cenvat credit
- All goods (except High Speed Diesel Oil [HSD], Light Diesel Oil [LDO] and petrol) used in, or in relation to, the manufacture of the final products. The input may be used directly or indirectly in, or in relation to, the manufacture of the final product. The input need not be present in the final product.
- Input includes lubricating oils, greases, cutting oils and coolants; accessories of final products cleared along with the final product; goods used as paint, packing material or fuel; and goods used for generation of electricity or steam used in or in relation to the manufacture of the final product, or for any purpose, within the factory of production.
- Input also includes goods used in the manufacture of capital goods which are further used in the factory of the manufacturer.

Input services eligible for Cenvat credit
- Setting up, modernization, renovation or repairs of a factory, the premises of the provider of output service, or an office relating to such factory or premises
- Advertisement or sales promotion
- Market research
- Storage up to the place of removal
- Procurement of inputs
- Activities relating to business, such as accounting, auditing, financing, recruitment and quality control, coaching and training, computer networking, credit rating, share registry and security, inward transportation of inputs or capital goods, and outward transportation up to the place of removal

Capital goods eligible for Cenvat credit
- Tools, hand tools, knives, etc. falling under Chapter 82; machinery covered under Chapter 84; electrical machinery under Chapter 85; measuring, checking and testing machines, etc. under Chapter 90; grinding wheels and the like falling under sub-heading no. 6801.10; and abrasive powder or grain on a base of textile material falling under 68.02
- Pollution control equipment
- Components, spares and accessories of the goods specified above
- Moulds and dies
- Refractories and refractory materials
- Tubes, pipes and fittings thereof, used in the factory
- Storage tanks

Similarities between Cenvat on Inputs and on Capital Goods
- Credit of basic duty, special duty, CVD, NCCD, AED(GSI), AED(TTW) and Education Cess is available.
- The goods should be used in the factory (they can be sent out to a job worker for further processing, repair, reconditioning or any other purpose, but should be brought back within 180 days).
- Credit can be utilized for payment of any duty / service tax on the final products or final services, or on inputs / capital goods if removed as such, etc.
- If inputs are removed as such, an amount is payable equal to the Cenvat credit availed.
- Education Cess, AED(TTW) and NCCD can be utilized for payment of the corresponding duty on final products / inputs only, and not for payment of any other duty. Basic duty, special duty and AED(GSI) are interchangeable.
- Cenvat credit is not allowed in respect of exempted final products, or final products on which the duty paid is nil.
- Invoice, Bill of Entry, Supplementary Invoice, etc. are eligible documents for taking credit.
- Transfer of credit in case of merger, sale, lease or transfer of the whole factory is permissible.
- Recovery can be made if credit is wrongfully taken.
- A demand has to be raised within one year. If such wrong credit is availed or utilized on account of fraud, willful misstatement, collusion or suppression of facts, or with intent to evade payment of duty, the demand can be raised within five years.
- If inputs / capital goods are manufactured in the northeast region of India, or by industry in the Kutch district of Gujarat, or in the State of Jammu and Kashmir, Cenvat credit is available even if the manufacturer gets a refund of the duties paid by him.

Distinction between Cenvat on Inputs and on Capital Goods

| Cenvat on Inputs | Cenvat on Capital Goods |
|---|---|
| All inputs (except HSD, LDO and petrol) are eligible | Only capital goods are eligible |
| Inputs are required to be used "in or in relation to manufacture" | Capital goods should be "used in factory"; the purpose for which they are used is irrelevant |
| Credit is available as soon as the input is received in the factory | Up to 50% credit is available in the current year, and the balance in subsequent financial year(s) |
| There is no such provision in respect of Cenvat on inputs | The assessee cannot claim depreciation on the excise duty portion of the value of capital goods |
| Cenvat credit on inputs can be refunded if the final product is exported and the assessee does not claim duty drawback | Cenvat on capital goods cannot be refunded if the final product is exported, but the credit can be used for clearance of other final products |
| If the assessee opts out of Cenvat, he has to pay / reverse the credit of duty availed on inputs lying in stock on the day he opts out | This provision does not apply to Cenvat on capital goods |
| Inputs can be sent directly to the place of the job worker from the supplier-manufacturer | Capital goods have to be brought into the factory and then sent to the job worker |

Concession for SSI Units

Since excise is a duty on manufacture, it is payable even by a small unit manufacturing goods. However, it is the Govt.'s policy to encourage the growth of small units, and it is administratively inconvenient and costly to collect revenue from numerous small units. An SSI is a unit having annual turnover of less than Rs. 3 Cr. All industries, irrespective of their investment or number of employees, are eligible for the concession.
Concession for SSI units

Since excise is a duty on manufacture, it is payable even by a small unit manufacturing goods. However, it is the Government's policy to encourage the growth of small units, and it is administratively inconvenient and costly to collect revenue from numerous small units. For this purpose, an SSI is a unit having an annual turnover of less than Rs. 3 crore; all industries, irrespective of their investment or number of employees, are eligible for the concession. In fact, even a large industry will be eligible if its annual turnover is less than Rs. 3 crore. The SSI unit need not register with any authority. A unit is entitled to the exemption only if its turnover in the previous year was less than Rs. 3 crore; a unit whose turnover crosses Rs. 3 crore during 2005-2006 can still claim the exemption for that year, but will have to pay regular duty from 1st April 2006.

SSI units have been given three types of exemptions (compared numerically in the sketch after this section):
- An SSI unit can avail full exemption up to Rs. 100 lakhs and pay normal duty thereafter. Such units can avail Cenvat credit on inputs only after reaching a turnover of Rs. 100 lakhs in the financial year.
- An SSI unit intending to avail Cenvat credit on inputs on all of its turnover has to pay 60% of the duty on the first Rs. 100 lakhs and 100% duty on subsequent clearances.
- An SSI unit can also pay the full 100% duty and avail Cenvat credit.

Turnover to be included:
- Turnover of goods exempted under other notifications
- Goods manufactured in rural areas with another's brand name
- Captive consumption, where it is not exempt because it is used in the manufacture of a final product which is exempt under some other notification
- Exports to Nepal and Bhutan
- Goods cleared with payment of duty
- Goods cleared under the Compounded Levy Scheme

Turnover to be excluded:
- Exports other than to Nepal / Bhutan
- Exports under bond through a merchant exporter
- Deemed exports
- Turnover of non-excisable goods
- Goods manufactured with another's brand name and cleared on payment of duty
- Intermediate products, when the final products are eligible for SSI exemption
- Intermediate products, when the final product is exempt under any other notification
- Job work amounting to manufacture done under specified notifications
- Job work or any process which does not amount to manufacture
- Strips of plastic used within the factory
- Inputs brought in by the assessee and cleared as such
- Turnover as a trader, alongside own manufactured goods
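The three SSI options can be compared with a small sketch. The Rs. 100 lakh slab and the 60% rate come from the notes; the 16% normal duty rate is an assumed figure for illustration only, and the results are gross duty before any set-off of the Cenvat credit whose availability differs across the options.

```python
DUTY_RATE = 0.16   # assumed normal duty rate, for illustration only
SLAB = 100.0       # exemption slab in Rs. lakhs

def option1(turnover):
    """Full exemption up to Rs. 100 lakhs, normal duty thereafter."""
    return max(turnover - SLAB, 0) * DUTY_RATE

def option2(turnover):
    """60% of the duty on the first Rs. 100 lakhs, full duty beyond,
    with Cenvat credit available on the entire turnover."""
    return min(turnover, SLAB) * DUTY_RATE * 0.60 + max(turnover - SLAB, 0) * DUTY_RATE

def option3(turnover):
    """Full duty on all clearances, with Cenvat credit."""
    return turnover * DUTY_RATE

for t in (80, 150, 290):  # turnovers in Rs. lakhs, all below the Rs. 3 crore limit
    print(t, option1(t), option2(t), option3(t))
```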
Letter of Undertaking

In relation to Central Excise, the following concessions / incentives are available for exports:
- Exemption from duty on the final product (or refund of duty paid)
- Exemption from, or refund of, excise duty paid on inputs

For exports under bond, a manufacturer exporter can furnish a letter of undertaking (LUT) in form UT-1 instead of executing a bond. The LUT, once given, is valid for 12 calendar months; it is not necessary to submit an LUT for each consignment. Although the manufacturer exporter does not execute a bond, submission of proof of export is still required. The LUT will not be discharged unless proof of export is submitted or, in case of deficiency, duty is paid along with interest.

Show Cause Notice (SCN)

An Excise Officer can ask a manufacturer to pay a difference of duty by issuing a show-cause notice. After considering the representation from the person concerned, the Central Excise Officer determines the amount of duty payable, and the person chargeable to duty then has to pay that amount. Note that:
- An SCN is necessary, but failure to issue one is only an irregularity.
- A simple letter asking the assessee to pay duty is not a notice.
- An SCN is required even if the assessee has admitted liability and agreed to pay the duty.
- No notice is required if the assessee voluntarily pays the amount.
- The SCN has to be served on the person chargeable to duty within one year from the "relevant date", which is one of the following:
  - Where a return is filed: the return is to be filed within 5 days of the close of the month, and the date of filing is the "relevant date".
  - If the return is not filed: the date on which the return should have been filed, i.e. the 10th of the month, is the "relevant date".
  - If no return is required to be filed: the date of payment of duty is the "relevant date".
  - If the demand is on account of an erroneous refund: the relevant date is the date on which the refund was made.
- This period extends to 5 years if the non-payment or short payment of duty is by reason of fraud, collusion, willful misstatement or suppression of facts, or contravention of any provision of the Excise Act or the rules with an intention to evade payment of duty.

Requirements of a show-cause notice:
- The SCN is to be issued to the manufacturer only
- Essential details should be given
- Penalty or confiscation must be mentioned if it is proposed
- The allegations must be stated
- Copies of relied-upon documents are to be supplied

Adjudication

To adjudicate means to hear, or try, and decide judicially; adjudication means giving a decision. Excise authorities are empowered to determine classification, valuation, refund claims and the tax / duty payable. They are also empowered to grant various permissions under the rules and to impose fines, penalties, etc. This is called "departmental adjudication". An uncontrolled authority could cause great damage to an assessee, and hence an opportunity of appeal against such orders has been provided.

Departmental authorities have original adjudication powers as follows:
- Superintendent - remission of duty for loss of goods up to Rs. 1,000
- Deputy / Assistant Commissioner - remission of duty for loss of goods up to Rs. 2,500; issuance of registration certificates; Cenvat credit / duty up to Rs. 5 lakhs
- Joint Commissioner - Cenvat credit / duty above Rs. 5 lakhs and up to Rs. 20 lakhs; remission of duty for loss of goods up to Rs. 5,000; matters related to export under bond or under claim of rebate; loss of goods during transit to warehouse, without upper monetary limit
- Additional Commissioner - Cenvat credit / duty above Rs. 20 lakhs and up to Rs. 50 lakhs; remission of duty for loss of goods up to Rs. 5,000; matters related to export under bond or under claim of rebate; loss of goods during transit to warehouse, without upper monetary limit
- Commissioner - Cenvat credit / duty without upper limit; remission of duty for loss of goods without any limit

When an order is passed by an officer below the level of Commissioner, an appeal lies with the Commissioner (Appeals); an appeal against an order passed by the Commissioner lies with CESTAT (the Customs, Excise and Service Tax Appellate Tribunal). An appeal can be made to the High Court against an order of the Tribunal if the case involves a substantial question of law, except in cases relating to rate of duty and valuation.

Interest on delayed payment

- If duty is not paid when it ought to have been paid, interest is payable at the rate specified by the Central Government by notification in the official gazette. The rate cannot be less than 10% or more than 36%.
- Interest is payable from the first day of the month following the month in which the duty ought to have been paid.
- The actual rate of interest is 13% w.e.f. 12-9-2003.
- If the assessee voluntarily pays duty on an order or instruction of the CBE&C within 45 days of such order, he is exempted from payment of interest. However, if he pays only a part of the amount, or pays it while reserving the right to appeal, interest is payable from the month following the month in which the duty ought to have been paid.
- This relaxation applies only when the CBE&C issues a general order; it does not apply if the assessee pays duty on receipt of an SCN or pays duty on his own.
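A minimal sketch of the interest computation described above, assuming simple interest at the notified 13% per annum reckoned on actual days from the first day of the month following the due month. The day-count convention (actual/365) is an assumption for illustration; practice may differ.

```python
from datetime import date

RATE = 0.13  # notified rate w.e.f. 12-9-2003

def interest_start(due):
    """Interest runs from the 1st of the month following the due month."""
    if due.month == 12:
        return date(due.year + 1, 1, 1)
    return date(due.year, due.month + 1, 1)

def interest_payable(duty, due, paid):
    """Simple interest on the unpaid duty for the days of delay."""
    days = max((paid - interest_start(due)).days, 0)
    return duty * RATE * days / 365

# Duty of Rs. 50,000 due on 5 June 2005 but actually paid on 20 September 2005:
print(round(interest_payable(50000, date(2005, 6, 5), date(2005, 9, 20)), 2))  # 1442.47
```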
Penalties

There are three types of penalties in Central Excise:
- Civil liability
- Criminal liability
- General penalty

Civil liability arises when the provisions of the Act are violated. The penalty involves confiscation of goods and a monetary penalty, and it is imposed by the Excise authorities as per the provisions of the Central Excise Rules.

Criminal liability involves imposition of a fine and imprisonment, imposed by a criminal court on prosecution as per the provisions of the Act.

General penalty applies if goods are removed in contravention of the Act, rules or notifications; if goods are not accounted for; if goods are manufactured, produced or stored without applying for registration; or if excise rules and notifications have been contravened with an intention to evade duty. It includes confiscation of the goods and a penalty of up to the duty payable or Rs. 10,000, whichever is higher.

Refunds

An assessee can claim a refund of duty if it is due to him. A refund claim can be filed for various reasons, such as:
- Excess payment of duty due to mistake
- Having been forced by the department to pay higher duty
- Finalization of provisional assessment
- Export under claim of rebate
- Duty paid under protest / pre-deposit of duty for appeal, where the appeal is decided in favor of the assessee
- Refund of Cenvat credit where the final product is exported
- Unutilized balance in PLA

If the manufacturer has charged excise duty to his buyer, he has passed the burden on to the buyer and has already recovered the duty from his customer. In such cases a refund of duty would lead to "unjust enrichment", as the manufacturer would benefit twice: first from the customer and then from the Government. In the majority of cases it is not practicable to identify individual customers and pay the refund to them; at the same time, duty illegally collected cannot be retained by the Government. In such cases, the refund is therefore transferred to a Consumer Welfare Fund for the protection and benefit of consumers.
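The unjust-enrichment rule reduces to a single decision, sketched below. The function and flag names are invented for illustration; the substance - that a sanctioned refund goes to the Consumer Welfare Fund whenever the duty burden was passed on to buyers - is as stated above.

```python
def route_refund(amount, duty_passed_on):
    """Decide where a sanctioned excise refund goes. Paying a manufacturer
    who already recovered the duty from his customers would be unjust
    enrichment, so that money goes to the Consumer Welfare Fund instead."""
    if duty_passed_on:
        return f"Rs. {amount} credited to the Consumer Welfare Fund"
    return f"Rs. {amount} refunded to the assessee"

print(route_refund(75000, duty_passed_on=True))
print(route_refund(75000, duty_passed_on=False))
```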
Confiscation and Seizure

Confiscation means that the goods become the property of the Government, which can deal with them as it wants. The following can be confiscated:
- Contravening goods
- Conveyances used for transport of contravening / smuggled goods
- Packages in which contravening excisable goods are packed
- Goods used for concealing contravening excisable goods
- Contravening goods whose form has been changed - even if mixed with other goods from which they cannot be separated
- Sale proceeds from the sale of contravening excisable goods
- There is, however, no confiscation of a container obtained on hire

Seizure means the goods are taken into custody by the department; ownership of the goods remains with the owner. If goods are liable to confiscation, they can be seized by Excise officers. If seized goods are to be confiscated, an SCN must be given within 6 months of the seizure. A panchnama must be made for the seizure of goods, and the seized goods must be kept either in a police station or in the custody of the Excise Department.

Payment of Duty under Protest

Sometimes the classification of goods, the assessable value determined by the excise authorities in adjudication proceedings, etc. are not agreeable or acceptable to the assessee. In such cases the assessee can file an appeal and, in the meanwhile, pay duty under protest. The following procedure needs to be followed:
- Write a letter to the Assistant / Deputy Commissioner stating that he desires to pay duty under protest, giving the grounds for doing so.
- Obtain a dated acknowledgement, which will be proof that the assessee has paid duty under protest from that date.
- After submission of the aforesaid letter, he can pay duty under protest only until his appeal or revision is decided.
- The endorsement "duty paid under protest" should appear on all excise invoices and monthly / quarterly returns. If a lump sum is paid in respect of a past demand, the fact of payment under protest should be mentioned in the PLA, the Cenvat Credit Account and the Daily Stock Account.
- As per the ER-1 form of the monthly / quarterly return, the number of invoices on which duty is paid under protest should be indicated in the return.

Audits

Most factories are under the "Self Removal Procedure", so there is no physical control over production and clearance of goods, and assessment is mainly based on the returns submitted by the assessee. The department has therefore evolved various checks and counter-checks to ensure that excise duty is not evaded. For Central Excise purposes, an audit means scrutiny of the records of the assessee and verification of the actual processes of receipt, storage, production and clearance of goods, with a view to checking whether the assessee is paying Central Excise duty correctly and following Central Excise procedures. Three types of audits are carried out:
- Departmental audit - an audit section is attached to each Commissionerate. Some audit parties function under the Commissionerate headquarters, while others may function at important industrial centers where a Joint Commissioner or Additional Commissioner has been posted. The audit of an assessee's factory is carried out through a visit by an "audit party", which usually consists of two or three inspectors and a Deputy Office Superintendent, headed by an Excise Superintendent; AC / DC and senior officers are associated with the audit of large units. The audit parties visit the factories periodically.
- Central Excise Revenue Audit (CERA) - the Comptroller and Auditor General of India also carries out audits of all assessees. These CERA parties audit the accounts of excise as well as customs assessees. The Constitution specifies that the report of the C&AG shall be submitted to the President of India, who causes it to be laid before each House of Parliament. The frequency of CERA audits depends on the importance the audit parties attach to the unit and the time available to them. The assessee is required to produce his records, cost audit report and income tax audit report to the audit parties.
- Special audit (valuation audit and Cenvat credit audit) - a valuation audit can be ordered at any stage of enquiry, investigation or other proceedings before the Assistant / Deputy Commissioner regarding the assessable value of goods manufactured by the assessee. An audit of the Cenvat credit availed or utilized by a manufacturer can be ordered if the Commissioner has "reason to believe" that the Cenvat credit availed is not normal, or that the credit has been availed on account of fraud, willful misstatement, suppression of facts or collusion.
The conventional view of development in human infancy is that objective awareness of the surrounding world is gradually constructed during the first 2 years through the infant's actions on the environment. However, recent work on the perceptual abilities of young infants indicates that even newborns perceive objective properties of their surroundings, detecting depth and displaying perceptual constancies that have hitherto been attributed only to older infants. In consequence it is necessary to revise our model of infant development. Since evidence points to objective perception from birth there is no need to postulate developmental processes that lead to its construction during development. However, as infants gain new capabilities for acting on the world, they have to develop knowledge of how these actions relate to the perceived world. It is suggested that this sort of knowledge is constructed through active experience.
A surgeon performs open-heart surgery on a patient without ever leaning over the operating table. In fact, the surgeon isn't even standing in the same room, let alone the same city. He sits comfortably at a console, where he uses a robotic arm from afar to maneuver instruments into tiny incisions on the patient's chest. Sound like medical wizardry? This is the magic of computer science.

Although the above example is a dramatic one, few aspects of everyday life are untouched by computer science. Every car built today contains a computer that monitors the engine and alerts drivers with a "Check Engine" light when problems arise. Smart phones translate our movements on a touch screen. Shopping websites like Amazon.com track customer habits and make recommendations based on previous purchases.

Computer science has been shaping society for decades. With a computer science degree, you too have an opportunity to impact the world - and have fun in the process. There's a lot more to it than code and hardware! Here are a few myths about computer science and a handful of reasons why you should pursue this dynamic, exhilarating field of study.

Computer science is logical and structured, rooted in mathematics, but it requires as much creativity as the arts. You might say computer scientists are technological artists. They think in multiple dimensions, hunt for creative solutions to complex problems, and design unique software. Like muralists who assemble large-scale paintings, computer scientists are interpreting the world around them and building virtual landscapes.

Don't be fooled by the name. You won't be programming for a lifetime simply because you study computer science. Computer science is versatile and applicable to many fields like marketing, finance, retail, and criminal justice. For instance, you might work for a company that uses data mining software to predict customers' buying patterns and figure out how to improve marketing and sales. Remember that technology is rapidly changing, and you will need to change with it. A career in computer science requires you to embrace innovation and adapt quickly to change.

It's true that some computer scientists spend endless hours writing code. But computer science requires a good deal of teamwork, interaction, and interpersonal skills. Many people with a computer science degree work on development teams and are trained to respond to users' needs. Computer science is the intersection of people and technology, and there is no better example of how that plays out than in social media. Facebook has become such a powerful connector that it helped spur a revolution in Egypt, giving protesters a space to organize. Today, Facebook boasts 750 million active users worldwide - and that phenomenon all started thanks to one computer programmer, Mark Zuckerberg.

If you aspire to be part of the creative economy, consider a computer science degree. You might have the potential to invent the next hit social media site, devise a medical robot to improve lives, or design a tool to identify terrorist hot spots. Any route points you to an exciting and rewarding career.

Saint Leo University, the oldest Catholic college in Florida (1889), ranks as one of the top universities in the South, according to U.S. News & World Report's "America's Best Colleges" list. Saint Leo's main campus, located 30 miles north of Tampa, educates more than 1,900 traditional students, part of a total enrollment of more than 15,000. Saint Leo University ranks as one of the nation's ten leading providers of higher education to the military and is a nationally recognized leader in online education.

To learn more about Saint Leo University's Bachelor of Computer Science degree options, visit http://www.saintleo.edu/academics/undergraduate/majors-minors/bs-computer-science.
Over the past several years, it has become something of a tradition for the Congress and President to claim that the federal government will not be able to pay its bills unless the debt ceiling is raised. In fact, the debt ceiling has been increased, or suspended, a total of seventy-nine times since 1940. However, the United States government, which has carried a national debt for virtually its entire existence (with the exception of 1835), did not always have a debt ceiling, which should really be called a money pit.

Prior to 1917, any debt accrued by the federal government needed to be specifically approved by the Congress. The initial debt ceiling, passed in the Second Liberty Bond Act, was set at $9.5 billion in Treasury bonds and $4 billion in one-year certificates. This removed some of the Congressional oversight from the Secretary of the Treasury. Until 1939, when Congress created an overall aggregate limit on the national debt, increases in the national debt were simply amendments to the Second Liberty Bond Act. The national debt now stands at approximately $17 trillion, with the debt ceiling suspended entirely until March 2015.

So, why is the federal government so far in debt? Some people claim that the debt ceiling has to rise to keep the system going because money is created out of thin air by the Federal Reserve and loaned to the government with interest. The government must take out a new loan to cover the costs, putting the system further into debt, because the money itself is debt. On this view the problem is central banking, and until that goes away, this will keep happening because "that is what's supposed to happen."

While I am no supporter of central banking, this claim ignores the fact that the federal government was in debt before the Federal Reserve was created, before the US Treasury began issuing paper currency, and even before George Washington was inaugurated. The United States government does not remain in debt because of a central bank. As Tom Knapp points out, "The Congressional Budget Office estimates that for fiscal year 2014 the US government will spend $514 billion more than it steals in tax revenue. But that government could cut its military spending by $514 billion and still be the third largest 'defense' spender in the world (behind only Russia and China)."

The United States government remains in debt because the Congress will not pass, and the President will not sign, a balanced budget, for fear of being seen as not supporting the troops. They won't do this because they don't have to. The problem is that the people making the spending decisions are not on the hook to pay for the costs!
Training Topics: ADHD 1

The purpose of this activity is to provide primary care clinicians with tools designed to help them identify, diagnose and treat juvenile depression and ADHD in children and youth (ages 21 and under).

Intended audience: primary care clinicians, including pediatricians, family physicians, internists, psychiatrists, and child psychiatrists.

Identifying and Diagnosing ADHD
Speakers: Michael Naylor, MD; Toya Clay, MD; Kamaria Bond, LCSW, CADC

Module 3: Identifying and diagnosing ADHD workshop
- Identify screening tools for ADHD and common co-morbidities
- Employ screening tools for ADHD and interpret scores
- Recognize diagnostic criteria for ADHD and common co-morbidities
- Cite resources for evidence-based information for ADHD
- Cite resources for school advocacy and family support services for ADHD and common co-morbidities

The University of Illinois at Chicago (UIC) College of Medicine is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. The UIC College of Medicine designates this educational activity for a maximum of 5.0 AMA PRA Category 1 Credits™. Physicians should only claim credit commensurate with the extent of their participation in the activity. Continuing Education Units (5.0) will be available to social workers and affiliated reciprocal agencies. Illinois Board License 159-000112. Note: Nursing CEUs are not available at this time.
Iran has crossed a new nuclear threshold, but it's one the Obama administration isn't worried about. On Saturday, technicians began loading low-enriched uranium fuel supplied by Russia into Iran's first civilian nuclear reactor, and if all goes smoothly, the Bushehr plant could start producing electricity under United Nations monitoring late this year or early next. "The International Atomic Energy Agency regularly inspects the Bushehr Nuclear Power Plant in Iran. Iran began moving fuel assemblies to the plant's reactor compartment on 21 August 2010," Ayhan Evrensel, a press officer for the International Atomic Energy Agency, said in a statement Saturday. "The agency is taking the appropriate verification measures in line with its established safeguards procedures." Bushehr embodies what the administration and many experts consider an ideal solution to the Iranian nuclear dispute: The Islamic republic benefits from the peaceful nuclear energy to which it's entitled by international law, but the fuel comes from elsewhere, negating Iran's need to make its own via enrichment, a process that also can produce highly enriched uranium for nuclear bombs. Moreover, under a 2007 accord negotiated by the Bush administration, the spent fuel rods will go back to Russia after they've cooled to prevent Iran from harvesting them for plutonium, the other essential component of nuclear weapons.
A compound found in the peels of citrus fruit has the potential to lower cholesterol more effectively than some prescription drugs, and without side effects, according to a study by U.S. and Canadian researchers.

A joint study by the U.S. Department of Agriculture and KGK Synergize, a Canadian nutraceutical company, identified a class of compounds isolated from orange and tangerine peels that shows promise in animal studies as a potent, natural alternative for lowering LDL cholesterol (bad cholesterol), without the possible side effects, such as liver disease and muscle weakness, of conventional cholesterol-lowering drugs. The findings will be described in the May 12 print issue of the Journal of Agricultural and Food Chemistry, a peer-reviewed publication of the American Chemical Society, the world's largest scientific society.

The compounds, called polymethoxylated flavones (PMFs), are similar to other plant pigments found in citrus fruits that have been increasingly linked to health benefits, including protection against cancer, heart disease and inflammation. The study is believed to be the first to show that PMFs can lower cholesterol, the researchers say.

"Our study has shown that PMFs have the most potent cholesterol-lowering effect of any other citrus flavonoid," says Elzbieta Kurowska, Ph.D., lead investigator of the study and vice president of research at KGK Synergize in Ontario, Canada. "We believe that PMFs have the potential to rival and even beat the cholesterol-lowering effect of some prescription drugs, without the risk of side effects."

PMFs are found in a variety of citrus fruits. The most common citrus PMFs, tangeretin and nobiletin, are found in the peels of tangerines and oranges. They are also found in smaller amounts in the juices of these fruits.

Using hamster models with diet-induced high cholesterol, the researchers showed that feeding the animals food containing 1 percent PMFs lowered levels of LDL cholesterol by 32 to 40 percent. Previous animal studies by others have shown that similar flavonoids, particularly hesperidin from oranges and naringin from grapefruit, may also have the ability to lower cholesterol, although not as effectively as PMFs, according to Kurowska. Treatment with PMFs did not appear to have any effect on levels of HDL cholesterol, or good cholesterol, the researcher says. No negative side effects were seen in the animals that were fed the compounds, she adds.

The researchers are currently exploring the compound's mechanism of action on cholesterol metabolism. They now suspect, based on early results in cell and animal studies, that it works by inhibiting the synthesis of cholesterol and triglycerides inside the liver. A long-term human study of the effect of PMFs on high LDL cholesterol is now in progress.

While citrus juice offers plenty of health benefits, taking PMF supplements could be an easier way to lower cholesterol: a person would have to drink 20 or more cups of orange or tangerine juice a day to achieve a therapeutic effect, Kurowska estimates. KGK Synergize already has developed a nutrition supplement containing PMFs combined with a form of vitamin E that seems to enhance the compound's effect, according to Kurowska. Marketed as a cholesterol-lowering agent under the trade name Sytrinol™, the supplement recently became available in the U.S.

USDA's Citrus and Subtropical Products Laboratory in Winter Haven, Fla., and KGK Synergize Inc. provided funding for this study.
The online version of the research paper cited above was initially published April 21 on the journal's Web site. Journalists can arrange access to this site by sending an e-mail to email@example.com or calling the contact person for this release.
Family Literacy Program

"A program created to encourage First Nations people to embrace literacy in all its forms, including First Nations literacy."

The Family Literacy Program's goal is to reintroduce First Nations literacy to our community. Some First Nations people never had an opportunity to learn traditional teachings, or to recognize that what they were learning at home was literacy. Today's more inclusive definition of literacy includes not only reading and writing but also story-telling, oral history, painting, and song and dance.

Haahuupa – Traditional Teachings
Bi-weekly potlucks followed by traditional teachings have included topics such as cedar bark work, art, song and dance, shawl making, drum making, traditional tool making, eagle teachings, longhouse teachings, and the construction of a longhouse model. The topics for this program are chosen by the participants.

Bi-monthly lunch group for avid readers to come together to discuss books they are currently reading.

This monthly social is an informal gathering for Elders to share knowledge, stories and history. Elders are invited by a traditional speaker who is guided by Tseshaht protocol.

Easy access to books in a familiar location First Nations people frequent.

Although the Photography Club has ended, we were fortunate enough to have Norm Silverstone, a well-known Port Alberni photographer, volunteer to continue teaching our students by sharing his knowledge with them.

Children's "Potlatch" Book
This program worked with Elders to draft and publish a children's book identifying the Nuu-chah-nulth names for the different kinds of Nuu-chah-nulth feasts. The purpose of the book was both to preserve these traditional Nuu-chah-nulth names for future generations and to encourage their use by making them accessible to Nuu-chah-nulth children. Toward this goal, books were provided to elementary schools within Nuu-chah-nulth territories.

Eye of the Wolf
Josephine Johnston of Quu?asa Clinical Counselling delivers this highly cultural and traditional workshop, aimed at cultural learning, healing, and growing.

Museum Field Trip
A field trip to the Royal British Columbia Museum in Victoria. Participants helped raise funds so they could attend the IMAX theatre and have supper on the way home.

N'iwaasin ciciqii – "It's our Language"
A meeting of Elder Advisors, a facilitator, and interested participants to learn and document Nuu-chah-nulth phrases.

Cedar Bark Harvesting
A timely workshop, as cedar bark can only be harvested during certain months of the year. Teachings around harvesting are included in the harvesting trip.
The Computer Revolution/Internet/VoIP

VoIP is the method by which the Internet carries voice transmission from one source to another. One form is PC-to-PC calling, in which two computers with microphones connect over the Internet. Another is PC-to-phone: anyone can call from a personal PC to any phone line anywhere in the world.

Voice over Internet Protocol (VoIP)
VoIP is the process of placing phone calls over the Internet. One example of this is a computer-to-computer call via the "Skype" service, which allows you to connect and talk with someone at another computer. Vonage is a VoIP provider offering a permanent setup designed to take the place of a traditional landline phone. One advantage of this type of service is its low cost. The big disadvantage is that it cannot function during a power outage, or if your Internet connection goes down.
Levels: Grades 3-6 Topics: material strength, architecture Description: Ms. Hsu's third grade class challenged the stars of ZOOM to construct a geodesic dome out of rolls of newspaper and to test its strength. In turn, the ZOOMers invited viewers to try the activity at home or in the classroom and share their results. How many magazines or books will your dome support? More Great Resources from Education World's Learning Machine See more great science resources in our Science Machine Archive. Then visit our other Learning Machine archives: The Math Machine The Reading Machine Article by Cara Bafile Copyright © 2009 Education World
Jim Beeghley is an educational technologist, a blogger, podcaster and expert in using technology to teach history, with a particular interest in the American Civil War. He'll be contributing a short series to PennLive on how to navigate the history and learn about Gettysburg through technology and the Internet.

By Jim Beeghley, Mechanicsburg

During the 100th and 125th anniversaries of the Battle of Gettysburg, individuals had to travel for hours, or sometimes hundreds of miles, to talk to a Gettysburg historian or to examine primary sources. With today's technologies and high-speed Internet access, you can now tap these historians with the click of a button in a browser, especially with the creation of social media sites such as Facebook, YouTube and Twitter. Many Civil War historians, authors, national parks and museums are now entering the world of social media to share and connect with the world. Let's take a closer look at these three popular websites and how we can use them to learn more about the Battle of Gettysburg.

With over 600 million daily active users, Facebook is easily the largest social network around. If you are on Facebook, then you can easily connect with others and learn about the Battle of Gettysburg. Many noted Civil War historians such as Gary Adelman and Tim Smith have Facebook pages where they share primary sources, facts and information about Gettysburg and the Civil War. In addition to historians, various authors have set up a presence on Facebook and share their knowledge. For example, you can visit the page of my family's favorite book, The Complete Gettysburg Guide by J.D. Petruzzi. Other noted authors include Scott Mingus Jr., Kevin Levin and John Hoptak. Gettysburg National Military Park and various Gettysburg museums have also set up Facebook pages. Finally, you can always visit my Teaching the Civil War Facebook page to find ideas and resources for teaching the Civil War. To find more resources on Facebook, you can start by typing "Battle of Gettysburg" into the search box. Facebook is a great place to learn more about Gettysburg and the Civil War, watch behind-the-scenes videos, see exclusive photos and check out some cool primary sources.

YouTube is another social networking site where individuals and organizations can upload and share videos. Some of the videos on YouTube can easily be used to learn more about the Battle of Gettysburg. You can watch videos of Licensed Battlefield Guides in Gettysburg, in-depth videos by noted historians such as Dr. Matt Pinsker, videos from Park Rangers in Gettysburg, and some videos I have posted of my kids learning about Gettysburg.

Twitter is a micro-blogging site that asks users to answer the question "what are you doing?" in 140 characters or less. This social networking site boasts millions of users, many of whom are Civil War and Gettysburg historians, authors and other experts. You can follow me at @fifer1863. I also recommend these users: You can also use the Search bar to look for "Gettysburg" or "Civil War", and all of the tweets containing the words "Gettysburg" or "Civil War" will appear. Hashtags are a way to group tweets together so they are easier to follow. You can follow hashtags like #civilwar, #gburg150 or #cw150.

As you can see, it is very easy to use these social networking sites to bring subject matter experts into your classroom.

In Jim's first Gettysburg 150 education post, he looked at how to navigate the treasure of Civil War photos that the Library of Congress has archived.
He also gave tips and tricks for studying them. Check out his post: 'The Case of the Moved Body' at the Library of Congress.
A machine was built by the Farman company and pedalled by Gabriel Poulain over the specified distance in both directions early on the morning of 9th July 1921, with Robert Peugeot watching; the distance achieved was 11.98 metres. (Incidentally, two weeks later, the long-jump record was reset at 7.69 metres by Edwin Gourdin on 23rd July 1921 in Cambridge, Massachusetts.)

The Poulain-Farman machine was undoubtedly a human-powered vehicle. It was a biplane with a span of 20 feet (6 m) and a wing area of 132 square feet (12.08 m²), i.e. larger than some wings built for the purpose of true human-powered flight in the 1960s. There was a fairing around the person and bicycle. There was no propeller and there were apparently no aerodynamic controls. The total weight was 201 lbs (91 kg).

The lifting force (lift) produced by a wing is mainly a function of the area of the wing(s), the density of the air, the speed of the wing relative to the air and the shape of the wing section. The other factor is the viscosity of the air (see Reynolds number in Glossary). The section shape and its angle relative to the motion determine the factor Cl, or "lift coefficient", in the formula

L = Cl × (ρ/2) × V² × S

where L = lift, ρ = air density, V = velocity and S = wing area. Cl is a pure number (dimensionless); hence if one converts to a different system of units its value is unaltered. In round terms, one might assume that Poulain's wings achieved a lift coefficient of 1. Assuming also the typical sea-level value of air mass density ρ of 1/420 in the ft/lb/sec system (so that ρ/2 = 1/840), and knowing the wing area and the weight that needed to be lifted, one may state:

210 = 1 × (1/840) × V² × 132

which gives V = 36.5 ft/sec, or 25 mph. However, to have travelled 11.98 metres (39.3 feet), he would have needed to be moving faster than this when leaving the ground. A rough estimate of this extra necessary speed can be made by assuming a glide ratio of 5:1 (typical for hang-gliders); that is, he could have travelled 39.3 feet forward while losing 39.3/5 = 8 feet in altitude whilst maintaining the same speed. The extra energy needed is the same as that needed to climb this height. The calculation is done most simply by converting the forward speed into its equivalent height using

height = V²/(2g) = 36.5² / (2 × 32.2) = 21 ft

Hence the total equivalent height is 21 + 8 = 29 ft, which corresponds to a speed of √(2g × 29) ≈ 43 ft/sec (29 mph). Hence, assuming a lift coefficient of unity, this would imply a minimum flying speed of 25 mph; and if we assume a glide ratio of 5, then Poulain would have needed to achieve 29 mph just before take-off to provide the momentum to carry through the air over the distance. Poulain was a racing cyclist and an experienced pilot.

Is it a bike? Is it a plane? No, it was a machine which had been optimised over nine years purely for the purpose of winning the Peugeot prize, and it was demonstrably the appropriate vehicle for the purpose. Poulain and the Farman company succeeded with this simple layout against a competition of machines with flapping wings and propellers, some of them being tricycles or having other appendages adding to the weight and drag. There is no record of anyone else operating this machine, or of whether it was stable, or whether Poulain personally had gradually to acquire the specific knack of controlling it.
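The arithmetic above is easily reproduced. The short sketch below (Python, used only for illustration) applies the same lift equation with the same rough assumptions stated in the text: Cl = 1, sea-level density of 1/420 in ft-lb-sec units, a 5:1 glide, and speeds converted to equivalent heights via h = V²/(2g).

```python
import math

G = 32.2         # ft/s^2
RHO = 1 / 420.0  # sea-level air density in the ft-lb-sec system, as in the text

def level_flight_speed(weight_lb, area_ft2, cl=1.0):
    """Speed at which lift equals weight, from L = Cl * (rho/2) * V^2 * S."""
    return math.sqrt(weight_lb / (cl * (RHO / 2) * area_ft2))

def takeoff_speed(weight_lb, area_ft2, hop_ft, glide_ratio=5.0, cl=1.0):
    """Add the energy needed to cover the hop at the assumed glide ratio,
    working in equivalent heights: h = V^2 / (2g)."""
    v = level_flight_speed(weight_lb, area_ft2, cl)
    h_kinetic = v**2 / (2 * G)       # ~21 ft for Poulain's machine
    h_glide = hop_ft / glide_ratio   # ~8 ft lost over the 39.3 ft hop
    return math.sqrt(2 * G * (h_kinetic + h_glide))

print(level_flight_speed(210, 132))   # ~36.5 ft/s (~25 mph)
print(takeoff_speed(210, 132, 39.3))  # ~42.9 ft/s (~29 mph)
```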
Clearly, without some sort of drive when airborne, one will not get very much further than this; but note that all those early aircraft which were intended to take off unaided, and some of the later ones, used drive to the ground-wheel as well as to the propeller.

Dr Alexander Lippisch, a prolific designer of sailplanes and other aircraft, built an ornithopter (see Glossary) in 1929. This was always launched like a glider (Lippisch 1960). The principle Lippisch used relied on the wings twisting during the flapping cycle. In general, on an aircraft the centre of pressure of the lift will not remain on the axis of the spar during flight. This offset loading will tend to warp the wing. On almost every other aeroplane this is a problem which must be overcome, usually by making the wing structure stiff enough to resist the torsion. But on this aeroplane Lippisch tried to make use of the effect: the extra, and different, forces on the wing during the downstroke would hopefully warp it more. Hence, the flapping of the wings would propel the plane on the same principle as a fish's tail propels a fish. However, for one reason or another it did not work. Perhaps the wing was too torsionally stiff - again the opposite of what is unfortunately the more common problem. Lippisch added flexible extensions behind the trailing edge, and it was then found that flapping of the wings slightly prolonged the flights; but he could not understand the still-disappointing results until he realised that the pilot, Hans Werner Krause, was not really pulling very hard and didn't see the point of it. He then offered to pay Krause's rail fare to see his girlfriend for the weekend if he were to fly from the usual launch point over a specified puddle about 300 yards away. The course was covered on the first attempt.

MUSKELFLUG INSTITUT (Institute of Muscle-Powered Flight)

This was set up in 1935, within the Gesellschaft Polytechnic, Frankfurt. Oskar Ursinus, its director, saw the prime question as the determination of the power available from a person's muscles. A prize was offered for the first flight in Germany over a 1 km course. The data from his tests on muscle power were made available to designers in 1936. Unfortunately, no further research could be carried out by the Institute because of the onset of war.

MUFLI

This was the only relatively successful contender for the prize offered by the Muskelflug-Institut. Helmut Haessler finalised his design in 1935. His estimate of the available power was too high. Eventually, since the results of the tests from the Institut were not published, he and a colleague, Franz Villinger, performed their own tests on human power by having one cyclist tow another, who read a spring balance on the handlebars attached to the tow-line. "It was not realised until our own tests and those of the Muscle Flight Institute, which was founded later, had been done, that the earlier data gave more than double the actual power." (Villinger 1960) None of these human-power data mention the weight of the person producing the power.

Franz Villinger and Helmut Haessler were both experienced in aircraft through their employment at Junkers. The neatness of the configuration and the similarity to a sailplane conceal some subtle points. The length of the drive is very short, and the propeller-support pylon and wing do not interfere aerodynamically as they do on some later machines (see below re Interference Drag).
The frontal area is desirably low, although this meant that it was most awkward for the pilot to get into or out of the Mufli.
Bioflavonoids, also sometimes referred to as "vitamin P," are super-antioxidants found in many natural foods. Scientists have found that bioflavonoids can support bodily health in many different ways. They support strong cell formation and, according to some medical sources, even suppress abnormal cellular growth, delivering an anti-carcinogenic effect. Bioflavonoids contribute to good heart health and combat atherosclerosis, as well as conditions like Alzheimer's disease. Bioflavonoids are found in many of the same foods that contain vitamin C, an essential nutrient for the daily diet, and these super-antioxidants complement vitamin C, enhancing its effect on the body.

Foods Rich in Bioflavonoids

Fresh fruits and vegetables are generally the top choices for getting plenty of bioflavonoids in a diet. Here are some of the most popular ways to get the most of these helpful nutritional elements.

- Red Bell Peppers or Sweet Peppers - Red peppers contain three times more vitamin C than orange juice, according to some medical sources. Scientists agree that raw bell peppers are an effective way to get bioflavonoids into the system.
- Strawberries - These luscious red berries are a great source of bioflavonoids. Other berry types are similarly rich in these kinds of antioxidants, which leads to specific claims of health benefits for berry-made wines and derivative foods.
- Citrus Fruits - Oranges are a significant source of bioflavonoids. Lemons and limes, as well as peaches, nectarines and other fruits, all contain vitamin C and bioflavonoid super-antioxidants.
- Broccoli - This green vegetable has a lot of vitamin C, as well as some other essential vitamins for a healthy diet. As with other foods, use broccoli raw for best results.
- Brussels Sprouts - For a hearty meal, include these cabbage-type sprouts. Rich in antioxidants, they are also packed with their own unique taste for a delicious way to get bioflavonoids and vitamins.
- Tropical Fruits - Exotic fruits, like mangoes and papayas, have a lot of bioflavonoids and other nutritional elements packed under their skins. These are becoming more accessible at supermarkets everywhere. Don't miss out on what they have to offer.
- Garlic - By most accounts, garlic is a superfood. Our food culture has long been aware of its anti-inflammatory properties, but now scientists count it among the natural foods rich in bioflavonoids, and therefore able to deliver the antioxidant values we associate with "healing foods."
- Spinach - Popeye wasn't kidding: this stuff has all of the qualities you would associate with a green vegetable rich in antioxidants. Spinach is a good all-purpose source of nutrients - try it in place of lettuce for a salad that's bursting with nutrition.
- Teas - Green tea and other teas are known to have a lot of powerful chemical elements that contribute to longevity and good health. Lots of health-minded caffeine drinkers are switching from coffee to tea to get the effects of essential vitamins in their morning drinks.

Raw vs. Processed

These are just some of the top fruits and vegetables that deliver bioflavonoids and vitamin C to the table. Vitamin-rich foods are always more effective in their raw form, so be aware of the difference between buying and using fresh produce and eating these foods canned, cooked or processed. Overall, look for colorful, fresh fruits and vegetables to benefit from a diet that will contribute to your health in many ways.
But maybe they’ll be convinced by new research from the World Bank, which just produced a major report on the outlook for Europe. In chapter 7, the authors explain some of the ways that big government can undermine prosperity. There are good reasons to suspect that big government is bad for growth. Taxation is perhaps the most obvious (Bergh and Henrekson 2010). Governments have to tax the private sector in order to spend, but taxes distort the allocation of resources in the economy. Producers and consumers change their behavior to reduce their tax payments. Hence certain activities that would have taken place without taxes, do not. Workers may work fewer hours, moderate their career plans, or show less interest in acquiring new skills. Enterprises may scale down production, reduce investments, or turn down opportunities to innovate. …Over time, big governments can also create sclerotic bureaucracies that crowd out private sector employment and lead to a dependency on public transfers and public wages. The larger the group of people reliant on public wages or benefits, the stronger the political demand for public programs and the higher the excess burden of taxes. Slowing the economy, such a trend could increase the share of the population relying on government transfers, leading to a vicious cycle (Alesina and Wacziarg 1998). Large public administrations can also give rise to organized interest groups keener on exploiting their powers for their own benefit rather than facilitating a prosperous private sector (Olson 1982). In other words, government spending undermines growth, and the damage is magnified by poorly designed tax policies. The authors then put forth a theoretical hypothesis. …economic models argue that the excess burden of tax increases disproportionately with the tax rate—in fact, roughly proportional to its tax rate squared (Auerbach 1985). Likewise, the scope for self-interested bureaucracies becomes larger as the government channels more resources. At the same time, the core functions of government, such as enforcing property rights, rule of law and economic openness, can be accomplished by small governments. All this suggests that as government gets bigger, it becomes more likely that the negative impact of government might dominate its positive impact. Ultimately, this issue has to be settled empirically. So what do the data say? These are important insights, showing that class-warfare tax increases are especially destructive and that government spending undermines growth unless the public sector is limited to core functions. Then the authors report their results. Figure 7.9 groups annual observations in four categories according to the share of government spending in GDP during that year. Both samples show a negative relationship between government size and growth, though the reduction in growth as government becomes bigger is far more pronounced in Europe, particularly when government size exceeds 40 percent of GDP. …we provide new econometric evidence on the impact of government size on growth using a panel of advanced and emerging economies since 1995. As estimates can be biased due to problems of omitted variables, endogeneity, or measurement errors, it is necessary to rely on a broad range of estimators. …They suggest that a 10 percentage point increase in initial government spending as a share of GDP in Europe is associated with a reduction in annual real per capita GDP growth of around 0.6–0.9 percentage points a year (table A7.2). 
The estimates are roughly in line with those from panel regressions on advanced economies in the EU15 and OECD countries for periods from 1960 or 1970 to 1995 or 2005 (Bergh and Henrekson 2010 and 2011).

These results aren't good news for Europe, but they also are a warning sign for the United States. The burden of government spending has jumped by about 8 percentage points of GDP since Bill Clinton left office, so this could be the explanation for why growth in America is so sluggish.

Last but not least, they report that social welfare spending does the most damage.

Governments are big in Europe mainly due to high social transfers, and big governments are a drag on growth. The question is whether this is because of high social transfers? The answer seems to be that it is. The regression results for Europe, using the same approach as outlined earlier, show a consistently negative effect of social transfers on growth, even though the coefficients vary in size and significance (table A7.4). The result is confirmed through BACE regressions. High social transfers might well be the negative link from government size to growth in Europe.

The last point in this passage needs to be emphasized: it is redistribution spending that does the greatest damage. In other words, it's almost as if Obama (and his counterparts in places such as France and Greece) are trying to do the greatest possible damage to the economy. In reality, of course, these politicians are simply trying to buy votes. But they need to understand that this shallow behavior imposes very high costs in terms of foregone growth.

To elaborate, this video discusses the Rahn Curve, which augments the data in the World Bank study. As I argue in the video, even though most of the research shows that economic growth is maximized when government spending is about 20 percent of GDP, I think the real answer is that prosperity is maximized when the public sector consumes less than 10 percent of GDP. But since government in the United States is now consuming more than 40 percent of GDP (about as much as Spain!), the first priority is to figure out some way of moving back in the right direction by restraining government so it grows slower than the private sector.
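As a back-of-the-envelope check on the two quantitative claims above - excess burden rising roughly with the square of the tax rate, and the panel estimate that each extra percentage point of spending costs about 0.06-0.09 points of annual growth - here is a toy calculation. The 8-point spending rise is the figure cited in this article; everything else is illustrative, not taken from the report.

```python
def relative_excess_burden(tax_rate):
    """Deadweight loss rises roughly with the square of the tax rate
    (Auerbach 1985), so doubling a rate roughly quadruples the burden."""
    return tax_rate ** 2

def growth_drag(extra_spending_pp, low=0.06, high=0.09):
    """Panel estimate: ~0.6-0.9 pp of annual growth lost per 10 pp of
    extra government spending as a share of GDP."""
    return extra_spending_pp * low, extra_spending_pp * high

print(relative_excess_burden(0.40) / relative_excess_burden(0.20))  # ~4: doubling the rate quadruples the burden
print(growth_drag(8))  # ~(0.48, 0.72) pp/yr for the ~8 pp US spending rise
```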
The city of Boulder announced Thursday that Barker Reservoir near Nederland is expected to fill and start spilling water by this weekend, or even sooner if heavy rains occur. The National Weather Service forecasts a 40 percent chance of severe thunderstorms tonight, a 60 percent chance of heavy rain Friday, and a 40 percent chance of thunderstorms again Saturday. Barker Reservoir fills due to snowmelt runoff each spring and summer. Once the reservoir water surface reaches the Barker Dam spillway crest, any additional water flows over the spillway by design. Barker Reservoir provides drinking water for Boulder. It is not a flood-control facility. Unlike flood-control reservoirs that have extremely large storage volumes such as Chatfield Reservoir and Cherry Creek, Barker Reservoir has relatively limited storage space. Because storage in Barker Reservoir is limited, once the reservoir is full, any excess inflow passes over the spillway and continues flowing downstream into Boulder Creek. When peak stream flows occur during spring runoff, minor nuisance flooding often occurs along the Boulder Creek Path, historically at underpasses. Although spring runoff -- and higher flows on Boulder Creek -- are expected this time of year, some creek conditions have changed due to the September flood. To inform the community about potential risks, the city has prepared a Spring 2014 Post-Flood Guide. Residents are advised by city officials to be safe and vigilant, monitoring the weather, watching the water levels in nearby creeks and always being aware of their surroundings. The city will continue to monitor Boulder Creek and notify the public of any safety concerns. For more information about flooding in Boulder, visit www.BoulderFloodInfo.net.
The most evident conflict in the Oates short story exists between Arnold Friend and Connie. It does not start out as conflict; rather, Connie's desire for attention and notoriety is at first reciprocated by Arnold. The conflict emerges as Arnold becomes increasingly emboldened in his advances toward Connie. His desire to have her come with him, and eventually his abduction of her, becomes the basis of the conflict. He uses psychological and physical manipulation in his attempt. Connie recognizes that she does not want to go with him, but also grasps that she has little choice, as Arnold Friend threatens her family.

Arnold also shows some slight internal conflict between his real age and his desire to appear young, which allows him to get close enough to lure girls like Connie. This conflict surfaces at different points, such as when he speaks in the various vernaculars of youth, but overall he has little problem being the person who stalks and victimizes Connie.

Connie's care for her family at the end of the story is a development not present at its start. Connie is first shown to be in conflict with her parents and her sister. She sees them as too traditional and unable to fully understand her predicament and her need to be independent from them. Her conflict with her sister is that she finds her too "plain," unable to grasp the need to be "hip" and popular. In the end, Connie's conflicts at the start of the story vastly contrast with her conflicts at its end.
Modeling with Geometry Geometry makes an excellent math class for middle and high school students, but that's not its only purpose. (Students might want to replace "excellent" with a different adjective, but we stand by our decision.) Students can—and should—apply geometry to real life. These standards aren't as strict about formulas and proofs, and encourage students to use geometry as it was originally meant to be used: to relate to the world around us.
Brief Explanation Of Taekwondo

Taekwondo is a martial art originating from the days of tribal communities on the Korean Peninsula. Taekwondo was developed amongst the tribes as a means of preserving their own life and race, as well as building both physical and mental strength.

Literally translated, TaeKwonDo means 'the way of the fist and foot.' The most important part of the word is 'Do', as this translates to 'the correct way.' Learning to kick and punch are only physical attributes. By practicing Do and the principles of Taekwondo, students become overall martial artists, both physically and mentally. Hence the 'Do' in TaeKwonDo, Kim Chung Do Kwan, MuDo, and Pal Chung Do: Do has relevance in everything we learn.

Taekwondo, if learnt correctly, is a very technical martial art which results in students learning how to generate maximum power in relation to their size and build. Although Taekwondo practitioners perform a lot of upper-body techniques such as blocking and striking, what differentiates Taekwondo from the rest of the martial arts is its superior kicking techniques. However, depending on age and ability, not everyone is expected to have superior kicking skill. Taekwondo is truly for anyone who wants to practice.

All martial arts have near-identical foundations and objectives: to strive for better technique and to understand and put into practice breathing control, technique, stances, posture, power, focus, reaction force and etiquette. This website is not only for Taekwondo practitioners but for all martial artists, to try and gain a little more knowledge and strive for improved technique in their chosen field.

There is no such thing as perfect technique in Taekwondo or any martial art. Students regardless of age, ability, and grade should strive for better technique and a martial art mentality. This attitude is what is known as MuDo, which literally translates to 'spirit of the martial art.'

Every Master, irrespective of the type of martial art, has a different method of teaching; by looking at a particular student you can often tell which club they belong to because of their technique. In Kim Chung Do Kwan Taekwondo we are honoured to learn from Grand Master Kim Yong Ho, 9th Dan, President of The World Taekwonmudo Academy (WTA), who has a unique style of teaching Taekwondo.
This data sheet is also available in Arabic and French. (September 2008)

An estimated 100 million to 140 million girls and women worldwide have undergone female genital mutilation/cutting (FGM/C), and more than 3 million girls are at risk for cutting each year on the African continent alone.

FGM/C is generally performed on girls between ages 4 and 12, although it is practiced in some cultures as early as a few days after birth or as late as just prior to marriage. Typically, traditional excisors have carried out the procedure, but recently a discouraging trend has emerged in some countries where medical professionals are increasingly performing the procedure.

FGM/C poses serious physical and mental health risks for women and young girls, especially for women who have undergone extreme forms of the procedure. According to a 2006 WHO study, FGM/C can be linked to increased complications in childbirth and even maternal deaths. Other side effects include severe pain, hemorrhage, tetanus, infection, infertility, cysts and abscesses, urinary incontinence, and psychological and sexual problems.

FGM/C is practiced in at least 28 countries in Africa and a few others in Asia and the Middle East. The 27 developing countries included on this chart are the only ones where data have been systematically collected at this time. FGM/C is practiced at all educational levels and in all social classes and occurs among many religious groups (Muslims, Christians, and animists), although no religion mandates it. Prevalence rates vary significantly from country to country (from nearly 98 percent in Somalia to less than 1 percent in Uganda) and even within countries.

[Chart: Trends in FGM/C Prevalence. Source: Feldman-Jacobs and Clifton, Female Genital Mutilation/Cutting: Data and Trends (Washington, DC: Population Reference Bureau, 2008).]

Since the early 1990s, FGM/C has gained recognition as a health and human rights issue among African governments, the international community, women's organizations, and professional associations. Global and national efforts to end FGM/C have supported legislation targeting excisors, medical professionals, and families who perpetuate the practice, but political will and implementation remain an issue. Some of the data collected in recent years give hope to those working toward the abandonment of FGM/C, as they reflect lower levels of cutting among girls ages 15 to 19.

Charlotte Feldman-Jacobs is program director, Gender, at the Population Reference Bureau. Donna Clifton is a communications specialist at PRB.
July 2, 2012

Images Of Extreme Solar Activity Provide Origins Of Powerful Space Storms

Lawrence LeBlond for redOrbit.com - Your Universe Online

An international team of scientists has for the first time captured and identified images of an upward surge of the Sun's gases into quiescent coronal loops, a discovery that provides one more step in the understanding of the origins of extreme storms in outer space, which are known to wreak havoc on satellite systems and power grids here on Earth.

University of Cambridge researchers worked with colleagues from India and the US in imaging and visualizing the movement of gases at a million degrees in coronal loops -- solar structures that are rooted at both ends and extend out from the active regions of the Sun. These active regions are the "cradle" for explosive energy releases such as solar flares and coronal mass ejections (CMEs). Scientists are hoping that by observing these upward surges they will gain a better understanding of one of the most challenging issues in astrophysics -- how solar structures are heated and maintained in the upper solar atmosphere.

Solar activity is cyclical, and the next maximum is forecast to occur in May 2013. Such severe space weather is detrimental to Earth's communications and electrical systems, and the UK currently rates its severity as very high in the 2012 National Risk Register of Civil Emergencies.

The Hinode satellite, a joint project of the Japanese, American, and European space agencies, was used to make the observations, which provided the first evidence of plasma upflows traveling at more than 10 miles per second in the one-million-degree active region loops. The team believes the upflow of gases is the result of "impulsive heating" close to the starting-point regions of the loops.

"Active regions are now occurring frequently across the Sun. We have a really great opportunity to study them with solar spacecraft, such as Hinode and the Solar Dynamics Observatory (SDO)," said co-author Dr Helen Mason from the University of Cambridge's Department of Applied Mathematics and Theoretical Physics. "Probing the heating of the Sun's active region loops can help us to better understand the physical mechanisms for more energetic events which can impinge on the Earth's environment."

NASA's SDO has shown large loops of hot gas guided by the Sun's magnetic field in previous images, but the question has remained as to how solar plasma is heated and rises up into the loops in the first place. The researchers have now been able to answer that question, as their work provides the first visualization of plasma flow by showing the movement of gases within the loop, with diagnostic imaging using the extreme ultraviolet imaging spectrometer (EIS) on the Hinode satellite. The spectrometer produces spectral lines that identify the horde of elements and ions within the loop, and shifts in the position of the lines provide information on the motion of the plasma.

Although helium and hydrogen make up the bulk of the Sun's composition, there are also a number of other trace elements, including oxygen and iron, as were observed in the hot ionized gas within the loops. This gas may be produced by a process of "chromospheric evaporation" in which "impulsive heating" on a small scale can result in the heating of the solar active regions, but on a larger scale can lead to solar flares, coronal mass ejections, and other huge explosions, according to the team of researchers.

"It is believed that magnetic energy builds up in an active region as the magnetic field becomes distorted, for example by motions below the surface of the Sun dragging the magnetic fields around," explained Mason, whose study was published today in Astrophysical Journal Letters. "Sometimes magnetic flux can emerge or submerge and affect the overlying magnetic field. We believe that solar plasma surges upwards when impulsive heating results from magnetic reconnection which occurs either in the loops or close to the Sun's surface."

"The Sun governs the environment in which we live, and it is the so-called solar active regions that drive extreme conditions leading to the explosive flares and the huge eruptions," said Professor Richard Harrison MBE, Head of Space Physics and Chief Scientist at the STFC Rutherford Appleton Laboratory, who was not involved in the research. "Understanding these active regions is absolutely critical for the study of what we now call space weather. The work published in this paper is a key element of that work, applying innovative analyses to the observations from the UK-led Hinode/EIS instrument."

With a better understanding of these active regions, scientists hope that one day they will be able to identify the magnetic field structures that lead to explosive solar energy releases and use this to better predict when such events will occur.

The Cambridge study was partially funded by the UK's Science and Technology Facilities Council (STFC).
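As a rough worked example of the diagnostic described above: a shift Δλ in a spectral line of rest wavelength λ implies a line-of-sight speed v = c·Δλ/λ. The line below is the Fe XII line near 195.12 Å that EIS observes; the shift value is an assumption chosen to land near the reported speed of roughly 10 miles per second, not a number taken from the paper.

```python
# Doppler diagnostic sketch: line-of-sight plasma speed from a spectral-line
# shift, v = c * (delta_lambda / lambda_rest). Shift value is hypothetical.

C_KM_S = 299_792.458     # speed of light, km/s
lambda_rest = 195.12     # Fe XII line observed by EIS, in angstroms
delta_lambda = -0.0104   # assumed blueshift (toward observer), angstroms

v_km_s = C_KM_S * delta_lambda / lambda_rest
print(f"line-of-sight speed: {v_km_s:.1f} km/s "
      f"(about {abs(v_km_s) * 0.6214:.1f} miles per second, an upflow)")
```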
September 23, 2014

APL Researchers Find That 'Space Bubbles' May Have Aided Enemy in Fatal Afghan Battle

In the early morning hours of March 4, 2002, military officers in Bagram, Afghanistan, desperately radioed a Chinook helicopter headed for the snowcapped peak of Takur Ghar. On board were 21 men, deployed to rescue a team of Navy SEALs pinned down on the ridge dividing the Upper and Lower Shahikot valley. The message was urgent: Do not land on the peak. The mountaintop was under enemy control.

The rescue team never got the message. Just after daybreak, the Chinook crash-landed on the peak under heavy enemy fire and three men were killed in the ensuing firefight.

A decade later, Michael Kelly, of the Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Maryland, happened to read a journalistic account of Operation Anaconda, one of the first major battles of the war in Afghanistan, and thought radio operators may have been thwarted by a little-known source of radio interference: plasma bubbles.

Now, Kelly and his colleagues provide evidence that plasma bubbles may have contributed to the communications outages during the battle of Takur Ghar, and present a new computer model that could help predict the impact of such bubbles on future military operations. Their work has been accepted for publication in a journal of the American Geophysical Union called Space Weather.

Giant plasma bubbles — wispy clouds of electrically charged gas particles — form after dark in the upper atmosphere. Typically around 100 kilometers (62 miles) wide, the bubbles can't be seen, but they can bend and disperse radio waves, interfering with communications.

Plasma is pervasive in the upper atmosphere during daylight hours, when the sun's radiation rips electrons from atoms and molecules. Sunlight keeps the plasma stable during the day, but at night the charged particles recombine to form electrically neutral atoms and molecules again. This recombination happens faster at lower altitudes, making the plasma there less dense, so that it bubbles up through the denser plasma above, like air bubbles rising through water. The rising tendrils of low-density charged particles are called plasma bubbles, and turbulence at their edges can skew radio-frequency waves passing through them.

In the atmosphere above Afghanistan, peak bubble season generally occurs during the spring, according to the study's authors. Given the timing and location of the battle of Takur Ghar, the researchers thought these atmospheric anomalies could have been present.

To confirm its suspicions, Kelly's team looked at data from the Global Ultraviolet Imager (GUVI) instrument aboard NASA's Earth-orbiting Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) spacecraft, both of which were built by APL for NASA to study the composition and dynamics of the upper atmosphere. "The TIMED spacecraft flew over the battlefield at about the right time," said Kelly, the lead author of the new study. That was a stroke of luck for the researchers, Kelly noted — and realizing that the spacecraft might have been there was a breakthrough moment.

Joseph Comberiate, a space physicist at APL and a coauthor of the new study, developed a technique to transform the two-dimensional satellite images into three-dimensional representations of plasma bubbles. Using this technique, the authors were able to show that on March 4, 2002, there was a plasma bubble directly between the ill-fated Chinook and the communications satellite.

The new model shows the electron-depleted regions of the atmosphere where radio-wave interference, known as scintillation, is most likely to occur. The plasma bubble that was present during the battle of Takur Ghar was probably not large enough to disturb radio communications by itself, but it likely contributed to the radio interference caused by the complex terrain in the area, according to the study. Both factors ultimately led to the blackout in communications between the operations center and the helicopter, the new research says. In that kind of terrain, the radio equipment was already "operating out on the edge," said Kelly. Losing a few decibels of radio signal due to plasma bubbles "could have pushed them over the edge," he suggested.

The new model could be used to minimize the impacts of plasma bubbles in the future by detecting and predicting their movement for several hours after they form, the researchers said. The model combines data from several different satellite-based systems to detect the bubbles, and uses wind and atmospheric models to predict where they will drift. By identifying these turbulent bubbles and their paths in real time, soldiers may be able to predict when and where they will experience radio interference and adapt by using a different radio frequency or some other means of communication, said Comberiate.

The APL group is currently working to validate the new model so it can be used in future military operations. In addition to building high-fidelity models for operational needs, APL is also working on a next generation of the GUVI sensor for the U.S. Air Force, called SSUSI Lite.

"The most exciting part for me is to see something go from science to real, potential operational impact," Comberiate said.

Michael Buckley, APL, 240-228-7536, email@example.com
Kate Wheeling, American Geophysical Union, 202-777-7516, firstname.lastname@example.org

Note: Journalists and public information officers of educational and scientific institutions who have registered with the American Geophysical Union can download a PDF copy of the article ("Progress toward Forecasting of Space Weather Effects on UHF SATCOM after Operation Anaconda") here. Or, you may order a copy of the paper by e-mailing your request to Kate Wheeling at email@example.com. Please provide your name, the name of your publication, and your phone number.

The Applied Physics Laboratory, a not-for-profit division of The Johns Hopkins University, meets critical national challenges through the innovative application of science and technology. For more information, visit www.jhuapl.edu.
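Kelly's "operating out on the edge" remark can be made concrete with a toy link-margin check: subtract each source of loss from the available headroom and see whether anything is left. All decibel values below are hypothetical, not figures from the APL study.

```python
# Toy link-budget check: a link survives only while its margin stays above
# zero after terrain and scintillation losses. Hypothetical numbers.

def link_up(margin_db, terrain_loss_db, scintillation_db):
    return margin_db - terrain_loss_db - scintillation_db > 0.0

margin = 6.0    # assumed headroom under ideal conditions, dB
terrain = 3.0   # assumed loss from the complex terrain, dB
for scint in (0.0, 2.0, 4.0):
    state = "up" if link_up(margin, terrain, scint) else "down"
    print(f"bubble scintillation {scint:.0f} dB -> link {state}")
```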
As heated global warming debates continue, scientists are also investigating ways to get our planet to cool off if the politicians can't figure out how to reduce greenhouse gas emissions. The latest geoengineering scheme involves turning the world's oceans into a giant bubble bath, with hundreds of millions of tiny bubbles pumped into the seas. This would increase the water's reflectivity and bring down ocean temperatures, according to Harvard University physicist Russell Seitz. As the creative physicist said to the assembled crowd at an international meeting on geoengineering research: "Since water covers most of the earth, don't dim the sun.... Brighten the water."

Seitz explained that micro-bubbles already occur naturally, with bubbles under the ocean's surface reflecting sunlight back into space and mildly brightening the planet. What Seitz imagines doing now is artificially pumping many more bubbles into the sea. These additional micro-bubbles would each be one five-hundredth of a millimeter across and would essentially serve as "mirrors made of air." The scientists say they could be created off boats by using devices that mix water supercharged with compressed air into swirling jets of water. "I'm emulating a natural ocean phenomenon and amplifying it just by changing the physics—the ingredients remain the same" [ScienceNOW], Seitz said.

Using a computer model that simulated how air, light, and water interact, Seitz found that the micro-bubbles could have a profound cooling effect on our planet, suggesting that temperatures could drop by as much as 5.4 degrees Fahrenheit. Along with the reflectivity the added bubbles provide, previously published reports show that they may improve the fuel efficiency of cargo ships, allowing them to virtually float on air [Treehugger]. Seitz has submitted a paper on the concept, which he calls "Bright Water," to the journal Climatic Change [ScienceNOW].

While Seitz is excited at the possibility of creating "bubble patches" to reduce the effects of global warming, it remains to be seen what sort of infrastructure would be required to create these giant bubble baths. And as with all geoengineering schemes, there's the pesky question of whether hacking planet-wide systems will have unintended side effects.

80beats: Study: Climate Hacking Scheme Could Load the Ocean With Neurotoxins
80beats: With $4.5M of Pocket Change, Bill Gates Funds Geoengineering Research
80beats: If We Can't Stop Emitting CO2, What's Our Plan B?
80beats: Fighting Global Warming: Artificial Trees and Slime-Covered Buildings
DISCOVER: 5 Most Radical Ways to Squelch a Climate Crisis (slide show)
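A back-of-envelope energy balance shows how brightening the ocean maps to cooling: raising planetary albedo by Δα reduces absorbed sunlight by roughly (S/4)·Δα, and the temperature response scales with a climate sensitivity parameter. Both the sensitivity value and the albedo change below are assumptions chosen for illustration; with these values the answer happens to land near the 5.4-degree figure quoted above.

```python
# Zero-dimensional energy-balance sketch: cooling from a small increase in
# planetary albedo. Sensitivity and albedo change are assumed values.

S = 1361.0          # solar constant, W/m^2
SENSITIVITY = 0.8   # assumed climate sensitivity, K per (W/m^2) of forcing

def cooling_kelvin(d_albedo):
    forcing = (S / 4.0) * d_albedo   # reduction in absorbed sunlight, W/m^2
    return SENSITIVITY * forcing

dT = cooling_kelvin(0.011)           # hypothetical planet-wide brightening
print(f"cooling: {dT:.1f} K ({dT * 9 / 5:.1f} degrees F)")   # ~3 K, ~5.4 F
```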
Chapter 1: Extraction

The process of hydro-fracking begins at the well pad with the drilling of a vertical well. After the well has been drilled, a larger drill rig creates the horizontal drilling bore, and drilling mud is used to cool and power the drill. Depending on the chemical content of the drilling mud, drill cuttings can be deemed hazardous waste when combined with this mud. This initial phase can take up to two months.

Once the well is drilled, dried, cased and grouted, fracking begins. After the cement casing is installed, a perforating gun sends down electrical currents that fracture the rock. Fracking fluid, a mixture of water and highly toxic chemical additives, is then injected at high pressure to both maintain and induce fractures in the gas-bearing formation, thereby increasing permeability and facilitating the release of trapped gas. A pressure of up to 15,000 pounds per square inch may be employed during multi-stage fracturing events. This is a pressure range typically associated with bombs and military armaments.

Each well requires between 2-7 million gallons of fracking fluid. To make this fluid, water is obtained from local surface or ground water sources. To date, most fracking operations have used on-site fresh or low-salinity water.

Approximately 10 to 50% of the fracking fluid returns to the surface during the drilling process as flowback water, which is estimated to contain between 9%-35% of the initial fracking chemicals injected. Flowback water contains high levels of total dissolved solids (TDS, or salts), metals and naturally occurring radioactive material (NORM) from the drilling process, and is stored in open lagoons. Flowback generates the largest amount of waste from the gas wells. It can be reused to fracture additional wells, injected into underground disposal wells, treated, or stored in open lagoons for dilution and reuse. Open lagoons are prone to liner failure, evaporative spread of volatile chemicals and direct human and animal contact.

The hydro-fracking process takes approximately 4 months to complete, from preparation to waste disposal. Following fracking, the drilling rigs are removed and natural gas extraction begins. During this process, gas is collected at the producing well and piped or transported via truck to a processing facility.

Wastewater pumped from a well with natural gas is known as production brine. Due to the Marcellus Shale's marine origin, the production brine contains high levels of total dissolved solids (TDS, or salts), metals and naturally occurring radioactive material (NORM). Production brine can be 5 times saltier than seawater. Between 300-6,300 gallons of production brine can be generated per day. Production brine requires secure on-site storage in steel tanks and a hazardous wastewater disposal plan. The radioactivity of production brine waste from traditional vertical wells drilled into Marcellus Shale was found to be 267 times the recommended EPA levels under the Safe Drinking Water Act.

Protected under the 2005 "Halliburton loophole," which prevents the U.S. EPA from regulating the natural gas drilling industry, the oil and gas industry is exempt from federal laws dictating the handling of toxic waste, thereby leaving the responsibility to individual states.
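The wastewater ranges implied by the figures above follow from simple arithmetic; the sketch below uses only the numbers quoted in this chapter (2-7 million gallons of fluid per well, 10-50 percent returning as flowback).

```python
# Flowback volume implied by the chapter's own figures.

fluid_per_well = (2_000_000, 7_000_000)   # gallons of fracking fluid
flowback_share = (0.10, 0.50)             # fraction returning to the surface

low = fluid_per_well[0] * flowback_share[0]
high = fluid_per_well[1] * flowback_share[1]
print(f"flowback per well: {low:,.0f} to {high:,.0f} gallons")
# -> 200,000 to 3,500,000 gallons, carrying an estimated 9-35% of the
#    injected chemicals
```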
Bonds strengthened on mechanically linked molecules

March 27th, 2003

For about 20 years, researchers in the field of mechanically linked molecules have been building supramolecular structures, called pseudorotaxanes, out of cyclic and linear compounds, with cyclic compounds hosting linear guests. The structures are building blocks for improved polymers.
by Joseph Green (from Communist Voice #25, Nov. 27, 2000)

Contents:
Why deal with this issue?
Overview of the argument
The labor content
The search for the natural unit
-- The early days of the workers' movement
-- The emergence of Marxism
-- The Day After the Revolution
-- After 1917
-- One, two, three, many natural units (the method of material balances)
Part two (to be published next time):
The annual cycle of production
The mistake of equating living and dead labor
Contradictions of true value
True value and capitalist growth
Marx's and Engels's views
Some remarks on planning in a classless society
Value and the transition period

The bourgeois economists say that without money, markets and capitalists, economic life would come to a standstill. There would supposedly be no way to decide what to produce, what methods to use to produce it, and how to distribute it among the population.

How, indeed, will economic calculation take place in a society without capitalists or the profit motive? This is one of the questions about the feasibility of communist society. It has been answered in various ways by different schools of communist thought. Some communist theoreticians say that calculations with money should be replaced by calculations using labor-hours. In the mid-nineteenth century this idea first appeared with those theoreticians who argued that ordinary money should be replaced with "labor-money", which doesn't use dollars and cents but instead measures the value of things by the number of labor-hours needed to produce them. They regarded this as letting every object have its "true value", and thought that the evils of capitalism came from the distortion of this true value in the prices that occur in the marketplace. Later, among those who advocated that communist calculation would be in labor-hours, it was usually argued that this wouldn't mean the establishment of true value, but would instead be a sign that exchange-value and money had been overcome. Many of these theorists argued that the superiority of communist planning over capitalism centered on the use of a "natural unit", which is how they regarded the labor-hour, instead of artificial financial units, such as the dollar, ruble, franc, or peso.

This article disagrees with these positions. It advocates that the labor-hour scale is not a rational, natural or scientific unit of planning, but the essence of capitalist exchange value. While any communist society will of course pay attention to the amount of labor, raw materials and equipment needed for the production of any useful item, it will not be able to reduce these considerations to the bottom line, a single unit, whether of labor-hours or anything else. Communist planning, the assessment in a communist society of what production is possible and what benefits and shortcomings it involves, will proceed from a concrete assessment of economic activities, and not by judging each choice by a single number on some numerical scale, even if that number represents so many labor-hours. In my opinion, this is what Marx and Engels meant when they said neither value, nor true value, would govern the future society. They stressed that to imagine that a society without capitalists and commodity production would be ruled by true value is to imagine that one could abolish Catholicism by setting up the true pope.
This means that, as far as applicability to future economic life is concerned, the labor theory of value will fade away in a classless society, along with capitalist value: it is not a theory of how planning should be done under communism, but an insightful analysis of capitalism and of its exploitation of the working class.

Why deal with this issue?

But why bother worrying about how planning will take place under communism? After all, the future society will figure out for itself how to do economic calculations. Experience will soon lead it to abandon various theoretical fallacies, and far more completely than any amount of theoretical disputation will accomplish today. Moreover, it is unlikely that the revolutionary movement under capitalism will ever agree totally on the methods of future communist calculation. Indeed, even in the CVO we are still discussing this issue, and have not yet come to a final conclusion. A strong socialist movement should be clear on the basic class changes it wants to bring about, but it would be a mistake to split the revolutionary movement over mere details about the future society.

Nevertheless, the discussion of the nature of communist planning can help clarify some points about the nature of communism. Certain broad conclusions will eventually emerge, and they can help clarify a number of points of interest today, and refute some common but mistaken ideas.

For example, today there are a number of left theorists who are preoccupied with inventing new mathematical systems of planning for a future society. But a careful study of the needs of planning will show that it is not mathematical technique, but social relations which are crucial to whether communist society will function. The fundamental issue remains whether workers (the entire population will be workers in a classless society) will produce efficiently and in a disciplined manner without the existence of a separate managerial class holding the whip of oppression and hunger over them. If so, then even relatively simple mathematical techniques will allow the economy to function, and the techniques can be improved over time. If not, then no mathematical technique can make communism viable.

A careful study of the needs of communist planning will also pour cold water on the fruitless search for the single, "natural" economic unit, what one might call the natural unit of planning. The medieval alchemists sought long and hard for the philosopher's stone that would convert lead to gold, while in the twentieth century the natural unit was the philosopher's stone of much quasi-communist theorizing.

Issues about planning also come up with respect to the collapse of the Soviet and other state-capitalist economies. For decades, many of the advocates of new mathematical systems of planning have believed that their inventions would supply a critical gap in state-capitalist systems such as the former Soviet Union or eastern Europe. Yet the problem with these economies was not that defective econometric techniques resulted in a growth rate a percent or two lower than otherwise achievable. Instead the economies eventually bogged down in stagnation and crisis; the economic figures sent from factory to ministry and back, or appearing in the public reports of the government, were in large part fantasy; and the struggle of each individual and grouping for their private interests went on underneath the constant repetition in public of pious words about concern for all.
These were symptoms of the exploitative class relations in these societies, which were no longer revolutionary but had developed Stalinist or other oppressive ruling classes. The ruling classes of these societies beat their breasts about how "socialist" they were, but these were societies where the workers had no say, and where the state-capitalist bureaucrats jockeyed among themselves to accumulate the most privileges. It was the struggle of one state-capitalist executive and planner against another, and the struggle of them all against the local working classes, that led to the false reporting of economic figures, and to the anarchy that corroded the entire economic system.

The refutation of the fallacies of labor-money and the natural unit of economic calculation also bears on the fallacies of the market socialists, who believe, along with straight-out bourgeois economists, that only money and the market can effectively direct economic life. Indeed, many of the left economists who are looking for elaborate new planning schemes are vainly seeking to devise market mechanisms which will, by some magic, produce socialist rather than capitalist results.

There is also the question of what central planning will look like in a marketless communist society. There are those who believe that, if there is no money and no market, then every economic decision must be dictated from the center. It has therefore been advocated that communism has only now become a possibility because there are powerful supercomputers, capable of keeping track of thousands, indeed millions, of different products and factories, and comparing them via complicated mathematical methods. This computer model of communism is a science-fiction nightmare, which ignores the need for extensive human initiative at every step of economic activity. This is related to the issue of the natural unit of calculation, because to reduce decisions to what can be mechanically calculated by a computer generally requires a method of reducing the multitude of different economic factors to a single quantitative index or parameter: the computer then evaluates the different plans, arriving at how each plan measures up according to this ultimate index. In reality, computers are important economic tools, but subordinate ones, and the single quantitative index that provides a true and supreme guide to all economic life will never be found. Instead of searching for this index, it is important to get an idea of how, in a communist society, central planning and local initiative can not only coexist, but even be prerequisites, one for the other. The bloated central ministries of the old Stalinist model, which feared and quashed most local initiative, are not models of communist central planning but of state-capitalist oppression of the mass of the population.

It is also important to show that the possibility of communist society and of central planning does not depend on there being perfect foresight of every eventuality. It is the development of large-scale production, and of a working class accustomed to cooperation and social production by its work in large-scale production, that creates a material basis for the emergence of communist society, with its planned economy. But life will always bring unexpected events, and indeed progress itself -- the replacement of the old by the new -- involves some unpredictable results. Communist economic life will be notable for its ability to handle economic surprises better than a market economy.
The present article will deal with what seems a mainly technical question: the issue of whether the labor-hour will replace the dollar as the regulator of economic life. Nevertheless, it will touch, at least to some extent, on all of the above issues, some of which deserve future articles in their own right. Although we can only anticipate the most general outlines of a future society, we can still draw some conclusions of use to our work today.

Overview of the argument

The body of this article begins by defining the "labor-content" of a product, which measures the number of labor-hours used in producing this product. It then proceeds to sketch the history of the search for the natural unit of economic calculation that would replace financial measurement, with the labor-content (or some variant) being the main contender for this role.

Historically this search has been closely connected to a belief that equal exchange would eliminate capitalist exploitation. The labor theory of value was first developed by bourgeois economists. But starting in the first half of the 19th century, a number of major left-wing figures, both socialists and anarchists, took the labor theory of value to mean that equal exchange, according to the fair or true value of goods, would allow the working class to vanquish capital. The article points out that Marxism, with its insistence that equal exchange leads not to the liberation of the working masses, but their exploitation, marked a radical challenge to previous ideas. Marxism also denounced the pursuit of true value as a chimera, but some communists now believed that the quest for a natural unit was a quest, not for true value, but for its negation.

This survey of the search for the natural unit concludes by examining the method of "material balances", which was originally developed in the Soviet Union and served to supplement financial calculation with a consideration of the physical (natural) interrelationships between the different sectors of the economy. Experience with this method verifies that planning an economy directly in material terms requires a multitude of natural units, and can't be done by the use of a single natural unit. Thus ends the first part of the article.

Part two of this article, to appear in the next issue of Communist Voice, will then turn from history to theory in itself, and elaborate a series of reasons why neither the labor-content nor anything else can fulfill the role of being a single, regulating natural unit for economic planning. The first reason to be given will be that it is crucial, in planning the yearly production of a society, to distinguish between present (or living) labor and past (or dead) labor. As far as the labor-content goes, past labor, crystallized into existing stocks of such things as raw materials, machinery, and consumers' goods, is equivalent to living labor: an hour of past labor equals an hour of present labor. But in economic planning, they cannot be equated. For example, a production plan is restricted by the amount of raw materials and machinery (produced by past labor) that exists: it can plan to increase these goods for the next production cycle, but in the present production cycle it can only use as much raw materials and machinery as it already has on hand, no matter how much living labor is available.

It will further be shown that the labor-content doesn't provide a means of determining whether factories are producing efficiently.
Nor does it provide a way of allocating labor to different enterprises, even given that one has already decided what goods should be produced. And it does even worse in dealing with the issue of deciding what goods should be produced.

Moreover, the article will advocate that to regulate production by measuring all economic goods via a single index, a single unit of measurement, would mean, in effect, using money. It would, ultimately, subject the economy to the law of value. Using labor-hours as the single or controlling index doesn't change this conclusion. The article will show that the result of regulating production under communism in this way would be to reproduce a number of the economic irrationalities of capitalism.

Along the way, the article will cite some of Marx's and Engels's denunciations of the idea that either present-day value, or a purified true value, could regulate future society. It will at one point turn in more detail to Marx's and Engels's views concerning measurement by labor-hours and the regulation of production under communism. It will cite Marx's view that to equate different economic products quantitatively, on a single scale, means to ignore the qualitative differences that truly exist between them. Hence, it would seem to me, it follows that no such single numerical scale can serve as the natural unit of economic calculation, because any such scale negates the "natural" (material, physical) qualitative differences between products and between the labor of different people. There can only be a multiplicity of natural economic units, each measuring a single material good or factor of production, not a single natural unit which stands supreme above them all and serves as the natural unit of overall calculation. To put the different natural units into a purely quantitative relationship with each other, as is required if there is to be a single, supreme unit of economic calculation, is to negate the qualitative differences between the things being measured by the different natural units. The article also deals with a series of quotations from Marx and Engels which seem to show them advocating -- and have been widely used to portray them as doing such -- that the measurement of labor-hours will indeed be the regulating principle and bottom line of communist society. It shows that, actually, they were simply advocating that, despite the lack of money, economic accounting and calculation exist in communist society.(1)

The article will then turn to a brief discussion of some ideas about how planning will take place under communism. It will sketch how such planning can proceed even though there is no single natural unit of economic planning, and refute the idea that this would require every single economic decision to take explicit account of thousands, indeed millions, of different factors, something that would squelch all local initiative.

However, the theme of this article, that the labor-content is not a natural economic unit, is not meant to imply that no consideration whatsoever will be given to the labor-content (or some other single scale of economic measurement), only that such a scale can be no more than one of several tools that might be used in economic life. It will not be the bottom line of economic decisions, nor the regulator of overall economic decisions. Were an attempt made to use it as the bottom line in economic planning, it would be subject to all the objections made earlier in the article.
But the article is not putting forward a type of magic which would make the number of hours of concrete labor of a definite sort used to produce a product disappear from economic planning, nor does it even advocate that the labor-content -- a measure of abstract rather than concrete human labor -- will necessarily disappear altogether from economic considerations. It will instead suggest what use a communist society might conceivably make of a single numerical scale upon which all articles are measured. It will also point out that communist society will routinely, frequently, and inevitably take major and minor decisions that ignore or even completely violate this scale, something that would be impossible if this scale really were the long-sought natural unit of economic planning.

If the reader didn't realize in advance that this article is not arguing that the labor-hour is irrelevant to economic planning, the reader might end up feeling cheated. Coming to the latter part of the article, such a reader will feel that the article, in discussing the subordinate uses which the labor-content might have in communist planning, is going back on everything said earlier. If the distinction between concrete human labor and an abstract measure of labor (such as the labor-content), and the distinction between the labor-content as the bottom line of economic planning versus being a subordinate measure, seem at first vague and forced, the reader, being warned in advance, may then consider these points in connection with every argument against the labor-content as bottom line or natural unit that is made throughout the article. Again and again, the reader will see that there is a distinction between concrete labor-hours and abstract labor-hours. Concrete labor-hours are not interchangeable and have distinct, qualitative features (some being crystallized in past production, and some being the labor expended during the current production cycle; some being done by workers of one skill or residing at one particular location, some by workers with another skill or living elsewhere; etc.). Abstract labor-hours are interchangeable, one abstract labor-hour being equivalent to any other such hour. The labor-content is a measure of abstract labor-hours, which would only be a natural unit if all labor-hours were indeed interchangeable. But labor-hours are only interchangeable on the market; once the market is gone, labor-hours must be considered in their material distinctness. In such a situation, a measure of abstract labor-hours can only be an approximation, and can only be of use under certain circumstances. It may be hard work to grasp these distinctions, but there will be ample rewards for doing so: once these distinctions are grasped, they will be found to be key to a number of economic problems.

Finally, the article will conclude by raising some issues concerning the labor-content and value in a transitional society. The socialist revolution will not immediately produce a classless, moneyless society. After the capitalists are stripped of political power, there will be a protracted transitional period during which the capitalists are dispossessed and the working masses develop their ability to run the economy, and to run it in a new way. The article will show that the labor-content and value will continue to have an objective reality in such a society, not just as a subordinate planning tool but as an independent reality that can never be lost sight of.
But the extent of that role will be a measure of how far the transitional society has still to go to achieve communism, a measure of how far commodity production is still a reality in the transitional period.

The labor content

Now to begin. What this article calls the "labor-content" of a product refers to the number of labor-hours it takes to produce it. This includes not just the number of hours which workers at the final factory or workplace have to devote to fabricating it, but also the amount of labor embodied in the raw materials needed for its production, as well as the amount of labor needed to maintain in good condition the needed tools and machinery, or embodied in replacement tools and machinery. So the labor-content of an item equals the number of labor-hours expended by the immediate producers, plus the labor-content of the necessary raw materials, plus the labor-content of the machinery etc. that has to be replaced due to being worn down during the production process.

But isn't this numerically equal to what the labor theory of value regards as the value (exchange-value) of a commodity? Yes, it is, so one can find more elaboration on what it is by consulting the detailed description given by Marx in volume one of Capital. For example, as explained by Marx, the labor-hours that count toward the labor-content or value are solely the "socially-necessary" labor-hours. If a particular worker is clumsy and slow and takes more time to produce things, this doesn't make the things produced more valuable than those of the average worker, nor are the products of an exceptionally fast worker -- who spends less time on the work -- thereby less valuable. Whether a particular worker takes five hours or fifty hours to produce something, the labor-content of the product is determined solely by the average time that workers of normal skill and dedication would take to produce it under average conditions and with the usual equipment.

Well, if the labor-content is, numerically, simply the value (as defined by Marxism), why give it another name? For one thing, I will apply the term "labor-content" not just to commodities, which have a capitalist exchange-value, but to things produced in a communist society, which don't have an exchange-value. Moreover, this article also deals with different views concerning whether calculations via labor-hours provide a system of communist economic calculation, or whether this would result in having the economy duplicate the problems caused by capitalist value. To discuss these issues, it is necessary to distinguish between the number of hours involved in producing a product (the labor-content), and whether such a number functions as the value of a product. This should hopefully reduce terminological arguments and misunderstandings to a minimum and focus attention on the content of the matters at stake.(2)
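One way to make this recursive definition concrete is to read it as a system of linear equations. If a[i][j] is the amount of good i used up (raw materials plus worn-down machinery) per unit of good j, then the labor-content v satisfies v[j] = direct_labor[j] + the sum over i of a[i][j]·v[i]. A minimal sketch, with a hypothetical two-good economy invented purely for illustration:

```python
# Labor-content as the solution of v = l + A^T v, i.e. (I - A^T) v = l.
# The coefficients below are hypothetical, for illustration only.

import numpy as np

A = np.array([[0.2, 0.4],    # units of good 0 used per unit of each good
              [0.1, 0.1]])   # units of good 1 used per unit of each good
direct_labor = np.array([3.0, 5.0])   # hours worked at the final workplace

v = np.linalg.solve(np.eye(2) - A.T, direct_labor)
print(v)   # total labor-hours embodied in one unit of each good (~4.7, ~7.6)
```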
The search for the natural unit

-- The early days of the workers' movement --

The idea of replacing money with calculations based on the labor-hour goes back to the first half of the 19th century. It appeared at first not as the idea of eliminating money, but of replacing ordinary money with labor-money, money denominated in labor-hours. The details of the idea varied from author to author, but generally speaking, the idea was that buying and selling were to continue, but these transactions would now be exchanges of equal value: anyone producing a product should be able to exchange it for money worth the precise number of hours that the product required for its production. The use of labor-time for calculations was supposed to ensure that any product would always be exchangeable, and its producer could always obtain goods of equal value in return for it. There would be no problem of producing goods which couldn't be sold or exchanged, and no problem of finding suitable goods which one wanted in exchange for one's product.

The famous utopian socialist Robert Owen put forward such a view as early as 1821. One history of socialist thought points out that "In the Report to the County of Lanarck Owen compares labour-power to horse-power. He says that, although individual horses differ greatly in power, that has been no obstacle to the establishment of a standard of horse-power as a unit of measurement. The same, he says, could be done with the power of labour, which is the sole agency capable of imparting value to commodities. . . . Owen argues that the natural value of things made by men depends on the amount of labour incorporated in them, and that this labour is measurable in terms of a standard unit of 'labour time'. . . . Labour, he contends, should supersede money as the standard for measuring the relative values of different commodities; and the exchanging of one thing for another should be done in terms of their relative values thus ascertained."(3) Eventually a series of "Labor Exchanges" were set up that used "Labor Notes" for buying and selling.

Marx pointed out that "the theory of labour time as an immediate money unit was first systematically developed by John Gray" in 1831. He cited Gray posing the alternative: "Shall we retain our fictitious standard of value, gold, and thus keep the productive resources of the country in bondage? or, shall we resort to the natural standard of value, labor, and thereby set our productive resources free?" (emphasis added)(4)

These ideas were inspired by the labor theory of value developed first by such bourgeois economists as Adam Smith and David Ricardo. However, left-wing activists derived conclusions rather different from those intended by Smith and Ricardo. Marx pointed out that during this period "in England. . . almost all the Socialists. . . have, at different periods, proposed the equalitarian application of the Ricardian theory", and pointed in particular to Thomas Hodgskin, William Thompson, T.R. Edmonds, John Francis Bray, and John Gray.(5) He characterized the "ultimate meaning" of the reproach that they leveled at the bourgeois economists as follows: "Labor is the sole source of exchange-value and the only active creator of use-value. This is what you [Adam Smith, David Ricardo, etc. -- JG] say. On the other hand, you say that capital is everything, and the worker is nothing or a mere production cost of capital. You have refuted yourselves. Capital is nothing but defrauding of the worker. Labor is everything."(6) (Emphasis as in the original)

Thus these socialists not only believed that the labor theory of value vindicated the proletarian cause, but that it showed that capital oppressed the worker simply by defrauding him.
Hence, they held that if exchange proceeded fairly, according to the real value of goods as determined by the labor theory of value, capital could be overcome by the working class. Marx pointed out that "the most important" of these proletarian advocates wanted to eliminate capital. However, Marx said, they accepted commodity exchange and the marketplace, unaware that these are the "economic pre-conditions of capitalist production" and that their "necessary consequence" is the development of capital.(7)

The belief in labor-money and equal exchange existed not only among many early socialists, but also among many anarchists. Josiah Warren, one of the early American individualist-anarchists, was originally a participant in the Owenite utopian community of New Harmony in Indiana. In 1827 he left New Harmony and began to develop his own theories, in which he looked to an extensive development of commodity exchange for a solution to the problems of New Harmony. This was to differ from the ordinary commodity exchange in capitalist society by being equal exchange. As the pro-anarchist historian Woodcock says:

". . . He therefore made 'labor for labor' his formula, and sought to find a means of putting into effective practice Owen's original proposal for an exchange of labor time on an hour-for-hour basis, but with a flexibility that would allow individuals to agree on some kind of adjustment when one man's work, irrespective of time, had clearly been more arduous than another's.

"Immediately on his return from New Harmony to Cincinnati, Warren started his first experiment, which he called a Time Store. He sold goods at cost, and asked the customers to recompense him for his own trouble by giving him labor notes, promising to donate to the storekeeper an equivalent time at their own occupations for that consumed in serving him."(8)

Years later Pierre-Joseph Proudhon, one of the patron saints of anarchism, came to a similar conclusion about equal exchange. He held that capitalist market relations were essential, but had the bad side that the small producers were suffering under them. The key to removing the bad side of capitalist relations was supposed to be buying and selling at a fair price. That price was the amount of labor needed to make the product being traded, the labor-content, which he called the "constituted value" or "absolute value" of the product. More generally, he saw the principles of fair exchange and of contractual relations as key to allowing small production to flourish under capitalism.(9) He further suggested that it was absurd not to carry out this exchange in labor-units, rather than the French monetary unit, the franc. As he said: "In economic science, we have said [that] after Adam Smith, the point of view from which all values are compared is labor; as for the unit of measure, that adopted in France is the FRANC. It is incredible that so many sensible men should struggle for forty years against an idea so simple. But no: The comparison of values is effected without a point of comparison between them, and without a unit of measure, -- such is the proposition which the economists of the nineteenth century, rather than accept the revolutionary idea of equality [equal exchange, according to 'constituted value' and labor-hours], have resolved to maintain against all comers. What will posterity say?"(10)
Proudhon's system of "mutualism" involved, besides equal exchange, having the small producers create some institutions to help themselves out, but these institutions were to help them survive under equal exchange. Thus he held that the small producers needed to get free credit from a "People's Bank". And he advocated that the aim of co-operatives and workers' associations should be "not to substitute collectivities for individual enterprise . . . It is to secure for all small and medium-sized industrial entrepreneurs, as well as for small-property owners, the benefit of discoveries, machines, improvements and processes which would otherwise be beyond the reach of modest firms and fortunes."(11)

Woodcock points out that when "Proudhon's mutualism was introduced into the United States . . . its similarity to native individualism was quickly recognized. The Proudhonians remained a small sect, but they and the disciples of [the previously-mentioned Josiah] Warren" helped focus interest on "currency reform".(12)

It may seem strange that the anarchist Proudhon, who is most famous for his declaration that "property is theft", was a fervent advocate of commercial exchange and respect for contracts. But he also believed that "property is liberty". Proudhon repeatedly clarified that he was only denouncing such things as the abuse of property rights, feudal property rights, the cheating of the small producer by the large, and unequal exchange, not the right to "possession" (which, in his mind, was the right to small property) or even the right to profits, rent and interest. He wrote that "Property is theft; property is liberty: these two propositions stand side by side in my System of Economic Contradictions and both are true." And he clarified that "I protest that when I criticized property . . . I never meant to . . . prevent property from being freely and regularly acquired through sale and exchange, nor to forbid and suppress, by sovereign decree, ground rent and interest on capital. . . . all these forms of human activity should remain free and optional for all."(13)

Thus, among both socialists and anarchists, there were a number of advocates of exchange according to the true value of commodities, or of carrying out calculations in labor-hours. Indeed, this idea was reasonably widespread for a time in the workers' movement of the first half of the 19th century. It was not just a matter of theory, but was implemented in a series of experiments with Labor Exchanges, Time Shops, People's Banks, and proposals for currency reform. The collapse of various Labor Exchanges and other experiments threw cold water on these ideas, and the working class movement turned to unions, strikes, political parties, and other means of struggle. Nevertheless, labor exchanges and alternate currency schemes have continually popped up here or there, and they are still promoted today as methods of establishing alternate community currencies. Minor reformist projects of this sort, such as LETS plans (Local Exchange Trading Systems), have been created by community activists in a number of cities.

There are also variants of the LETS idea which use the labor-hour as the unit of account. These include the "Time Dollar Service Exchange" plans, "Ithaca HOURS"-style plans, and another variant under development called ROCS (a Robust Complementary Community Currency System). They differ in a number of details, including the relationship of the standard hour of account to an actual hour of an individual's work.
For example, in most Time Dollar plans one person's labor-hour is equal to another's, and anyone can get a standard labor-hour's worth of Time Dollars for an hour of work. But how many Ithaca HOURS one gets for any particular job or for an hour of work is negotiated in each transaction, and can depend on skill, intensity of work, use of one's own expensive tools, bargaining skills, desperation, etc.

-- The emergence of Marxism --

The emergence of Marxism brought something new into socialist theory. Marx took up the labor theory of value, as had various socialist theorists prior to him, but his elaboration of this theory marked a revolutionary break with previous ideas. Marx showed that pricing according to value would not eliminate exploitation. On the contrary, pricing according to value, "measured by labour time, is inevitably the formula of the present enslavement of the worker, instead of being, as M. Proudhon would have it, the 'revolutionary theory' of the emancipation of the proletariat."(14)

No doubt capitalists cheat the workers as much as they can. But Marx showed that the origin of the surplus-value which the capitalists extracted from the workers did not lie in cheating in the marketplace. It arose from the process of capitalist production itself, in which the capitalists monopolized the means of production and the workers sold their labor-power; and this exploitation would continue even if everything, including the workers' labor-power, was bought and paid for at its full value. Moreover, even if there were the most perfect and equal exchange in the marketplace, the class differentiation between workers and capitalists would continue to grow so long as commodity production continued to exist. All this might suggest that any plan for running an economy more exactly according to the labor-content than is achieved in the capitalist economy would amount to nothing but copying, and trying to improve and perfect, the commodity marketplace.

But why had value, reflecting the underlying laws of commodity exchange, ever been thought of as a remedy to exploitation? Well, in the marketplace, the prices of goods only have a tendency to conform to their exchange-value: they only average around this value, and even that only under appropriate conditions. Fluctuations in pricing cause hardships for the working masses, and the inability to sell goods at a reasonable price often ruins small producers. But the labor-exchange plans promised to stabilize prices and provide guaranteed markets for small producers and guaranteed sources for consumers. In dealing with the conceptions that were current in his time, Marx devoted a good deal of his criticism of these labor-exchange and labor-money plans to the mistaken idea that they could accomplish these goals. He pointed out that fluctuations in pricing, and in supply and demand, were inherent in commodity production. He also criticized the idea that simply reforming the currency could affect the basic nature of commodity production. He stressed the need, if the workers were to be emancipated, to directly eliminate the conditions of capitalist production. Thus some supporters of Marxism hold that Marx was not criticizing the idea of running the economy according to the labor-content of goods, but only the idea of doing this while commodity production and capitalist relations still existed.
This interpretation of Marx's view was reinforced by the way some people read Marx's repeated remarks on the fact that communist society will indeed keep track of the amount of labor used for producing things.

But what if there were no ordinary buying and selling, and yet the economy was run according to labor-content? The famous polemic between Engels and Duhring in the 1870s comes close to raising this issue. One of the things Engels criticized was Duhring's picture of the future society, which maintains equal exchange between economic communes according to "the estimate of the quantity of labor required" to produce things, i.e., the labor-content, but does this without an ordinary marketplace in goods. The notable point is that Engels grants that Duhring wants to eliminate the old-style competition in the marketplace. In most other systems of equal exchange criticized by Marx and Engels, the old marketplace had been retained, but not in Duhring's plan. Nevertheless, Engels argues that Duhring's system, in which the old capitalists and exploiters have been removed and even exchange between the communes is regulated to avoid direct competition, will inevitably develop new inequalities and capitalist features because it still regulates production by the labor-content. Engels describes this regulation via the labor-content as Duhring maintaining the law of value in his future society.

Let's look at this more closely.(15) Duhring's plan for a future "socialitarian" society is not particularly clear, but he did envision society divided up into a multitude of "economic communes" which produce goods communally. There was not supposed to be economic exchange between individuals inside the communes: instead, the commune was to be the collective owner of all the means of production. It was to distribute consumer goods to its members on the basis of reimbursing them for the amount of communal labor that they took part in. Duhring specified that "interest or profit would never be paid to him [the member of a commune--JG]." Despite the fact that the individual would maintain some private possessions, this would "not be able to lead to any amassing of considerable wealth, as the building up of property . . . can never aim at the creation of means of production and rent-receiving existences."

The different economic communes were to be federated together, and there was to be exchange among them, an equal exchange based on the labor-content of the goods involved. As Duhring says, "Labor . . . is . . . offered in exchange against other labor on the basis of equal valuation", both between the economic commune and its members, and between communes. But at the same time, the exchange between the economic communes was to be regulated: it was to be conducted through groups of communes organized into "trading communes" that embraced the entire country. Indeed, the trading communes would, according to Duhring, "possess the whole of the land, houses and productive institutions", despite the control and utilization of these things by the local economic commune. Engels characterized Duhring's plan as involving a "national organization of trade" that would "prohibit competition in products between the individual communes".
Nevertheless, each commune would be expected to deliver up, through the intermediary of the trading communes, as much in goods to other communes as it received in turn from them.(16) Duhring no doubt believed that, so long as this was equal exchange, it ensured that justice and morality and economic stability would prevail in his "socialitarian system".

But Engels analyzed the features of Duhring's system and concluded that the communes would break up into rich and poor communes. He wrote that:

"The 'exchange of labor against labor on the principle of equal value,' in so far as it has any meaning, that is to say, the exchangeability against each other of products of equal social labor, that is to say, the law of value, is precisely the fundamental law of commodity production, hence also of its highest form, capitalist production. . . . By elevating this law into the basic law of his economic commune, and demanding that the commune should apply it with full consciousness, Herr Duhring makes the basic law of existing society into the basic law of his imaginary society. In this he is on the same ground as Proudhon. Like Proudhon, he wants to abolish the abuses which have arisen out of the evolution of commodity production into capitalist production by applying to them the basic law of commodity production, precisely to the effects of which these abuses are due."(17)

Thus Engels held that exchange relations between independent economic communes -- even though there was no private ownership of the means of production within each commune and even though there was equal exchange between communes -- would give rise to commodity relations. Moreover, it is notable that Engels holds that this occurs even though direct competition between the communes in the sale and purchase of products has been eliminated. Engels held that the mere keeping of economic accounts by the economic communes in terms of value, or "social labor" (if it includes keeping these accounts in balance, so that the communes receive no more and no less social labor from other communes than they give up to those other communes), would suffice to have this result. This would seem to put a limit on what role the labor-content could play in planning in a communistic society. Whether social labor (the labor-content) is measured directly in labor-hours or indirectly via some sort of money, it is still social labor. Indeed, Engels held that Duhring's communes could just as well keep their accounts in labor-hours as in any type of financial unit. So this would seem to mean that, if calculations in labor-hours were used in an attempt to maintain equal exchange between different economic units, it would amount to applying the law of value.

However, in Duhring's system, this equal exchange of labor for labor only takes place because each commune has its own ownership rights (even if its ownership rights are subordinate, in some sense, to those of the trading communes). Hence Engels's polemic is probably generally taken to be directed only at the fact that the economic communes are independent entities, with their own ownership rights (and hence their right to equal exchange).

Moreover, Duhring couldn't imagine that future society could do without various of the features of capitalism, and he added them back into his picture of the "socialitarian society". For example, there was a capitalist-style division of labor in each commune.
As well, he insisted on the use of money, both within the commune and between communes; indeed, he insisted on using the "precious metals" (gold and silver) as money. Engels dwelt on these features, and in particular showed how the use of money would facilitate the breaking down, through economic exchange, of the communal features of Duhring's plan.

Engels also pointed out that a true socialist society would, of course, have to keep track of the amount of hours used in production, and it would do this directly in labor-hours, not by translating the amount of labor-hours into financial terms or value terms. This has probably led a number of people to believe that Engels was saying that the difference between Duhring's "socialitarian plan" and socialism was mainly that Duhring used money, rather than calculating directly in labor-hours. In any case, it has led to the belief that Engels was advocating that the labor-hour would indeed constitute the natural unit for socialist society. In part two of this article I will deal in more detail with Engels's remarks in Anti-Duhring on the use of the labor-hour. Here it suffices to remind the reader that there is a difference between future society having to take account of the number of labor-hours used in production (and in the passage concerned, Engels specifically noted that the "labor forces" were only one part of the means of production that would have to be taken account of), and regarding the labor-content as having the same significance for socialist society as the dollar has for capitalist society.

-- The Day After the Revolution --

The issue of the role of value in socialist society also came up in correspondence between Engels and Karl Kautsky, one of the leading theoreticians of the Second International.(18) In a letter of 1884, Engels reproached Kautsky for believing that "current value is that of commodity production, but, following the abolition of commodity production, value would also be 'changed,' that is to say, value in itself would continue to exist, and only its form would be modified. But in fact, however, economic value is a category specific to commodity production, and disappears with the latter, as it likewise did not exist prior to commodity production. The relation of labor to the product, before as after commodity production, is no longer expressed under the value-form."(19)

Unfortunately, I have not been able to find the remarks by Kautsky that called forth this protest by Engels. It would have been valuable to know what type of arrangement Engels regarded as a mere modification in the form of value, a modification that left its essence unchanged. Nor do I know what Kautsky's immediate response to Engels was. However, more than a decade and a half later, in Kautsky's main attempt to picture a future socialist society, he was at pains to claim that value had been eliminated. What is most interesting, however, is what he means concretely by the elimination of value.

Kautsky depicted the future society in an essay of 1902, "On the Day After the Revolution", which was an influential socialist pamphlet in its day.(20) It concerned not the problem of overthrowing the exploiters politically, but what the proletariat would do economically after it conquered state power. It sought to consider what the proletariat would be forced to do by "its class interests and the compulsion of economic necessity", rather than simply invent a plausible new society.
Thus it dealt with the economic expropriation of the bourgeoisie, the improvement of the conditions of the workers, the need to increase production, the further centralization of various industries, etc.

Kautsky ended up picturing a mixed economy, with the state sector controlling heavy industry and large-scale production in general, but with there also being municipal ownership, co-operatives, and even private ownership of various enterprises. Money and exchange still exist, both within and between the various sectors of the economy pictured by Kautsky, but this economy is nevertheless supposed to be subject to the "systematic regulation of production and circulation". Many of the measures that Kautsky described resemble measures taken in a number of revolutions of the 20th century. But what Kautsky described is a situation in which the big bourgeoisie has been expropriated, while commodity production and small capitalism still exist. This could represent a transitional economy, if the workers had both political power and an increasing ability to run the economic enterprises and to supplant the need for a separate class of managers (conditions which, unfortunately, have not existed for any length of time in 20th century revolutions). But it is not a picture of a fully socialist society which has overcome commodity production.

It was reasonable for Kautsky, writing about the immediate steps to be taken by a proletarian revolution, to focus on the steps needed to reach a transitional economy. But, in The Day After the Revolution, despite occasional statements that suggest that he envisioned something further (for example, when he wrote that it was impossible to "immediately" abolish money), Kautsky didn't distinguish between a transitional and a socialist economy. Instead he tried, in effect, to prove that the transitional economy has various of the features of a society without commodity production.

Thus, since he accepted Engels's point that the law of value would disappear in a socialist society, he had to show that the law of value wouldn't apply to the economy that he depicted. He sharply pointed out the incompatibility of the law of value and socialism, writing that:

"There could be no greater error than to consider that one of the tasks of a socialist society is to see that the law of value is brought into perfect operation and that only equivalent values are exchanged. The law of value is rather a law peculiar to a society of production for exchange."

But since he didn't raise the issue of there being a transitional economy, he had to show that the mixed economy he portrayed had itself gone beyond the law of value. Kautsky believed that he had accomplished this. He held that the law of value had been supplanted if prices were no longer equal to value. There might still be money, still be exchange, still be buying and selling, still be different forms of ownership of the means of production, just so long as prices weren't necessarily equal to value.

Theory had come full circle. If some socialist and anarchist theoreticians of the early 19th century had held that the workers could achieve a just society if only there were equal exchange and prices were equal to value, Kautsky argued that if prices weren't always equal to value, then the law of value was gone and capitalism was overcome, even though the buying and selling of goods continued.

For example, would the "wage system" still exist in the economy pictured by Kautsky?
He argued "That is only superficially correct." Since labor-power would now be paid higher than its low value under capitalism, this was supposedly no longer the wage system. He also argued that the law of value would vanish because production would now be regulated consciously, by "a previous calculation of all modifying factors [which] will take the place of retroactive corrections through the play of supply and demand", although it isn't quite clear how this was to be done in his system, in which a variety of different systems of ownership (state, municipal, co-operative, and small-scale private ownership) of the means of production still existed. Instead of analyzing whether the continued buying and selling and the use of money would reflect the continuing existence of the law of value, Kautsky believed that money had lost its teeth and would no longer be a "measure of value". Indeed, he believed that the use of "token money" (paper money) rather than "metallic money" (gold and silver), allowed "the price of products" to "be determined independent of their value". . This would seem to mean that Kautsky believed that calculations would not be made in labor-hours, as prices would deviate from (be independent of) value. But this isn't altogether clear. He believed that when the price of a product varied due to the vagaries of supply and demand, that this meant that the value of this product had changed.(21) Hence if price was kept constant and in proportion to the actual labor used to produce a product, as he seemed to believe that it should, it would, according to his way of speaking, be deviating from the value of the product. So he may have believed that calculating in labor-hours meant departing from value, not adhering to it. But certain other statements in The Day After the Revolution seem to contradict this. . Whatever he thought about calculations in labor-hours, the main new thing about value in The Day After was Kautsky's attempt to present a mixed, multi-sector economy with money as having overcome the law of value. In essence, Kautsky anticipated Preobrazhensky's attempt in the 1920s to prove that the Soviet state sector, under NEP, really had overcome value, and that the financial accounting in the state sector was only a formal and superficial appearance.(22) Preobrazhensky held that a transitional economy was the union of two halves: an already socialist state sector, and the private sector. To present the Soviet state sector as already fully socialist, he had to explain away the prevalence of commodity production in the state sector, and present it as only formal. But all this brings us to the next stage of our story -- the Bolshevik -- After 1917 -- . The Bolshevik revolution of 1917, and the world spread of Leninism, gave an immense impetus to communist thought on many different subjects: the role of the proletarian political party; the forms of proletarian political power; the peasant question; the united front; the nature of imperialism; the right to self-determination of nations and the anti-colonial struggle; the idea of a transitional economy; etc. The Russian workers were faced with the problem of actually running an economy, thus lending urgency to various problems of economic calculation. The practical economic work led to new theorizing, and one off-shoot of this was a new series of attempts to find the natural unit that would be the key to economic calculation. But this turned out to be a sterile off-shoot of communist theory: all of these attempts failed. 
Indeed, none of them got beyond abstract theorizing.

Mid-1918 to 1920 was the period of the Civil War, during which the economic policy of so-called "War Communism" was followed. In the heat of emergency mass mobilizations to deal with the economic and military crises, it was believed that the transition to a fully socialist economy was near, and that the ruinous inflation of the currency was a sign that the use of money was being overcome. By 1919 and 1920 there were a series of proposals for the use of a "natural unit" to replace money; these were discussed at a meeting of economists on "Problems of a Moneyless Economy".(23) The chairman, an agricultural economist, A.A. Chayanov, outlined a plan whereby money would be replaced by keeping accounts of the amount of each product separately. He presumably envisioned using a multitude of distinct physical or natural units: for labor, for the different raw materials, for the various sorts of machinery, etc. I am not sure, from the sketchy description available to me, whether his plan was entirely consistent. Nevertheless, it seems to have been a forerunner of what would later be called the method of "material balances", which will be discussed in the next section. But a number of other participants sought instead to replace money with only one or two natural units.

M. Smit and S. Klepikov sketched a system whereby money was replaced by a system of five basic natural measures: for human effort; mechanical energy; heat; raw materials; and machines and tools. These were combined into two natural units, one for labor-hours, and the other for energy. Smit and Klepikov held that, "in the distant future", only one unit would be left, because there would be a fixed relationship between the labor-hour and a definite amount of mechanical energy. This plan was never elaborated, having been attacked for various faults, including that different sorts of energy could hardly be grouped together, as the same amount of energy was expensive to obtain in one form and cheap in another (such as from windmills).

For his part, Kreve appears to have wanted to use the labor-content as the overriding natural unit. His basic unit, the trudovaya tsennost, was one hour of unskilled labor at the basic norm for the particular job it was being applied to. Skilled labor would be evaluated as a multiple of basic unskilled work. To apply this unit, Kreve noted, one had to correct the assessment of existing stocks of goods so that they would be evaluated according to the labor needed to produce them in 1920, not when they were originally made. He also apparently believed that the replacement of money by such a unit would "drive the last stake" into capitalism's heart.

K. Shmelev and S.G. Strumilin proposed the use of the tred (short for trudovaya edinitsa, or labor unit). Here too skilled labor was to be evaluated as so many tred units. All prices in tred were to be revised every so often. Strumilin also set forth another plan, which also used the tred, but included an attempt to sketch a method of determining the social usefulness of various economic outputs.

A line of thought similar to the above proposals was briefly touched on by Bukharin in 1920. He wrote about the need for calculation in natural units, saying that in the transition period "one must here take the natural form of things and of labor powers, make calculations in these units, and regard society itself as the organization of elements in their natural thing-like character."
Apparently he believed that these natural units could be reduced to one or two: a unit of "social labor" and a unit of "use effect". What would a unit of usefulness look like? It might be "energy magnitudes". Thus, while Smit and Klepikov proposed using the energy unit as a measure, along with the labor unit, of the economic cost of producing something, Bukharin regarded energy units as measures of how useful something was. He would use comparisons of energy units versus labor units as comparisons of usefulness versus the amount of labor needed to produce something.(24)

But none of the plans for a labor unit ever came into effect. Meanwhile the advent of NEP in 1921 brought an end to the idea that full socialism could be achieved rapidly. Instead there was to be a gradual transition towards socialism, a transition during which there would still be commodity production, a multi-sector economy (state, co-operative, individual peasant, etc.), and the use of various capitalist economic methods. There were differences among the Bolsheviks over the nature of NEP, but it was generally accepted that money would remain during this period.

Nevertheless, the search for the natural unit still exercised its fascination. In his book of 1926, The New Economics, the Soviet economist Preobrazhensky held that a socialist economy would be regulated "on the basis of direct calculation of labor-time", which would occupy the place that the law of value has under capitalism. Thus he took labor-time as the "natural unit" for a socialist economy.

Preobrazhensky was not, however, advocating the immediate replacement of rubles with labor-time for calculations in the Soviet economy, or even in the state sector. Instead, he was simply trying to show that the state sector was socialist despite the existence of money transactions, buying and selling, and all the categories of commodity production (profit, rent, interest, stock, etc.), and despite the declining influence of the working class in controlling and directing the Soviet state sector. Preobrazhensky aimed to show that commodity production in the Soviet state sector was just a surface appearance.

For example, he argued that the state sector wasn't really subject to the law of value because, in his view, the law of value only applied when prices were set spontaneously in the market, not when they were set by state action. He held that all forms of economy, capitalist or socialist, were regulated by the "labor-expenditure", that is, the labor-content. But, Preobrazhensky said, this only amounted to regulation by the law of value when the prices that measured the labor-content were set in an ordinary market. For him, it was this that made the labor-content, reflected in these prices, into a value. If the prices were calculated beforehand directly in labor-hours, the labor-content reflected in these calculated prices was supposedly no longer a value.

Well, in fact, the Soviet state sector didn't calculate directly in labor-hours. No matter. It could set its own prices, and that sufficed -- in his view -- to remove it from commodity production.(25) In this line of reasoning, he ignored the fact that while the Soviet state could apparently set prices as it pleased, it then had to suffer the consequences of these prices. So the prices weren't so arbitrary after all, and the fact that the state sector calculated in terms of profits, rents, interest, etc. wasn't simply an arbitrary, surface appearance either.
By the latter part of the 20s and the decade of the 30s, the Bolshevik revolution died away, and a system of Stalinist state-capitalism was consolidated. Stalinism, however, maintained its pretense of loyalty to Marxism, and sought to clothe state-capitalism in socialist and Leninist colors. In the 30s, with industrialization, forced collectivization, and the five year plans, Stalinist economics held that the Soviet Union -- although using money, cost accounting, etc. -- had overcome the law of value. But in the 40s, official Stalinist economics came to the view that socialism could make use of the law of value to ensure rational pricing, arbitrary pricing being regarded as a major problem. Thus an influential article in a major Soviet journal in 1943 stated that "Since the elimination of capitalism the socialist society, in the guise of its state, has taken over the law of value, and consciously uses its mechanism (money, trade, prices, etc.) in the interests of socialism, for the purposes of the planned guidance of the national economy."(26) "Socialist" prices were supposed to generally correspond to value, although they could deviate from value for certain reasons, such as promoting the rapid development of heavy industry. The same idea appears in Stalin's 1952 pamphlet Economic Problems of Socialism in the U.S.S.R., in the sections "Commodity Production Under Socialism" and "The Law of Value Under Socialism".

From then on, when Soviet economists argued over how prices should be set, they often tried to justify various proposals in the name of equating prices to value, or to denounce proposals of their rivals as violations of the law of value. (However, these economists differed dramatically on how they interpreted value. Some displayed marvelous ingenuity in imitating Western theories of pricing under the cloak of loyalty to the Marxist theory of value.)

Thus, for example, Strumilin, who in 1920 had proposed using the tred as a natural unit, later wanted to equate prices in rubles to value. He wrote in 1959 that "the definition of value remains an important task under socialism" and "the determination of the value of different goods is of the utmost importance for the rational planning of the prices of these goods." T.S. Khachaturov lamented the "considerable differences in the relationships between prices and value in different industries and for different articles." In 1960 the prominent mathematician L.V. Kantorovich advocated the use of prices obtained by linear programming, a field of mathematics in which he was one of the pioneers. These prices are called "shadow prices" elsewhere, and "objectively determined valuations" by Kantorovich; he specified that the inputs into the equations were supposed to be "social labor-time", but account was to be taken of "indirect as well as direct labor inputs", rents, etc. in order to obtain "the full social expenditures of labor" involved in production. A.I. Kats denounced him for these indirect inputs, saying that Kantorovich "ignores the Marxist theory of the expenditure of labor as the substance of social costs of production. This leads, in particular, to the totally unsound nature of his suggestions in the field of price formation." Meanwhile S. Pervushin, the editor-in-chief of the Soviet journal Planovoe Khoziaistvo, wrote that "Unfortunately a satisfactory solution has not yet been found for the problem of using the law of value and value categories in socialist economics." And he referred to the need for certain prices to be set in defiance of the law of value.(27) And so on every few years, with each new reform proposal generating a new round of discussion.
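Kantorovich's "objectively determined valuations" are, in today's terminology, the dual variables of a linear program. The following toy sketch (a minimal illustration with invented numbers, not a reconstruction of Kantorovich's actual models) shows the idea in Python: maximize the value of a plan subject to scarce inputs, then read off the "shadow prices" the solver attaches to those inputs. It assumes SciPy 1.7 or later, whose "highs" solver exposes the dual values as marginals.

    # Toy plan-valuation problem: two products, two scarce inputs.
    # All coefficients are invented for illustration.
    import numpy as np
    from scipy.optimize import linprog

    A_ub = np.array([[2.0, 1.0],    # labor-hours per unit of products 1, 2
                     [1.0, 3.0]])   # tons of coal per unit of products 1, 2
    b_ub = np.array([100.0, 90.0])  # available labor-hours and coal

    # linprog minimizes, so negate the (assumed) plan weights to maximize.
    c = np.array([-3.0, -2.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 2, method="highs")
    print("outputs:", res.x)
    # Shadow prices: how much the plan objective would gain from one
    # more labor-hour or one more ton of coal.
    print("shadow prices:", -res.ineqlin.marginals)

The shadow price of each input measures how much the plan's objective would improve if one more unit of that input were available; it was such valuations, computed from "social labor-time" inputs, that Kantorovich proposed as a basis for price formation.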
But other people were still looking for the natural unit to replace money. Charles Bettelheim, for example, devoted his 1970 book Economic Calculation and Forms of Property to the contrast between monetary calculation and calculation directly in labor-hours. He called monetary calculation "imaginary", while calculation directly in labor-hours was "real" economic calculation, or "social economic calculation (SEC)", as he called it. To carry out SEC, he held that money would be replaced by two units: a unit of labor-hours to measure the cost of production, and another unit to measure the social usefulness of a product.

But, significantly, Bettelheim didn't see how this could concretely be done. With respect to the unit of social utility, he didn't know how this could be measured on a single numerical scale, and he wrote that "we still have to elaborate the system of concepts and procedures that enable social utility -- of different labors and products, supplied in determinate conditions -- to be measured so that the distribution of labor (i.e., of social labor) between the different types of production can be regulated on the basis of this measurement." Nor was it any better with obtaining the number of labor-hours which constituted the economic cost of producing a product. He wrote that the "unit of measurement, and the nature of this measurement, . . . must be theoretically defined . . . Such a unit would no longer be a currency, and the magnitudes expressed in this unit would no longer be prices. At this point, the question of the possibility of formalizing the evaluation of social units, so that a real unit of measurement can be defined, remains open."(28)

Thus Bettelheim called for a search for the theoretical concepts, the "theoretical space" as he called it, that would allow "social economic calculation". The irony is that Bettelheim, who devoted his book to dispensing with capitalist value, had not the slightest idea that his search for the units of SEC was the traditional quest for true value.

-- One, two, three, many natural units --
(the method of material balances)

Earlier we mentioned that the Soviet economist Chayanov, in 1920, envisioned a plan whereby money would be replaced by, apparently, keeping track of each product separately. There would presumably be separate units of measurement for labor, and for various different categories of material goods and products. These would be natural units, since they would not be expressed in terms of money, but in terms of the physical amount of each thing. This foreshadowed the method of "material balances", which also measured each separate material good, such as wheat, or coal, or iron, in its own natural unit. Although it never replaced financial calculations, "material balances" became an established part of Soviet planning. Yet this was not the long-sought-for answer to the quest for the natural unit of economic calculation. It is useful to see why.

First of all, what is a "material balance"? Let's start with using a balance to simply describe, not plan, an economy. One of the important sectors of the Soviet economy was the production of coal.
A fully-detailed balance for coal would list the various places where coal was used in the economy (the places where coal is "distributed" to), and the amounts being used in each place. Thus there would be so much coal going for heating cities, for industry, etc. It would also list the amount of coal being produced at various mines (the "sources" of coal). The total amount of coal being produced would equal the total amount of coal being used.(29) Here we have a balance, and one that is convenient to do in a natural, or physical, unit, such as metric tons of coal, because the balance deals with the actual physical uses of coal, and not the profits or losses of any enterprise.

Thus here we have planning in a material unit, say, metric tons of coal. But of course this is only one part of the economy. There would also have to be a material balance for, say, wheat. Hence here we have a second natural unit for the economy, metric tons of wheat. Of course, to describe the economy more fully would require natural units for oil, construction materials, machinery, etc. All of a sudden, we have one, two, three, many natural units. Having had to search for one natural unit, one is now smothered under a plethora of natural units. In practice, a description of the economy by material balances might restrict itself to only the major material goods, or might group related goods together, but it is still clear that it involves many, many natural units. The more natural units are involved, the more accurate the description of the economy.

But there is more to the method of material balances. The different balances for coal, wheat, machinery, etc. have to be mutually consistent. The balance for coal includes, for example, allowances for the coal used in producing machinery. The balance for machinery specifies how much machinery is to be produced, and this requires a definite amount of coal, and that has to be the same amount listed for that purpose in the balance for coal. Thus the description of an economy by material balances has to deal with the relationships between different sectors of the economy. Its figures reflect such facts as that the production of so much agricultural machinery requires so much coal, so much iron, so many laborers, so much wheat and housing for these laborers, etc. It doesn't just evaluate, say, a tractor as equal to so much money or even simply so much labor. It correlates the production of tractors with a list of all the factors that go into their production. (Naturally, these correlations can change from year to year. As production becomes more efficient, more tractors can be produced with less labor or less materials. For that matter, if a factory becomes run down, production could become less efficient and the tractor could require more labor and more materials to produce.)

Of course, if the method of material balances were only used to describe an economy, it wouldn't be of that much interest. It is also used to plan changes in an economy. Let's imagine that a description of how the economy is running in a certain year has been obtained via the method of material balances. It might be thought desirable to plan that it produce more wheat in future years, perhaps because the population will increase, or there is to be more bread per capita, or more wheat is to be exported. A greater production of wheat would allow greater amounts to be allocated, in the material balance for wheat, to the various uses of wheat.
But it would also require that the balance include more sources of wheat, such as a larger harvest of wheat from various farms or from more farms being created. This might require more agricultural machinery, or more fertilizer (whether natural or artificial), or the development of new farming communities (and hence more construction material, more labor, etc.). This would require a change in the balances for machinery or fertilizer or construction materials.

But when we consider these changes, such as the need for more agricultural machinery, this in turn requires a whole new series of changes, as more production of agricultural machinery requires either producing less of other machinery, or an increase in the use of coal and iron in the factories producing machinery, as well as allocating more laborers to these factories and more wheat for the consumption of these laborers. Hence there are yet more changes to be made in the material balances for coal, iron and even wheat.

Thus a contemplated increase in the material balance for wheat would involve a series of changes, both increases and decreases, in the material balances for many other goods, and would even react back on the material balance for wheat itself. The material balance for wheat can't be changed in isolation; there has to be a coordinated change in the other material balances involved in the economy as a whole.

In general, there is no simple, automatic way to make these coordinated changes in all the material balances. It can be a difficult mathematical problem. However, in practice one need only obtain an approximate solution; moreover, this solution can be based on seeing what changes have to be made in the material balances that have existed in the past, rather than seeking to write the material balances from scratch. This solution can then be used to push the economy in the planned direction by directing resources to this or that sector in accordance with the description given by the altered material balances. (At least, this can be done to a certain extent in those economies where the government has the ability to direct resources in this way, or in a communist society, where society as a whole really has this ability.) According to a study done in 1959, Gosplan (the Soviet State Planning Commission) seems historically to have used some relatively crude methods of making changes in the material balances.(30) Nevertheless, the long-run problems in Soviet planning have not come from the mathematical difficulties of the method of material balances, but from other sources.
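The ripple effect just described can be made concrete with a small numerical sketch. The following toy calculation (Python, with invented coefficients; it illustrates the general successive-adjustment idea, not Gosplan's actual procedures) starts from a planned final output for each good and repeatedly revises every balance until sources equal uses in all of them at once:

    import numpy as np

    goods = ["wheat", "machinery", "coal"]
    # A[i, j]: units of good i consumed to produce one unit of good j
    # (all coefficients invented for illustration).
    A = np.array([[0.05, 0.10, 0.02],
                  [0.08, 0.05, 0.10],
                  [0.03, 0.30, 0.05]])
    d = np.array([100.0, 20.0, 50.0])  # planned final consumption

    x = d.copy()
    for _ in range(200):       # crude successive adjustment of the balances
        x_new = d + A @ x      # uses = final consumption + inputs to production
        if np.max(np.abs(x_new - x)) < 1e-9:
            break              # every balance now has sources equal to uses
        x = x_new

    for name, gross in zip(goods, x):
        print(f"{name}: produce {gross:.2f} units to keep its balance")

Each pass through the loop propagates the consequences of the wheat target into the machinery and coal balances, and their consequences back again, including back onto wheat itself; for realistic coefficients the corrections shrink rapidly, which is why an approximate solution is attainable in practice.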
Thus calculations via the method of material balances can differ dramatically from those made by simply replacing the dollar with the labor-hour. Let's look at this further. A common economic problem is deciding which of two different ways of making a product to employ in an enterprise or factory. There might be two (or more) different processes possible for making this product: processes A and B. In a commodity economy, if process A is used, the product might cost, say, $9, and using process B, $8. Then financial calculation says to go with process B, on pain of going bankrupt. If the problem were approached by the use of the labor-content, one would compare how much labor is used by each of the two processes, as well as how much labor is embodied in the raw materials used by each of the two processes, and in keeping the machinery used in these two processes in good repair. One would then compare so many labor-hours for process A versus so many labor-hours for process B. The process of production which involved fewer labor-hours would be the more efficient process. This is a familiar way of deciding the issue, similar to that used with financial calculation.

But the method of "material balances" might inspire a very different way of making this decision. According to the spirit of this method, one must consider a product in relation to all its inputs, including raw materials, labor, machinery that is worn out during production, etc. Thus process A would be evaluated, not as a single measurement on a scale, but as a list of measurements, say, (3, 5, 4, 10, . . .), representing 3 units of iron, 5 units of coal, 4 labor-hours in the final fabrication, 10 units of machinery necessary during production, etc., while process B might be (2, 6, 11, 2, . . .).(31) The problem is that process A uses more iron and requires much more machinery, while process B uses more coal and requires much more labor, so which is preferable? Unlike a comparison of two numbers, the answer is no longer obvious. This is one of the problems in trying to use "material balances" to determine which process of production is better, and different ways of dealing with this problem have been proposed. But clearly, the comparison should depend on which inputs are in short supply, and on determining how severe this shortage is. Unlike either financial calculation or calculation via the labor-content, in which saying that a product costs $9 or embodies so many labor-hours seems to express a statement simply about the product, a consistent use of material balances brings one up against the fact that the actual economic cost of producing something can't be taken in isolation, but is always related to what's happening in the rest of the economy.

Thus, it makes a big difference whether there is one natural unit that governs the economy, or a multiplicity of them. The search for the natural unit of economic calculation has been, in essence, a search to put aside the multiplicity of natural units that appear when one deals concretely with the economy. This is why the method of "material balances" has, as far as I know, never been regarded by anyone as the answer to the search for the natural unit.
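The contrast between the two kinds of comparison can be put in a few lines of code. In this sketch (Python; the per-unit embodied-labor coefficients are invented, since the discussion above leaves them unspecified), reducing each process to a single labor-content figure gives an immediate verdict, while comparing the full input vectors does not:

    inputs = ["iron", "coal", "direct labor", "machinery"]
    process_a = [3, 5, 4, 10]
    process_b = [2, 6, 11, 2]

    # 1) Single-scale comparison: reduce everything to embodied labor-hours
    #    (hypothetical hours embodied per unit of each input).
    labor_per_unit = [2.0, 0.5, 1.0, 3.0]

    def labor_content(process):
        return sum(q * h for q, h in zip(process, labor_per_unit))

    print("labor-content:", labor_content(process_a),
          "vs", labor_content(process_b))

    # 2) Material-balance spirit: compare the whole input vectors. Neither
    #    process dominates the other, so no verdict follows from the
    #    vectors alone.
    a_uses_less = [n for n, a, b in zip(inputs, process_a, process_b) if a < b]
    print("process A uses less of:", a_uses_less)

The vector comparison yields no winner by itself: process A is cheaper in some inputs and dearer in others, so the decision has to bring in information about which inputs are scarce elsewhere in the economy, exactly as argued above.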
The method of material balances was developed in the Soviet Union over time. With respect to industrial enterprises, immediately after the Bolshevik revolution of October 1917, attention was focused on taking control, and then ownership, of them away from the capitalists. There was not, at first, much that could be done in the way of overall planning, and already by mid-1918 the Civil War broke out. This led to a period of economic crisis.

But in the face of the disorder, the galloping inflation, and the loss of many key areas for whole periods of the Civil War, production of certain key materials had to be maintained in order to avoid a total collapse. This necessity, plus the collapse of the currency, encouraged the emergency tracking of key materials in physical terms, as well as a system of priorities. Definite physical amounts of grain had to be obtained and distributed to prevent hunger; definite amounts of coal had to be obtained and distributed to power stations to heat cities, and to enterprises in order to maintain production. These supplies could not be assured by simply setting aside money to buy them. Grain was requisitioned; emergency labor forces worked on priority assignments; necessities were rationed rather than sold; some services were provided free; etc. This experience, in which money often seemed irrelevant, encouraged the idea that communism itself might be close, so that the period became known as "War Communism". This experience no doubt encouraged the economists at the 1920 meeting on "Problems of a Moneyless Economy" who dreamed of accounting in terms of natural units.

Aside from the Soviet government, ordinary capitalist governments in wartime have often had to institute accounting for vital materials in physical form. Both World War I and World War II resulted in a proliferation of War Production Boards, Ministries of Supply, and similar institutions in all the major combatant countries. For example, in order for the U.S. to supply its war machine with sufficient rubber for tires and other uses in World War II, the U.S. government couldn't rely on simply setting aside a sufficient amount of money. Indeed, Japanese advances in Asia cut off the U.S. from its main pre-war suppliers of natural rubber. The U.S. had to search for new sources of natural rubber, as well as rushing the development of synthetic rubber. Thus the U.S. government resorted to rationing key materials like rubber, directing them to war industries, and searching for new physical supplies of them. This planning in physical terms didn't supplant commodity production, of course, but just supplemented it; the war industries continued to get rich. Such war planning in physical terms has been used by regimes of many different types, which shows that the use of a form of "material balances" by no means proves that a country is socialist, transitional, ruled by a workers' government, or even ruled by a liberal regime.

In the case of the Soviet government, it continued its interest in planning after the Civil War crisis passed. True, it was soon realized that there would be no immediate transition to a fully socialist economy, and that War Communism would have to be replaced by a slower and more gradual transition to socialism. So with the advent of NEP in 1921, Soviet industry was put back on a financial accounting basis. But the Soviet government sought to provide a central direction to this industry, and to increase the power of the working class over it. The tragedy of NEP was that working class control of the economy and government gradually died out and the Bolshevik revolution faded away, leading to the consolidation of the Stalinist state-capitalist regime in the 30s.

But the Soviet Union continued developing national planning of the economy, and this was of a type not previously seen. By 1923-24 the Soviet Central Statistical Department developed a balance of the national economy, which apparently marked the start of the method of material balances and was a real spur to national planning. It was a type of overall balance for the entire economy which had not been made in Western countries, and the Soviet planning debates of the 20s represented a major development in economic theory.

Originally, the material balance for each product was presented on a separate balance-sheet, with two columns, one for the sources of the product, and one for where it was distributed. The collection of all the separate balances constituted the balance for the entire economy. But eventually they might be combined into the checkerboard pattern used for input-output tables.
Such a table has been described as follows:

"A checkerboard type cross reference table that shows what happens to the output of each producing branch of the economy, and what each consuming sector of the economy consumes. The producing branches (such as agriculture, iron and steel, electric power) may be listed vertically down the left margin, and one can then read horizontally across to see how much of the total output of each branch goes to each consuming sector. Listed horizontally across the top, then, are the various consuming sectors which consist of the same producing branches (since in the process of production, goods must necessarily be consumed) plus such additional consuming sectors as households and government. Reading vertically down a column, one can find out how much each consuming sector uses of the output of each producing branch. If, for instance, one million tons were listed in the cell formed by the 'iron and steel' row (producing branch) and the 'coal' column (consuming sector) this would indicate that one million tons of the total output of iron and steel go to the coal industry, i.e., that in the process of producing the nation's coal output, this tonnage of iron and steel is 'consumed' by the coal industry."(32)
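In data-structure terms, the checkerboard is a two-way table keyed by producing branch and consuming sector. A minimal sketch (Python, with invented figures, say in millions of tons) of reading it both ways:

    # Rows: producing branches. Inner keys: consuming sectors.
    flows = {
        "iron and steel": {"coal": 1.0, "agriculture": 0.3, "households": 0.1},
        "coal":           {"iron and steel": 2.5, "agriculture": 0.4, "households": 1.2},
        "agriculture":    {"coal": 0.2, "iron and steel": 0.1, "households": 5.0},
    }

    # Reading across a row: where the output of one branch goes.
    print("iron and steel deliveries:", flows["iron and steel"])

    # Reading down a column: what one sector consumes from each branch.
    coal_inputs = {branch: row.get("coal", 0.0) for branch, row in flows.items()}
    print("consumed by the coal industry:", coal_inputs)

Reading a row answers "where does this branch's output go?", while reading a column answers "what does this sector consume?" -- the two directions the quoted description distinguishes.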
During the 30s, state control of the economy expanded dramatically with industrialization, forced collectivization and the first five year plans. One might have expected that the role of "material balances" would have expanded dramatically as well. But, while plans for economic allocation may have been made in material terms, no serious attention was given to keeping the economy in balance. While a certain amount of imbalance is no doubt inevitable in any period of rapid growth and completely new undertakings, what existed went far beyond that. Certain economic targets had priority and were to be achieved at all costs, while other economic sectors lagged far behind, and agriculture suffered repeated crises. Factories competed fiercely with each other for raw materials and other supplies, and there was a war of executive against executive. This was the anarchy of production on an immense scale, and the resulting pressures on society were reflected not just in the bloody Stalinist repression of all opposition among the working masses but even in a murderous political struggle inside the ruling class itself.

Later more attention was paid to the issue of economic imbalances. As we have seen earlier, from the early 40s on, there was a good deal of emphasis on reforming the price structure and equating prices and values. This was supposed to deal with imbalances. Eventually there was also a development of the mathematical techniques of planning, the utilization of computers, discussion of linear programming and "input-output" techniques, and so forth. This, too, was done in connection with reforming prices. For example, the mathematician Kantorovich, in his proposals for implementing the method of material balances via linear programming, aimed at finding proper prices. But, as we have seen, the logic of the method of material balances equates a product not to a single price or number, but to the list of all the inputs needed to make that product. As a result, there is no natural way to determine prices, and every economist and mathematician in the Soviet Union had a different approach. The innumerable price reforms never solved the imbalances in the Soviet economy, and each economist would use this to promote a new system of pricing as supposedly superior to the previous one.

Let's examine an often-denounced problem in the Soviet economy, namely, a factory fulfilling its planned output by producing goods of the wrong sort. As one account puts it: "Examples can be cited in very large numbers, drawing on material published not only in the USSR but also Hungary, Poland and Czechoslovakia, . . . A plan expressed in tons encourages the production of heavy commodities; in any choice involving weight, the 'weightier' variant is bound to be favoured, since this facilitates the fulfilment of the plan. . . . For instance, factories making prefabricated cement blocks prefer to make large blocks, which is the easiest means of fulfilling the plan in terms of tons, though, as it happens, the result is a shortage of small blocks . . . The humorous journal Krokodil once pictured, in a cartoon, a factory which fulfilled its entire month's output programme for nails by the manufacture of one gigantic nail, hanging from an overhead crane the whole length of the workshop."(33)

Problems like these are often taken to show that accounting in physical terms is a disaster. Here, however, the problem wasn't that the factory didn't receive the necessary supplies, nor that the planning bodies had forgotten that the economy needs a proper assortment of nails and cement blocks. The problem was that the enterprise -- to be more precise, the managers of the enterprise -- had their own economic interest, separate from and antagonistic to that of other enterprises and of the economy as a whole. The enterprise was quite capable of producing the needed nails, and the management could figure out what was needed, but it was in its economic interest to do something else. It didn't matter whether they could obtain accurate information about what type of nails were needed; they would produce those nails that served their own interest. Nor could the managers of other enterprises, who suffered from the lack of proper nails, apply too much pressure to ensure a proper supply of nails, because they were managers too, and it was in their class interest to maintain the state-capitalist system in which they, as well as the managers of the nail factories, could follow their own small-group interests. Meanwhile the workers were an exploited class, who had no say in this type of problem and no control over what the managers did.

There are many apologists of Soviet state-capitalism who insist that, as the Soviet executives did not own their enterprises in the Western sense, they did not compete among themselves and did not have separate interests. Yet the problems illustrated by the joke about the gigantic nail showed that the Soviet managers and executives did in fact have such separate interests: they enriched themselves and climbed in the bureaucracy by behavior that harmed the country as a whole as well as harming other managers and executives, who ended up with a real shortage of materials. No matter how the targets of the plan were expressed, Soviet managers would find some way to distort them, because they were just as much out for themselves as a Western executive and his firm are out for themselves. Suppose, to avoid the problem of one big nail, the Soviet ministries demanded that the plan for nails be fulfilled, not by weight, but by the number of nails. What would happen?
"If a plan for nails were expressed in quantity (e.g. thousands of nails) they would tend to be small, . . . Highly original plan-measurement criteria were devised in some industries; for example, central-heating boilers were assessed for this purpose in terms of the area of heating surface (of the boiler); consequently, when a new model was devised which heated more efficiently with a smaller heating surface, no one would touch it, as it would worsen their success indicators."(34) . The problem wasn't that output was planned in physical terms, rather than financial terms. Indeed, when the output targets were expressed in financial terms, that didn't help at all. "Output targets in roubles evade such difficulties as these, but at the cost of creating others, In nearly all instances, the money measure is applied to gross value of output. . . . This encourages a number of distortions. In the first place, an advantage is derived from using dear materials. . . . It may appear simple to overcome some of weaknesses, for instance by measuring plan-fulfilment in terms of goods actually completed and sold . . , or to measure value-added only. Both these methods have been discussed and sometimes tried. However, they have defects of their own. . . . As for value-added, this method is now applied in the Soviet clothing industry, and it has indeed increased the range of cheaper garments produced. But it has the opposite weakness of encouraging any activity which adds to the value of work done within the enterprise, while useful forms of sub-contracting may well be avoided lest the value of work done with that enterprise should be lowered."(35) . Thus the problem of factories producing the wrong assortment of products, or using wasteful production methods, stemmed from the class structure of Soviet society, from the fact that it was a state-capitalist society with an exploiting ruling class, and not from some supposed inability of an economy to be planned in physical terms. No plan could specify every last detail of production, nor would it be desirable for such a thing to occur: it would squash the initiative of enterprises at the base, and their ability to innovate. If a society is such that local initiative always conflicts with social objectives, then this will undermine any system of planning. . But if the Soviet-bloc version of "material balances" wasn't responsible for all the problems of the state-capitalist economies, it wasn't socialist planning either. In the Soviet bloc the method of material balances always went in parallel with financial calculation. As well, the objectives of the planning were set by the Soviet bourgeoisie. Indeed, the methods by which the economy was planned, and the plans implemented, were based on the passivity of the working class and the power over them of a new class of managers, executives and bureaucrats. What Soviet "material balances" has in common with the future economic calculation of a classless society, is that they both have to keep track of society's production in material terms.(36) And to do so, both the Soviet version of "material balances" and future communist planning required and will require, the use of not one, but many separate natural units. The experience of the method of "material balances" verifies that there is no single natural unit of economic planning. (to be continued) (1) However, the article does not argue that, simply because Marx and Engels said these things about value, labor-time, and economic calculation, therefore they are true. 
All views, including those of Marx and Engels, have to be subjected to criticism and examination. They wrote many profound things, which still serve as the basis of a materialist worldview, and of any scientific conception of socialism, but not everything they said was correct. Hence, the impossibility of finding a natural unit of economic calculation follows not from a textual analysis of different writings, but from an economic consideration of what a classless economy would actually be like, verified by a study of the economic experience of the last century. Marx and Engels's writings are important insofar as they aid such a theoretical and economic investigation. In this case, it appears to me that Marx and Engels turn out to be, as in so many other cases, correct. Moreover, no matter whether readers of this article agree with my presentation of what Marx and Engels meant, I think they will find that examining the disputed extracts concerning labor-hours and the future society will help them realize some of the issues at stake. Finally, as for me, I learned the point about the impossibility of there being a natural unit of economic planning by studying Marx and Engels's views about value and the future society, and so it is only fair that it be attributed to them, rather than presented as some new invention of mine. It appears to me to be a consequence of their view about the historical (rather than eternal) nature of currently important economic categories, such as value. (Return to text)

(2) There are other reasons as well to have a term such as "labor-content". As is pointed out in Engels's supplement to the third volume of Capital (Capital, Vol. III, Supplement, I, "Law of Value and Rate of Profit", pp. 899-900, Progress Publishers, Moscow), the prices of goods tend to average around the labor-content in an early period of commodity production, the period of "simple commodity-production". But as capitalism develops, and an equalization of the rate of profit among capitalists takes place, non-monopoly prices tend to oscillate around what Marx calls the "price of production", which deviates in a precise way from the labor-content. This deviation is based on how much the various spheres of production differ in what Marx calls "the organic composition of capital", which is, roughly speaking, a measure of how labor-intensive each sphere of production is. (More precisely, it is the ratio in the particular sphere of production of the amount of "variable capital", used in employing labor, to the amount of "constant capital", used for raw materials, machinery, etc.) Would one then say that exchange-value in this stage of capitalism has been modified, and is now numerically equal to the price of production, or that it stays the same as before, and remains numerically equal to the labor-content? If one says that the exchange-value has been modified, then one preserves the idea that actual prices tend to oscillate around the exchange value, but one now has a distinction between the exchange-value and the labor-content. If one says that the exchange-value remains numerically equal to the labor-content, then the prices will no longer oscillate around the exchange value, but instead will oscillate around, say, the "market-value", equal to the price of production, and this market-value will be a modification, according to a definite law, of the exchange-value. Different authors, even if they adhere to the law of value, will no doubt answer this terminological question differently.
By use of a term such as labor-content, they can at least indicate clearly what terminology they are using for value, exchange-value, etc. However, in this article I will ignore the modification in the law of value caused by the equalization of the rate of profit, except at the few places when it becomes relevant to the points under discussion. (Text)

(3) G.D.H. Cole, Socialist Thought: The Forerunners, 1789-1850, Ch. IX. Owen and Owenism--Earlier Phases, p. 95. Owen also advocated utopian plans for communistic communities such as New Harmony, where production took place on a communal basis and material goods were not commodities. In New Harmony, at first labor accounts were used to apportion each member a certain amount of the commune's goods depending on how much labor had been performed. Thus "a credit was to be set against each name at the public store for the amount of useful work done; and against this credit a debit was entered for goods supplied. At the end of the year the balance would be placed to the credit of the member; but he was not at liberty to withdraw any part of it in cash, without the consent of the committee. He could, however, leave the Society at a week's notice, and withdraw his balance." Later, still according to Owen's ideas, outright communist distribution was used at New Harmony. "Each man was to give of his labour according to his ability and to receive food, clothing and shelter according to his needs." (Frank Podmore, Robert Owen: A Biography, pp. 293, 300-302) Marx is presumably referring to Owen's ideas about labor accounts for distribution inside a communal society, such as originally at New Harmony, when he says that ". . . Owen's 'labor-money'. . . is no more 'money' than a ticket for the theater. Owen presupposes directly associated labor, a form of production that is entirely inconsistent with the production of commodities. The certificate of labor is merely evidence of the part taken by the individual in the common labor, and of his right to a certain portion of the common produce destined for consumption." (Marx, Capital, vol. I, the first footnote in Chapter III, section 1) This is correct concerning its role at New Harmony and concerning Owen's general communistic plans. By way of contrast, the labor notes used by the Owenite labor exchanges and co-operative stores actually served as money, and they circulated even among some non-Owenite businesspeople. They were a medium of economic exchange between people, stores, and businesspeople who had separate economic interests, just as ordinary money is. Indeed, Owen also proposed that the Bank of England switch to labor notes. Of course, the Labor Exchanges and Co-operative Stores were, for Owen, only a step towards future communistic communities, and he hoped that they would help raise the funds needed for the establishment of these communities. (Text)

(4) Gray, Lectures on Money, p. 169, as cited by Marx in A Contribution to the Critique of Political Economy, Ch. 2. Money or Simple Circulation, Sec. 1. The Measure of Value. Subsection B. Theories of the Standard of Money, p. 85, International Publishers, 1970. (Text)

(5) The Poverty of Philosophy, Preface to the First German Edition, pp. 13-14, and Ch. 1, Sec. 2, p. 66, Norman Bethune Institute edition, Canada. (Text)

(6) Theories of Surplus Value ("Volume IV" of Capital), Part (Volume) III, Chapter XXI "Opposition to the Economists (Based on the Ricardian Theory)", Sec. 2, p. 260, Progress Publishers, Moscow, 1971. (Text)

(7) Marx, Ibid.
Owen, as a utopian socialist, is something of an exception in that, while his influence spread the labor-money idea, he also advocated the abolition of capitalist relations and commodity production. (Text)

(8) George Woodcock, Anarchism: A History of Libertarian Ideas and Movements, Ch. 14. "Various Traditions: Anarchism in Latin America, Northern Europe, Britain, and the United States", p. 457. The idea of the exchange of products of equal labor, and the exchange of a product for a labor note (paper certificate) denoting a certain amount of time, might seem to be different plans, but they are closely related. In practice, to achieve the exchange of "labor for labor", it is convenient to use the intermediary of issuing labor notes. (Text)

(9) Proudhon went so far as to describe the entire practice of the trend of "mutualism", which he was known for, as various forms of equal exchange. He saw even mutual aid in this light. Thus he wrote that mutualism was "service for service, product for product, loan for loan, insurance for insurance, credit for credit, security for security, guarantee for guarantee. It is the ancient law of retaliation, an eye for an eye, a tooth for a tooth, a life for a life, as it were turned upside down and transferred from criminal law and the vile practices of the vendetta to economic law, to the tasks of labor and to the good offices of free fraternity. On it depend all the mutualist institutions: mutual insurance, mutual credit, mutual aid, mutual education, reciprocal guarantees of openings, exchanges and labor for good quality and fairly priced goods, etc." (Proudhon, On the Political Capacity of the Working Classes, pp. 124-6, as cited in Selected Writings of Pierre-Joseph Proudhon: Edited with an introduction by Stewart Edwards, Translated by Elizabeth Fraser, pp. 59-60, emphasis as in the original.)

Also strong in Proudhon's thought was the emphasis on contractual relations between people and among groups. It was equal exchange and "rule by contract" that was to replace not just any form of government, but any form of the people deciding their common affairs directly. He supported "the notion of commutative justice, established by the primitive fact of exchange" and said that "Translate the legal terms contract and commutative justice into the language of affairs, and you have COMMERCE" and "Instead of laws we would have contracts. No laws would be passed, either by majority vote or unanimously." (Proudhon, The General Idea of Revolution in the 19th Century, pp. 187-9, 302-3, as cited in Selected Writings, pp. 96, 99, emphasis as in the original.) (Text)

(10) System of Economic Contradictions: or, the Philosophy of Misery, translated by Benj. R. Tucker, Ch. II, Sec. 2, pp. 107-8, emphasis as in the original. Proudhon's defense of equal exchange, contractual relations, the necessity of economic competition, and so forth fits in with his praise of the free-market prophet Adam Smith. He regarded Smith as the economist who gave labor its due, unlike later economists, who supposedly did nothing but justify capital. Proudhon wrote that "This force, which Adam Smith has glorified so eloquently, and which his successors have misconceived (making privilege its equal),--this force is LABOR." (System of Economic Contradictions, p. 95) He also believed that Smith had felt instinctively that equal exchange was the revolutionary solution to the social problem.
Proudhon wrote, with respect to his own theory of "constituted value", that this had been "dimly seen by Adam Smith . . . But this idea of value was wholly intuitive with Adam Smith, and society does not change its habits upon the strength of intuitions; . . ." (Ibid., p. 106) Thus, in this respect, Proudhon felt that he was marching further on a road embarked upon by Adam Smith. (Text)

(11) Proudhon, On the Political Capacity of the Working Classes, p. 114, as cited in Selected Writings, p. 63. (Text)

(12) Woodcock, Ibid., p. 459. (Text)

(13) Proudhon, The Solution of the Social Problem, pp. 259-80, and The Theory of Property, p. 37, cited in Selected Writings of Pierre-Joseph Proudhon, pp. 140, 76. (Text)

(14) Marx, The Poverty of Philosophy, ch.1, sec. 2, p. 49. (Text)

(15) I could not find Duhring's own book, so the description of his future society that I give is based entirely on Engels's book Herr Eugen Duhring's Revolution in Science (Anti-Duhring). See Part III, Chapters III and IV. The quotes from Duhring that I use in this paragraph are from Chapter IV; the quote from Engels in the next paragraph is from Ch. III, the quotes from Duhring are from Ch. III and IV. (Text)

(16) Perhaps it might appear impossible to ban competition between communes and yet have them exchanging goods, and for the economic communes to possess the local resources, which were, nevertheless, owned by the all-embracing trade communes. But consider the old Soviet state-capitalist system.

Prior to Gorbachov's perestroika, most enterprises were told by the ministries involved who to sell their goods to, and who to buy them from. There was thus no direct competition among the enterprises in this buying and selling. But under the khozraschet or self-financing system, each enterprise was a separate legal and financial entity, and was supposed to strive to make a profit. But since they were part of the state sector, they were, in some overall sense, owned by the state.

Similarly, in Duhring's system, each economic commune was also an independent economic entity, with its own right to administer, control and profit from its own territory and production facilities. But since all the productive facilities were ultimately owned by the trade communes, the trade communes had the right to regulate the relations between the economic communes. The economic communes could not directly buy and sell goods to other economic communes. They were supposed to sell their goods to the trading communes, which would presumably pool the goods from the various economic communes, and buy the supplies they needed from the trading communes. This buying and selling would be according to the rules of equal exchange. It would amount to indirect exchange with other economic communes, but regulated by the trading authorities, similar to Soviet enterprises having their buying and selling directed by the ministries. Since the trading communes would regulate the pool of goods from the various economic communes, there would be no direct competition between economic communes. It was, however, up to each economic commune whether it flourished under this trade or shriveled up, just as under khozraschet it was up to each Soviet enterprise to make a profit.

Now in fact, in the Soviet system, competition developed in many ways.
The private interests of the executives running the various enterprises, and of those in the ministries, resulted in a fierce under-the-table competition among them, in enterprises competing in gray markets for the means of production, in competition among the ministries, and so forth. Moreover, this competition was not minor or secondary, but one of the most important features of the Soviet economy. Without recognizing the existence of this competition, it is impossible to understand many of the key problems afflicting the Soviet economy. (See my article "The anarchy of production beneath the veneer of Soviet revisionist planning" in Communist Voice, vol. 3, #1, March 1, 1997.) Similarly, with respect to Duhring's system, Engels predicted that, although competition between the economic communes was formally eliminated by the role of the trading communes, various features characteristic of competition would appear. Thus, the experience of the Soviet economy provides a verification, in a different situation, of the basic idea behind Engels's claim. (Text)

(17) See the end of Part III, Chapter IV of Anti-Duhring. (Text)

(18) Kautsky, originally a supporter of the Marxist trend in the Second International, gradually lost his revolutionary fervor and, during and after World War I, betrayed revolution and emerged as an opponent of communism. But meanwhile, he had written some works of which Lenin wrote "that such works of his will remain a permanent possession of the proletariat in spite of his subsequent apostasy." (The Proletarian Revolution and the Renegade Kautsky, Ch. "The Constituent Assembly and the Soviet Republic", p. 54, Chinese pamphlet edition, emphasis as in the original) (Text)

(19) Engels to Kautsky, 20 Sept. 1884, as cited in Charles Bettelheim's Economic Calculation and Forms of Property, translated by John Taylor, Monthly Review Press, 1975, p. 30. (Text)

(20) See Karl Kautsky, The Social Revolution, Translated by A. M. and May Wood Simons, which includes two essays, "Reform and Revolution" and "The Day After the Revolution", both of which were originally lectures. The quotes given in the text are from the sections entitled "The Expropriation of the Expropriators", "The incentive of the Laborer to Labor", and "The Organization of the productive process". Further material on the mixed nature of the system outlined by Kautsky is contained in the section "The Remnants of Private Property in the Means of Production". Unfortunately, the Simons's translation seems fairly crude. This is a problem when seeking to understand passages that depend on subtle distinctions in Kautsky's wording.

The popularity of Kautsky's The Social Revolution is remarked on in Gary P. Steenson's "Not One Man! Not One Penny!": German Social Democracy, 1863-1914, which comments that "Two books came out of Kautsky's part in the revisionism debate: the very polemical Bernstein and the Social-Democratic Program: An Anti-Critique (1899) and The Social Revolution (1902). The latter was his most comprehensive discussion up to that time of the path from capitalism to socialism . . . The Social Revolution was one of his most successful books, selling thousands of copies and going through multiple printings very quickly." (pp. 204-5) (Text)

(21) Kautsky wrote that "The value of each product is determined not by the labor time actually applied to it but by the socially necessary time for its production."
This appears to be the usual Marxist definition of value, which contrasts the labor actually applied to a product, which may be especially clumsy or especially efficient, with labor of the ordinary skill and intensity. But Kautsky meant not only this, but also something further, thus giving a different interpretation to the term "socially-necessary labor" than the usual Marxist definition. Kautsky regarded the socially-necessary time as the amount of time which it would take (with labor of the ordinary skill and intensity) to produce the precise amount of the product which it would take to satisfy the market, no more and no less.

He gave an example with regard to the production of trousers and suspenders. Suppose, he said, that society needed trousers that would take 10,000 labor days to produce, and suspenders that would require 1,000 labor days. In his view, if only 80% of the required trousers were produced, taking 8,000 labor days, their value would still be the 10,000 labor days that would be socially necessary to fulfill the demand for trousers. Hence, Kautsky said, the value of individual trousers will rise 25% higher than otherwise, and, he says, the price will rise correspondingly. Similarly, if triple the necessary number of suspenders are produced, taking 3,000 labor days, their value would be only the 1,000 labor days needed to produce the amount of suspenders that would satisfy the market. Hence, Kautsky says, the value of "individual suspenders" will be one-third of what they otherwise would be, and their price would be correspondingly reduced. Thus, instead of under- or over-supply meaning that the price deviates from the value, it would mean that both the price and the value would go up and down according to the vagaries of supply and demand; moreover, they would go up or down in exactly the same proportion, so that price and value remained equal.

By way of contrast, Marx held that, in the case that the quantity of the mass of individual commodities of the same type didn't equal the market demand for them, the price will deviate from the value. He wrote that: "Should their quantity be smaller or greater, however, than the demand for them, there will be deviations of the market-price from the market-value." (Capital, Vol. III, Ch. X, p. 185). Also, "the oscillations of market prices, rising now over, sinking now under the value or natural price, depend upon the fluctuations of supply and demand." (Wages, Price and Profit, Ch. VI, p. 40, Foreign Languages Press, Peking, 1970).

However, Marx did hold that the socially-necessary labor of the total quantity of a mass of commodities of the same type might differ from the sum of the socially-necessary labor contained in each separate commodity. He wrote: "Lastly, suppose that every piece of linen in the market contains no more labor-time than is socially necessary. In spite of this, all these pieces taken as a whole, may have superfluous labor spent upon them" if there is overproduction of linen. (See Capital, vol. 1, Part 1, Chapter III, Section 2a, p. 120, Kerr edition. I doubt, however, that he held that the socially-necessary labor contained in the linen would be greater than the labor spent in producing it in the case of underproduction.) Thus, in Marx's exposition, in the case of overproduction, the value of the total quantity of linen is presumably reduced, similar to what Kautsky said. But this only applied to the total quantity of linen, not to each individual item.
So, in the case of overproduction, the value of the total quantity of linen was less than the sum of the individual values of each item of linen, while in Kautsky's exposition, the value of each individual item of linen was proportionally reduced. (Text)

(22) See "Preobrazhensky: theorist of state-capitalism" parts one and two, in Communist Voice, vol. 4, #2 (April 20, 1998) and Vol. 4, #3 (August 1, 1998). (Text)

(23) Alec Nove, Socialism, Economics and Development, Part Two, Section 4, pp. 53-59. Nove's account of this meeting is based on the 1928 book Denezhnaya politika sovetskoi vlasti by the Soviet economist L. Yurovsky, who was an advocate of the use of money. Nove himself, insofar as he thinks there is any possibility of socialism, is a market socialist. So unfortunately, my knowledge of this meeting is filtered at third hand through the medium of two market socialists (Nove and Yurovsky), who have their own axes to grind about the plans of the participants. (As for Yurovsky, two years after writing his book, he suffered possibly murderous repression from the then-developing Stalinist regime. He was arrested and vanished.) (Text)

(24) Nicolai Bukharin, Economics of the Transformation Period, with Lenin's Critical Remarks, Bergman Publishers, pp. 52, 100, italics as in the original. (Text)

(25) This would also imply that even the monopolies and state institutions of modern capitalism itself might not be subject to the law of value, and sure enough, Preobrazhensky drew the conclusion that the law of value was "partially abolished" in monopoly capitalism (The New Economics, p. 140). Indeed, with respect to Germany in World War I, he wrote that the development of state capitalism and war planning meant that "Production which formally remained commodity production was transformed de facto into planned production in the most important branches. Free competition was abolished, and the working of the law of value in many respects was almost completely replaced by the planning principle of state capitalism." (Ibid., p. 153). (Text)

(26) Anonymous, "Some Problems of the Teaching of Political Economy", Pod Znamenem Marksizma (Under the Banner of Marxism), #7-8, July-August 1943, as translated by Emily G. and Vladimir D. Kazakevich in an International Publishers pamphlet of 1944, pp. 31-21. The parenthetical remark is as in the original. (Text)

(27) The Soviet Economy: A Collection of Western and Soviet Views, edited by Harry G. Shaffer, pp. 402-3, 404-5, 406-7, 408, 413-4, and Benjamin Ward, "Kantorovich on Economic Calculation", in Readings on the Soviet Economy, edited by Franklyn D. Holzman. Kantorovich had promoted the use of linear programming for economic planning as early as his 1939 book Mathematical Methods of Organizing and Planning Production, which, however, had been ignored. (Text)

(28) Charles Bettelheim, Economic Calculation and Forms of Property, Monthly Review Press, pp. 6, 12. This is a 1975 translation of his 1970 Calcul economique et formes de propriete. Perhaps he didn't think it was easy to calculate the labor-content of a product, but more likely he was influenced by the various proposed modifications made to the labor content by Soviet economists in their proposals for pricing. He didn't like that these economists thought in terms of better prices, but he apparently thought that they had good reason to believe that the labor-content had to be modified to serve as an accurate guide to planning. (Text)

(29) See Herbert S.
Levine, "The Centralized Planning of Supply in Soviet Industry", p. 163, in Holzman, Readings on the Soviet Economy. Strictly speaking, the total amount of coal used in a year, plus the reserves at the end of the year, plus the amount exported would have to equal the total amount produced in a year, plus the reserves at the start of the year, plus the amount imported. Wastage and theft aside, these amounts should balance.

It is of course possible that the original planned balance doesn't correspond to what happens. If, for example, insufficient coal is produced, then a factory that was supposed to be so busy that it would require a large amount of coal might actually get a small amount of coal. Cities might go cold or factories might shut down. But even in this case, there will be a balance between the actual amount of coal supplied and the amount actually used for various purposes. (Text)

(30) Levine, Ibid., in a study based on discussions with Soviet economists in 1959. Gosplan sought to avoid having to make "second-round" corrections in material balances. That is, it would accept that the change in the material balances for one good would affect the material balance for other goods (the first-round corrections), but it sought to avoid these first-round changes in the material balances having a further second-round effect on other material balances, which would then have third-round effects, and so on. So if a first-round change apparently required, say, increasing the supply of some essential material, Gosplan sought instead to adjust the material balance for this material by reducing the size of reserve stocks, by increasing imports, and by demanding that various enterprises, although they used this material as an input, fulfill their output quotas despite a shortage of this input. Thus Gosplan avoided demanding that the production of this material be increased, as an increase in such production would require additional inputs, and thus require adjustments of yet more material balances.

Now, there may be nothing wrong with, say, calling on reserve stocks, if such stocks are sufficient. But these stocks often weren't sufficient, especially as the plans were often intentionally "taut" plans, that is, plans without any reserve for error. And so it seems that Gosplan frequently resorted to demanding that enterprises try to fulfill their production quota with supplies that, according to the plan, weren't sufficient.

Of course, Gosplan may have felt justified in demanding the apparently impossible because it felt that enterprises were hiding stocks of vital materials. Demanding the impossible was lauded as the practice of vyyavlat' reservy (causing reserves to appear). These fantasy figures, the false reports from enterprise to ministry, and the resulting demands for the impossible from the ministry, are signs of the anarchy of production that existed under the surface of planning in the Soviet Union. (See "The anarchy of production beneath the veneer of Soviet revisionist planning" in the March 1, 1997 issue of Communist Voice. For vyyavlat' reservy in particular, see page 18, col. 1, which cites a passage from Nove, The Soviet Economic System, Ch. 4, Industrial Management and Microeconomic Problems, p. 97.) (Text)

(31) In mathematical terminology, one might say that the method of material balances regards the inputs not as a scalar quantity ("single number"), but as a vector ("list of numbers") in a high-dimensional space. (Text)
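The balance identity in note 29 and the vector point in note 31 can both be shown in a few lines of Python. The commodity names and figures below are invented; the point is only that each balance is kept in its own natural unit, so an economy's costs form a list of numbers rather than a single total.

    # A material balance for one product, in its own natural unit.
    # Sources must equal uses: production + imports + opening stock
    # = consumption + exports + closing stock. Figures are invented.
    def balanced(production, imports, opening, consumption, exports, closing):
        return production + imports + opening == consumption + exports + closing

    # Coal, in tons:
    print(balanced(500_000_000, 5_000_000, 20_000_000,
                   480_000_000, 15_000_000, 30_000_000))  # True

    # The "cost" of a product under this method is a vector of inputs,
    # each in its own unit -- not a scalar (note 31):
    cost_of_steel = {"coal (tons)": 0.8, "iron ore (tons)": 1.6,
                     "electricity (kWh)": 450, "labor (hours)": 6}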
(32) The Soviet Economy, edited by Harry G. Shaffer, Glossary, p. 450. (Text)

(33) Alec Nove, The Soviet Economy: An Introduction, Third printing, 1963, pp. 157-8. (Text)

(34) Ibid., pp. 159-160. (Text)

(35) Ibid., pp. 158-9. (Text)

(36) I talk of "the Soviet-bloc version of 'material balances'" to avoid reducing the question to an argument over words. The problem isn't the term "material balances", but the actual practices in state-capitalist society.

Perhaps someone will say that the term "material balances" can refer to any method of economic accounting that keeps track of the physical ("material") amount of goods rather than using money accounts or accounts in some other single unit such as the labor-content. In that sense, material balances will cover a wide range of different planning systems, including communist planning.

But "material balances" is probably used more often to refer to Soviet-bloc methods, including their mathematical techniques for input-output analysis. These methods are by no means inevitable if there is to be planning in physical terms. In the text, I have so far referred mainly to the elitist and bureaucratic way in which they were applied; the Soviet-bloc methods were adapted to planning by the state-capitalist bourgeoisie, with the masses being passive. However, the class relationships underlying how the Soviet-bloc bourgeoisie looked at economic problems affected even their technical calculations and mathematical models for the economy. It has been mentioned that the Soviet use of material balances always served as an adjunct to financial calculations, or even as a way of setting prices. As well, the computer calculations for economic planning via linear programming and input-output analysis, which were at one time expected to revolutionize Soviet planning, and which are also used in a different form in the West, actually involved many assumptions that are only partially true. For example, I have pointed out that the logic of the method of material balances is that it measures the economic cost of producing something as not a single number but a list of numbers, indicating the amount of each input needed to produce something. The method of linear programming eventually applied in the Soviet Union and in the West to handle planning problems involving such lists of numbers implicitly assumed that such economic relationships are linear: in order to produce twice as much, you need twice as much raw materials, twice as much labor, etc. (a numerical sketch of this assumption follows these notes). But this is a simplification. Sometimes when twice as much is produced, there are economies of scale, so less than twice the inputs are needed. Sometimes producing twice as much would be beyond the capacity of existing factories, so that it would require building more factories. In that case, it would take much more than twice the inputs to produce twice as much. And what about taking account of the state of morale of the workforce, which can change the amount of inputs needed even if the amount of production needed remains constant? A linear relationship between input and output is only an approximation, reasonably accurate within a certain range of production. This doesn't mean that linear programming has no use: approximations can be of great value in economic planning, provided one realizes the limits within which such approximations are valid.
But it does show that the linear and mechanical picture of the economy, which was the most sophisticated end product of the Soviet-bloc version of material balances, is simply one attempt at planning in physical terms, and not the only possibility. (Text)

Last modified: July 28, 2010.
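The linearity assumption criticized in note 36 is the defining assumption of the standard Leontief input-output model, in which producing one unit of each good consumes fixed amounts of every other good. A minimal sketch of that standard calculation follows; the two-good economy and its coefficients are invented for illustration, not a reconstruction of any actual Soviet computation.

    # Leontief input-output model: x = Ax + d, so (I - A) x = d.
    # A[i][j] = amount of good i consumed to produce one unit of good j.
    import numpy as np

    A = np.array([[0.2, 0.3],     # steel used per unit of steel, coal
                  [0.4, 0.1]])    # coal used per unit of steel, coal
    d = np.array([100.0, 200.0])  # final demand for steel, coal

    x = np.linalg.solve(np.eye(2) - A, d)
    print(x)  # gross outputs needed; assumes strict linearity,
              # ignoring economies of scale, capacity limits, morale, etc.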
<urn:uuid:1c2967a6-ae6a-423d-912e-aa46e3b60e78>
CC-MAIN-2016-26
http://www.communistvoice.org/25cLaborHour.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403502.46/warc/CC-MAIN-20160624155003-00183-ip-10-164-35-72.ec2.internal.warc.gz
en
0.965635
25,656
2.59375
3
Mining in Minnesota

What's mined in Minnesota? Map of minerals mined in Minnesota. Are any minerals mined in the county where you live? Which ones?

Minnesota is the largest producer of iron ore and taconite in the United States. Even though nearly all of the high grade natural iron ore in Minnesota has already been mined, advances in technology have found a use for lower grade iron ore, called taconite. The taconite is crushed, processed into hard, marble-sized pellets, and shipped to steel mills. The taconite pellets are melted in blast furnaces and then blown with oxygen to make steel. Minnesota currently has seven operating taconite plants which make the pellets. About 44 million tons of taconite pellets were shipped from the state in 1996. That's enough to fill over 500,000 railroad cars! In the past, iron ore was mined on three iron ranges - the Cuyuna, Mesabi and Vermilion - and also in Fillmore County in southeastern Minnesota. Today, only the Mesabi Range still has iron ore/taconite mining taking place.

Clay is mined in the Minnesota River Valley. Clay is used in making bricks, porcelain, tiles, and medicines. Companies are currently exploring Minnesota for higher grade kaolin (KAY-a-lin) clay, which is a fine, white clay used to add a glossy look to paper. Today, Georgia is the largest producer of kaolin clay in the United States.

About 11,000 years ago, glaciers covered Minnesota. These glaciers left behind large amounts of sand and gravel. There are sand and gravel mining operations in nearly every county in Minnesota. You may not think of sand and gravel as a valuable resource, but without it concrete could not be made. Highways, roads, bridges and many buildings are made of concrete. Sand is also used along with salt to melt ice on roads and to provide better traction in the snow.

Silica sand is a very fine sand composed of quartz (a white to colorless mineral) and is mined in the southeastern part of Minnesota. It is used to make glass, as a source of silicon, and is used in oil drilling to improve the flow of oil to oil wells.

Granite and limestone are used in the construction of homes, buildings, roads and tombstones. These rocks are often mined in large blocks from a quarry. When granite or limestone is mined this way, it is called dimension stone. Look at the buildings in your town. Are any made with limestone or granite?

Peat is formed by partially decomposing plant material in wet environments, such as bogs or fens, where more plant material is produced than is decomposed. If peat is a plant, how can it be a mineral? Peat is the beginning of the fossilization of the plants. Fossil fuels, such as coal, began as plant material too. Peat is used mainly in the gardening industry, but it is also used for compost, turkey litter, absorbing oil, and fuel. Next time you are in the gardening store, look for peat.

These are the only minerals currently mined in Minnesota. Manganese, copper, nickel, and titanium have also been discovered in the state in minable quantities, but are not of high enough quality under today's prices to mine profitably. Exploration for additional resources, such as gold, platinum, diamonds, zinc, and lead, continues today in Minnesota.
<urn:uuid:4cc8e9a8-9751-4508-a3a7-2a0487fb60c3>
CC-MAIN-2016-26
http://www.dnr.state.mn.us/education/geology/digging/mining.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397696.49/warc/CC-MAIN-20160624154957-00192-ip-10-164-35-72.ec2.internal.warc.gz
en
0.968336
710
3
3
'A voice from heaven we have heard'

In early America, sounds were the product of spiritual forces

Long before Howard Dean howled in Iowa, Quakers in East Jersey were "tainted with the Ranting Spirit." They "howled" their members into a dissident clique of flailing shouters and whooping wailers, whose religious power was located not in any particular doctrine but in the hard, almost maniacal edge of their voices. Among their buttoned-up neighbors, the Puritans, these folks were considered possessed in 1675.

But what's interesting, observes Richard Rath in this fascinating study, "How Early America Sounded," is that all sounds in those days indicated possession. Just as the noises we make when talking are considered the articulations of intelligence, so the sounds of thunder or church bells were understood by early Americans to be the products of spiritual, not mechanical, forces. They were active, not passive emanations. "Sounds did things in the world," Rath writes. "They moved people about, struck them, and in the case of thunder, actually killed."

Yes, according to early Americans, thunder struck, not lightning, a conclusion Rath attributes to their heightened social sensitivity to sound. Until the 18th century, he writes, thunder was the terrible noise of God. When Increase Mather described the death by "thunderbolt" of an unfortunate Captain Davenport, his judgment was echoed by a Quaker, who celebrated Davenport's demise as the "sounding of God's Voice from heaven."

Rath connects the myriad ways in which sounds exerted social influence. He writes of how church bells were sometimes baptized. Even when the practice fell from favor, bells remained the focus of early communities, the sound of their peals marking a settlement's boundaries. Bells were rung to call citizens to meetings or to warn of attacks. To live outside earshot was to live dangerously outside the control and protection of the government.

For native Americans, sound embodied identity. Thunder, for instance, was not the sound of God, it was God - or, to be more precise, gods. The Ojibwe heard different "thunderers" at work in the first thunder, the thunder that hits something, the thunder that echoes, and the approaching thunder.

For Africans in America, sound was a means of constructing identity. Rath points out that while the Continental Army's drummers were mostly white and untrained, the Hessians used outstanding African drummers in their more flamboyant military bands. The Africans mixed their own traditions with those of the German mercenaries, creating something compellingly new. The process was repeated with jigs and fiddling and was, presumably, not so different from what eventually produced jazz.

Unfortunately, Rath, a professor of history at the University of Hawaii, has loaded "How Early America Sounded" with theoretical infrastructure and clunky prose that sometimes borders on parody. Writing about those African drummers, for instance, he offers this thick explanation: "Without the encoded 'text' or 'recipes' that were stored and represented in jigs and fiddling, creolized enslaved Africans would have been less likely to fill spots as Hessian drummers when the opportunities arose to 'read those texts aloud' as displays of a present power." And in one of several, unnecessary personal asides, he describes a punk band he belonged to in the early 1980s as "an unstable referent."

That said, Rath is right to call attention to our often lazy ideas about what constitutes an oral culture.
It is not, by definition, the opposite of everything literate and therefore civilized, as scholars of native America have pointed out for years. In fact, Rath shows that this "ear-based way of life" existed even "in the most literate culture in the world at the time, that of the New England Puritans."

Rath also argues that the diversity of early American "soundways" may have delayed the formation of a single and distinctive American identity. Thoughts of revolution didn't stir until the old ways of thinking had been thoroughly disrupted - until, that is, a mass print culture had taken hold in the 18th century. This shift from ears to eyes provided the occasion for 1776.

Finally, and most intriguingly, Rath says we may be living during just such a time again, as the printed page transfers some of its authority to a more fluid and ephemeral cyberspace. It's too early to tell what will come of it, of course. Perhaps in years to come, we'll be treated to another study: "How Later America Clicked."

• Brendan Wolfe is a writer from Davenport, Iowa.
<urn:uuid:7afb5037-4959-4dc7-871f-a465596dcf9c>
CC-MAIN-2016-26
http://www.csmonitor.com/2004/0330/p14s03-bogn.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00101-ip-10-164-35-72.ec2.internal.warc.gz
en
0.974717
969
3.234375
3
Back to the Data Model

A Database is to be designed to record Student Assignments.

This is the Start of the User specification of the Requirements :-

Students - This table holds information about the students attending a college. Each student is given a unique identifier.
Staff - Holds information about the staff, contains a unique identifier for each staff member.
Assignments - Holds information about assignments, has unique identifier for each assignment.
Courses - Holds information about courses, has unique identifier for each course.
Progress - This holds information about each student's progress of each assignment. This is also used by the staff to mark assignments, and used by the college intranet to display to students which assignments are currently unfinished.

Many students can study on many courses. Many staff can teach on many courses. Each course contains many assignments.

So from the database we need to:
display new assignments for students (if any) that staff submit
find out which course the student is studying, so we can direct the student to appropriate information
display student names for a staff member for individual courses that they study on, as well as all courses
and also some other minor queries.

This is the End of the User specification of the Requirements.

A. The Things of Interest include :-
A.2 Courses Offered
A.3 Courses Scheduled
A.5 Progress on Assignments.
A.6 Student Registrations

B. How are they related ?
B.1 Students on a Course can be given zero, one or many Assignments.
B.2 An Assignment is associated with one, and only one, Scheduled Course.
B.3 A Scheduled Course has a defined start and end date.
B.4 Each Assignment has a start and end date.
B.5 During an Assignment, Progress can be reported at intervals.

4th. June 2003
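One way to realize this specification is sketched below in Python with SQLite. The table and column names are my own illustrative choices (the spec does not fix them), junction tables are added for the many-to-many relationships, and "Courses Offered" and "Courses Scheduled" are collapsed into a single courses table for brevity.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE students (student_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE staff    (staff_id   INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE courses  (course_id  INTEGER PRIMARY KEY, title TEXT,
                           start_date TEXT, end_date TEXT);  -- B.3
    -- Many-to-many links: students study on, and staff teach on, courses.
    CREATE TABLE student_registrations (
        student_id INTEGER REFERENCES students,
        course_id  INTEGER REFERENCES courses,
        PRIMARY KEY (student_id, course_id));
    CREATE TABLE staff_courses (
        staff_id  INTEGER REFERENCES staff,
        course_id INTEGER REFERENCES courses,
        PRIMARY KEY (staff_id, course_id));
    -- Each assignment belongs to exactly one scheduled course (B.2, B.4).
    CREATE TABLE assignments (
        assignment_id INTEGER PRIMARY KEY,
        course_id     INTEGER REFERENCES courses,
        start_date TEXT, end_date TEXT);
    -- Progress reported at intervals during an assignment (B.5); also
    -- used by staff to record marks.
    CREATE TABLE progress (
        student_id    INTEGER REFERENCES students,
        assignment_id INTEGER REFERENCES assignments,
        reported_on   TEXT, status TEXT, mark TEXT,
        PRIMARY KEY (student_id, assignment_id, reported_on));
    """)

    # One of the queries the spec asks for: student names for a staff
    # member on one course (the staff_id value here is illustrative).
    rows = conn.execute("""
        SELECT s.name FROM students s
        JOIN student_registrations r ON r.student_id = s.student_id
        JOIN staff_courses t ON t.course_id = r.course_id
        WHERE t.staff_id = ?""", (1,)).fetchall()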
<urn:uuid:de53fcfa-f3ea-41c7-a87f-b8f572d94a43>
CC-MAIN-2016-26
http://www.databaseanswers.org/data_models/student_assignments/facts.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00098-ip-10-164-35-72.ec2.internal.warc.gz
en
0.913725
408
2.890625
3
Genocide, Assimilation or Incorporation?

This is a video of a talk given by Professor Bonita Lawrence (Mi'kmaw), who is a friend and colleague of mine and is an Associate Professor at York University, where she teaches Native studies and anti-racism. From the link:

On October 25, 2008, Dr. Lawrence took part in the 7th Annual New College Conference on Racism & National Consciousness, where she spoke for one full hour on "Genocide, Assimilation, or Incorporation: Indigenous Identity and Modes of Resistance." In her talk, Dr. Lawrence explores "aboriginal policy", the historical framework through which Canada has sought to erase the identity of Indigenous people, by systematically breaking down their cultures, belief systems, community and family structures, and their governments-in many cases, at the barrel of a gun. She hones in on three specific occurrences in the 19th century for causing the most damage. First was Canada's refusal to deal with Indigenous confederacies. Instead, the government singled out individual villages, around 620 altogether-which allowed the government to politically, socially and economically segregate everyone across the land. Then came the banishment of Ceremony and the removal of strong Leaders, all of whom were over time replaced with "Christian converts" and individuals willing to represent Canada's short- and long-term interests. Finally, there was the political, social, and cultural disempowerment of Indigenous Women-and the forced assimilation of children-which allowed for the erosion and replacement of the fabric of indigenous identity.

Link to video here. It's a full hour.
<urn:uuid:f1a8d77d-4a6a-47f3-9e4b-f1fd2c143590>
CC-MAIN-2016-26
http://rabble.ca/babble/aboriginal-issues-and-culture/genocide-assimilation-or-incorporation
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00082-ip-10-164-35-72.ec2.internal.warc.gz
en
0.955556
343
2.546875
3
Unlike most health departments, the Boston Public Health Commission oversees and operates the city’s Emergency Medical Services. The unique configuration meant that on April 15, when two homemade bombs exploded near the finish line of the Boston Marathon, public health was on the scene and ready to respond. Hours earlier, the commission activated its Office of Public Health Preparedness and Medical Reserve Corps in accordance with its regular responsibilities at the marathon, which this year attracted about 27,000 registrants. The day of the bombings, nearly 200 Boston health department personnel were already on site, overseeing medical activities and treating runners with injuries and health problems inside medical tents set up along the marathon route. Even before the bombings, which killed three people and injured more than 260, health personnel coordinated transportation for about 70 marathon-related illnesses and injuries, said APHA member Barbara Ferrer, PhD, MPH, MEd, Boston’s health commissioner and executive director of the Boston Public Health Commission. To continue reading this story, published in the July 2013 issue of The Nation’s Health, visit the newspaper online.
<urn:uuid:e67da4c0-a49e-4bdb-bec1-a4d580b54e0e>
CC-MAIN-2016-26
http://www.publichealthnewswire.org/?p=7753
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393997.50/warc/CC-MAIN-20160624154953-00088-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962805
220
2.53125
3
Astronomers use Hubble to measure the Expansion Rate of the Universe

An open Universe expands forever because it does not contain enough mass, and so does not have enough gravity to slow down the expansion of space. A closed universe contains enough mass to halt the expansion, and eventually collapses. A universe with a 'critical density' of matter in space is exactly balanced between these two alternatives, and expands at an ever-slowing rate.

Credit:

About the Image
Release date: 9 May 1996, 19:00
Size: 500 x 639 px
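For reference, the 'critical density' mentioned above has a standard textbook expression in terms of the Hubble constant; this formula is background material and is not part of the original caption:

    \rho_{\mathrm{crit}} = \frac{3 H_0^{2}}{8 \pi G}

Here H_0 is the present-day expansion rate that Hubble helps measure, and G is Newton's gravitational constant. A universe denser than this eventually collapses (closed), one less dense expands forever (open), and one at exactly the critical density expands at an ever-slowing rate.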
<urn:uuid:f990ffda-e2e0-44e8-a87a-fe6551381353>
CC-MAIN-2016-26
http://www.spacetelescope.org/images/opo9621c/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00082-ip-10-164-35-72.ec2.internal.warc.gz
en
0.887737
121
3.5625
4
Five months later.—It is not a kangaroo. No, for it supports itself by holding to her finger, and thus goes a few steps on its hind legs, and then falls down. It is probably some kind of a bear; and yet it has no tail—as yet—and no fur, except upon its head. It still keeps on growing—that is a curious circumstance, for bears get their growth earlier than this. Bears are dangerous —since our catastrophe—and I shall not be satisfied to have this one prowling about the place much longer without a muzzle on. I have offered to get her a kangaroo if she would let this one go, but it did no good—she is determined to run us into all sorts of foolish risks, I think. She was not like this before she lost her mind. A fortnight later.—I examined its mouth. There is no danger yet: it has only one tooth. It has no tail yet. It makes more noise now than it ever did before—and mainly at night. I have moved out. But I shall go over, mornings, to breakfast, and see if it has more teeth. If it gets a mouthful of teeth it will be time for it to go, tail or no tail, for a bear does not need a tail in order to be dangerous. Four months later.—I have been off hunting and fishing a month, up in the region that she calls Buffalo; I don’t know why, unless it is because there are not any buffaloes there. Meantime the bear has learned to paddle around all by itself on its hind legs, and says “poppa” and “momma.” It is certainly a new species. This resemblance to words may be purely accidental, of course, and may have no purpose or meaning; but even in that case it is still extraordinary, and is a thing which no other bear can do. This imitation of speech, taken together with general absence of fur and entire absence of tail, sufficiently indicates that this is a new kind of bear. The further study of it will be exceedingly interesting. Meantime I will go off on a far expedition among the forests of the north and make an exhaustive search. There must certainly be another one somewhere, and this one will be less dangerous when it has company of its own species. I will go straightway; but I will muzzle this one first. Three months later.—It has been a weary, weary hunt, yet I have had no success. In the mean time, without stirring from the home estate, she has caught another one! I never saw such luck. I might have hunted these woods a hundred years, I never would have run across that thing.
<urn:uuid:caf3d103-0447-4e31-abee-c32f714b2753>
CC-MAIN-2016-26
http://www.bookrags.com/ebooks/142/167.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00187-ip-10-164-35-72.ec2.internal.warc.gz
en
0.981638
566
2.640625
3
teentech™ Program in a Box

Inspire Girls in Science and Math with teentech™

Teentech™, launched by AAUW of New Jersey, is a daylong event that inspires middle and high school girls to pursue science, technology, engineering, and mathematics (STEM). AAUW's research shows that young women lag behind young men in acquiring technical proficiencies. Since girls are more likely to use technology when they see its real-world applications, teentech™ offers a one-day conference that is designed to make STEM exciting and relevant to interest girls in high-demand careers in a global economy where women are still greatly underrepresented. High school girls from around your state will have the opportunity to learn problem-solving skills by engaging in hands-on workshop sessions with faculty and students in technology and engineering disciplines.

10 Steps to Get This STEM Program Started

1. Put together a team of interested AAUW members and coalition partners to help decide details and share the workload. The teentech™ chair will be the key motivator and strategist for the event. The planning team will include those responsible for programming, coalition outreach, community and media promotion, registration, morning refreshments, and lunch. Be very clear about what needs to be done and who is handling what.

2. Recruit coalition partners. Collaborate with an AAUW college/university partner member in your region to host the event on its campus and co-sponsor the event with you. Invite diverse groups in your community to co-sponsor the event as well. Be sure to include organizations that have the same goals and values and fully agree on the mission, outcomes, and plan.

3. Decide on a format and workshop agenda. Be sure that your format will achieve your goals and accommodate the number of people you would like to attend. Keep in mind what type of funding you can realistically expect, how many qualified volunteers you can recruit, and what your location can accommodate.

4. E-mail save-the-date postcards immediately. Design and send a save-the-date as soon as co-sponsors and your college or university location have been secured. The e-mail announcement only needs to include the title of the teentech™ conference and the date, time, location, and co-sponsors. Please note that schools need as much lead time as possible for their board approval of budgets, for scheduling buses, and for granting permission for the teentech™ student field trip.

5. Decide on a target audience. Teentech™ participants usually include girls in grades 9–11 along with their accompanying chaperones (the AAUW of New Jersey program requires all participants register with a chaperone, usually a teacher with a group of students). The program can also be targeted to middle school girls. Estimate how many people will attend the event, and set a limit on the number of participants based on the college or university's available conference facility and room space. Everything about the event, including the issues, co-sponsoring organizations, speakers, location, date, and time, should be designed with the audience in mind. The expected number of attendees (students and chaperones) also determines the budget, staffing, location, handouts, equipment, supplies, and the like.

6. Develop a budget. While events can be held for little expense, you may incur costs for printed handouts, refreshments or meals, advertising, and postage for visibility and follow-up letters.
In-kind contributions and donations from co-sponsors and other organizations are good ways to stretch your resources. A $20.00 registration fee is generally charged for each participant (students and their chaperones) to cover the cost of meals, handouts, and other expenses. Set a budget line to provide a stipend for workshop presenter materials if requested. (Generally, $50 per workshop is a very reasonable request.)

7. Establish a time line. Create a time line that leads up to the event and shows each task that must be accomplished, the deadline for accomplishing it, and the person responsible.

8. Select a date. Select a date and time that are mutually convenient for your target audience and for your co-sponsoring college or university. Avoid religious or government holidays or dates when participating schools have mandatory state testing scheduled. Generally, the best time to host a teentech™ conference is in late May. This is the time of year when the weather is good and when school testing has been completed.

9. Choose a location. Choose an accessible AAUW college/university partner member campus that will attract your target audience. Be sure the location is wheelchair-accessible. A college or university campus is an ideal location because your participants will be inspired to pursue higher education by seeing a campus and interacting with college students in the teentech™ workshops.

10. Brainstorm about ways to save money. Ask presenters and guest speakers if they will donate their time. Create relationships with local copy shops and discuss the possibility of getting discounted or free printing in return for advertising the shop's name on the back of your brochures or flyers. Ask a college or university to co-sponsor the event by donating the space. (You may be responsible for janitorial service or a security deposit.)

Planning Your teentech™ Event

1. Appoint a teentech™ committee chair (or co-chairs) and a committee team. The chair (or co-chairs) will identify a team of specialists to be responsible for programming, coalition outreach, community and media promotion, facility and meal coordination, and registration. Prepare checklists of committee team responsibilities to make sure everyone understands her or his duties and to avoid task overlap.

2. Schedule meetings with your committee. The initial meeting date is generally in August or September. A second meeting should be scheduled for early December, followed by one in early April. Two to three weeks before the conference, a final meeting should be held to work on preparing student and chaperone packets and to review the event, including finalizing the number of attendees.

3. Select and invite speakers, panelists, moderators, and other participants. Whenever possible, choose nonpartisan, credible individuals who will bring media attention to the event. Speakers need to represent a variety of ages, ethnic backgrounds, and physical abilities. Invite them well in advance of the event, and be specific about what you want them to do. Be sure to confirm and reconfirm with them before the event.

4. Work with your committee members to identify faculty from the host college or university who will present interactive, hands-on STEM activities. Plan separate morning and afternoon workshops. Request workshop titles with brief descriptions to include in a teentech™ brochure. Ask the faculty to engage undergraduate and graduate students to assist in the workshops.

5. Identify co-sponsors and contributors, and ask for their help.
Contact local groups through e-mail or flyers, or attend their meetings. Invite them to co-sponsor the event, help plan it, and send (and possibly sponsor) participants from their area schools. Local businesses might be willing to contribute goods or services to the event. Be sure to confirm and reconfirm with co-sponsors and contributors before the event. If you have co-sponsors or contributors, make sure you set up top billing and approval of all materials in writing in advance of their publication to assure accuracy and proper branding. Clearly specify all requirements and limitations in advance.

6. Spread the word. Decide how you will inform different audiences about the event. Generally, a save-the-date e-message is sent out early in the school year, followed by a detailed brochure, which includes registration and workshop information. The brochure should be e-mailed to STEM educators at schools statewide. If possible, ask school administrators to send out notices to parents and students. Encourage schools to also send their guidance counselors to teentech™. Establish a contact person at each school, and keep in touch with her or him. Consider newspaper ads, flyers, radio and television announcements, Twitter, Facebook, online advertising, or community bulletin boards. Designate one person to be the public contact, and include her or his phone number and e-mail address on all publicity materials.

7. Market your event. Compile a list of media contacts that includes local and regional newspaper and magazine editors for publications and departments that cover women's and education issues. Also include college, university, and high school newspapers. Designate a committee member to be available to the media, and include her or his name, phone number, and e-mail address on all outreach and mailings. Coordinate with the college or university media personnel on outreach from their contact lists. Create press releases, and e-mail or fax them to local newspapers, magazines, and radio and television stations. About five days before the event, e-mail or fax an advisory to all of your media contacts and follow up with them one to two days before your event.

8. Organize the registration process. The sample teentech™ brochure includes a form with detailed registration information that you can use as a model. The form includes all the contact information and specific steps for registering each participant (if you have online registration set up, include a link in the brochure). If you are charging a fee ($20.00 per participant is suggested), decide what payment methods you will accept (generally schools will pay via check or purchase order). Confirm registration with the chaperone from each school as soon as you have received the completed registration forms and the fee. Include a list of their student participants as well as the chaperones who are attending. Approximately one week before the event, e-mail the chaperones directions to the campus, including details about the drop-off location, breakfast, and the morning session. Include a map pointing out parking locations for the buses and for individual vehicles.

9. Compile your handouts. At least one month before the conference, decide what handouts you will distribute at the event (including AAUW brochures) and be sure that you have more than enough copies. If you run out of handouts, have a method in place for people to request copies by mail or e-mail.
Be sure you have a procedure in place for distributing handouts at the event. Generally, two-pocket folders are provided for each participant with the designated handouts pre-stuffed in each packet. Make sure you have links available for those with smartphones and laptops to get downloadable material.

10. Prepare media kits. Media kits should include the agenda, speaker bios and statements, press releases, background on AAUW and co-sponsors, and any other materials about the event. Be sure that you have more than enough copies for each media representative at the event (have at least 10 kits available if you don't know how many reporters might be coming). Also provide a link to downloadable material online.

11. Assign participants to their workshops; create name tags and certificates. Maintain a spreadsheet listing the workshop preferences of all participants and compare those with the maximum number of attendees for each workshop. Participants will choose two workshops on their registration forms. Every consideration should be given to their choices; however, some randomization will be required when workshops reach their maximum. Use discretion to balance the numbers in each workshop fairly. Prepare name tags along with summary lists of students and faculty attending from each school. The name tags should include first name, last name, and school and will also designate pre-selected choices for one morning and one afternoon workshop slot (use number or letter designations to identify each workshop). The name tags should be bundled in envelopes labeled with the school names to facilitate a smooth process on the conference day. Each chaperone will receive her or his school's envelope containing the chaperone's and students' name tags and packets. The chaperone will check and verify the students' name tags and distribute the packets. Certificates acknowledging each participant's attendance will be prepared for the girls and chaperones. Generally, they are given out at the end of the conference after each participant turns in an evaluation form; these forms are completed during the last few minutes of each afternoon workshop. Your committee members should work with campus personnel to arrange professional development hours for the chaperones attending teentech™.

What to Do the Day of Your Event

1. Have a committee member check the conference meeting room to be sure everything is set up correctly. She or he should also coordinate with the presenting faculty to make sure that all workshops, meals, rooms, and equipment are in order. Make sure there are signs posted for parking and for getting to the event building.

2. Assign someone to meet, brief, and escort each speaker. Designate several members of your committee to be the official greeters for guest speakers. Have water available for the speakers during the event, and make sure participants have access to water.

3. Have a sign-in sheet ready. The sign-in sheet should list the students, chaperones, guest speakers, faculty presenters, and college student assistants and should have columns for any contact information you'd like to collect (such as e-mail addresses or phone numbers). Consider making the sign-in sheet electronic (on a laptop) so you don't have to manually enter information onto a computer later.

4. Be sure someone is available at all times to answer logistical questions and welcome latecomers.
A committee member should be tasked with being the logistics liaison, including making announcements, answering questions, and directing people to the restrooms or elevators.

5. Acknowledge speakers, sponsors, and contributors. In the welcome and wrap-up messages, acknowledge and thank everyone who has made the event possible.

6. Prepare a few questions in case groups are slow to warm up after a presentation. This gives members of the audience time to think of their own questions.

What to Do after Your Event

1. Thank speakers, sponsors, coalition partners, and others who contributed. Thank all event sponsors and contributors in writing immediately following the event, and consider sending small thank-you gifts. Include a certificate of appreciation with their thank-you letters. Send thank-you letters to the chaperones and their schools' administrators.

2. Follow up with the media. Call all media contacts to see if they have any questions or need additional information or quotes. If they do, be sure to reply and send materials immediately by e-mail. Send an electronic media kit and follow-up press release to those who did not attend. Contact schools to have their media liaisons contact outlets in their area. Make sure they are provided with copies (via e-mail or fax) of the press releases.

3. Follow up on actions that came out of your event. Work with your committee and coalition to debrief and follow up on any actions that you discussed at or after your event. Your follow-up should occur as soon as conveniently possible after the conference.

4. Follow up with potential AAUW members. Use the registration list as a mailing list for future conferences and to contact potential AAUW members. Follow-up should occur as soon as possible after the event. Send the mailing list to firstname.lastname@example.org so that these audiences can be reached in national recruitment efforts.

5. Follow up with students and chaperones who tried to register after the maximum number was reached. Send out letters to the students and chaperones to thank them for their interest in teentech™ and let them know that you had reached the maximum number of registrations earlier than anticipated. Encourage the chaperones to register for future teentech™ conferences. Target these folks specifically for registration the following year, and consider expanding your event to accommodate more participants.

Time Line for Your teentech™ Event

Here's a sample of the time line that AAUW of New Jersey suggests. Be sure your time line includes each task that must be accomplished, the deadline for accomplishing it, and the person who is responsible.
Eight months before event
- Appoint planning team
- Identify and contact coalition partners
- Identify college or university partners and conference location

Seven months before event
- Decide on the event format
- Decide on an audience
- Develop a budget
- Finalize date and location
- E-mail a save-the-date
- Appoint teentech™ committee
- Invite speakers, moderators, and panelists
- Identify co-sponsors and contributors and request their help
- Decide on light breakfast and lunch menu

Six months before event
- Finalize brochure and e-mail it to your audience
- Finalize speakers, moderators, and panelists

Four months before event
- Compile list of media contacts

Two months before event
- Reconfirm speakers
- Create press releases
- Gather handouts and stuff in folders
- Prepare media packets

Two weeks before event
- Finalize registration of schools
- Gather handouts
- Check audio or video presentations

Five days before event
- E-mail your media advisory
- Finalize media kits
- Call coalition partners to assess attendance
- Prepare name tags

Two days before event
- Make media reminder calls

One day before event
- E-mail news release to media
- Make sure rooms are set up correctly
- Set up the refreshment table
- Set up the registration table

Day of event
- Make sure the room is set up correctly
- Check audiovisual equipment
- Attend to speakers
- Answer media questions

After the event
- Thank participants, sponsors, and contributors in writing
- Follow up with media, locally and regionally
- Follow up with potential AAUW members (send your list to email@example.com so they can be included in wider recruitment efforts)
- Follow up on future strategies that came out of the event

Download a sample brochure and other teentech™ resources below or visit the Program in a Box Tool Kit for other planning resources for your branch programs and events.
- teentech™ Workshop Agenda
- teentech™ Sample Brochure
- teentech™ Sample Press Release
- teentech™ Sample Evaluation Template
- teentech™ Letter of Thanks for Educators and Presenters
- teentech™ College University Certificates of Appreciation
- teentech™ Sample Professional Development Certificate
- teentech™ Student Participation Sample Certificate

Have a question or want to share your successes with this PIAB? E-mail us at firstname.lastname@example.org. Programs in a Box (PIAB) help members consider and choose program activities for their branches with the "what, why, and how" to implement that program.
Cats have different dietary needs compared to dogs. Many of the special needs are due to a difference in liver and digestive enzymes between the two species. The Association of American Feed Control Officials (AAFCO) has developed separate minimum requirements for dog and cat foods (see Table 1), and from these, it becomes evident why dog food should NOT be fed to cats. Special feline nutritional needs include:

Protein

Protein is a source of nitrogen, and cats require a higher protein level than dogs. This may be due to the cat's inability to regulate the rate at which liver enzymes break down protein. If dietary protein is in low quantities or not available, the cat's body will soon start breaking down the protein in its own muscle.

Taurine

Taurine is an amino acid which is necessary for proper bile formation, health of the eye, and functioning of the heart muscle. Cats require a high amount of taurine for their body functions, yet have limited enzymes which can produce taurine from other amino acids such as methionine and cysteine. Therefore, they need a diet high in taurine. If taurine is deficient, signs such as a heart condition called dilated cardiomyopathy, retinal degeneration, reproductive failure, and abnormal kitten development can occur.

Arginine

Arginine is also an amino acid. Most animals manufacture the amino acid ornithine through various processes, some of which require arginine. In cats, the only method to produce ornithine is to convert it from arginine. Ornithine is necessary because it binds ammonia produced from the breakdown of protein. If cats are deficient in arginine, there will not be enough ornithine to bind the ammonia, and severe signs such as salivation, vocalization, ataxia, and even death can result from the high ammonia levels. These signs often occur several hours after a meal, when most of the ammonia is produced. Although deficiencies are rare, they can occur in cats who are not eating or who have certain liver diseases such as hepatic lipidosis.

Arachidonic acid

Arachidonic acid is one of the essential fatty acids. Dogs can manufacture arachidonic acid from linoleic acid or gamma-linolenic acid. Cats cannot. Arachidonic acid is necessary to produce an inflammatory response. In many cases, such as in allergies, the goal is to suppress the inflammatory response. But in other cases, the response is a necessary means by which the body can protect itself. Arachidonic acid also helps to regulate skin growth, and is necessary for proper blood clotting and the functioning of the reproductive and gastrointestinal systems. Arachidonic acid is found in animal fats, which must therefore be included as part of the diet. Like dogs, cats also require linoleic acid, another fatty acid.

Active form of Vitamin A

Cats lack the enzyme which can convert beta-carotene to retinol, the active form of Vitamin A. Therefore, they require a preformed Vitamin A, which is present only in foods of animal origin, and is usually included in cat foods as retinyl palmitate. Deficiencies of Vitamin A are rare, but signs include night blindness, retarded growth, and poor quality skin and hair coat.

Niacin

Many animals can synthesize niacin, a B vitamin, from the amino acid tryptophan. Cats cannot manufacture it in sufficient quantities and thus require higher amounts in their diet. Deficiencies in niacin can lead to loss of appetite and weight, inflamed gums, and hemorrhagic diarrhea.

Starch

Cats have less need for starch, and a decreased ability to digest it. Dogs need, and can tolerate, higher starch levels in their diet than cats.

Table 1.
Differences between AAFCO cat and dog food nutrient profiles
Details about Very Young Children with Special Needs: Now in its fourth edition, Very Young Children with Special Needs: A Formative Approach for Today's Children provides the best introduction to early childhood special education and early intervention for professionals preparing to work with infants, toddlers, and preschool children with disabilities and their families. A foundational text that is both comprehensive and practical, it offers a thorough review of early intervention and early childhood special education, and the most detailed information available about the causes of disabling conditions in young children. Readers will be provided with "best practices" for supporting diverse families, five philosophical issues important to effective intervention and support for young children and their families, and unique coverage of typical child development across physical, emotional, language, and cognitive domains. Through its use of narrative, case studies, and distinctive "close-ups"; updated information incorporating the newest provisions of the Individuals with Disabilities Education Improvement Act of 2004; inclusion of medical information regarding the etiologies of various disabilities; and a thorough introduction to emerging trends in the areas of personalization, relationship-based service, and evidence-based practices, students in fields as diverse as special education, social work, health care, and physical therapy will benefit from the interdisciplinary perspective and wealth of information found in the text. It is also a great resource for parents and families of children with special needs seeking ample information that is both supportive and complete. Rent Very Young Children with Special Needs 4th edition today, or search our site for other textbooks by Vikki F. Howard. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Pearson.
First let's start with the references. My friend Aaron Leiby has a blog entry on how to start programming the VFP unit here:

A typical inline assembly template might look like this:

asm ( assembler template
    : output operands               /* optional */
    : input operands                /* optional */
    : list of clobbered registers   /* optional */
    );

The last three lines hold the output and input operands and the so-called clobbers, which are used to inform the compiler about which registers are used.

Here is a simple GCC assembly example -that doesn't use VFP assembly- that shows how the input and output operands are specified:

asm("mov %0, %1, ror #1" : "=r" (result) : "r" (value));

The idea is that "=r" holds the result and "r" is the input. %0 refers to "=r" and %1 refers to "r". Each operand is referenced by number. The first output operand is numbered 0, continuing in increasing order. There is a max number of operands ... I don't know what the max number is for the iPhone platform.

Some instructions clobber some hardware registers. We have to list those registers in the clobber list, i.e. the field after the third ':' in the asm statement, so GCC will not assume that the values it loads into these registers stay valid. In other words, a clobber list tells the compiler which registers are used but not passed as operands. If a register is used as a scratch register, that register needs to be mentioned there. Here is an example:

asm volatile("ands r3, %1, #3" "\n\t"
             "eor %0, %0, r3" "\n\t"
             "addne %0, #4"
             : "=r" (len)
             : "0" (len)
             : "cc", "r3");

r3 is used as a scratch register here, so it appears in the clobber list. The "cc" pseudo register tells the compiler that the instruction sequence modifies the condition-code flags. If the asm code changes memory, the "memory" pseudo register informs the compiler about this:

asm volatile("ldr %0, [%1]" "\n\t"
             "str %2, [%1, #4]" "\n\t"
             : "=&r" (rdv)
             : "r" (&table), "r" (wdv)
             : "memory");

This special clobber informs the compiler that the assembler code may modify any memory location. Btw. the volatile attribute instructs the compiler not to optimize your assembler code.

If you want to add something to this tip ... please do not hesitate to write it in the comment line. I will add it then with your name.
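As an addendum, here is how the rotate example above can be wrapped into a complete, compilable C function for an ARM target. This is a minimal sketch of my own, not part of the original tip; the function name ror1, the stdint.h types, and the static inline wrapper are all my choices:

#include <stdint.h>

/* Rotate a 32-bit value right by one bit using the ARM barrel shifter's
   "ror" operand modifier. This only assembles with an ARM GCC toolchain. */
static inline uint32_t ror1(uint32_t value)
{
    uint32_t result;
    asm("mov %0, %1, ror #1"
        : "=r" (result)   /* %0, output: any general-purpose register */
        : "r" (value));   /* %1, input: any general-purpose register */
    return result;
}

Called as ror1(0x00000001), this should return 0x80000000, since bit 0 is rotated around into bit 31.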
(a) To prove that an integral domain R is a Bezout domain if and only if every pair of elements a and b which has a greatest common divisor d in R can have that d written as a linear combination of a and b, that is, $d = ra + sb$ for some $r, s \in R$.

Let R be an integral domain. Suppose R is a Bezout domain and $a, b \in R$. Then the ideal generated by a and b is principal, say $(a, b) = (d)$ for some d. We know that $a, b \in (d)$, so $d \mid a$ and $d \mid b$; that is, d is a common divisor of a and b. Also we know that $d \in (a, b)$, and hence $d = ra + sb$ for some $r, s \in R$.
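To finish the forward direction (my own summary of the standard argument; the original excerpt is cut off here): if $c$ is any common divisor of $a$ and $b$, then $c$ divides every $R$-linear combination of them, so
\[
c \mid a \ \text{ and } \ c \mid b \quad\Longrightarrow\quad c \mid (ra + sb) = d,
\]
which shows that $d$ is indeed a greatest common divisor of $a$ and $b$, and it is a linear combination of $a$ and $b$ by construction.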
A small circle fits between two touching circles so that all three circles touch each other and have a common tangent. What is the exact radius of the smallest circle?

Two semicircles sit on the diameter of a semicircle centre O of twice their radius. Lines through O divide the perimeter into two parts. What can you say about the lengths of these two parts?

Two perpendicular lines are tangential to two identical circles that touch. What is the largest circle that can be placed in between the two lines and the two circles, and how would you construct it?

$BO$ is a tangent to the two equal circles and hence angle $BOC$ is a right angle. If $OC = 3$ units then the large outer circle has radius $6$ units. Pythagoras' theorem will give the radius of the circle centre $B$. You cannot assume that angle $BAC$ is a right angle.
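For the first problem, here is one possible setup (my own sketch, assuming the two touching circles are equal with common radius $R$). Place the common tangent along the $x$-axis, so the two big circles have centres $(\pm R, R)$ and the small circle has radius $r$ and centre $(0, r)$. Tangency to either big circle requires the distance between centres to equal the sum of the radii:
\[
\sqrt{R^{2} + (R - r)^{2}} = R + r
\quad\Longrightarrow\quad
R^{2} = 4Rr
\quad\Longrightarrow\quad
r = \frac{R}{4}.
\]
So under this assumption the smallest circle has one quarter of the radius of the two equal circles.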
There are three main categories into which learning theories fall:

- Behaviorism
- Cognitivism
- Constructivism

In addition to these main theories, there are other theories that address how people learn. These are listed below after a list of general resources addressing all major theories for teaching and learning.

Educational Psychology Interactive: Readings in Educational Psychology - extensive list of online materials relating to learning theories, compiled by William G. Huitt.

Educational Theory - published by the Social Science Information Gateway. Its general purpose is to foster the continuing development of educational theory and to encourage wide and effective discussion of theoretical problems within the educational profession.

Learning & Instruction: The TIP Database - summarizes 48 major theories of learning and instruction by name, learning domains, and concepts.

Apple Education - This site provides guidance to schools engaged in technological and pedagogical restructuring. Includes links to resources for information relating to technology planning, pedagogy, curriculum design (including specific subject matter guides), and other related topics.

Mobile Learning Theory - The paradigm shift in education from a "supply" theory to a "demand" theory.

Meaningful, Engaged Learning - In recent years, a strong consensus has been forming from research on the importance of engaged, meaningful learning and on what constitutes engaged learning in schools and classrooms. Jones, et al. (1994), at NCREL, developed the list of indicators of engaged learning presented at this site.

Multiple Intelligences - Summary of the often criticized multiple intelligences theory of Howard Gardner.

Multiple Intelligences Flaws - Keith McGuinness points out the inherent flaws in the commonly accepted multiple intelligences theory of Howard Gardner (specifically addressing the lack of evidence to support Gardner's claims).

Perspectives on Instruction: Behaviorism, Cognitivism, and Constructivism - These theories are used in the field of Instructional Design as guidelines for understanding how to develop instruction that will be most effective for the learner.

Teaching and Learning Methods and Strategies - explanation of various theories of how we learn, created by the University of Arizona.

Teaching and Learning Process Model - This model has been developed to categorize the variables that have been studied in an attempt to answer the question: "Why do some students learn more than other students in classroom and school settings?"

Technology and Learning Theory - The use of instructional technology provides some new possibilities for learning theories. It is important to address how various technologies can impact how we teach, learn, and think. Through applied and basic research, as well as theoretical and conceptual inquiry, we are attempting to guide the design, development, implementation, and evaluation of a new generation of learning environments.

Theoretical Sources - Very extensive collection of resources compiled by Martin Ryder at the School of Education, University of Colorado at Denver.
Contemporary Philosophy, Critical Theory and Postmodern Thought - extensive collection of links to online resources, corollary sites, readings, and people within the discipline, compiled by Martin Ryder.

Dear Habermas, A Journal of Postmodern Thought - forum for students and their faculty; provides sociological and philosophical discussions of law, gender, the privileging of subjectivity, forgiveness in the interest of good-faith public discourse, intertextuality and our role in the creation of texts, and narrative.

A Post-Modern Mandate for Educators - article written by Mary L. McNabb, published by the North Central Regional Educational Laboratory.

Postmodern Culture - journal published by Johns Hopkins University Press with support from the University of Virginia's Institute for Advanced Technology in the Humanities.

Situated Learning - Lave argues that learning as it normally occurs is a function of the activity, context, and culture in which it occurs (i.e., it is situated).

Changing Schools through Experiential Education. ERIC Digest. Authors: Stevens, Peggy Walker; Richards, Anthony. In its efforts to restructure schools, the education community has begun to address the challenge of designing a curriculum that young people find significant. This Digest describes how experiential education can help provide such a curriculum and the impact it can have on students, teachers, administrators, and school organizational structures. It also describes ways experiential education can help educators make the transition from a traditional program to an activity-based program requiring the collaboration of teachers and students.

Experiential Education Resources on the Internet - AEE Guide to Experiential Education Resources on the Internet.

National Society for Experiential Education - The National Society for Experiential Education (NSEE) is a membership association and national resource center that promotes experience-based approaches to teaching and learning.

Interesting examples of visuals for learning: http://www.onlineschools.org
What is the Difference between Goals and Actions (To-Do List)?

Simply stated, Goals are the purpose toward which we direct our attention. Actions (the things you have on a To-Do List) are the things you do to achieve your purpose (or goals). Goals are what you desire to have or create; actions (your to-do list) are what you are going to do to create your goals.

- The essence of Goals is the answer to the question, "Why?"
- The essence of Actions is the answer to the questions, "What?" and "How?"

Would You Rather Vacation in Fargo or Cancun?

Truth is, all too often we live in a world of actions - and tell ourselves that we have goals. That's tantamount to deciding to go on vacation by plane without ever deciding where you want to go. Sure, the plane might be the right choice, but then again, it might not. Imagine if you decided to go on a vacation and just went to the airport and got on the first plane that had an open seat. Sure, you could end up in Cancun, where you would praise yourself for not wasting your time deciding where to go. But you could just as easily end up in Fargo, North Dakota (where you would blame the plane, not your failure to select a destination first). Don't get me wrong, I love North Dakota, but it's NOT where I want to vacation. The airplane represents what we are going to do, but Cancun and Fargo represent our goal. If we don't first choose our destination, we have little chance of really getting there. In all honesty, as silly as this may sound for planning a vacation - it's just as silly for living your life.

STOP Picking WHAT and HOW First - Instead Pick WHY - The WHAT and HOW Can Follow

Every day we ask ourselves the question, "What do I need to get done today?" (what is the mode of transportation?). But how many people really ask themselves why they need to get those things done (where do I want to go?)? Unfortunately, in today's fast-paced world this becomes a bigger and bigger problem. Each day, we pile on more actions that need to be done - but very rarely do we actually slow down long enough to figure out why we are doing them, or even whether there are better ways of doing them.

I believe that when you establish Goals and face them every day, your Goals drive your Actions - and this leads to you achieving the life you want to create. But if you haven't defined your goals and end up taking Actions with no real intent, then you will end up getting whatever LIFE decides to give to you. The first key to creating the life you want is first deciding what it is you want. If you want to change your life, stop focusing on changing your actions (your to-do list). Instead, start by defining your goals. Only then will you be able to figure out which actions need to be done, which ones don't and, of course, which new ones need to be added to the list.

Finally, here are 5 easy steps to Making Your Goals Drive Your Actions:

- Start simple - make a small list of short-term goals (1 week to 1 month).
- Figure out what you need to make those goals happen - make your to-do list.
- Every day, before you start working on your to-do list, review and concentrate on your goals - empower yourself with purpose.
- Periodically review your goals and make sure that your actions (your to-do list) are actually leading you toward your goals . . . or not.
- Make whatever changes you need to your to-do list to stay on track with your goals - but don't change your goals.

Good luck. If you have any questions, please feel free to contact me at email@example.com.
Definition of CSF (colony-stimulating factor)

CSF (colony-stimulating factor): A laboratory-made agent similar to a substance in the body that stimulates the production of blood cells. The colony-stimulating factors (CSFs) include granulocyte colony-stimulating factor (G-CSF) and granulocyte-macrophage colony-stimulating factor (GM-CSF). Treatment with colony-stimulating factors can help the blood-forming tissue recover from the effects of chemotherapy and radiation therapy.

Source: MedTerms™ Medical Dictionary
Last Editorial Review: 6/14/2012
In 5 of Canada's 10 provinces (Ontario, Alberta, New Brunswick, Prince Edward Island, Newfoundland and Labrador), the contest for power is marked by a division between Liberals and Conservatives. In Manitoba, Saskatchewan, and British Columbia, a major feature of the partisan dynamics is the longstanding support of a major fraction of the electorate for its province's New Democratic Party (NDP). In Saskatchewan, Manitoba, and Nova Scotia, moreover, the popularity of the provincial NDP goes hand in hand with the relative weakness of the provincial Liberal Party.

Furthermore, it is worth emphasizing the existence in all provinces of a strong feeling of membership in, and identification with, the provincial political space – which translates into a highlighting of this space in the discourse of all provincial political parties, regardless of whether or not the province's name figures in their appellation, as is the case in Quebec (Parti Québécois) or Saskatchewan (Saskatchewan Party). In addition, while a number of provincial Liberal and Conservative parties – notably in Alberta, Newfoundland and Labrador, and Ontario – do not tout themselves as regionalist, they have nevertheless participated considerably in the phenomenon known as "province-building," which consists in strengthening the political and administrative capacities of their respective provinces.

The number of provincial legislative assembly members since 1980

Quebec is the province whose legislature has the highest number of assembly members. The expansion of the legislative body stems from a desire to more fully represent the regions of Quebec, a function filled primarily by the former Legislative Council until 1968. Elsewhere in Canada, it is increases in population that have driven growth in the number of assembly members, particularly in British Columbia. In contrast, New Brunswick, Newfoundland and Labrador, Prince Edward Island, and Ontario reduced the size of their legislatures during the 1990s, primarily for reasons of ideology and budget cuts. In Ontario, the elimination of more than a quarter of the seats in the Legislative Assembly in 1999 was the outcome of the implementation of Conservative Premier Mike Harris' "Common Sense Revolution," a program designed to downsize the provincial government. Saskatchewan also shrank its Legislative Assembly, in 2003. Only Nova Scotia and Manitoba have kept their legislatures at their current size since 1980.
CAIRO – A 60-year-old Egyptian woman suspected of infection with the Middle East Respiratory Syndrome (MERS), commonly known as coronavirus, died in the early hours of Monday. If confirmed, this would be Egypt's first fatality from the deadly respiratory virus that has spread across several countries of the Middle East.

Awatef Mansour died in Port Said Fever Hospital before the results of her MERS tests came out. She had suffered MERS-like symptoms days after returning from Saudi Arabia, where the virus has killed at least 112 people since it first hit the kingdom in late 2012.

Along with Saudi Arabia, coronavirus has been reported in Qatar, the United Arab Emirates, Tunisia, Jordan and Oman.

MERS, for which no known cure is available, destroys the lungs and kidneys. Symptoms, which include persistent fever and cough, are similar to those associated with the SARS virus. It is presumed that long-term physical contact can lead to infection.

Copyright © 2014 Anadolu Agency
Although China's first emperor Qin Shi Huang unified the country's written language over 2,000 years ago, the nation's 1.3 billion people still encounter communication troubles, whether with people several provinces away or in the village down the road.

During the ongoing "popularizing mandarin week," Yuan Zhongrui, an official from the Ministry of Education, said linguistic unification is vital to any nation's modernization process.

As the world's most populous and third largest country, China boasts 56 ethnic groups and hundreds of dialects and ethnic languages. This can mean that residents of the capital Beijing have a hard time communicating with south China's Cantonese speakers, while even people from neighboring villages in east China's Zhejiang Province cannot understand each other.

Experts said that in an open and mobile society, language should not become a hurdle in people's daily life. However, in China, language is still such a hurdle. Just a few weeks ago, a Hong Kong journalist misheard "zhisha," sand control, as "zisha," committing suicide, while reporting in Beijing. In Chongqing, one of the four municipalities in China, some Taiwanese businessmen were unable to understand the local dialects, leading them to suggest that the municipal government further popularize mandarin Chinese.

Wang Jun, a well-known Chinese linguist, said the lack of a common spoken language severely hinders the country's economic development and modernization process.

In the early 1950s, the People's Republic of China defined Putonghua, meaning standard Chinese or mandarin, stipulating that it be based on the northern dialect with Beijing pronunciation as the standard.

Seeing that testing the level of Putonghua is an important step for its spread, China implemented an examination in October 1994, which has so far been taken by 5 million Chinese people.

On January 1, 2001, China's "National Common Language Law" took effect, stipulating that announcers, anchors, movie actors and actresses, theater performers, teachers, and government employees, as well as other people specified by the departments concerned, should pass the Putonghua level test and reach the grade specified by the state.

Yuan Zhongrui said civil servants represent the government's image and are the executors of the nation's laws, so their Putonghua level is quite important. Consequently, Beijing's civil servants are expected to pass the Putonghua test before 2004, while in China's biggest city, Shanghai, the 100,000 civil officials are required to take the test within the coming two or three years.

Education is also considered an important front in the country's language unification, and to date most urban schools have done well in teaching students standard Chinese. However, some schools in the countryside, especially those located in the landlocked western region, still teach in dialects.

The spread of Putonghua and standard Chinese characters does not, however, mean restriction on the use and development of ethnic minority languages, Wang Jun said. In autonomous regions and areas where ethnic minorities live in compact communities, Putonghua and the local minority language can be used simultaneously.

However, Wei Dan, a Ministry of Education official, noted that the small-scale farmer economy that has existed for thousands of years in this populous and diverse country perpetuates the problem.
Many people have formed a closed language mindset: they are so used to their local dialect that they are sometimes reluctant to accept the common language, she added.

Facing all these challenges, China and its people must be mobilized more urgently to embrace linguistic openness and to practice Putonghua.

(Xinhua News Agency September 19, 2002)
We've all seen the failure videos at some point or another: the ridiculous contraptions recorded falling to pieces on silent, black-and-white film that are always included in montages of early 20th century culture and history, with a lively big band tune playing in the background.

Flight has fascinated humans for millennia, and logically the first ideas about recreating it came from nature. As early as the 4th century BCE, there are legends involving ornithopters and wings made for people out of feathers. These stories eventually developed into actual designs, like those of Leonardo da Vinci in the late 1400s. As time progressed, some designs allowed for gliding, but no ornithopters were created that allowed for actual, human-powered, flapping flight. That is, until recent history. In 1929, Alexander Lippisch's invention flew about 250 meters, which, although some argued was merely an extended glide prompted by a tow launch, others claimed was true flapping flight hindered only by the fact that humans tire easily. From this point forward, the development of ornithopters proceeded at an ever-faster rate, much like most of the rest of modern technology.

Today, there has most definitely been progress in ornithopter technology. There are many manned and unmanned ornithopters that work quite well, and some are even developed for military use because of their similar appearance to birds and insects. They also take shape as hobbies for craftsmen and for participants in the Science Olympics. According to a source, ornithopters are driven by an engine that moves the flapping wings, which create thrust and lift for the craft. The wings are connected by a section at the center that is moved up and down to create the flapping motion. "The wings' thrust is due primarily to a low-pressure region around the leading edge, which integrates to provide a force known as 'leading-edge suction'."

Sometimes imitating nature is not a good idea. As seen in the many different types of flying machines today, ornithopters are not the most reliable or the most efficient; in fact, they are probably among the worst in both categories. But without the ornithopter as an initial starting point for forays into human flight, would we be flying today? The beginning interest in flight, so many years ago, might have led nowhere without the failed attempts of many centuries and the need to keep trying again and again.
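A rough calculation (my own back-of-the-envelope estimate, not from the original post) suggests why "humans tire easily" is the crux. The standard actuator-disk result gives the ideal power needed to support a weight $W$ in hovering flight with swept area $A$ in air of density $\rho$:
\[
P_{\text{ideal}} = \sqrt{\frac{W^{3}}{2\rho A}}.
\]
Assuming a combined pilot-plus-craft mass of 80 kg ($W \approx 785\,\text{N}$), a swept area of $10\,\text{m}^2$, and $\rho = 1.225\,\text{kg/m}^3$ (both figures are my assumptions), this works out to roughly $4.4\,\text{kW}$, far beyond the few hundred watts a trained athlete can sustain. Forward flapping flight demands much less than hover, but the gap shows how little margin human muscle leaves.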
The U.S. Supreme Court opened the door for discrimination in voting with the decision handed down Tuesday gutting the historic Voting Rights Act of 1965. Its ruling in Shelby Co. v. Holder will stand with its Citizens United decision as twin wrecking balls, destroying laws meant to institute justice in place of the law of the jungle. With the invalidation of key provisions of the Voting Rights Act, the court has opened a new chapter of political and legal warfare, forcing jurisdictions throughout the nation to refight battles previously won by the civil rights movement 50 years ago.

Section 5 of the Voting Rights Act required states with previous histories of racial discrimination in voting, mostly in the South, to gain approval from the U.S. Justice Department before making changes in their voting laws. The court held that times have changed and conditions prevailing in 1965 no longer exist. Thus, subjecting counties in Alabama or elsewhere to a higher degree of scrutiny is no longer justified.

Defenders of the law pointed to recent history. During the 2012 election Republicans mounted a strategy of voter suppression that was blunted in part by Section 5. States from Texas to Pennsylvania sought to pass voter ID laws, erase names from registration rolls, curb early voting and gerrymander districts to make it harder for minority voters to vote.

As the Brennan Center for Justice at New York University has pointed out, Section 5 blocked a photo ID requirement in Texas that could have prevented 600,000 eligible voters from casting ballots. Section 5 required Florida to reinstate early voting hours favored by minority voters. Section 5 invalidated redistricting maps in Texas found to discriminate intentionally against Latino voters.

In addition, as the Brennan Center points out, the fact that changes in the law would have to win Justice Department approval deterred other jurisdictions from trying to pass discriminatory laws. South Carolina legislators rejected a restrictive voter ID law because they knew the Justice Department would strike it down. Now deterrence will be gone.

In the wake of the court's decision Tuesday, the Texas attorney general crowed that the state's voter ID law would take effect immediately and the redistricting maps passed by the legislature would take effect without Justice Department approval. A wave of discriminatory laws seeking to make it harder for minorities to vote will not be long in coming.

Civil rights groups and communities will now be forced to defend their rights by mounting lawsuits to prove that the new laws are discriminatory. In 2012 they succeeded in persuading the courts to reject a spate of discriminatory laws, but without the Justice Department's role as a protector of standards, we can expect a bitter new round of civil rights struggles to follow the court's decision.

The Voting Rights Act of 1965 was one of the great monuments of the nation's history. Men and women gave their lives to challenge the reign of terror prevailing in the South, where murder and terrorism were the methods deployed against a movement dedicated to the methods of peace. It is not ancient history, as recent efforts to intimidate African-Americans have shown. At the same time, African-American voters in 2012 showed they were not easily intimidated. They stood in lines, in some cases for long hours, to cast their ballots.
Efforts by Republicans to slip a new regime of Jim Crow rules into place to make voting inconvenient or impossible for minority voters could well rebound against the Republicans, as they did in 2012. Meanwhile, Sen. Patrick Leahy, disgusted by the court's decision, immediately announced that the Senate Judiciary Committee would hold hearings on voting rights in anticipation of crafting legislation to guarantee that abuses do not occur and the gains of the nation's great civil rights struggle are not turned back by a new era of racial bigotry.
Determining the proper length for any Web page requires balancing four factors:

- the relation between page size and screen size
- the nature of the content
- whether the reader is expected to browse the content online or to print or save it for later reading
- the bandwidth available to your audience

Researchers have noted the disorientation that results from scrolling on computer screens. The reader's loss of context is particularly troublesome when such basic navigational elements as document titles, site identifiers, and links to other site pages disappear off-screen while scrolling. This disorientation effect argues for the creation of navigational Web pages (especially home pages and menus) that contain no more than one or two screens' worth of information and that feature local navigational links at the beginning and end of the page layout.

Long Web pages require the user to remember too much information that scrolls off the screen; users easily lose their sense of context when the navigational buttons or major links are not visible:

- In long Web pages the user must depend on the vertical scroll bar slider (the sliding box within the scroll bar) to navigate. In some graphic interfaces the scroll bar slider is fixed in size and provides little indication of the document length relative to what's visible on the screen, so the reader gets no visual cue to page length.
- In very long Web pages small movements of the scroll bar can completely change the visual contents of the screen, leaving the reader no familiar landmarks to orient by. This gives the user no choice but to crawl downward with the scroll bar arrows or risk missing sections of the page.

Long Web pages do have their advantages, however. They are often easier for creators to organize and for users to download. Web site managers don't have to maintain as many links and pages with longer documents, and users don't need to download multiple files to collect information on a topic.

Long pages are particularly useful for providing information that you don't expect users to read online (realistically, that means any document longer than two printed pages). You can make long pages friendlier by positioning "jump to top" buttons at regular intervals down the page. That way the user will never have to scroll far to find a navigation button that quickly brings him or her back to the top of the page. All Web pages longer than two vertical screens should have a jump button at the foot of the page.

If a Web page is too long, however, or contains too many large graphics, the page can take too long for users with slow connections to download. Very large Web pages with many graphics may also overwhelm the RAM (random access memory) limitations of the user's Web browser, causing the browser to crash or causing the page to display and print improperly.

It makes sense to keep closely related information within the confines of a single Web page, particularly when you expect the user to print or save the text. Keeping the content in one place makes printing or saving easier. But more than four screens' worth of information forces the user to scroll so much that the utility of the online version of the page begins to deteriorate. Long pages often fail to take advantage of the linkages available in the Web medium.

If you wish to provide both a good online interface for a long document and easy printing or saving of its content, consider dividing the document into chunks of no more than one or two printed pages each and providing a link to a separate file that contains the complete document.

In general, you should favor shorter Web pages for:

- home pages and menu or navigation pages elsewhere in the site
- documents that will be browsed and read online

In general, longer documents are:

- easier to maintain, because the content is kept in one piece rather than in many linked chunks
- more like the structure of their paper counterparts
- easier for users to download, save, and print for later reading
(Image credit: Flickr user The Library of Congress)

1. STRETCHING TO THE (VERY) OLDIES

While Harriet Beecher Stowe was busy writing Uncle Tom's Cabin, her sister Catherine Beecher was busy blazing a different sort of trail -one that the Richard Simmonses of the world would be dance-walking down in the years to come. After learning about aerobic exercise at seminary, Beecher developed her own brand of calisthenics that included arm stretches, lunges, and squats. Then she got fancy and added live piano music to the mix. The result was an early version of Sweatin' to the Oldies. But it wasn't just fitness freaks who were moved by Beecher's music -several schools around the country embraced her program and added it to their curricula.

2. ALL JACKED UP

The jumping jack goes by many names -the star jump, the side-straddle hop. But whatever you call it, there's only one man to blame: U.S. Army General John "Jack" Pershing. The general came up with the eponymous exercise early in his career as a no-nonsense cadet captain at West Point. But it took a whole different Jack to take the exercise public. That honor goes to the late fitness guru and TV personality Jack LaLanne, who famously bounced around, both onscreen and off, in a trademark jumpsuit. Over the years, LaLanne became so synonymous with the jumping jack that many credit him as the inventor -an indiscretion that would have earned a punishment of 100 jumping jacks from the exercise's originator.

3. KICKING IT OLD SCHOOL

If you like kickball but hate the baseball-style rules, why not play it like they did in the 1920s? To start, as many as 30 players could play at one time. Batters would place the ball on home plate and kick it without a pitcher. As for the fielders, they had to be at least 20 feet away from the kicker, and if the ball failed to reach them, the batter was ruled out. But perhaps the strangest part of the game was the base running. When the ball was kicked, the runner ran to the base. Yes, the base: there was only one! A runner on base would either try to score when his teammate kicked the ball or stay put, meaning 14 players were allowed to stay on base at one time. If they didn't return home by the time the last batter on a team kicked, they were out. Room for improvement, yes, but also great heart.

4. YOU MEDALING KIDS...

In the 1940s and '50s, New York University's Dr. Hans Kraus conducted a series of fitness tests on American and European school children. In one study, he asked the kids to perform simple exercises such as leg lifts, sit-ups, and toe touches. The results were unnerving: 56 percent of American children failed at least one part of the test, compared to just 8 percent of Europeans. When President Eisenhower heard the news, he responded by launching the President's Council on Youth Fitness. A decade later, President Johnson furthered the cause with the Presidential Physical Fitness Award, recognizing the country's fittest 15 percent. These days, the award is still a staple in phys ed classes, although you no longer have to be at the top of your gym class to get recognized. Those below average win the Participant Physical Fitness Award for showing "room for improvement" but also "great heart."

_______________________

The article above, written by Adam K. Raymond, is reprinted with permission from the Scatterbrained section of the January-February 2012 issue of mental_floss magazine. Get a subscription to mental_floss and never miss an issue!
Be sure to visit mental_floss' website and blog for more fun stuff!
By Teachers, For Teachers

Could you have the next Rick Riordan in your class, or possibly another Dr. Seuss? Is there a hidden talent in the class just waiting to be discovered, or has writing taken a back seat to all other subject areas? Why is it so hard to engage students in writing? We all have students who will write to fulfill an assignment and get the grade, but how do we ignite the love of writing? How do we recreate the excitement and joy of putting pen to paper to create a story or poem where the action and drama stem from personal imagination?

Writing in the classroom is a valuable tool that provides many benefits.

So how do we engage our students in the writing process and bring out those hidden writers? One method is using student journals across the curriculum. A student journal is a personalized notebook that is sure to start the creativity flowing and cure the writing blues. To introduce journal writing, allow students to decorate the journal, personalizing it with stickers, glitter, and pictures. Students are eager to participate in activities they have been allowed to create.

Transform an everyday composition notebook into a scientific method journal where students can keep science notes, lab activities, reflective thoughts on special activities, and answers to those challenging "what if" questions. The journal can also serve as a data tracker for those experiments where you monitor progress over a period of time, such as watching a seed grow, or keeping track of meals and calories for a health lesson.

Learning about the world we live in comes to life in a journal where facts, pictures, maps, and adventures are kept. Turning the journal into a personal passport is a fun way to learn facts about locations around the globe. Students can add pictures, write diary entries about places to visit, and draw and label maps.

Posing the question "What do you think about this?" on the cover, students can be given a current event for the week and write responses to the article. This is a great way for students to express opinions and learn how to back them up with supporting facts from the article. How would they respond? What should be done? Concepts such as planning and organizing steps are taught and practiced in this journal.

Keeping a spelling journal or having students create a personal dictionary will help students learn new words and practice them daily. For younger students, you can have pages that reflect word families, blends, or rhyming words. Older students can have pages with challenging words or words to know.

A math journal is a great tool for defining math terms, listing steps to solving specific problems, and writing out word problems and how to solve them (again, organizing thoughts and listing steps is practiced). Illustrations and charts are added to help the problem-solving process.

To write about a reading assignment, students have to pay more attention to it. They have to read more carefully. Before, during, and after reading a story or poem, this journal allows for reflection, definitions of challenging words, character profiles, setting descriptions, plot time lines, and so much more. Keeping a reading journal, recalling events after every chapter and relating the text to self or to another text, helps those students who have difficulty with comprehension or with writing the book report at the end of a reading.
Whether the topic is chosen by teacher or student, this journal allows for self expression, time to think about your thoughts and write them out in a clear organized manner, and encourages a creative flow that can help students use their imaginations, explore possibilities, problem solve, and storytelling. This creative writing, allows students to explore vocabulary and writing styles they wouldn't normally use in other graded assignments. There are so many uses for journals in the classroom and not all of them should be assessed for correct punctuation, capitalization, and sentence structure. Some may be assessed for understanding of the topic and creativity. The idea is to get students to write and enjoy the process. As they practice writing on a daily basis, the tools needed to become a successful writer will continue to develop. As they develop, students will become more confident in their writing and you may find they are writing a great deal more. You may just notice a few great writers in the midst! What activities or ideas do you have to encourage writing in your class? Share in the comments section!
The right to water and sanitation is recognised in international law, but it is often left up to each local community's initiative to secure that right. And a village in the Thar Desert of western India has recently been singled out by The Hindu newspaper for its exemplary water rationing system:

In Kalyanpur village of Barmer, one of the most parched and barren districts of Rajasthan, the villagers have found a solution to their water woes in water rationing. There are no fights over water distribution, no quarrels over breaking the queues or attempts at snatching other people's share of water… [the village's well] is a blessing in the barren zone for its water is very sweet and light, devoid of fluoride or other contaminations … [A steering committee has] laid down rules after assessing needs of the 1,100 families in Kalyanpur, said Loon Chand, secretary of the committee. The [well] was constructed through public participation and the water rationing system also is being run successfully by the committee.

Each family's share is about 4,000 litres per month. That is barely enough water to sustain a family of four or five according to international standards, but the people of Kalyanpur could not consume more than that without depleting their one well. So they are making it work.
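To see just how tight that ration is, a quick back-of-the-envelope calculation helps. This is only a sketch: the 30-day month and the household sizes are assumptions for illustration, not figures from the committee.

```python
# Back-of-the-envelope check on the Kalyanpur ration. The 30-day month
# and the household sizes are assumptions made for illustration.
FAMILY_RATION_LITRES_PER_MONTH = 4_000
DAYS_PER_MONTH = 30

family_per_day = FAMILY_RATION_LITRES_PER_MONTH / DAYS_PER_MONTH  # ~133 L/day
for household_size in (4, 5):
    per_person = family_per_day / household_size
    print(f"household of {household_size}: ~{per_person:.0f} L per person per day")

# household of 4: ~33 L per person per day
# household of 5: ~27 L per person per day
```

Either way, each person in Kalyanpur is living on far less water than most urban households would consider minimal.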
Meanwhile, 900 kilometres away in the megacity of Mumbai, residents of a slum known as Kadam Chawl have developed their own urban-style water rationing system. Because their single municipal tap runs for only 20-30 minutes each evening, the chawl's residents have devised a well-choreographed rationing system in which all women and men gather at the tap 365 evenings a year to fill and quickly haul numerous large water pots to all homes.

When water becomes scarce, whether in the global South or North, rationing happens. Experience and research have shown that urging voluntary reductions in consumption is of little value, while raising prices to reduce demand is cruel and unworkable. In contrast, mandatory rules for restrained but equitable water consumption tend to foster a sense of common purpose in the face of scarcity.

Efficiency against fairness

In many cities around the globe, residential water supplies are routinely restricted to certain hours of the day. But the past year has seen a global outbreak of emergency water rationing in the face of sudden, extraordinary scarcity. In a diverse group of countries, including the Dominican Republic, Venezuela, Australia, Kenya, Ghana, Tanzania, Zimbabwe, South Africa, India, Pakistan, China, Taiwan, Malaysia and the Philippines, a wide variety of rationing plans have had to be put into practice. Rationing has even become necessary in normally moist, green places, most prominently the United Kingdom, Ireland and New Zealand.

But rationing cannot help when the community water supply is wholly inadequate. That is the case in many slum areas of Mumbai and other cities, where family members must trek several kilometres to purchase water from bootleggers by the one-litre plastic pouch. Those customers pay five to 10 times the price that middle-class or affluent families pay for their piped-in water. We may find those bootleggers contemptible, but in their own defence they would argue, correctly, that they are putting free-market principles into practice, simply responding to signals from the market.

Any economist can show you how the most efficient method of allocating water works out to be "marginal cost pricing", under which the first litre per week or month is the most expensive and the cost falls as consumption rises. That, of course, penalises low-income households and rewards heavy consumption. Therefore, many municipalities, from Durban to Las Vegas, have turned marginal cost pricing on its head. Under what are called increasing block tariff systems, each household has a monthly right to an initial "block" of water that is free or very cheap, with the price escalating sharply for subsequent blocks.

But there will always be a wide gap between what it costs to provide municipal water and what many urban dwellers can afford to pay for it. Even fairer pricing cannot guarantee the right to water when the system is expected to fund itself fully through fees, or even to turn a profit if privatised. The situation is aggravated when lavish consumption is permitted in affluent areas while other areas suffer inadequate service. Treating water as a market commodity almost inevitably leads to conflict.
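The mechanics of an increasing block tariff are simple enough to sketch in a few lines of code. This is an illustration only: the block sizes and prices below are invented, and real tariffs such as Durban's use different numbers, currencies and block counts.

```python
# Minimal sketch of an increasing block tariff, the inverse of marginal
# cost pricing. Block sizes and per-litre prices are invented examples.
BLOCKS = [
    (6_000, 0.000),          # first 6,000 litres per month are free
    (10_000, 0.008),         # next 10,000 litres at a modest per-litre rate
    (float("inf"), 0.025),   # all further consumption, priced steeply
]

def monthly_bill(litres: float) -> float:
    """Charge each successive block of consumption at an escalating rate."""
    bill = 0.0
    remaining = litres
    for block_size, price_per_litre in BLOCKS:
        used = min(remaining, block_size)
        bill += used * price_per_litre
        remaining -= used
        if remaining <= 0:
            break
    return bill

print(monthly_bill(5_000))    # 0.0   -- a frugal household pays nothing
print(monthly_bill(15_000))   # 72.0  -- moderate use, billed at the low rate
print(monthly_bill(40_000))   # 680.0 -- lavish use hits the steep top block
```

The design choice is the point: the household's basic needs are met cheaply or for free, while heavy consumption carries the cost of the system.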
Going to the source

A whopping 86 percent of the world's total fresh water consumption is accounted for by production of food, fibre and other agricultural products, and 9 percent is attributable to industrial production. Although a scant 5 percent of the footprint is residential water use, it is in the domestic supply where shortages are felt most immediately and most intensely by the majority of people. Often, rationing is necessary.

In many situations, the trigger for rationing has been much more complex than chronic drought or high population density. Thanks to greenhouse emissions, local climates are becoming increasingly fickle. The severe shortages that hit Dublin in late March can be traced to Europe's recent stretch of frigid weather, which froze pipes and caused leaks throughout the municipal water system. Early this month, water rationing in cities of northern and southern Taiwan - a policy made necessary by alarming drops in reservoir levels - coincided with heavy rains that caused flooding and landslides near the centre of the island.

Beyond climate disruption, a much wider variety of events and conditions can disrupt the flow. Headlong economic growth in Pune, India, and rapid industrial development in Moshi, Tanzania, are creating a need for water rationing in those cities. Mining of natural gas through hydraulic fracturing requires huge quantities of water, and it is competing with more immediate needs in the Midland-Odessa region of Texas - a place where strict water rationing has already been in place for years. The Three Gorges Dam on China's Yangtze River has boosted water supplies in some areas, but it has forced other, downstream towns and cities to ration.

Water-stressed Pakistan has similar concerns about India's plans to continue building dams upstream on the Indus and other rivers. Left alone, Palestine's West Bank would have ample reserves of renewable groundwater; however, neighbouring Israel's heavy extraction of water resources from the lower western edge of the West Bank's massive aquifer and from the northern and eastern borders of Gaza - along with its policy of forbidding well-drilling by Palestinians - has created an artificial scarcity that makes tight rationing necessary in the cities and villages of the occupied territories. Israel uses that pilfered water to maintain its high per-capita water consumption (which equals that of Australia or Denmark), while the average West Bank resident's daily ration is only 50 litres, and many get by on barely 20 - perilously close to the minimum supply required simply for bare survival.

Talk of looming worldwide conflict over water resources has been going on for years. But it is often conflict itself - state versus state, class versus class and, increasingly, humanity versus nature - that triggers water scarcity in the first place. The only long-term solution is to resolve such conflicts, to ensure that every community has an adequate water supply. But even then, as in Kalyanpur village, resources may not be bountiful, and rationing by some means other than ability-to-pay will be necessary.

If we cannot manage to conserve and share water fairly, there is little chance that we will manage to share other resources fairly. Enforcing the right to water is, or at least should be, less complex and contentious than ensuring rights to, say, energy, food or medical care. As Maude Barlow concluded in her 2007 book Blue Covenant: The Global Water Crisis and the Coming Battle for the Right to Water, "If ever there was a time for a plan of conservation and water justice to deal with the twin water crises of scarcity and inequity, now is that time. The world does not lack the knowledge about how to build a water-secure future; it lacks the political will."

Stan Cox's book Any Way You Slice It: The Past, Present, and Future of Rationing will be published by The New Press.

Source: Al Jazeera
Payments for environmental services (also known as payments for ecosystem services or PES) are payments to farmers or landowners who have agreed to take certain actions to manage their land or watersheds to provide an ecological service. As the payments provide incentives to landowners and managers, PES is a market-based mechanism, similar to subsidies and taxes, to encourage the conservation of natural resources. This approach recognises the important role that the environment plays in contributing to our wellbeing and economic prosperity, and the potential of market-based approaches to promote conservation and address environment-related market failures.

In PES schemes, people managing and using natural resources, typically forest owners or farmers, are paid to manage their resources to protect watersheds, conserve biodiversity or capture carbon dioxide (carbon sequestration) through, for example, replanting trees, keeping living trees standing or using different agricultural techniques. In some cases payments are made by the beneficiaries of the environmental services, such as water users and hydropower companies. In other cases, national or local governments pay on behalf of their citizens, who are indirect beneficiaries. The role of the private sector is growing among PES schemes at both international and local levels. While the approach is widely used on land, it is only just gaining momentum in coastal and marine ecosystems.

PES is an increasingly popular conservation and resource management tool in developing countries. IIED works with Southern country partners, particularly in Costa Rica, Brazil, Vietnam and Uganda, to explore the extent to which PES can help reduce poverty and satisfy economic and environmental objectives. But the insecure land and resource tenure of many poor people remains a key obstacle to their participating in and benefiting from PES schemes. Other obstacles many PES schemes face are complex, often bureaucratic project procedures and high project transaction costs. IIED's research findings are targeted at developing country governments, private firms, donor agencies and other organisations working in the field of PES, so they can learn from the knowledge we have gained through our hands-on action research approach with poor and marginalised groups in the South.

Learning lessons from Costa Rica's PES scheme

Costa Rica's pioneering programme of payments for environmental services (PES), which began in the 1990s, was a unique experiment in developing countries at that time. Farmers who owned forests could receive payments for the benefits their forests produced, and people who benefited from those services were expected to pay for them. How has the scheme evolved over time? What challenges has it faced? Who really benefits and loses from these programmes? And what are the social impacts of PES on people? A clearer understanding of these issues in PES-type projects is becoming increasingly important for the design of large-scale projects such as REDD+, in Costa Rica and elsewhere.

The results of our in-depth study of the social impacts of the programme show that payments tend to go to relatively large farms and private companies. More needs to be done for PES to have genuine social and economic benefits for the poor. The publication recommends steps that could be taken to help make this happen. (A Spanish translation is also available.)

A short briefing paper reflects on the experiences and lessons learnt from Costa Rica's PES programme and identifies four key areas for future action to expand on and improve existing efforts. A companion paper explains how focused data on poor farmers will help get payments for protecting forests where they will count the most. For more up-to-date insights into PES issues and opinions, read our related blog posts.

Paying for watershed services: an effective tool in the developing world

Payments for watershed services (PWS) are an increasingly popular conservation and water management tool in developing countries. Yet financing PWS schemes remains a challenge because the actual evidence for their effectiveness is still scanty — it is hard to prove that they actually work to benefit both livelihoods and environments. Getting more direct and concrete data on costs and benefits will be crucial to securing the long-term future of PWS schemes, particularly given the considerable expansion in the number of schemes and proposals for PWS.

The marine sector and PES

Coastal and marine resources provide millions of impoverished people across the global South with livelihoods, and provide the world with a range of critical 'ecosystem services'. Yet across the world, these resources are fast-diminishing. Traditional approaches to halting the decline of fisheries and other marine resources focus on regulating against destructive practices, but to little effect. A more successful strategy could be to establish PES schemes, or to incorporate an element of PES into existing regulatory mechanisms. Examples from across the world suggest that PES can work to protect both livelihoods and environments. But to succeed, these schemes must be underpinned by robust research, clear property rights, equitable benefit sharing and sustainable finance. For up-to-date opinions on key marine issues, read our related blog posts.

Paying local landowners for ecosystem services to protect chimpanzee forests in Uganda

Chimpanzees in Uganda are under threat as their habitat is lost to agriculture and human settlements. Central to this problem is the attitude of most farmers that chimpanzees and forest habitat conservation are a threat to their own livelihoods. IIED aims to show how an equitable and financially sustainable payment scheme can compensate local landholders for conserving and restoring forest habitats and for protecting chimpanzee populations.
by Dr. Abdel H. Ragab

Mummification is an ancient, mysterious art. It was important because the ancient Egyptians believed in an afterlife where the spirit, the ka, would return to the original body. The process of mummification took many years to perfect, probably a thousand years. The whole process depends on desiccation. Bacteria and fungi, like all other living organisms, require water to survive. Therefore, if water is removed from the body, putrefaction will not occur.

Mummification was carried out in special places called "The House of the Dead." The morticians were qualified people who carried on this art. They were outcasts from the rest of the population, since people feared that they carried infections from dead people. The process of mummification took 70 days and was expensive, so it was reserved for royalty and nobility.

The process began by laying the corpse on a table. An incision was made on the left side of the abdomen, and all the organs were taken out, but not the heart. The individual organs were wrapped in cloth with natron salt and put in canopic jars (Fig. 1). Natron salt is a combination of sodium bicarbonate and sodium chloride, and was obtained from the Natron Valley in the Western Desert (Wadi El-Natron). The brain was removed by making a puncture through the nose, and its remnants were then extracted and discarded. The eyeballs were also removed and artificial eyeballs were put in their place.

The next step was to cover the body with natron salt for 40 days (Fig. 2). The salt was changed every few days. At the end of this period the body was completely desiccated and had lost 70% of its weight (since the body is 70% water). After that the abdomen was again opened and filled with myrrh and frankincense to cover the smell. Different products of resin and coconut oil were spread on the skin, making it impermeable to atmospheric humidity (Fig. 3). Afterwards the body was wrapped in linen bandages; Tutankhamun's body was wrapped in 12 layers of bandage (Fig. 4). Amulets were spread over the body to keep the evil spirits away, and a copy of the Book of the Dead papyrus rolls was put next to the body to guide him through the afterlife (Fig. 5).

The body was then put in a sarcophagus and carried to the burial chamber. The pharaoh's or nobleman's favorite animal was also mummified, and all his important possessions were put in his tomb. The priest then performed the Opening of the Mouth ceremony to ensure the return of his senses and his ability to respond to questions in the afterlife.
November 29, 2012

On November 14, believers in the Hindu religion, along with many other Hamilton students, celebrated the festival of lights—the all-important Hindu holiday of Diwali.

Diwali is considered to be the most important holiday in the Hindu tradition and marks the end of the financial year for businesses. "Diwali is as important to us as Christmas is for Christians," Yan Zhong Zhen '13 explained. It is typically celebrated each year in October or November and lasts for five days. Each day has a special significance. On the third day of Diwali, for example, observers pray for wealth for the following year.

Diwali is celebrated differently throughout India. In one tradition, Hindus celebrate the return of Lord Rama after his fourteen years of exile. Nevertheless, certain things are universal in the celebration of Diwali. For instance, clay lamps are used by all Hindus, along with powder patterns in the form of lotuses signifying welcome. Additionally, it is typically celebrated with sweet treats, friends, family and fireworks. The main idea that transcends all celebrations is the victory of good over evil.

"Diwali is one of my favorite events of the year because I'm able to share the epic story of the Ramayana with the campus," said Luxsika Junboonta '13.

Sponsored by the South Asian Student Association and the Asian Cultural Society, the event drew many students to a packed Annex to listen to the history of Diwali, witness the Cornell Bhangra Dance Team and eat a delicious dinner from Minar.

Puru Gautam '16, a Hindu student, was glad he was able to celebrate such an important holiday for his religion, particularly since it is his first year away from home. "I'm just glad people can learn about other cultures, particularly mine," said Gautam.

Eliza Kenney '15 enjoyed the experience as someone who is not of the Hindu faith. "I was so glad to see how the Hindu people celebrate Diwali, especially since I have never been a part of such a celebration before," said Kenney. "I think it's really great they included non-Hindus in this celebration as well."
The factors affecting the power consumption of a fridge or freezer can be broken down into four broad categories:

Fridge design: This is a very important factor when it comes to energy consumption. Unfortunately, it cannot usually be easily modified by the end user.
Positioning: In some circumstances the fridge's position can be changed.
Usage patterns: Determined by users.
Maintenance: Determined by users.

Each of these broad categories can be further broken down.

Fridge design

This is probably the main factor in energy consumption. Unfortunately, fridges are normally built to a price and not a performance level. Such factors include:

The thickness (and type) of insulation in the external walls of the fridge. I remember an ad for one brand of fridge 6 or 7 years ago which had as one of its selling points: "Look how thin the walls are". The worrying thing is that it was one of the most efficient fridges on the market at the time!

The efficiency and positioning of the compressor pump. The pump is normally located at the bottom of the fridge. Most of the pumps are quite inefficient and get very warm (too hot to touch for more than a second or so). What happens to the heat these pumps give off? As hot air is prone to do, it rises and warms up the part of the cabinet immediately above it. This heat finds its way into the fridge, which means the compressor has to work harder to get rid of it, which means it heats up more, and so on.

Defrosting method used. Cyclic defrost models have a low-wattage heater in the evaporation plate which is meant to turn on and off as required. With my fridge, the heater appeared to be on most of the time the compressor was off. Frost-free models have the heaters inside the walls of the fridge.

Exposed condenser coil at rear, or "clean back". The newer "clean back" fridges normally have the condenser coils built into the rear and side walls, providing a larger area for cooling than those with the condenser coils exposed at the rear. Also, because they're not exposed, they don't gather the dust that reduces the efficiency of exposed-coil models. Note that some new smaller fridges (220 litres and less) still have the exposed coils at the rear. Also note that it may be possible to add extra external insulation to the models with the exposed rear condenser, but not to the "clean back" models.

The quality and condition of the door seals. This is also an ongoing maintenance issue.

Positioning

Fridges should ideally be in the coldest part of the house, with good air flow around them. Unfortunately, the kitchen doesn't normally fit into this category. (How often have you seen a dedicated "fridge cavity" located beside a stove?) Some suggestions to consider where appropriate:

Encourage air circulation around the fridge. This may involve moving the fridge out a few cm from the walls, or elevating it a little to facilitate natural air flows. A solar (or normal) fan could be installed near the compressor to improve air movement. If the house is elevated, a hole (with appropriate vermin protection) could be drilled under the fridge, which would draw cooler air up from under the house. Another warm-air exhaust to the outside could be installed above the fridge.

Usage patterns

Organise things in the fridge so you don't have to stand there with the door open for long periods while you search for that elusive jar. Temperatures vary in some fridges, so things that need to be cooler could be placed in the cooler parts of the fridge. (Use a fridge thermometer to find the cooler parts.)

Only refrigerate what needs to be refrigerated. Things have a habit of gravitating into the fridge when they don't really need to be there. It may be appropriate to refrigerate some things in summer, but not in winter. Think about what you're refrigerating and why. Do things like drinking water really need to be refrigerated? Why not leave a water container out on the bench? If people want water or other drinks cooler than the ambient temperature, use an insulated flask with either cool water/drink and/or ice blocks in it. At least then you'll only have to open the fridge once every few hours for cold drinks rather than every few minutes.

If you have large empty spaces in the fridge, fill them up with containers of water. This way, when the fridge door is open, less of the cold air will "fall out". This also adds to the thermal mass of the contents, which helps maintain a steady temperature inside the fridge. Note: adequate room should be left around containers for the air to circulate, so don't cram everything up too tightly.

Maintenance

Check the temperature inside the fridge/freezer with a thermometer. Note that different temperatures may be recorded in different positions. Adjust the temperature dial so the correct temperatures are achieved (normally a maximum of 4°C for the fridge, and -18°C for the freezer). Note that internal temperatures can vary with external temperatures, so the dial settings may need to be changed from time to time.

Check that the door seals are clean and in good condition, and that the door is sealing properly. If you have a fridge with exposed coils at the back, dust them down from time to time. Check that there's not a build-up of ice around the fridge/freezer, and defrost if necessary.

My Fridge & Freezer

The graph to the right shows the combined power consumed by my fridge and freezer at varying ambient temperatures. The graph is a best-fit line of data taken over a six-month period. Two people were living in the house during this period.

Fridge: 330 litre upright, fresh-food only (no freezer), with exposed condenser coils at rear. Purchased 1994. Quoted power consumption: 490 kWh/year.

Freezer: 220 litre chest freezer. Purchased 1994. Quoted power consumption: 350 kWh/year.

The fridge has been modified slightly: the cyclic defrost heater has a switch in series with the heating element. I leave this switched off most of the time. This has reduced the fridge's power consumption by around 300 Wh/day (approximately halving its consumption in winter, from 600 to 300 Wh/day).
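As a rough sanity check on those numbers, the script below annualizes the defrost-heater saving and puts an indicative dollar figure on it. The 300 Wh/day reduction and the 490 kWh/year quote come from the figures above; the electricity tariff is an assumed placeholder, not from this page.

```python
# Annualize the ~300 Wh/day saving from switching off the cyclic-defrost
# heater. The electricity tariff below is an assumption; use your own rate.
QUOTED_FRIDGE_KWH_PER_YEAR = 490   # manufacturer's figure, quoted above
HEATER_SAVING_WH_PER_DAY = 300     # measured reduction, quoted above
ASSUMED_PRICE_PER_KWH = 0.25       # placeholder tariff

saving_kwh = HEATER_SAVING_WH_PER_DAY * 365 / 1000   # ~110 kWh/year
share = saving_kwh / QUOTED_FRIDGE_KWH_PER_YEAR      # ~22% of the quoted use
print(f"saving: ~{saving_kwh:.0f} kWh/year ({share:.0%} of the quoted figure)")
print(f"worth roughly ${saving_kwh * ASSUMED_PRICE_PER_KWH:.0f} per year")
```

A single switch added to one heater element recovers on the order of a fifth of the fridge's rated annual consumption, which is why it is worth checking how often a cyclic defrost heater actually needs to run.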
Labor Day weekend rocket fun!

GREENBELT, Md. (NASA GODDARD) – Model rocket enthusiasts are invited to launch their rockets September 1 from the Visitor Center at NASA's Goddard Space Flight Center in Greenbelt, Md., and learn about NASA's next mission to the moon: the Lunar Atmosphere and Dust Environment Explorer (LADEE). All are welcome to view the launches and learn about the mission.

In partnership with NASA's Ames Research Center in Silicon Valley, Calif., Goddard's Wallops Flight Facility will launch LADEE, a robotic mission that will study the moon's thin atmosphere and dust particles, in September. Ames designed, developed, built and tested the spacecraft and will manage the 100-day mission, which will attempt to confirm whether dust caused a mysterious glow on the lunar horizon that astronauts observed during several Apollo missions. Goddard plays a variety of key roles in LADEE.

The Goddard event begins at 1 p.m. EDT. The launch system will be provided, and technical support will be available for those who have never launched a model rocket before. Rocketeers will need to supply their own rocket, engine and wadding. The engine limit is a D engine or less. These items are available for purchase at the gift shop located next to the visitor center.

Attendees can also inspect a model of LADEE, and Sarah Noble, LADEE program scientist, will give a talk about the mission. Lora Bleacher, NASA educator, will be available to answer questions about the mission, the moon, and NASA's other recent lunar missions, including the Lunar Reconnaissance Orbiter.

Directions to the Visitor Center are available at:

For more information on the gift shop, refer to:

After LADEE is launched, Ames will control the spacecraft and execute mission operations. Goddard is responsible for the LADEE launch and several important LADEE components, including a scientific instrument and demonstrations of the mission's payload, such as a state-of-the-art laser communications system, as well as science operations.

For more about LADEE, refer to:
Learn about feeding, hand-feeding, range, size, flight, torpor, where they sleep, migration and more…

The hummingbird is a small bird of the Trochilidae family. The rapid beating of the hummingbird's wings (60 to 80 beats per second) makes the distinctive humming sound from which they get their name.

Interesting facts about hummingbirds:

* The hummingbird is the smallest bird and also the smallest of all animals that have a backbone.
* They have no sense of smell.
* Because the hummingbird can rotate its wings in a circle, they are the only birds that can fly forwards, backwards, up, down, sideways and hover in mid air.
* To conserve energy while they sleep or when food is scarce, they can go into a hibernation-like state (torpor) where their metabolic rate is slowed to 1/15th of its normal rate.
* During migration, some hummingbirds make a non-stop 500 mile flight over the Gulf of Mexico.
* During courtship dives they can reach speeds up to 60 miles per hour, and they average speeds of 20 to 30 miles per hour in normal flight.
* They are the second largest family of birds, with 343 species.
* Their wings can beat up to 80 times a second during normal flight and up to 200 times per second during a courtship dive.
* The hummingbird has a heart rate that can reach up to 1,260 beats per minute.
* Percentage-wise, the hummingbird has the largest brain of all birds (4.2% of its total body weight).
* They have very weak feet and use them mainly just for perching.

The Hummingbird Range (where do hummingbirds live in the world)

Hummingbirds are found only in North America and South America. Their range extends as far north as southeastern Alaska and as far south as southern Chile. South America has the biggest variety of hummingbirds, and more than half the species are found there. The country of Ecuador in northwestern South America has the largest number of hummingbirds of any one country, with 163 different species.

There are over fifty species of hummingbirds that regularly breed in Mexico. Sixteen different species of hummingbirds breed in the United States, but the Ruby-throated Hummingbird is the only one that breeds east of the Mississippi River. Four species breed in Canada.

The 16 species of hummingbirds that breed in the United States:

* Allen's Selasphorus sasin
* Anna's Calypte anna
* black-chinned Archilochus alexandri
* broad-tailed Selasphorus platycercus
* calliope Stellula calliope
* Costa's Calypte costae
* rufous Selasphorus rufus
* berylline Amazilia beryllina
* blue-throated Lampornis clemenciae
* broad-billed Cynanthus latirostris
* buff-bellied Amazilia yucatenensis
* lucifer Calothorax lucifer
* magnificent (Rivoli's) Eugenes fulgens
* ruby-throated Archilochus colubris
* violet-crowned Amazilia violiceps
* white-eared Hylocharis leucotis

The Hummingbird Size Information

Hummingbirds' sizes range from the smallest, the Bee Hummingbird of Cuba, which weighs about 2.2 grams, to the largest, the Giant Hummingbird of South America, which weighs about 20 grams. The smallest hummingbird, and in fact the smallest bird on earth, is the Bee Hummingbird. With a length of only 2.25 inches, the Bee Hummingbird isn't much larger than a bee. The largest hummingbird, the Giant Hummingbird, is about 8 inches in length, or about the size of a large starling.

The common Ruby-throated Hummingbird that most people are familiar with weighs about 3 grams. A hummingbird's weight will almost double as it puts on fat in getting ready for migration. Ruby-throated Hummingbirds are about 3 1/2 inches from the tip of their beaks to the tip of their tails. Female Ruby-throated Hummingbirds are about 15 to 20% larger than the males.

Giant hummingbird videos: This baby Giant Hummingbird landed on our ledge. Normally they are very timid, but this one stayed around for over an hour. Even as a baby, they are still larger than all other hummingbirds.

Most people recognize hummingbirds as the very tiny creatures we often see near our homes. However, not all hummingbirds are small. Giant Hummingbirds are the largest of the hummingbird species. The North Carolina Zoo exhibits five giant hummingbirds, which are very rare in captivity, in its Sonoran Desert habitat.

The Hummingbird Flight Information

Unlike other birds, a hummingbird can rotate its wings in a circle. Because of this, they are the only birds that can fly both forwards and backwards. They can also fly up, down, sideways, hover in one spot, or fly upside down for short distances. The hummingbird's flight muscles make up 30% of its total body weight.

The hummingbird video: watch this incredible video of a hummingbird's wing movement in super slow motion. They flap their wings up to 70 times per second; their heart rate can reach 1,260 beats per minute.

Normal flight speed for hummingbirds is about 25 miles per hour, but they have been clocked at speeds in excess of fifty miles per hour during their courtship dives. During normal flight the wings beat about 60-80 times per second; in courtship dives they might beat up to 200 times per second.

A courtship dive is an elaborate display of flight performed by the male hummingbird at the start of the nesting season. The male hummingbird will climb high into the air (up to 60 feet or more), dive towards the ground and, forming a wide arc, climb back into the air to about the same height. These dives, forming a wide U-shaped pattern, may be performed 3 or 4 times in rapid succession. Courtship dives are performed to attract the attention of the female hummingbirds and to ward off other male hummingbirds that might be in the area.

The Anna's hummingbird courtship video shows everything from the adding of pieces of lichen and plant down to the nest, to the courtship dive of the male bird to get the female's attention, to the male and female Anna's in flight together, followed by the eggs in the nest. Then see the female Anna's hummingbird feeding the very tiny newly hatched baby hummingbirds, the babies testing their wings, and finally the female feeding a baby hummingbird after it has left the nest.

First hummingbird video of the Marvelous Spatuletail's amazing courtship display, by Greg R. Homel, Natural Elements Productions, distributed by American Bird Conservancy, http://www.abcbirds.org. This rare hummingbird inhabits the highlands of Peru.

The Hummingbird Life Span

Most hummingbirds unfortunately die during their first year, but those that do survive that first year have an average life span of 3 to 4 years. The longest recorded life span is from a female Broad-tailed Hummingbird that was tagged and then recaptured 12 years later, making her at least 12 years old. The oldest known surviving Ruby-throated Hummingbird is a banded bird that was 6 years 11 months old. The oldest known Rufous Hummingbird is a banded bird that was 8 years 1 month old.

The Hummingbird Feeding Information (interesting hummingbird facts)

They will feed 5 to 8 times every hour, for 30 to 60 seconds at a time. The largest portion of a hummingbird's diet is sugar, which they get from flower nectar and tree sap. They also eat insects and pollen to get protein to build muscle. They are also easily attracted to hummingbird nectar feeders.

My favorite feeder is the Aspects 12oz HummZinger Ultra With Nectar Guard. It's inexpensive and has several features that make it well worth the price. The HummZinger has patented Nectar Guard tips, flexible membranes attached to the feed ports that prohibit entry by flying insects but allow hummingbirds to feed as usual. The HummZinger also has a built-in ant moat that will stop crawling insects from getting to the nectar, and raised flower ports that divert rain. This mid-size nectar feeder has a 12 oz. capacity and can be hung or post mounted with the hardware provided. It has four feeding ports for hummingbirds, is made of unbreakable polycarbonate, and is easy to clean. For ease of cleaning and protection from bees, wasps and ants, this feeder can't be beat.

Hummingbirds have the highest metabolic rate of any animal on earth. They have a high breathing rate, a high heart rate, and a high body temperature. To maintain all of this and to provide energy for flying, they may consume anywhere from 2/3 to 3 times their body weight in food each day.

Hummingbirds' bills are long and tapered to match perfectly with the tubular-shaped blooms on which they like to feed. Their tongue is grooved on the sides to collect nectar, which they lap up at the rate of 13 licks per second.

They are very territorial and will perch in trees, vines or bushes between feedings to watch the area, and will attack other birds that might try to feed at their food source.

They are also very helpful in pollinating the plants on which they feed; some plants are pollinated only by hummingbirds. As they lap up the nectar, pollen from the bloom rubs off onto the hummingbird and is then carried to the next bloom as the bird continues to feed.

The hummingbird video: facts about the hummingbird, flowers to attract hummingbirds, tips on attracting hummingbirds, tips on hummingbird feeders, making hummingbird nectar, hanging feeders, keeping ants away from the hummingbird feeder, and more interesting hummingbird facts. Watch the video to see hundreds of hummingbirds feeding at the same time at Hummingbird's Haven. Incredible! Another video at Hummingbird's Haven follows.

The Hummingbird Sounds

Hummingbird sounds are of two types: vocalizations and the sounds their wings make. Hummingbirds lack a true song; most of their vocalizations consist of chirping sounds. Hummingbirds frequently vocalize to attract a mate or when they are excited. They are named for the humming sound made by the rapid movement of their wings when they are in flight.

The hummingbird video: female Ruby-throated hummingbird sounds. You can really hear the hum of her wings as they beat about 60-80 times per second.

Video on hummingbird sounds: scientists at the University of California, Berkeley have analyzed the chirp made by male Anna's hummingbirds as they swoop down towards a female.

The hummingbird sounds video: a hummingbird (possibly a juvenile Anna's) chirping.

Video about hummingbird sounds: Colibri hummingbird chirping.

Torpor

Torpor is a hibernation-like state that the hummingbird can enter to help conserve energy. While in a state of torpor, the hummingbird will lower its body temperature by about 20 degrees, and by up to 50 degrees. This helps the bird conserve energy on cold nights or at any time that food might be scarce. The next morning the bird can raise its metabolism and get its body temperature back to normal, usually within a few minutes, though it can take up to an hour. They can even lower their heart rate from 500 beats per minute to as few as 50, and, also to conserve energy, hummingbirds may even stop breathing for periods of time. Even with all these energy conservation abilities, a cold night or difficulty locating enough food for a day can prove fatal to a hummingbird.

Where do hummingbirds sleep?

They will find a tree in an area that offers some protection, where they will perch on a branch to sleep. Thick trees such as firs offer protection from the elements and are favorites of hummingbirds. The hummingbird will grasp the branch with its feet and go into a state of torpor to help conserve energy while it sleeps. While in this state of semi-hibernation the hummingbird will sometimes loosen its grasp a little and be found hanging upside down on the branch. When the sun comes out and warms them up, though, they will resume their normal activities.

Below you will find several sleeping hummingbird videos that show them hanging upside down in a state of torpor.

Here's a video that talks about torpor and about where hummingbirds sleep.

Here's another: hummingbirds go into a state of torpor when they sleep (this one was hanging upside down outside our window for about 30 minutes). In this state of torpor they become hypothermic, burn far less energy, have almost no pulse, and only become alert when approached.

The Hummingbird Migration

Two untrue hummingbird "facts": that hummingbirds migrate on the backs of geese, and that keeping your feeders out too long in the fall will upset the hummingbirds' normal migration pattern. In reality, their migration is caused by hormonal changes that take place within the hummingbird's body, triggered by the changing length of daylight. Since it is the shorter hours of daylight in the fall that cause the hummingbirds to migrate, you don't have to worry that keeping your feeders out too long in the fall will cause the birds to hang around and not migrate.

Many species of hummingbirds that migrate to the United States must travel very long distances from Mexico and Central America to get here. Many Ruby-throated Hummingbirds must travel 2,000 miles to go from Panama to their destination in Canada. One of the most incredible facts about hummingbirds is that this 2,000 mile journey also includes a 500 mile non-stop flight across the Gulf of Mexico.

The hummingbird that travels the farthest north to breed is the Rufous, which travels all the way to Alaska.

Copyright 2010 | Michael D. Baughman
Running late to meet a friend and can't quite remember which alleyway the Urbanspoon-recommended café is on? No worries, your Samsung Galaxy Note 4 smartphone (so much smarter than you) can map* out the route and facilitate your instantaneous "B there in 5, soz!" message. You arrive, check in on FB, go through some IRL catch-up formalities with your mate, send a sneaky Snapchat of them to another friend (Look who I found!), upload an Instagram pic of your poached eggs (captioned: #brunchenvy), check your newsfeed and then summarise the experience in a well-crafted 140-character tweet.**

But this was not always the case. Oh, no. From smoke signals to carrier pigeons, newspapers etched in stone to the rise and fall of MySpace, the way we communicate with each other has changed significantly over time thanks to advancements in technology. People have been communicating with each other long-distance since way BC, but it was not the instantaneous, meme-centric interaction we have come to know and love today. Read on for a brief recap of communication through the ages:

Smoke signals: Smoke signals are the oldest form of visual communication. Simplistic in design and execution, they were first used in 200 BC to send messages along the Great Wall of China. In 150 BC, the Greek historian Polybius devised a system of smoke signals that were visual representations of the alphabet, which meant that messages could easily be sent by holding sets of torches in pairs. State of the art!

Carrier pigeon: In the 12th century AD, Sultan Nur-ed-din built pigeon lofts and dovecotes in Cairo and Damascus, where pigeons were used to carry messages from Egypt to cities as far away as Baghdad in modern-day Iraq. This extensive communication system, which used pigeons to link cities hundreds of kilometres apart, is recognised as the first organised pigeon messaging service of its kind. Pigeons also played a pivotal part in both WWI and WWII, unerringly delivering vital messages that helped to save the lives of thousands of civilians and combatants alike. One such bird, 'GI Joe', was awarded the Dickin Medal for bravery by the Lord Mayor of London for saving over 1,000 British soldiers in World War II. Good to know for when the wireless drops out in the office.

Telegraph: No, not the Daily Telegraph. The telegraph is a now-outdated communication system that transmitted electric signals over wires from location to location, where they were translated into a message. In 1844, Samuel Morse sent his first telegraph message, from Washington D.C. to Baltimore, Maryland. While the 21st century saw the death of the telegraph, there's no doubt it laid the groundwork for the communications revolution that led to the telephone, fax machine and Internet. Cheers, Morse!

Landlines: Before the cellular phone, there existed these things called landlines. Most households had one from the 1950s onwards, and only one person could make a call at a time. Ah, 'twas a time of untraceable prank calls and hilarious family answering machine greetings. A time when your privacy was dependent on how long your home-phone cord was, and when the cost of calling a mobile phone was astronomical (forcing you to hide from your parents when the monthly bill came in the mail).

Dial-up Internet: The archaic way to connect to the world wide web. A time before Wi-Fi, when your mum picking up the phone meant that your LimeWire single-song download would be delayed for another whole day, your Neopets abandoned and your MSN conversations with your BFFs cut short. Social suicide.

SMS: The first text message ever sent was in 1992. It simply read 'Merry Christmas' and was sent to the CEO of Vodafone. Now over 8.6 trillion are sent each year. OMG :O thts so0o many!!!

Facebook: R.I.P. Tom from MySpace. The social networking site Facebook was invented by Mark Zuckerberg in 2004 and was originally purposed to connect Harvard students with one another. Now it boasts 1.23 billion monthly users (or 1/6th of the world's population). Those users have made 201.6 billion friend connections and have clicked the 'like' button 3.4 trillion times. On Facebook, every day is a reunion. That girl you went to high school with and haven't spoken to since year 7 maths (when you asked to borrow an eraser) is now flooding your newsfeed with status updates about her breakfast and photos of her newborn. Facebook means that the hippy you met in the depths of a Mexican jungle is now your friend for life. Facebook is not just a way for us to stay socially connected, either; it's also an extremely profitable marketing tool used by savvy businesses to connect with new and pre-existing customers. It's hard to imagine life without it.

The Samsung Galaxy Note 4 smartphone: The future is now. The new Samsung Galaxy Note 4 is 5.7 inches of pure innovative technology that will help keep you connected with all the important people in your life across a multitude of social-network portals, all the while giving you a display resolution of 2560 x 1440 pixels and making web browsing* (Instagram trawling) an incredibly sensory experience. The screen can automatically adjust to various surroundings and lighting conditions, so when you're hanging out in the park, you can send a vital flirty text to your significant other. The Samsung Galaxy Note 4 also has multi-window functionality, meaning you can have more than one window open at a time! The communication possibilities are huge. SMS, Twitter, Instagram, Facebook, Email*: stay on top of them all.

The Samsung Galaxy Note 4's advanced S Pen and Air Command let you draw, write, drag and drop, select multiple items or hover for quick info as well. So. Much. Multi-tasking. The large battery, Ultra Power Saving mode and fast adaptive charging help minimise the need to be chained to your charger. The 16 megapixel Smart OIS back camera and 3.7 megapixel front-facing F1.9 lens with Wide Selfie mode help maximise the number of friends you can fit into one photo. Ultimate selfie!

But, if it's all too much, remember: all you need to send a smoke signal is a blanket and the wits/materials needed to build a fire.

*Internet connection required. Data and subscription charges may apply.

**Applications may need to be downloaded from Google Play. Internet connection required. Data, subscription and other charges may apply.
Technological advancements have long played pivotal roles in the practice of architecture and related academic discourse. Advances in materials, systems, and manufacturing have reshaped our built landscape and reconfigured processes of design and construction. Contemporary design and construction processes have been heavily influenced by the systems of mass production developed at the end of the 19th century. While most buildings are singular, specific constructions, many facets of their composition are assembled from universal components. As a result, an architect's ability to deviate from these norms has often been precluded by issues of time and cost.

Today, this is changing rapidly, as digital media are transforming the practice of architecture and its allied disciplines. While computing as a design tool has been in use for more than forty years and has been applied in production processes in the aerospace and automotive industries, only now has its presence permeated further into the practice of architecture. Boundaries between architect, consultant, and fabricator are shifting, and new approaches to building are emerging with the digital building model as the instrument of communication throughout the process, from file to factory. Ironically, the pervasiveness of the digital has ushered in a level of control over the physical structure absent for much of the past century. This new control is changing the way architects think about their tasks: as we enter an era in which computing power and manufacturing sophistication allow us to design and construct nearly anything conceivable, architects and schools of architecture must increasingly ask "why?" and "to what end?"

The Carnegie Mellon School of Architecture's Fabrication Lab provides a venue through which students and faculty can gain experience with this new reality of the profession. It will be a vehicle for the use of advanced digitally driven design, prototyping and manufacturing equipment, fostering a context in which students and faculty are better equipped to probe the potential of pervasive digital design and manufacturing processes. Fundamental to this is the understanding that architecture exists in the physical world and the belief that the physical realm of design investigation is a necessary complement to virtual simulation. As such, the Fabrication Lab is a bridge between the digital and the physical and is intended to be utilized throughout the design process at multiple scales. Furthermore, the Fabrication Lab will equip young professionals with the skills to thrive in an increasingly fluid and technologically sophisticated model of practice. This facility is a natural fit in a school of architecture with a strong legacy of innovation in design education and at a university renowned for the advancement and application of technology.

– Jeremy Ficca, dFAB Director / Associate Professor
Monday Reads: The Arctic Nights Edition

As portions of the contiguous United States find themselves (perhaps a bit uncomfortably) in winter's chilly embrace, a recently published study in the scientific journal Marine Biology may shed new light on the wintry lifestyles of the Arctic regions of our country. During this season, Arctic areas like the Beaufort and Chukchi seas, off the northern coast of Alaska, experience months of 'polar nights', times when the sun fails to make an appearance (making for a veritable vampire haven, one might say). The extreme cold of these winter months is key to the survival of species like the polar bear and the ringed seal, which depend on the restoration of thick sea ice (long since diminished during the warmer spring and summer months) in order to hunt and raise their young.

The little that we do know about Arctic ecosystems and the roles they play in Earth's greater ecosystem, though, is vastly overshadowed by what we don't know. As Earthjustice Attorney Holly Harris put it, even "federal government scientists admit big gaps in what they know about the basic features of the Arctic Ocean, like where various species of fish and marine mammals live and feed at different times of the year, how ocean currents move and affect the food chain in this ocean and how an oil spill could be stopped and cleaned up under frozen ice."

The paper, "Bioluminescence in the high Arctic during the polar night", underscored this vacuum of knowledge, reporting on fascinating, previously unknown behaviors from a study conducted in the Arctic region of Svalbard. For the first time, not only was bioluminescence observed during these polar nights, but seabirds from kittiwakes to guillemots were seen foraging (somehow) in the darkness. The scientists saw what was occurring, but, like so many aspects of Arctic wildlife, the whys and hows are yet to be understood. That these discoveries have up-ended previously held assumptions about the Arctic ecosystem is a concise example of how much we still don't know about the Arctic. Significantly, the researchers concluded:

[These] results open new lines of enquiry regarding the function and process during a time of year when classical paradigms of Arctic ecosystems postulate that organisms are predominately in a state of hibernation … Ultimately, these questions also have implications for human activities (i.e., oil exploration) in the high Arctic, which up until now has been considered "without life" during the polar night.

Over the past several months, the Environmental Protection Agency has approved permits for Shell's massive drilling fleet and the pollution the ships will bring to the Arctic air, while the Interior Department has affirmed the sale of Chukchi Sea Lease 193 for oil and gas development. All are unnerving steps towards irreparably harming critical ecosystems we know so little about, ecosystems that are already suffering from the far-flung effects of black carbon pollution and climate change.

For many, the Arctic connotes icy, desolate wilderness, with little evidence of life and even fewer reasons to recommend its protection and preservation. The reality, however, is that it is a lush, complex landscape ranging from verdant summers to winters of renewal, as acclaimed wildlife photographer Florian Schulz has documented.

The months and weeks are counting down to summer 2012, when Shell intends to begin an aggressive Arctic drilling plan. Stay tuned to Earthjustice to find out how you can lend your voice to help protect these irreplaceable ways of life.
<urn:uuid:4239363d-97bd-41a1-a127-a8b9380ae69e>
CC-MAIN-2016-26
http://earthjustice.org/blog/2012-february/monday-reads-the-arctic-nights-edition
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00045-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947644
753
2.703125
3
1. Design of Multiple-input Converters for Integration of Diverse Distributed Generation Sources – Prof. Alexis Kwasinski

One of the main barriers to increased use of renewable energy sources is their variability and low availability (e.g., in the extreme ideal case of a place where clouds never form, solar power is available on average 50% of the time). One identified approach that mitigates individual renewable energy source variability in order to provide a more continuous power supply is to diversify energy sources through hybrid systems that combine two or more types of sources. However, in these hybrid systems there is always a tradeoff between higher power electronic interface availability—achieved through modular redundant configurations—and lower cost—achieved through a single center converter. The PI has shown that multiple-input converters can achieve availabilities equivalent to those obtained from modular single-input converters, but at a cost in between that of center converters and of modular single-input converters. Multiple-input converters are built from their single-input version by multiplying the input stage and sharing the output stage. Multiple-input converters not only provide high availability, but also a more effective way of integrating renewable sources. For example, multiple-input converters allow individual maximum power point tracking in each photovoltaic module, which otherwise would be limited by the worst-performing component of the array. Therefore, the goal of this project is to study multiple-input dc-dc converters, with special focus on integrating energy storage and improving efficiency.

2. Data Collection Framework for Home Energy Management Systems – Prof. Alexis Kwasinski, Prof. Robert Hebner

One of the envisioned advantages of smart grid technology development is the creation of a more customer-oriented electric power environment. Arguably, the critical component within this customer-centric environment that allows more effective interactions between the power supply (i.e., utility) side and the power point-of-use (i.e., user loads) side is the Home Energy Management System (HEMS). The HEMS acts as a communication portal and electric management interface between the distribution side of the smart grid and the user's home or business power installation. Thus, the HEMS collects and processes data both from the utility, such as pricing information, and from local electric devices at the user premises, such as local energy storage levels, or expected consumption needs from loads such as smart air conditioners or electric vehicles. The goal of this research project is to explore HEMS data collection needs on the customer side of a smart grid by identifying which data to collect, how to collect it, and where to measure it. For example, one question to be addressed is: what is the optimal sampling rate? Another relevant question is: can fewer points be measured, with that information used to infer the behavior of the rest of the system? This project is intended to support standards development and evaluation as part of the Pecan Street Project.

3. Dynamic Availability Estimator for Smart Grid Systems with Energy Storage – Prof. Alexis Kwasinski

Some of the expected advantages of smart grids include higher power availability, higher penetration of distributed renewable energy generation sources, and increased electrification of the transportation sector.
Energy storage devices, and in particular batteries, have been identified as an important enabling technology necessary to achieve the benefits of smart grids. However, high costs have impeded penetration of energy storage, particularly at the consumer level, in homes, and in electric vehicles (EVs). One of the factors behind the high cost of energy storage (especially batteries) is their relatively short life, which depends both on their operation (e.g., cycling frequency and depth of discharge) and on environmental conditions (e.g., temperature). The goal of this research project is to develop a dynamic availability estimator (DAE) that could be integrated within a home energy management system (HEMS) or an EV controller. The DAE collects operational and environmental data that is used to adjust failure and repair rates in a system availability model embedded within the DAE. This availability model is used for real-time evaluation of a home or EV energy system's availability. The availability information yielded by the DAE can then be used by the HEMS or the EV controller to operate its energy system in a way that maximizes the life of energy storage devices. It is expected that a DAE can reduce energy storage life cycle cost by as much as 10 to 20 percent. (A toy availability calculation illustrating this idea appears after project 19 below.)

4. Direct Current Power Architectures for Residential and Commercial Smart Grid Applications – Prof. Alexis Kwasinski, Prof. Ross Baldick, Prof. Robert Hebner

This research work aims at studying power quality, efficiency, stability, and control issues in dc distribution power architectures for homes or commercial facilities. Direct current supplied by relatively low-power generators to loads located nearby was the technology chosen for the first power distribution systems in the late 19th century. Yet dc-based power distribution architectures were soon overshadowed by ac systems. Some of the fundamental reasons why ac prevailed over dc for power distribution were that ac power supply was better suited for feeding induction motors, and that voltage transformation was significantly simpler with ac, so that longer-distance transmission necessitated ac. As a result of these advantages, ac power has been the standard for power transmission and distribution for the last 120 years. However, the advent of power electronics interfaces used to convert dc power has altered the ac paradigm for distribution. Presently, dc power distribution architectures are better suited to integrating alternative and renewable distributed generation (DG) technologies, such as fuel cells and photovoltaic (PV) modules, with more efficient loads, such as LED lights and motors with variable-speed drives (VSDs). Dc distribution can be a suitable choice in many applications, particularly because modern electronic loads are inherently dc. One such application is in residences. Use of dc distribution in houses allows more efficient dc lighting fixtures and more efficient air conditioning systems driven with variable-speed drives, and is possibly a simpler interface for plug-in electric vehicles. Moreover, operation of dc power architectures becomes transparent to the grid, as a dc home interfaces with the grid through a rectifier/inverter. Since the interface between the utility side and the customer side of a dc power architecture is controllable, both the grid and its connected inherently dc systems can benefit from the flexibility provided by independent control at the grid tie point.
However, questions persist in terms of stability issues caused by constant-power loads and the power quality/harmonic control properties of dc power architectures. The goal of this research is to address these issues.

5. Grid Interaction and Control – Prof. Surya Santoso

Short-term and long-term voltage variations, including voltage fluctuations caused by the intermittent nature of wind and PV generators, are projected to increase substantially over the next decade and beyond. The PI will quantify the magnitudes and temporal profiles of voltage variations and their impacts on sensitive loads. These variations will depend on the amount of load and generation, the architecture of the microgrid, voltage regulation apparatus (including capacitor banks), and the control of intermittent generators. A novel voltage regulation coordination scheme specific to a microgrid with a high penetration of renewable energy sources will be developed. This system will coordinate local and remote bus voltage regulation using both generator and microgrid control mechanisms. Current practices lack this coordination and often lead to situations in which controllers' actions conflict with each other. The research will consider optimal placement of line regulators and capacitor banks to enable control of local and remote bus voltages, as well as reactive power support. Algorithms for microgrids and generators will be developed and coordinated both with and without direct communication links. The frequency of microgrid short-circuits is expected to show significant diurnal variation that scales with the total amount of energy produced. Unpredictable variability in short-circuit levels poses serious challenges for coordination of overcurrent protection. Bidirectional power flows complicate utility protection and coordination processes, such as fault-clearing and reclosing. We will address this problem by developing algorithms that estimate the short-circuit contribution of distributed generation units directly connected to the grid. The interaction of various sources, switches, and loads within a microgrid will be explored via modeling and simulation of these complex systems and validated using microgrid test facilities at UT-Austin, as well as infrastructure set up by the Pecan Street Project.

6. Distribution System Design and Control – Prof. Mack Grady

In the electric power distribution systems of the future, customers will no longer operate as passive loads, but will instead utilize controllers that perform day-ahead energy planning to minimize net kWh or net electricity purchase costs. (A toy day-ahead scheduling example appears after project 19 below.) The customer will receive new forms of information, such as solar radiation, weather forecasts, projected hourly kWh price curves, and occasional distress signals from the utility to reduce load. Distribution systems will have to contend with this two-way power flow and be able to adapt their planning, operation, and optimization accordingly. With the proliferation of electronic metering, there will be hundreds of monitoring points on each feeder capable of reporting voltage levels and power usage. Once customers can inject power onto the feeder on a regular basis, the complexity of operating distribution feeders will increase greatly. The intermittent nature of PV generation caused by cloud movement presents a particularly serious voltage flicker threat.
In part because UT-Austin maintains the solar radiation database for NREL and Austin Energy, the Electric Power Research Institute (EPRI) is beginning a study at UT-Austin with the PI to assess the impact of high penetration levels of PV on distribution and transmission systems, including the potential significance of voltage flicker.

7. Advanced Energy Storage Systems – Prof. Arumugam Manthiram, Prof. Jeremy Meyers

Electrochemical energy storage devices, such as batteries and electrochemical capacitors, are the leading electrical energy storage (EES) technologies, but current EES technologies cannot meet the full set of requirements for commercial, residential, and transportation applications. Substantial increases in energy and power densities, reduction in system cost, and improvement in durability and reliability are needed to realize the full potential of EES technologies for these applications. The difficulties are largely associated with severe materials challenges and complex system issues. A fundamental understanding of the complex atomic and molecular processes that govern performance and durability will enable the design and development of new materials, cell designs, and system concepts that can meet future energy storage requirements. The energy storage team has diverse expertise ranging from materials development to system design/integration to implementation in vehicles or grids. The team also has the capability to fabricate an EES system with the new materials developed and then to demonstrate the EES system in a vehicle or stationary power unit.

8. Lithium Ion Batteries – Prof. Arumugam Manthiram, Prof. Jeremy Meyers, Prof. Buddie Mullins

Building on UT-Austin's existing strength and leadership in the lithium-ion battery area, our approach will be to develop new materials and novel system designs that can increase energy and power, improve safety and durability, and lower cost so that lithium-ion technology will be viable for transportation and stationary applications. We will focus on the design, chemical synthesis, advanced characterization, and electrochemical evaluation of new low-cost, better-performing cathode and anode materials. Specifically, high-capacity layered oxides, high-voltage spinel oxides, and nanostructured olivine phosphates, as well as nanocomposites of layered spinel oxide, offer a combination of high energy and power and will be pursued as cathode hosts. Nanostructured alloys and oxides, as well as their nanocomposites, will be pursued as anode hosts. Selected properties such as Li+ ion transport and capacity of novel transition metal carbides will be screened and their feasibility as next-generation anode hosts will be assessed. The causes of degraded performance will be examined in terms of material and composite layer characterization at the beginning and end of battery life (>10⁵ charge/discharge cycles). By quantifying the sources of degraded performance and developing mitigation strategies, the range and utility of the batteries may be extended closer to their theoretical limits (full capacity or energy).

9. Flow Batteries – Prof. Jeremy Meyers, Prof. Arumugam Manthiram

Rechargeable batteries offer a simple and efficient way to store electricity, but battery development to date has largely focused on smaller-scale systems for portable power or intermittent backup power. Metrics related to size and volume, such as energy and power densities, are less critical for grid storage than in portable or transportation applications.
Batteries for large-scale grid storage instead require durability over large numbers of charge/discharge cycles as well as calendar life, high round-trip efficiency, an ability to respond rapidly to changes in load or input, and reasonable capital costs. Battery technologies under development for such large-scale storage include high-temperature batteries and redox flow batteries of various chemistries. Flow batteries are particularly promising for stationary power applications because there is no solid-phase electrode reaction. Reactants can be carried to and from the site of charge transfer rapidly, and convective pumping can be employed to replenish the interface for charge transfer. Further, because the electrode does not participate in the reaction other than as a source or sink for electrons, morphological changes and degradation are not expected with repeated cycles. The electrolyte can also be stored separately from the cell, which allows energy and power to be sized independently for specific applications. Researchers will identify inexpensive, reversible electrochemical couples that offer a sufficiently large cell voltage. They will also focus on understanding electrolyte/membrane interactions, designing new low-cost membranes, optimizing electrode utilization, and minimizing external pumping and control requirements.

10. Electrochemical Capacitors – Prof. Rod Ruoff

While the energy density of electrochemical capacitors (ECs) is lower than that of batteries, they provide the important advantages of fast charge-discharge rates, higher power density, and longer cycle life compared to batteries. As hybrid devices in combination with batteries or fuel cells, they offer great potential for transportation and stationary applications. We will evaluate chemically modified graphene (CMG), one-atom-thick sheets of carbon functionalized with other elements as needed, as electrode materials for ECs. Graphene has a remarkably high theoretical surface area of 2,630 m²/g, which is severalfold higher than that of currently used activated carbons. The physical and chemical versatility of graphene-based systems is appealing for increasing energy density. The system does not depend on the distribution of pores in a solid support; every chemically modified graphene sheet can "move" physically to adjust to different types of electrolytes (their sizes, their spatial distribution), while still maintaining an overall high electrical conductivity for the network of individual CMGs.

11. Rates and Pricing – Prof. Ross Baldick, Prof. James Dyer, Prof. David Adelman, Prof. John Butler

We will research how different rate structures, including possible time-of-use pricing, would affect the sustainability of traditional utilities as distributed renewable penetration increases, and evaluate the regulatory constraints on such rate structures and the effects of other regulations (e.g., Renewable Portfolio Standards) on utility business models. Electric utilities operate in the shadow of an elaborate web of environmental and commercial regulations that shapes business incentives and strategies. Business models therefore cannot be properly evaluated in the absence of a detailed understanding of the regulatory environment, and regulations can either ameliorate or aggravate the economic tensions between traditional utilities and renewable sources of power. Utility regulation and business models will have to change as levels of distributed generation become substantial.
In particular, capital planning for new generation, transmission, and distribution will need to adapt economically to the net demand of customers (i.e., demand minus distributed generation). The projected change in net demand characteristics is, for example, likely to shift investment towards peak generation capacity (i.e., gas-powered) and possibly towards storage. Concomitant with increased levels of distributed renewable resources, many more "demand-side" resources may be available to compensate for the intermittency of renewable generation. Billing structures, and by implication rate-setting regulations, must change to enable such participation, because a customer will only cede control of appliances in return for some kind of compensation. The control of customer resources to help with grid-management objectives is likely to conflict with the private objectives of individual consumers and with widespread concerns about privacy and data security. In return for either variable pricing that provides incentives for cooperation or direct compensation, the boundary of control will shift "behind the meter," which will require evaluating potential incentive schemes for consumers and sophisticated models of consumer behavior.

12. Fleet Evolution, Fleet Use, and Fleet Charging/Storage in the Smart Grid – Prof. Kara Kockelman, Prof. Ross Baldick

In order to quantify electrical demand, we will estimate how many plug-in electric vehicles (including PHEVs, battery-electric vehicles [BEVs], and extended-range EVs) are likely to be used in Austin in the next 20 years, and their likely spatial and temporal charging patterns, which range from regular overnight charging of individual vehicles at residences, to regular daily charging of clusters of vehicles in parking lots at work, to random opportunistic charging at public charging stations. Model frameworks include continuous cycling of vehicle holdings and use patterns, with attention to power-demand profiles and correlation with the availability of residence, work, and public charging stations. First-hand data collection of vehicle owner and traveler preferences will be used in a simulation model, along with mining of existing household travel surveys and other data sets, to determine vehicle-use profiles across owners. The effect of time-differentiated and spatially differentiated electricity rates will be investigated.

13. Simulation Test Bed for the Sustainable Distribution Grid – Prof. Surya Santoso, Prof. Mack Grady

Sustainable distribution grids consist of distributed generation, energy storage systems, controllable and uncontrollable loads, power apparatus, power quality monitoring, and utility communication/control devices. Developing a simulation-based test bed incorporating those devices is critical for designing, evaluating, and simulating how digital technologies can be deployed to control and operate a grid. We have developed various distribution grid simulation models for power quality and harmonic studies. These models will be combined to develop a test bed based on the Pecan Street Project distribution grids. A time-domain simulation software package (PSCAD/EMTDC) will be integrated with MATLAB, allowing holistic analysis of the grid's electrical, mechanical, communication, and sensing/control behaviors. The distributed generation (including PV modules, fuel cells, and wind turbines), energy storage systems, loads, digital controllers, sensors, and communication channels described above will also be incorporated into the simulation.
The test bed will allow faculty and students to design and evaluate specific generation and storage technologies, controllers, and sensing devices prior to actual deployment as part of the Pecan Street Project. Smart grid operating scenarios such as self-healing and self-reconstruction, maximum renewable energy penetration, energy storage control and dispatch, and real-time price impacts can be studied and evaluated using the test bed.

14. Use of Synchrophasor Measurements for Large-scale Wind Power Integration – Prof. Mack Grady

The greatest technical impediment to large-scale wind generation is the capacity of an electric grid to accommodate huge wind farms and still maintain stability. Texas is the power generation leader in wind, and the Texas state grid (ERCOT) has the highest penetration of wind generation, which accounts for as much as 15% of total power. Wind curtailments occur almost every day due to transmission constraints, and larger wind penetration levels will destabilize the grid because the large wind farms are 300-500 miles away from major load centers. UT has the only independent (i.e., non-utility-owned) synchrophasor measurement network in the U.S. This new technology employs GPS time stamping so that, for the first time, voltage phase angles can be known. Power flow is proportional to phase angle differences, and phase angle is thus very sensitive to power oscillations and can give a "heads up" when the grid is approaching unstable levels. (A toy calculation of this power-angle relationship appears after project 19 below.) With advance notice, grid operators can take corrective actions, such as re-dispatch, before a serious threat to the grid develops. The capability this system affords to observe system responses to sudden events (which occur frequently) will allow us to tune up stability modeling data by matching simulations with measurements, and also to determine the types of responses that are "normal" or "abnormal" for a grid. Synchrophasors are not yet widely deployed, and few guidelines exist on how best to use them. Our first task will be to monitor and understand the synchrophasor information we collect—for example, what is "normal" and "abnormal" in ERCOT. The second task will be to use major grid events, which occur every week, to determine whether the generator and other component models used by electric utilities to assess grid stability are suitable, and if not, to refine those models.

15. Enterprise Integration, Control and Security – Prof. Suzanne Barber

UT will investigate the enterprise engineering that enables energy systems to integrate, control, and secure the disparate components of energy systems. Enterprise energy system components will include (1) energy technology components, (2) hardware components, (3) software components, and (4) telecommunications/networking components. Research focusing on enterprise integration must be grounded on fundamental investigations exploring:
- enterprise requirements engineering: the components of the integrated system must satisfy the numerous, often conflicting, and evolving stakeholder needs;
- architectures: the components and their infrastructure must be accurately selected and configured to meet the identified needs; and
- verification/validation: the system components must be tested to assure that enterprise requirements are met by the integrated whole.
This research must also consider the challenges presented by emerging technologies in an immature marketplace. Enterprise security has also become increasingly important and is a pervasive challenge for energy providers.
As energy systems become more digitally connected, both the software architecture and the data it stewards must be trusted and secured. Every node and every customer must be protected by system capabilities that anticipate and defeat malicious threats and misuse. Students performing research in these areas must deliver unprecedented advances in system engineering, and will be required to leverage an integrated knowledge set spanning energy systems, software engineering, computer engineering, and telecommunications.

16. Building Integrated Solar – Prof. Matt Fajkus, Prof. Ulrich Dangel, Prof. Alexandre da Silva, Prof. Atila Novoselac

The goal of designing homes that are energy-efficient enough to become net zero energy homes with the addition of on-site energy generation requires collaboration between architectural engineers (air quality, comfort, numerical simulation of thermal performance, lighting and ventilation) and architects (comfort requirements, systems integration, functional aspects, construction, aesthetics). Climate-related building design is one of the most effective and efficient ways to reduce daily energy demands. The positioning, orientation, sizing, and construction of each window must be done in such a way that the right amount of fresh air and daylight can enter a building without excessive cooling demands in summer or heating demands in winter. Passive design strategies necessarily include the design of an optimal building envelope, including highly insulating building envelopes to minimize heat transfer. Very often, comfort- and energy-related issues are neglected during the initial stages of the design process and considered only at a later point through active control systems for the indoor environment. The tremendous potential of passive technologies permits minimization of the energy demand for heating, cooling, and lighting. Solar water heating eliminates the need to use electricity from the grid or natural gas to heat water. A typical single-family home with four occupants can save 2,323 kWh and 3,215 pounds of CO2 annually (roughly 1.4 pounds of CO2 per kWh displaced) by using solar water heating. There is a critical need for analysis of the introduction of components such as vacuum-tube and flat-plate collectors into the building skin, as well as the integration of these systems into the building HVAC system. Solar absorption cooling can effectively reduce cooling energy use, especially in a city like Austin. Options for solar-assisted cooling will be examined to determine the feasibility of widespread use. The various technological options, their technical potential, and the strategies to integrate these alternative technical solutions into the design and building process will be analyzed.

17. High Thermal Conductivity Materials – Prof. Rod Ruoff

Thermal management systems are now being developed alongside electrical systems as a way to improve overall energy efficiency. Harvesting and using the excess heat from power-producing machines reduces losses in the overall energy system. However, thermal systems experience efficiency problems just as electrical systems do. Heat losses during transport from the source to the site of use can be minimized, though, by using conduits made of highly thermally anisotropic materials. Graphene and hexagonal boron nitride layered films have exhibited thermal conductivity on the order of 3,000 W m⁻¹ K⁻¹ within the plane, yet have low thermal conductivity between the layers, i.e., orthogonal to the plane of a typical film.
The use of these materials in thermal management devices could greatly improve efficiency. The nature of the transport processes in these materials will be explored, considering both chemical and structural influences.

18. Integration of Passive Desiccant Systems into Building Materials – Prof. Atila Novoselac

It is well known that approximately 40% of energy consumption in the United States is attributed to the operation of buildings. It is less well known, however, that a significant fraction of this energy relates to dehumidification, which is a key function of heating, ventilation, and air-conditioning (HVAC) systems. Passive desiccant systems are designed to introduce "moisture capacitance" into a building's performance, in the way that thermal capacitance, or thermal mass, is now used ubiquitously. In commercial buildings, latent cooling loads are generated by occupants and occupant-related activity; they are often cyclical, increasing during the day when workers are present and decreasing dramatically at night when workers leave the building and outdoor humidity subsides. This suggests an opportunity for "night-dehumidification" strategies in conjunction with passive or integrated moisture collection systems, such as desiccant-laced ceiling panels or wall coverings, analogous to the night-cooling strategies used to reduce sensible loads. The goal of this research project is to develop building finishing materials and control systems capable of managing moisture transport in a way that is favorable for building energy performance. The conceivable impacts of such a system include a cut in the amount of electric energy needed to dehumidify air, reduction of peak electric energy demand, shifting of dehumidification energy demand to nighttime, and reduction of the incidence of mold and its associated health problems.

19. Building HVAC Control for Dynamic Energy Pricing – Prof. Atila Novoselac, Prof. Tom Edgar

Keeping thermal comfort and indoor air quality parameters at the desired levels is the primary purpose of the heating, ventilation, and air-conditioning (HVAC) control system. More sophisticated HVAC control systems have the additional task of minimizing energy consumption. Besides sensors inside and outside the building, most of these control systems use predictable daily and seasonal cycles, such as outdoor temperature, occupancy schedule, and on- and off-peak electric energy pricing schedules. For example, the benefit from thermal storage systems relies heavily on a fixed daily energy pricing schedule. With dynamic energy pricing, however, where the electricity rate is provided by the utility company only a short time ahead, there is a new challenge for designers of HVAC control systems. To exploit the benefit of dynamic pricing, control systems should be analyzed in greater detail. These systems should use control strategies that take into account the stochastic variation of the electricity price in addition to the dynamics of cooling/heating loads and building components. To define the demands for these new systems, we plan to use building energy simulations in which building models (such as EnergyPlus and TRNSYS) are coupled with control models (such as MATLAB). Modeling of various building types, HVAC systems, and control strategies should identify building systems and controls that can take advantage of dynamic pricing (see the day-ahead scheduling sketch below).
Furthermore, we plan to use our test rooms (the environmental chamber and the facade thermal lab), which have reconfigurable HVAC systems and state-of-the-art control systems, to test the most promising control methods.
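To make the availability logic behind project 3 concrete, the following toy calculation shows how a failure rate adjusted for operating temperature feeds a two-state availability model. This is not the project's actual DAE, just a minimal sketch in Python, assuming a simple working/failed Markov model, an Arrhenius-style temperature dependence, and illustrative parameter values.

import math

def availability(failure_rate, repair_rate):
    # Steady-state availability of a two-state repairable component:
    # A = mu / (lambda + mu)
    return repair_rate / (failure_rate + repair_rate)

def temp_adjusted_failure_rate(base_rate, temp_c, ref_temp_c=25.0, ea_ev=0.5):
    # Scale a baseline failure rate with an Arrhenius acceleration factor.
    # base_rate and ea_ev (activation energy, eV) are assumed values.
    k_b = 8.617e-5  # Boltzmann constant, eV/K
    t, t_ref = temp_c + 273.15, ref_temp_c + 273.15
    return base_rate * math.exp((ea_ev / k_b) * (1.0 / t_ref - 1.0 / t))

# Hypothetical battery pack: one failure per 5 years at 25 C, 48 h mean repair time.
base_lambda = 1.0 / (5 * 8760)  # failures per hour
mu = 1.0 / 48.0                 # repairs per hour
for temp in (25, 35, 45):
    lam = temp_adjusted_failure_rate(base_lambda, temp)
    print(f"{temp} C: lambda = {lam:.2e}/h, availability = {availability(lam, mu):.6f}")

A real DAE would update its rates continuously from measured cycling depth and temperature, but the structure, environment-adjusted rates feeding an embedded availability model, is the same.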
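The value of the phase angle measurements in project 14 comes from the power-angle relationship: real power transferred across a nearly lossless line is approximately P = (V1·V2/X)·sin(δ), so the angle difference δ between two buses tracks loading directly. A minimal sketch with assumed per-unit values:

import math

def line_power_mw(v1_pu, v2_pu, x_pu, delta_deg, s_base_mva=100.0):
    # Approximate real power transfer across a lossless line:
    # P = (V1 * V2 / X) * sin(delta), computed in per-unit, scaled to MW.
    p_pu = (v1_pu * v2_pu / x_pu) * math.sin(math.radians(delta_deg))
    return p_pu * s_base_mva

# Assumed line: both bus voltages 1.0 pu, reactance 0.05 pu, 100 MVA base.
for delta in (5, 10, 20, 30):
    print(f"angle difference {delta:>2} deg -> {line_power_mw(1.0, 1.0, 0.05, delta):7.1f} MW")

Because transfer grows with sin(δ), a widening angle between a remote wind farm and a load center is an early, directly measurable sign of stress: the "heads up" described above.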
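Finally, the day-ahead planning mentioned in projects 6, 11, and 19 reduces, in its simplest form, to placing a flexible load in the cheapest hours of a price forecast. The sketch below uses a hypothetical 24-hour price vector and a greedy rule; this is only valid for a fully interruptible load such as overnight EV charging, while thermal loads with storage dynamics (project 19) need a genuine optimization over the building model.

def schedule_deferrable_load(prices, hours_needed):
    # Pick the cheapest hours in a day-ahead hourly price forecast for a
    # load that must run a given number of (interruptible) hours.
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    chosen = sorted(ranked[:hours_needed])
    return chosen, sum(prices[h] for h in chosen)

# Hypothetical prices in cents/kWh: cheap overnight, evening peak.
prices = [4, 4, 3, 3, 3, 4, 6, 9, 11, 12, 13, 14,
          15, 15, 16, 18, 20, 22, 19, 14, 10, 8, 6, 5]
hours, cost = schedule_deferrable_load(prices, hours_needed=4)
print("charge EV during hours:", hours, "-> total price units:", cost)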
<urn:uuid:dc655ae1-58b5-4bfa-b3cb-c32b7b1b968c>
CC-MAIN-2016-26
http://research.engr.utexas.edu/igertsustainablegrids/index.php/research/future-research-projects
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00036-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924444
5,906
2.828125
3
This calm, unassuming story tells how the birds of the world spread a message of peace at Christmas by singing their special song for children around the world. The proverbial wise old owl tells a mixed flock of birds the story of the Nativity, and the gathered birds are inspired to spread the song of peace themselves by reaching out to children. Told through simple, lyrical words, the story of how the birds fly off to spread a song of peace by singing to children is a touching and profound interpretation of the role of the Christ Child and the meaning of his birth. The birds share their song, and lines of children join hands on snowy hills under a guiding star. The message of the birds’ song is revealed on the final pages: “Let there be peace, peace on Earth!” with the word peace written in dozens of different languages on the last page. Minimalist illustrations are painted in muted hues on pale gray backgrounds dotted with snowflakes, creating a hushed atmosphere that makes the final message stand out. Even the cover is understated, with a single robin perched on the tip of an evergreen, snow falling in the background. A lovely, quiet book with something powerful to say. (Picture book/religion. 3-7)
<urn:uuid:64c10951-7027-47ee-91f4-241020f269c3>
CC-MAIN-2016-26
https://www.kirkusreviews.com/book-reviews/kate-westerlund/the-message-of-the-birds/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396224.52/warc/CC-MAIN-20160624154956-00052-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92569
262
3.375
3
In which Scrabble dictionary does MISFOCUSES exist?

Definitions of MISFOCUSES in dictionaries:

There are 10 letters in MISFOCUSES: C E F I M O S S S U

Scrabble words that can be created with an extra letter added to MISFOCUSES

All anagrams that could be made from letters of word MISFOCUSES plus a

Scrabble words that can be created with letters from word MISFOCUSES: 10 letter words, 8 letter words, 7 letter words, 6 letter words, 5 letter words, 4 letter words, 3 letter words, 2 letter words
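The word lists promised on a page like this can be generated mechanically: a word can be built from MISFOCUSES if each of its letters appears no more often than in the rack. A minimal sketch in Python; the small word list here is a stand-in for a real Scrabble dictionary file.

from collections import Counter

def subanagrams(rack, words):
    # A word qualifies if its letter counts fit inside the rack's counts.
    rack_count = Counter(rack.lower())
    return [w for w in words if not (Counter(w.lower()) - rack_count)]

# Stand-in word list; a real run would load an official dictionary file.
words = ["focus", "focuses", "misfocuses", "music", "mess", "sumo", "if"]
for w in sorted(subanagrams("MISFOCUSES", words), key=len, reverse=True):
    print(len(w), w)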
<urn:uuid:b65208aa-f219-4614-9e4b-8465c7158ab8>
CC-MAIN-2016-26
http://www.anagrammer.com/scrabble/misfocuses
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00107-ip-10-164-35-72.ec2.internal.warc.gz
en
0.906931
269
2.78125
3
Fir Waves

Sometime around June of 2005, BobSmith noticed, in one of my photographs, a peculiar wavelike pattern of alternating bands of living and dead evergreens (spruce or fir, I wasn't sure which at the time). I also spotted it in some (but by no means all) of my other photos of New England evergreen forests. I Googled around and discovered that the phenomenon is called "fir waves", and seems to exist only in New England, upstate New York, and Japan. You can also find proof in this album that they exist in Quebec [at least near the U.S. border]. In addition, Matthew Becker of BYU informed me by email (citing a 1999 paper in Acta Oecologica by Puigdefábregas et al.) that a similar phenomenon is present in evergreen beeches (Nothofagus) on Tierra del Fuego. Becker's own paper reviewing fir waves together with "ribbon forests" and "hedges" is now available online: Linear Forest Patterns in Subalpine Environments. Matt Worster noticed that fir waves sometimes show up in satellite photos.

This album is dedicated to fir waves. Post your best fir wave photos here. Note that a blowdown or stand of dead trees is not necessarily a fir wave. A wave, by definition, is a disturbance that propagates. That motion is not visible in a photo, but if you see alternating bands you're probably looking at a wave.

External Links: This background info from an online ecology course nicely summarizes the classic 1976 paper by Sprugel about the causes of fir waves. See Linear Forest Patterns in Subalpine Environments for a review of more recent developments in the field.
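The defining property mentioned above, a disturbance that propagates, can be reproduced with a toy model. The sketch below is not Sprugel's analysis; it is a minimal one-dimensional cellular automaton built on the standard verbal explanation, assuming that mature trees whose windward neighbor has died are exposed and die at a much higher rate, so bands of death creep downwind.

import random

def step(ages, mature=20, exposed_death=0.6, base_death=0.02):
    # One year: trees age; mature trees just downwind of a gap usually die.
    # The row wraps around (Python's negative indexing gives a ring).
    new = []
    for i, age in enumerate(ages):
        upwind_gap = ages[i - 1] < 3  # windward neighbor is open ground
        if age >= mature and upwind_gap and random.random() < exposed_death:
            new.append(0)            # wind-exposed canopy tree dies
        elif random.random() < base_death:
            new.append(0)            # background mortality opens new gaps
        else:
            new.append(age + 1)      # survivor grows a year older
    return new

random.seed(1)
ages = [random.randint(0, 40) for _ in range(70)]
for year in range(0, 61, 15):
    print(f"year {year:2}:", "".join("." if a < 3 else "o" if a < 20 else "#" for a in ages))
    for _ in range(15):
        ages = step(ages)

Run for a few decades, the bands of gaps (.), regrowth (o), and mature canopy (#) drift steadily in one direction; any single snapshot shows alternating bands, which is exactly what the photographs capture.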
<urn:uuid:ab846070-ddf5-4c20-b745-a7537cd73c9b>
CC-MAIN-2016-26
http://www.summitpost.org/fir-waves/220736
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402516.86/warc/CC-MAIN-20160624155002-00045-ip-10-164-35-72.ec2.internal.warc.gz
en
0.911779
372
2.8125
3
Governor of California Arnold Schwarzenegger and his wife Maria Shriver have chosen gay icon Harvey Milk as one of 13 inductees into the state's Hall of Fame.

Harvey Milk became the first openly gay elected official from a major city in the United States when he was elected to the San Francisco Board of Supervisors in 1977. He was shot and killed in 1978 by Dan White, a former city supervisor. Harvey Milk is revered nationally and globally as a pioneer of the LGBT civil rights movement. Milk, a film about his life, won two Oscars, including Best Actor for Sean Penn, earlier this year, bringing his legacy to a worldwide audience.

Among the other 2009 inductees are: entertainer Carol Burnett, former Governor and US senator Hiram Johnson, film-maker George Lucas, football commentator John Madden, author Danielle Steel and Air Force test pilot General Chuck Yeager.

Maria Shriver founded the Hall of Fame at the California Museum in 2006. Previous inductees include Jane Fonda, Theodor Geisel ("Dr. Seuss"), Quincy Jones and Jack Nicholson.

"Now more than ever, I see how the perseverance and passion of one person can have a lasting impact in the lives of people, not only in their community but across the world," Ms Shriver said of the 2009 inductees. "When talent and a relentless drive are matched, the efforts of a single individual can create a legacy of change, hope and empowerment. "Every individual inducted into the California Hall of Fame symbolises the biggest hearts, the greatest drive and the deepest inspiration. "It's an honour to induct these extraordinary individuals who have each made their own unique mark in history."

The California Hall of Fame induction ceremony will take place on December 1st at The California Museum in Sacramento, the state capital. The ceremony will be followed by a reception and unveiling of the new exhibit installation, featuring artifacts and mementos personally loaned by the inductees, their families and organisations, including many items that have never been exhibited before. All living inductees and family of posthumous inductees are scheduled to participate in the presentation of the Spirit of California medals by the Governor and First Lady.

State Senator Mark Leno, who represents San Francisco, welcomed Harvey Milk's induction. The first openly gay man to sit in the Senate, he is the author of a bill that creates Harvey Milk Day in California. It is waiting to be debated by the state Assembly.

"Today's announcement by First Lady Maria Shriver recognises the important leadership role Harvey Milk played in our state and nation and further illustrates the historic and international nature of his legacy," he said. "I appreciate the First Lady's support and admiration for Harvey's work to further equal civil rights for all people. "He gave his life for what he believed in, and in doing so gave hope to generations of LGBT Californians who continue to struggle for full equality. "This honour, as well as the Presidential Medal of Freedom awarded to Harvey by President Obama, should only underscore to the Governor the need for Harvey Milk Day in California, and I hope he will return our bill, SB 572, with his signature when it reaches his desk in the next few weeks."

Last year Governor Schwarzenegger vetoed a bill that would have made Mr Milk's birthday a day of "special significance" in California public schools.
<urn:uuid:dc154fc6-a777-470e-8164-a52fa59570f3>
CC-MAIN-2016-26
http://www.pinknews.co.uk/2009/08/25/harvey-milk-to-be-inducted-into-the-california-hall-of-fame/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00028-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957228
732
2.515625
3
You might expect satellites to be high above any weather, except perhaps the "solar wind" of charged particles from the sun. But satellites are influenced by the atmosphere, and are experiencing an unexpected side-effect of global warming.

There is no clear boundary between the atmosphere and space; the air just gets progressively thinner. The Fédération Aéronautique Internationale defines space as starting at an altitude of 100km. At this height the air pressure is less than a millionth of that at sea level, nowhere near enough to breathe. Technically it's very hot, with the temperature sometimes reaching 1,500C. The air is so thin that the heat would not be noticeable, but it gives the zone from 90km to 500km its name: the thermosphere. It is home to many satellites, including the International Space Station.

The thin air is enough to cause some drag on a satellite in low orbit. After some months or years it loses speed and falls into a lower orbit. Drag increases further, and so on. The process ends when the satellite burns up in the heat generated by air friction, becoming an artificial shooting star.

Global warming should cause the atmosphere to expand slightly, producing more drag. In fact the opposite seems to be occurring. This is because carbon dioxide in the thermosphere radiates heat away, and the cooling effect more than counteracts global warming at this height. So more carbon dioxide actually ends up slightly increasing the lifetime of satellites.
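The drag feedback described here is easy to put rough numbers on: drag deceleration is a = ½ρv²·Cd·(A/m), and because the density ρ rises steeply as a satellite descends, the decay accelerates. A back-of-envelope sketch, assuming a crude exponential atmosphere and made-up satellite parameters; real thermospheric density swings by an order of magnitude with solar activity.

import math

MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # Earth radius, m

def orbital_speed(alt_m):
    # Circular orbital speed at a given altitude.
    return math.sqrt(MU / (R_EARTH + alt_m))

def density(alt_m, rho0=4e-12, h0=400e3, scale=60e3):
    # Crude exponential thermosphere, anchored to an assumed
    # ~4e-12 kg/m^3 at 400 km with a 60 km scale height.
    return rho0 * math.exp(-(alt_m - h0) / scale)

def drag_decel(alt_m, cd=2.2, area_per_mass=0.01):
    # a = 0.5 * rho * v^2 * Cd * (A/m); A/m = 0.01 m^2/kg is assumed.
    v = orbital_speed(alt_m)
    return 0.5 * density(alt_m) * v**2 * cd * area_per_mass

for alt_km in (300, 400, 500):
    print(f"{alt_km} km: drag deceleration ~ {drag_decel(alt_km * 1e3):.1e} m/s^2")

Tiny as these numbers look, they compound orbit after orbit, and the steep growth of density with falling altitude is what turns a slow decay into a final plunge.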
<urn:uuid:07540495-5d8c-4622-b3ce-96f41f7b8efd>
CC-MAIN-2016-26
http://www.theguardian.com/weather/2009/jan/08/weatherwatch
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00009-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93551
306
4.0625
4
A Room with a View: Women and Femininity Quotes

How we cite our quotes: Citations follow this format: (Chapter.Paragraph)

All his life he had loved to study maiden ladies; they were his specialty, and his profession had provided him with ample opportunities for the work. Girls like Lucy were charming to look at, but Mr. Beebe was, from rather profound reasons, somewhat chilly in his attitude towards the other sex, and preferred to be interested rather than enthralled (3.8).

First of all, the phrase "maiden ladies" is so fantastic. Secondly, to digress very briefly, so is Mr. Beebe. There's something quite intriguing in this quote – could it be that the "profound reasons" he has for his coldness to women are rooted in potential (but certainly unmentioned!) homosexuality? It's unsurprising to find this kind of knowing innuendo in Forster's texts, many of which deal at least tangentially with male homosexuality. Anyway, Mr. Beebe's "interest" allows him to look objectively at women and their internal lives, rather than being "enthralled" by their physical charms.

Conversation was tedious; she wanted something big, and she believed that it would have come to her on the wind-swept platform of an electric tram. This she might not attempt. It was unladylike. Why? Why were most big things unladylike? Charlotte had once explained to her why. It was not that ladies were inferior to men; it was that they were different. Their mission was to inspire others to achievement rather than to achieve themselves. Indirectly, by means of tact and a spotless name, a lady could accomplish much. But if she rushed into the fray herself she would be first censured, then despised, and finally ignored. Poems had been written to illustrate this point (4.2).

Do the words "separate but equal" ring a bell? Sure, they usually are associated with racial injustice, but they fit just as well in this discussion of what makes women different from men from Charlotte's hidebound perspective. Lucy's childlike question, "Why were most big things unladylike?" demonstrates that she doesn't exactly understand the dynamics of the social rules she adheres to… does anyone?

There is much that is immortal in this medieval lady. The dragons have gone, and so have the knights, but still she lingers in our midst. She reigned in many an early Victorian castle, and was Queen of much early Victorian song […] But alas! the creature grows degenerate. In her heart also there are springing up strange desires. She too is enamoured of heavy winds, and vast panoramas, and green expanses of the sea. She has marked the kingdom of this world, how full it is of wealth, and beauty, and war […] Before the show breaks up she would like to drop the august title of the Eternal Woman, and go there as her transitory self (4.3).

The medieval lady described here is exactly what we're afraid Lucy will become: a distant, discontented, and overly idealized conventional woman. The "early Victorian" reference indicates that in Forster's post-Victorian (otherwise known as Edwardian) age, this type of womanhood is losing its relevance, as the "medieval lady" becomes less and less happy with her lot in life.
<urn:uuid:474a8519-02d8-4e12-947a-4b28e610c894>
CC-MAIN-2016-26
http://www.shmoop.com/room-with-a-view/women-femininity-quotes.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392527.68/warc/CC-MAIN-20160624154952-00123-ip-10-164-35-72.ec2.internal.warc.gz
en
0.982753
745
2.703125
3
Shrimp Culture in Thailand

John Hambrey and C Kwei Lin
Asian Institute of Technology

1. Status and Development

Thailand is now the world's largest producer of farmed shrimp, with production in 1994/5 of 240,000 mt with a farm gate value in excess of US$1.6 billion. The number of farms has increased rapidly in recent years, and there are now over 20,000, employing 100,000 people (Tavarutmaneekul and Tookwinas 1995). The industry supports a major processing and input supply industry, including feed manufacture.

Extensive aquaculture has been practiced for many years in Thailand. Conditions are ideal: 2,700 km of coastline, much of it sheltered; warm, calm seas; and abundant natural seed. Originally dykes were built round rice fields, sluice gates installed, and wild shrimp seed entered the fields and were retained. Production was mainly of Penaeus merguiensis in the dry season and Metapenaeus spp. in the wet season. In the 1970s growing demand stimulated the use of supplementary feeds and a move to semi-intensive production. In the mid-1980s a combination of technical and economic factors allowed the development of increasingly intensive systems using hatchery-reared seed and formula feeds. Intensive shrimp farming took off in 1988/9, and although there have been significant local problems, overall production increased from 33,000 mt in 1987 to 240,000 mt in 1994/5. The trend is illustrated in Figure 1. Productivity has also increased steadily, with average national yield (extensive + intensive) rising from 0.4 mt/ha in 1986 to 3.2 mt/ha in 1994 (Figure 2).

The geographical focus of shrimp farm development has shifted steadily since the mid-1980s. Intensive shrimp farming began mainly in the upper Gulf of Thailand, south of Bangkok, in areas previously devoted to extensive aquaculture and salt pans. In 1989 this area suffered severe losses from disease and unexplained mortality. The focus for development then shifted to the eastern coast. This area experienced mixed success, with many producers suffering from poor water or pond soil quality. In the early 1990s the focus shifted south to the east coast of the Thai peninsula, where success has been generally more consistent. In recent years most new developments have taken place on the Andaman Sea (SW) coast. It is worth briefly considering the reasons for this progression.

1) Decline in the Upper Gulf

The serious losses which occurred in the upper Gulf were related to a variety of factors including:
- shallow muddy coast, long, narrow supply canals and inadequate water exchange;
- self-pollution as a result of poorly designed water supply and effluent systems;
- upstream pollution from agriculture, domestic sewage, and industry;
- erratic salinity - seasonal gulf currents and major rivers;
- lack of experience.

In addition, the booming industry of the area pushed up land prices, and encouraged sale and a move to better areas.

2) Mixed Experience on the East Coast

Although many farmers were and are highly successful in this area, conditions were often far from ideal. In particular many farmers suffered from:
- siting on shallow muddy bays and estuaries;
- variable salinity;
- mangrove and acid sulfate soils;
- low pH; high iron and aluminium;
- pesticide runoff from fruit plantations?
- limited experience.
3) Better Performance in the South

Conditions are generally more favourable in the South, on both sides of the peninsula, and success has been greatest in these areas for a variety of reasons:
- better soils - mainly rice paddy, coconut plantation and upper mangrove;
- straight coastline, open sea, deep water;
- high tidal range, especially on the Andaman Sea coast;
- stable salinity;
- little upstream pollution;
- experience gained further north.

It remains to be seen whether these initial advantages can be consolidated into sustained output through better resource management.

2. Land Use, Resources, Environment

There is increasing concern in the West, and to some extent in Asia, about the impact of shrimp farming on coastal resources and the environment. In particular there is widespread concern about the impact on mangrove. Initial developments (upper Gulf) took place mainly in existing extensive ponds, salt pans, and intertidal mud flats and degraded mangrove. Developments on the East (Gulf) coast took place in mangrove, estuarine, paddy and fruit tree zones. In the SE most of the land converted was originally rice paddy, but coconut plantation and mangrove were also used. The most recent developments on the Andaman Sea coast have taken place in rice paddy and mangrove. In general, extensive production has been much more highly concentrated in the intertidal mangrove areas, with newly developed intensive farms mainly located in the supratidal zone. Figure 3 shows the type of land which has been converted for intensive shrimp farming, and Figure 4 shows conversion for extensive farming, based on a comprehensive survey conducted in 1994.

Overall, about 20% of the original mangrove areas are now used for shrimp farming, though only a part of this has been the result of direct conversion of primary mangrove (i.e. much of the mangrove area was already reclaimed for agriculture or degraded through over-exploitation for wood and wood products). Furthermore, a large part of this conversion can be attributed to extensive, rather than intensive, shrimp farming. Figure 5 shows the overall trend of mangrove destruction and the total area of land used for shrimp farming. It is apparent that the major phase of mangrove destruction took place before the development of intensive shrimp farming.

3. Socio-economics and Industry Structure

Shrimp farming in Thailand is highly dispersed and decentralized. Although several large companies are involved in the industry, the 20,000 or so small farmers produce 70% of output. The average size of intensive farms is a mere 1.6 ha, usually comprising 1-2 ponds. 78% of intensive farms are owner-operated. There are only around 40 large farms (>30 ha). Typically farms may be categorized as follows:
- small family-run farms with 1-2 ponds (0.2-2 ha) using family labour (the dominant type);
- medium-sized family-run farms (3-9 ponds) with some hired labour;
- medium/large farms (10-30 ponds), owner- or professionally managed, with hired labour;
- large farms with >30 ponds, established as corporations, managed and operated by hired professionals and labour.

Most of the 2,000 or so hatcheries are also small-scale, producing a few million post-larvae. In general they use very simple and adaptable technology: relatively small free-standing tanks and flexible hoses rather than complex fixed plumbing. Larger hatcheries which can afford to hold and condition broodstock supply nauplii to these smaller hatcheries. Some hatcheries may specialize in nursing post-larvae up to PL40.
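The seed-supply arithmetic implied by these figures is worth making explicit. A back-of-envelope sketch, assuming a stocking density of 50 PL/m² (an illustrative figure; section 7 notes that densities above 100 PL/m² proved problematic) and two crops per year:

# Rough seed-demand arithmetic for the farm/hatchery structure described above.
avg_farm_ha = 1.6        # average intensive farm size, from the text
stocking_per_m2 = 50     # assumed stocking density, PL per square metre
crops_per_year = 2       # assumed number of crops per year

pl_per_crop = avg_farm_ha * 10_000 * stocking_per_m2
pl_per_year = pl_per_crop * crops_per_year
print(f"PL per crop for an average farm: {pl_per_crop:,.0f}")   # 800,000
print(f"PL per year for an average farm: {pl_per_year:,.0f}")   # 1,600,000

# A hatchery producing "a few million" PL a year can therefore stock only a
# couple of average-sized intensive farms per year at this assumed density.
print(f"farms served by a 3M PL/yr hatchery: {3_000_000 / pl_per_year:.1f}")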
Figure 6 shows the trends in the industry in terms of farm area and number of entrants. There are no obvious trends toward centralization. Indeed, the reverse may be true. Figure 7 shows the distribution of farm size for different regions of Thailand. The more recently developed areas have a higher proportion of small farms.

4. Management Practices

Pond soil management has become relatively standard, with drying (typically for around 1 month), liming, and application of rotenone or teaseed cake to kill predators. Substantial numbers of farms now screen intake water and store it in a reservoir prior to stocking. During this time the water may be treated with chlorine (calcium hypochlorite @ 250-300 kg/ha) to kill potential pathogens and/or carriers. During the production cycle water is usually treated with lime (an average of more than 10 t/ha in intensive ponds) and a variety of other products in more closed systems (see below).

There has been a significant trend in the last few years in favour of semi-closed or closed systems with very little water exchange. In the traditional open system, water turnover often amounted to 20-30% a day, and up to 50% in some cases in the final stages of production. Semi-closed systems only exchange moderate quantities of water in the last month of production. Closed systems dispense with water exchange almost entirely, and rely on heavy aeration and careful pond water management to maintain acceptable growing conditions. In some closed systems water may be recycled through storage/settling ponds during the final phase of production, while in others there is no water exchange at all, the farmer relying upon a combination of natural processes (plankton and bacterial growth) and the use of a variety of chemicals to maintain water quality. Although many farms now have the capability to recycle (i.e. they have water storage ponds), relatively few actually do this as yet (7% in the NACA/ADB survey).

In the south of Thailand at the present time there are no rules regarding water exchange. If disease is widespread then water exchange is reduced to a minimum; if pond water quality becomes poor and does not respond to treatment, water is exchanged. Water treatments include the use of calcium hypochlorite to kill Oscillatoria and dinoflagellates (@ 3-6 kg/ha), pumping through a rice bag or using plankton-eating fish to remove Oscillatoria, and concentration and pumping out of Microcystis (Pratungtum and Tookwinas 1996). Formalin may also be used to reduce plankton and pathogens in ponds.

Related to the generally reduced water turnover rates has been an increase in the intensity of aeration. The early farms typically used around 12 HP/ha; farms with restricted water exchange may now use up to 50 HP/ha. The positioning of aerators has also changed in some cases. The peripheral location favouring water circulation may be inappropriate in more intensive systems where the high power causes excessive water velocity and concentration of wastes - and shrimp - in the centre. There is currently some disagreement about the intensity of aeration, with some commentators favouring reduced use in the afternoon to reduce phytoplankton bloom (Pratungtum and Tookwinas 1995) and others favouring maximum use to encourage decomposition and mineralization of wastes.

There is now a wealth of products on the market in Thailand for water treatment and conditioning, including various bioremediation products based upon bacterial inocula.
Although there is great interest in these products, most were originally developed for freshwater environments (sewage treatment etc.) and may be less effective in brackish or saline water. To date there has been no convincing demonstration of their efficacy, although there are some promising indications. The cost of these products remains a major constraint. A simpler approach adopted by many farmers in Thailand has been to add sugar to the water in very intensive systems. This provides a carbon substrate for desirable (nitrifying and denitrifying) bacteria, which may help to reduce ammonia and nitrite levels and improve water quality.

5. Disease

A recent survey (NACA/ADB) showed that 67% of extensive and 65% of intensive shrimp farmers had suffered from disease outbreaks, with the intensive farms suffering at least one outbreak per year (0.7 for extensive farms). The financial loss attributed to disease amounted to US$6,629/ha/yr. Most farmers are unable to identify the disease, and most treatments fail. The most important diseases include luminescent bacteria (Vibrio harveyi) and a variety of viral diseases including yellowhead and whitespot (SEMBV). Red body coloration is associated with several diseases including whitespot. Yellowhead disease caused severe losses in 1992/3. Recently "red body" (probably whitespot) has caused widespread problems. It seems likely that wild planktonic shrimp are a significant source and spreading agent for several of the diseases, including whitespot and yellowhead (Flegel 1995; Chanratchakool et al. 1995). Exclusion (through screening and/or recycling) or elimination of carriers (e.g. using chlorine, BKC or formalin) may therefore be effective preventive measures. Lesser use of fresh feed, which may also spread infection, may also be appropriate and has been adopted by many Thai farmers. Antibiotics are generally ineffective but are still widely used.

6. Secrets of Success

Thailand has, so far, been the most successful farmed shrimp producer in the world. This success is based upon a wide variety of factors which may include the following:
- an extensive and suitable coastal environment;
- previous experience in extensive coastal aquaculture;
- wild seed available (at first);
- a well-established commercial formula feed industry (related to the chicken industry) able to provide pelleted supplementary feed in the semi-intensive phase (Csavas 1995);
- highly developed distribution and marketing systems;
- established processing capacity related to the capture fishery;
- established support industries (e.g. pumps, tanks etc.);
- availability of investment capital, especially in the late 1980s;
- hatchery technology developed elsewhere, tested by the Thai DoF, and waiting to be applied;
- demand for seed from semi-intensive producers coinciding with availability of reasonably skilled personnel and investment finance: a small-scale hatchery entrepreneurial boom;
- excellent communications to major established export markets (Japan, US, Europe);
- a position within emerging Asian markets (Hong Kong, China, Malaysia, Singapore, Thailand).

This is a formidable cocktail and goes some way to explaining the phenomenal growth of the industry.

7. Current Problems

The question is, can this success be sustained? There are several worrying trends and developments. It looks as if production in 1995 was similar to that in 1994. Production in the last quarter of 1995 and the first quarter of this year was down, with serious outbreaks of red body disease. There are several possible explanations for these problems.
Firstly, the very high price of artemia cysts last year caused a substantial shift to formula feeds for larval rearing, and this may have affected larval quality. Related to this was a push to sell larvae at an earlier age to reduce feeding costs. As a result there was a widespread tendency to stock young and weak seed at very high densities (commonly >100 PL/m2); survival was low and disease spread. Secondly, the hatcheries continue to use large doses of antibiotics, which may reduce the resistance of the larvae to disease once they are exposed to less sterile conditions. The rampant use of chemicals on the farms may also have stressed stock and reduced resistance to disease.

Another (and related) problem concerns the continuing haphazard development of the industry and the scant attention paid to water supply and effluent systems. A recent survey (NACA/ADB 1995) suggested that the average shrimp farmer in Thailand has another 34 farms within 3 km and shares a water supply with 20 others; 30% of farmers surveyed discharged their effluent to a common supply/drainage canal. The implications of this for water quality and the spread of disease are obvious.

A problem of less immediacy to the average farmer, but nonetheless potentially catastrophic, is the poor image of shrimp farming in the US and Europe, where it is widely seen as environmentally destructive (especially of mangroves) and unsustainable. This may have direct impacts in terms of moves to ban or limit trade in certain shrimp products, or indirect impacts on demand and product price.

Given the will, most of these problems can be tackled, and some may even be turned to advantage. Seed quality can and should be improved. Specific pathogen-free (or pathogen-low) seed certification (screening with a DNA probe) is possible. Treatment of seed with formalin (100 ppm) prior to stocking can reduce disease incidence. Challenge testing of seed as a check on quality can also be done. Extended nursing of PLs prior to stocking is undoubtedly beneficial in terms of subsequent survival. The further development of semi-closed or closed systems and wastewater treatment should help isolate farms from disease while minimizing waste and pollution. Broodstock domestication and genetic improvement are quite possible but have so far been uneconomic; this is likely to change. The use of other species, or alternate cropping, may also reduce disease problems.

But perhaps more important than any of these technical possibilities is the need for improved organization, planning and development within the industry. The major crisis facing the industry is not a technical constraint but one of disease and water-resource management. Farmers themselves need to get together to identify water supply and effluent problems and improve them where possible (there are some signs that this is beginning to happen). Government and development banks should also take a proactive role in this, as they have always done in the case of irrigation for agriculture. In addition, there is a need to identify those areas of mangrove of particular value in coastal protection, as nurseries for commercial fish and shellfish species, or of exceptional biodiversity or conservation interest, and to apply much stricter protective measures. Ideally this should be done in parallel with the above, to minimize potential conflict.
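On the formalin seed treatment mentioned above, the dose arithmetic is equally simple if one adopts the common working convention that 1 ppm corresponds to 1 mL of product per 1,000 L of water. A rough sketch, with a hypothetical treatment-tank volume:

```python
# Back-of-envelope dose for a 100 ppm formalin seed treatment, treating
# 1 ppm as 1 mL of formalin per 1,000 L (a common working convention).

def formalin_ml(volume_l: float, ppm: float) -> float:
    return volume_l * ppm / 1000.0

tank_l = 500  # assumed treatment-tank volume, for illustration only
print(f"{formalin_ml(tank_l, 100):.0f} mL of formalin for a {tank_l} L tank at 100 ppm")
```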
Finally, there is real potential for the launch of a quality-labelling initiative, related both to on-farm production management practice and to district or regional aquaculture development planning. Only if shrimp farmers can demonstrate that shrimp farming is environmentally friendly - in terms of minimal pollution and habitat destruction - will they continue to attract premium prices for their product. Such initiatives should in turn reinforce the need for the improved management and technology noted above.

References

Chanratchakool, P., J.F. Turnbull, S. Funge-Smith, and C. Limsuwan. 1995. Health Management in Shrimp Ponds (Second Edition). Aquatic Animal Health Research Institute, Department of Fisheries, Kasetsart University, Bangkok.

Csavas, I. 1995. Development of Shrimp Farming with Special Reference to Southeast Asia. Paper presented at Indaqua '95 Exposition of Indian Aquaculture, 27-30 January 1995, Madras, India.

Flegel, T.W. 1995. Shrimp Health Management and the Environment. Paper presented to the Workshop on Aquaculture Sustainability and the Environment, Beijing, October 1995. NACA/ADB.

Phillips, M.J., C. Kwei Lin, and M.C.M. Beveridge. 1993. Shrimp Culture and the Environment: lessons from the world's most rapidly expanding warmwater aquaculture sector. In: Environment and Aquaculture in Developing Countries. ICLARM Conference Proceedings No. 31, 359 p. Manila, Philippines.

Tavarutmaneekul and Tookwinas. 1995. Aquaculture Sustainability and the Environment. Thailand Study Report. NACA/ADB.
SECURING RESOURCES WITH NTFS PERMISSIONS

This document discusses resource security using NTFS permissions - specifically, security on files and folders within the NT File System (NTFS). It covers NTFS file and folder permissions, access control lists, using NTFS permissions, planning NTFS permissions, using special access permissions, copying and moving data with NTFS permissions assigned, and troubleshooting NTFS permission problems. It also introduces the next generation of NTFS, NTFS 5.0, which Windows 2000 touts as its standard file system, and outlines all of the components of using NTFS permissions effectively on an NTFS 5.0 file system on a Windows 2000 network. Once you have read and digested this document, you should be able to secure your Windows 2000 network with NTFS permissions with ease.

UNDERSTANDING NTFS PERMISSIONS

This discussion covers the basics of file and folder permissions. It walks you through the kinds of permissions you can assign to files and folders and how to use them. The new and improved Access Control List is discussed, as well as the effects of multiple applied permissions and inherited permissions.

First, let's answer a couple of common questions about NTFS permissions:

- What is a permission? A permission is a rule associated with an object to regulate which users can gain access to that object and in what manner.
- When can I use a permission? Permissions can be used only on NTFS-formatted partitions or volumes, which is why they are commonly referred to as NTFS permissions.
- Who can set or apply permissions? Administrators, the user that owns the file or folder, and any other users or groups that have the Full Control permission to those files.

NTFS Permissions and Files

NTFS file permissions are used to control the access that a user, group, or application has to files. This includes everything from reading a file to modifying and executing it. There are five NTFS file permissions:

- Read
- Write
- Read & Execute
- Modify
- Full Control

The five NTFS file permissions are also listed in Table 1 with a description of the access that is allowed to the user or group when each permission is assigned. As you can see, the permissions are listed in a specific order; they build upon one another.

TABLE 1: NTFS FILE PERMISSIONS

- Read: allows the user or group to read the file and view its attributes, ownership, and permissions set.
- Write: allows the user or group to overwrite the file, change its attributes, view its ownership, and view the permissions set.
- Read & Execute: allows the user or group to run and execute the application. In addition, the user can perform all duties allowed by the Read permission.
- Modify: allows the user or group to modify and delete a file, including performing all of the actions permitted by the Read, Write, and Read & Execute NTFS file permissions.
- Full Control: allows the user or group to change the permission set on a file, take ownership of the file, and perform the actions permitted by all of the other NTFS file permissions.

If a user needs all access to a file except to take ownership and change its permissions, the Modify permission can be granted; the access allowed by Read, Write, and Read & Execute is automatically included within it. This saves you from assigning multiple permissions to a file or group of files. Later in this document you will see what happens when multiple NTFS file permissions are assigned and applied, and how you can determine the net access the user or group has to a file or folder.
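To make the idea that these permissions build on one another concrete, the following sketch models each named NTFS file permission as a bundle of basic rights. This is an illustrative model only - not how NTFS actually stores permissions - and the lowercase right names are our own labels:

```python
# Illustrative model of how the named NTFS file permissions bundle basic
# rights, following Table 1. The lowercase right names are invented labels.

FILE_PERMISSIONS = {
    "Read":           {"read"},
    "Write":          {"write"},
    "Read & Execute": {"read", "execute"},
    "Modify":         {"read", "write", "execute", "delete"},
    "Full Control":   {"read", "write", "execute", "delete",
                       "change_permissions", "take_ownership"},
}

def rights_for(permission: str) -> set:
    """Return the set of basic rights a named permission grants."""
    return FILE_PERMISSIONS[permission]

# Modify includes everything Read, Write, and Read & Execute grant:
assert rights_for("Modify") >= (rights_for("Read")
                                | rights_for("Write")
                                | rights_for("Read & Execute"))
```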
NOTE: A file's attributes are properties of the file such as Read-Only, Hidden, Archive, and System. The System attribute is usually applied only to operating system boot files.

NTFS Permissions and Folders

NTFS folder permissions determine what access is granted to a folder and to the files and subfolders within that folder. These permissions can be assigned to a user or group. This topic defines each NTFS folder permission and its effect on a folder. Table 2 displays a list of the NTFS folder permissions and the access that is granted to a user or group when each permission is assigned.

TABLE 2: NTFS FOLDER PERMISSIONS

- Read: allows the user or group to view the files, folders, and subfolders of the parent folder. It also allows viewing of folder ownership, permissions, and attributes of that folder.
- Write: allows the user or group to create new files and folders within the parent folder, as well as view folder ownership and permissions and change the folder attributes.
- List Folder Contents: allows the user or group to view the files and subfolders contained within the folder.
- Read & Execute: allows the user or group to navigate through all files and subfolders, and to perform all actions allowed by the Read and List Folder Contents permissions.
- Modify: allows the user to delete the folder and perform all activities included in the Write and Read & Execute NTFS folder permissions.
- Full Control: allows the user or group to change permissions on the folder, take ownership of it, and perform all activities included in all other permissions.

Notice that the only major difference between NTFS file and folder permissions is the List Folder Contents NTFS folder permission. By using this NTFS folder permission you can limit the user's ability to browse through a tree of folders and files. This is useful when trying to secure a specific directory, such as an application directory: a user must know the name and location of a file to read or execute it when this permission is applied to its parent folder.

Understanding the Access Control List

Everyone who is familiar with Microsoft Windows NT 4.0 will find here a big change for the better. The ACLs, or Access Control Lists, of the past were written and assigned to a user once a successful Windows NT domain logon had been established; the operating system would summarize the user's allowed access in an ACL. When a user in Microsoft Windows NT 4.0 tried to access a file or folder, the operating system would look at the user's ACL and determine whether the user was allowed access. One aspect of this feature turned out to be a huge drawback for everyday user access. If a user called the helpdesk or any other support person to gain access to a file or folder, and that person made the appropriate change to the permissions, the user would have to log off and log back on, because the ACL in Microsoft Windows NT 4.0 was created only after a successful logon. As you will find out, Windows 2000 has changed how ACLs work and how users gain access to resources.

NTFS 5.0 in Windows 2000 stores an ACL with every file and folder on the NTFS partition or volume. The ACL includes all the users and groups that have access to the file or folder and indicates what access - specifically, what permissions - each user or group is allowed. Whenever a user attempts to access a file or folder on an NTFS partition or volume, the ACL is checked for an ACE (Access Control Entry) for that user account. The ACE indicates what permissions are allowed for that user account, and the user is granted access provided that the access requested is defined within the ACE. In other words, when a user wants to read a file, the Access Control Entry for that user is checked in the file's Access Control List; if it contains the Read permission, the user is granted access to read the file.

NOTE: If a user has no ACE in the ACL of the file that he or she wants to access, access is denied.
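The ACE lookup just described can be sketched as a small data model. Again, this is a simplification for illustration - not the real on-disk NTFS structures - and the account names are hypothetical:

```python
# Minimal model of an ACL stored with a file: access is granted only if an
# ACE for the requesting account contains the requested permission, and a
# missing ACE means access is denied.
from dataclasses import dataclass, field

@dataclass
class ACE:
    account: str      # user or group name
    permissions: set  # e.g. {"Read", "Write"}

@dataclass
class ACL:
    entries: list = field(default_factory=list)

    def check_access(self, account: str, wanted: str) -> bool:
        for ace in self.entries:
            if ace.account == account:
                return wanted in ace.permissions
        return False  # no ACE for this account -> access denied

# Because the ACL travels with the file, a permission change takes effect
# on the next access attempt -- no re-logon needed for user ACEs.
report_acl = ACL([ACE("alice", {"Read"})])
assert report_acl.check_access("alice", "Read")
assert not report_acl.check_access("bob", "Read")
```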
Consider the same user/helpdesk situation discussed earlier. When the support person changes the permissions on the file the user needs access to, the change is immediately saved in that file's ACL, and the user can access the file without having to log out and back in. This is only the case when assigning permissions to users for file or folder resources: when a user is added to a group to gain access to additional resources, the user must log out and back in to access those resources, because NTFS permissions granted to groups are read in a different manner.

Applying Multiple NTFS Permissions

Multiple permissions can be assigned to a single user account, either directly or through a group the user account is a member of. When multiple permissions are assigned to a user account, unexpected things can happen. To prevent any heartache, we are going to discuss the rules for assigning multiple NTFS permissions to a single user or group, including how file and folder permissions work together and how denying a specific permission can affect a user's allowed access.

First of all, NTFS permissions are cumulative. A user's effective permissions are the result of combining the user's assigned permissions and the permissions assigned to any groups that the user is a member of. For instance, if a user is assigned Read access to a specific file, and a group that the user account is a member of is assigned the Write permission, the user is allowed both the Read and Write NTFS permissions to that file.

File Permissions Override Folder Permissions

NTFS file permissions override, or take priority over, NTFS folder permissions. A user account with access to a file can access that file even without access to the file's parent folder. However, the user would not be able to reach it by browsing the folder, because that requires the List Folder Contents permission. When the user attempts to access the file, he or she must supply the full path to it - either the logical file path (F:\MyFolder\MyFile.txt) or the Universal Naming Convention (UNC) path, consisting of the server name, share, directory, and file. If the user has access to the file but does not have an NTFS folder permission allowing browsing for it, the file is effectively invisible, and the user must supply the full path.

Deny Overrides All Other Permissions

The concept of permission denial has not changed through the evolution of the Microsoft Windows operating systems and NTFS. If a user is denied an NTFS permission for a file, any other instance where that permission has been allowed is negated. Neither Microsoft nor I recommend using permission denial to control access to a resource, for one main reason: if a user has access to a file or folder as a member of a group, denying permission to that user negates all other permissions the user might have to that file or folder. This can be very hard to troubleshoot on a large network with thousands of users and groups. It is another example of how multiple NTFS file and folder permissions are cumulative and of what can happen to a user's effective permissions. For an example of Deny overriding all other NTFS permissions, look at Figure 1: User A is a member of Group 1 and Group 2. Group 1 allows access to Folder A and both of the files within that folder; Group 2 denies access to a specific file, File 1. Because a denied permission negates all grants of that permission, User A's combined access to File 1 is no access at all.
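The cumulative-allow, deny-wins evaluation can likewise be modeled in a few lines. This sketch is not the actual Windows algorithm, but it reproduces the Figure 1 outcome:

```python
# Effective permissions: union the allows for the user and every group the
# user belongs to, then subtract anything denied -- deny overrides allow.

def effective_permissions(allow_aces: dict, deny_aces: dict, principals: list) -> set:
    allowed, denied = set(), set()
    for principal in principals:
        allowed |= allow_aces.get(principal, set())
        denied |= deny_aces.get(principal, set())
    return allowed - denied

# Figure 1: Group 1 allows Read on File 1, Group 2 denies it.
allow = {"Group 1": {"Read"}}
deny = {"Group 2": {"Read"}}
print(effective_permissions(allow, deny, ["User A", "Group 1", "Group 2"]))
# -> set(): User A ends up with no access at all
```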
Understanding Inherited NTFS Permissions

By default, when NTFS permissions are assigned to a parent folder, the same permissions are applied, or propagated, to the subfolders and files of that parent folder. Subfolders and files inherit NTFS permissions from their parent folder: as the administrator, you assign NTFS permissions to a folder, and all current subfolders and files within that folder inherit those same permissions. In addition, any new files or subfolders created within that parent folder assume the same NTFS permissions as the parent.

Alternatively, the automatic propagation of these permissions can be stopped. You can prevent NTFS permission inheritance, so that files and subfolders in a parent folder will not assume the NTFS permissions of their parent. Now here is the tricky part: the directory or folder level at which you decide to prevent the default NTFS permission inheritance becomes the new parent folder for NTFS permission inheritance.
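The "new parent folder" behavior can be sketched the same way: a folder either inherits its parent's effective ACL or, with inheritance blocked, becomes a fresh inheritance root. Folder and group names here are illustrative:

```python
# Toy model of NTFS permission inheritance with a per-folder opt-out.

class Folder:
    def __init__(self, name, own_acl=None, parent=None, block_inheritance=False):
        self.name = name
        self.own_acl = own_acl or {}   # principal -> set of permissions
        self.parent = parent
        self.block_inheritance = block_inheritance

    def effective_acl(self):
        if self.parent is None or self.block_inheritance:
            acl = {}                   # this folder is a new inheritance root
        else:
            acl = self.parent.effective_acl()
        for principal, perms in self.own_acl.items():
            acl[principal] = acl.get(principal, set()) | perms
        return acl

root = Folder("Data", own_acl={"Staff": {"Read"}})
docs = Folder("Docs", parent=root)                    # inherits Staff: Read
apps = Folder("Apps", parent=root, block_inheritance=True,
              own_acl={"Admins": {"Full Control"}})   # new parent folder
print(docs.effective_acl())   # {'Staff': {'Read'}}
print(apps.effective_acl())   # {'Admins': {'Full Control'}}
```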
USING NTFS PERMISSIONS

This discussion covers planning and working with NTFS permissions: guidelines to use when planning NTFS permissions on a Windows 2000 network, and the step-by-step process for assigning them.

Planning NTFS Permissions

A Windows 2000 network should be well thought out and planned for. The first thing that comes to mind is the Active Directory and Windows 2000 domain infrastructure. This is very important, but a plan for NTFS permissions should also be worked out well in advance, before the network is deployed. Having a plan for NTFS permissions will save your organization time and money, and a network with well-planned NTFS permissions is that much easier to manage. Use the following guidelines to help you plan NTFS permissions. Notice that some steps are not directly related to NTFS permissions themselves, but they help organize the data on your network, making it easier to manage your resources and keep them secure.

- Organize the data on your network into manageable units. Separate the users' home directories from applications and public data, and try to keep data in centralized units. For instance, group all of the home directories into one folder and place them on an NTFS volume away from other data. By doing this you gain benefits such as not having to assign NTFS permissions to individual files, but only to the grouped folders. In addition, backup strategies become less complex: application files are grouped separately and do not have to be backed up with the home directories. Organizing your data can make many things easier to manage, including assigning NTFS permissions.
- Assign users only the level of access they require. If a user needs only to read a file, grant only the Read permission to the resource. This precludes the possibility of a user damaging a file, such as modifying an important document or even deleting it.
- When a group of users requires the same access to a resource, create a group for those users and make each a member of it, then assign the required NTFS permissions to the newly created group. If at all possible, avoid assigning NTFS permissions to individual users; assign them only to groups.
- When assigning permissions to folders with working data, use the Read & Execute NTFS folder permission, assigned to a group containing the users that need access to the folder and to the Administrators group. This allows the users to work with the data but prevents them from deleting important files in the folder.
- When assigning NTFS permissions to a public data folder, use the following criteria as a guideline. Assign the Read & Execute and Write NTFS permissions to the group containing the users that need access to the public data folder, and assign the Creator Owner the Full Control NTFS permission. Any user on the network that creates a file, including one in a public data folder, is by default the Creator Owner of that file; after the file has been created, the administrator can grant NTFS permissions for file ownership to other users. With Read & Execute and Write assigned to the group, users have Full Control of all files that they create in the public data folder and can modify and execute files created by other users.
- If at all possible, do not deny NTFS permissions to a group or user. Denial is not a recommended way to manage resources, because a denied permission negates that permission wherever else it has been assigned to the user or group. This can cost a great deal of time and frustration when troubleshooting permission problems.
- User education is always a good idea. If users have a basic understanding of NTFS permissions and how to secure resources on a network, they can assign and manage their own files. User education does take a bit of time and money, but if done successfully it will pay off in the end.

That is it for the NTFS permission guidelines. When planning how to organize the data on your network, remember to consider NTFS permissions and how they will be affected. Every business and organization is different, but if most of these simple guidelines are followed, managing your resources in a secure environment will be that much easier. And remember that Total Cost of Ownership is the name of the game.

Working with NTFS Permissions

After a newly created volume is formatted with the NTFS 5.0 file system, by default the Full Control NTFS permission is granted to the Everyone group. This, of course, should be changed as soon as possible, because allowing Everyone full control means just that: everyone.
That includes guests, if the Guest account is enabled, and even anonymous Internet users, if security settings on the firewall are such that they can access files on that server. By default, even though you are running NTFS, no security at all is applied. The approved NTFS permission plan should be implemented immediately; if an NTFS permission plan does not yet exist, at least change the access for the Everyone group from Full Control to Read. You can then assign the appropriate NTFS permissions to users as they are needed.

Now let's look at working with NTFS permissions and how to assign them, starting with the Security tab:

- On your desktop, right-click My Computer.
- Click Explore. This will start Windows Explorer.
- Click the plus sign to the left of an NTFS volume that you would like to view.
- Find a folder and right-click on it.
- Click the Properties option on the list.
- Use Ctrl-Tab to switch to the Security tab, or select it by clicking on it.

NOTE: When viewing the Security tab from the Properties dialog box of a file, the List Folder Contents NTFS permission is not listed in the Permissions list box.

Now that we are all on the same page, let's look at the options available on the Security tab. Table 3 lists those options and briefly describes what they are used for.

TABLE 3: SECURITY TAB OPTIONS

- Name: displays a list of the users and groups that currently have access to the selected resource. You can highlight an object in the list and either change that object's current NTFS permissions or click Remove to remove it from the list.
- Permissions: a list of all the NTFS permissions. To allow or deny an NTFS permission to the object selected in the Name list box, click the appropriate check box.
- Add: clicking the Add command button opens the Select Users, Computers, or Groups dialog box, where you can select objects to add to the Name list box.
- Remove: removes objects from the Name list box; select an object and then click Remove.

For the purposes of this discussion we are going to skip the Advanced command button; it will be covered in the next topic, Using Special Access Permissions. The only other option on the Security tab is the check box to allow inheritable permissions from the parent to propagate to this object. By default, when a folder is created on an NTFS volume this option is set; to turn it off, open the Security tab and clear the check box. Figure 4 displays the message box that then appears.

USING SPECIAL ACCESS PERMISSIONS

NTFS file and folder permissions are, for the most part, a sufficient way to secure the resources on your network. Where they do not provide the level of granularity required, Special Access Permissions can be used.
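Administrators who prefer to script such changes rather than click through the Security tab can use the cacls command-line tool included with Windows 2000 to edit ACLs. A hedged sketch driving it from Python - the folder path and group name are hypothetical, so verify the flags on your own system before use:

```python
# Replace the default "Everyone: Full Control" on a new data folder with
# Read access for a specific group, using cacls (/E edits the existing ACL
# rather than replacing it; /R revokes an entry; /G grants one).
import subprocess

folder = r"D:\PublicData"  # hypothetical folder

subprocess.run(["cacls", folder, "/E", "/R", "Everyone"], check=True)
subprocess.run(["cacls", folder, "/E", "/G", "DataUsers:R"], check=True)

# Display the resulting ACL for a quick sanity check.
print(subprocess.run(["cacls", folder], capture_output=True, text=True).stdout)
```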
split-half

Relating to or denoting a technique of splitting a body of supposedly homogeneous data into two halves and calculating the results separately for each to assess their reliability.

- For those familiar with the split-half method for assessing reliability, alpha has an additional interpretation worth noting.
- Reliability was assessed by examining the split-half reliability utilizing even and odd questions of the LESP.
- These findings are consistent with the fact that the split-half reliability correlations fall well within the 95% confidence intervals for the cross-procedure correlations.
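As an illustration of the technique the definition describes, the following sketch splits made-up questionnaire scores into odd- and even-numbered items, correlates the two half-scores, and applies the Spearman-Brown correction to estimate full-length reliability:

```python
# Split-half reliability on synthetic data: correlate odd vs. even item
# totals, then apply the Spearman-Brown prophecy formula.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical scores: 50 respondents x 10 items sharing a common factor.
items = rng.normal(size=(50, 1)) + rng.normal(scale=0.5, size=(50, 10))

odd_total = items[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
even_total = items[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...

r_half = np.corrcoef(odd_total, even_total)[0, 1]
r_full = 2 * r_half / (1 + r_half)       # Spearman-Brown correction
print(f"split-half r = {r_half:.2f}, corrected reliability = {r_full:.2f}")
```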