text | id | metadata
---|---|---
string | string | dict
Teaching to Standards: Experience Shows That Teaching with Standards-Aligned Materials Isn't Enough to Ensure That Students Meet Expectations. Teachers Also Need Professional Development in Planning and Evaluation
O'Shea, Mark R., Leadership
With California's standards-based educational system soon to be completed, our attention is returning to the classroom. What will teachers do differently to ensure that their students meet the higher expectations of the standards?
California, perhaps more than any other state, has provided school leaders and teachers with a variety of resources for implementing standards. Diane Massell, a researcher in Philadelphia, has complimented our state for its efforts: "California published curriculum and program advisories, lists of educational materials (in addition to approved textbooks), model curriculum guides and task force reports to provide guidance while frameworks were being revised" (Massell, 1998).
Clearly, teachers in our state are in an enviable position. Unlike other states, we have approved curriculum products for use at the lower grades that are aligned with the standards. But recent experience in New Jersey suggests that teacher reliance on standards-based curriculum materials may be insufficient to reach the target: student work and test performances that meet standards.
The New Jersey study revealed some issues of importance to school leaders in California. Following extensive professional development in the use of standards-aligned curriculum activities in science, 27 teachers were asked to select one activity as the basis for a lesson plan. The teachers who volunteered for this exercise were asked to: 1) teach the lesson, 2) collect three samples of student work from the lesson and 3) write specific comments that explained where the evidence could be found that documented the achievement of the selected standard.
The New Jersey teachers were professional and trusting. They disclosed their plans, their student work samples and their comments about student work produced in their own classrooms. An analysis of the student work, the teachers' comments about the work samples and the lesson plans prepared from standards-based activities suggests that teachers struggle in their efforts to plan and implement standards-based lessons, even when those lessons use standards-aligned curriculum materials.
Principals and curriculum leaders may need to give some thought to how teachers are supported in their efforts to implement standards. Diane Massell is clear on this matter: "High-quality curriculum materials are necessary if not sufficient tools for implementing and achieving educational change. Indeed, the lack of quality, including the tendency of textbooks to cover so many topics in a superficial manner, was the initial impetus for the National Council of Teachers of Mathematics' groundbreaking effort in the 1980s to set academic content standards in K-12 mathematics."
Standards-based lesson planning
The teachers in the New Jersey study received extensive staff development in hands-on science activities and information about the New Jersey core curriculum content standards. The teachers did not receive instruction in how to plan differently for a standards-based classroom. What could have been done for these teachers to help them meet the standards?
A close look at the standards, the frameworks, teachers' lesson plans and student work provided some insights into effective standards-based lesson planning. Here are some of the components that were missing in the New Jersey professional development activity.
Selecting standards and indicators
When teachers use standards-aligned curriculum materials as the sole means of meeting the standards, they are not given the opportunity to consider deeply the higher expectations that the state frameworks describe for their students. Consider these frequently observed errors based on the analysis of lesson plans prepared by New Jersey teachers who were not provided with instruction in standards-based lesson planning:
1. Too many standards or indicators selected for one lesson. …
|
<urn:uuid:449cb038-996f-4a99-8000-1fa4f4d0006c>
|
{
"dump": "CC-MAIN-2015-48",
"url": "https://www.questia.com/read/1G1-82092511/teaching-to-standards-experience-shows-that-teaching",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398455246.70/warc/CC-MAIN-20151124205415-00119-ip-10-71-132-137.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9631409645080566,
"token_count": 731,
"score": 3.203125,
"int_score": 3
}
|
In the communicative classroom, teaching listening skills should be approached in the same way as the other skills – with a communicative purpose. Often, listening is taught with a linguistic purpose first and foremost – to improve and develop listening skills in the target language (this applies to other language skills as well). This is, of course, a key goal of most listening lessons; however, in the “real world,” how often do we listen with this goal in mind? Do your students go to the shopping mall on the weekend to buy a cell phone, and then listen to shoppers and store workers intent on improving their listening? In the shopping mall we listen because we need to get certain information, whether that information includes specific prices and options on a cell phone, or another shopper telling you why she prefers shopping at one store instead of another.
-Online TESOL Certificate Courses-
Testing Listening Skills vs Teaching Listening Skills
In the ESL classroom, simply playing a recorded dialogue and then asking students to correctly answer pre-cast comprehension questions based on that dialogue strips listening of nearly all of its real-world communicative context. You are left with a mainly linguistic exercise, which may give you some information about your learners’ current listening proficiency, but does not allow for actual development of listening skills. A cycle of listening / answering questions / checking answers / listening/ etc. is really just testing listening skills, and doesn’t help students learn how to develop their listening skills and improve their listening comprehension.
Developing Listening Skills
Good listening lessons will provide pre-listening activities to help students better predict what kind of information they will hear by creating a context and a purpose for listening. Better listening lessons will also help learners to clear up misconceptions and miscues as they listen. In other words, developing listening skills requires that students are provided feedback and support in the process of listening, not just based on their comprehension after they have finished listening. When listening is approached in this way, effective strategies for listening can be discussed and applied during the process of listening, making it easier for students to understand the relevance of those strategies and how they apply.
Recommended reading on teaching listening skills:
|
<urn:uuid:385aa161-526b-4e17-b6b1-171c030a2c9b>
|
{
"dump": "CC-MAIN-2017-22",
"url": "http://how-to-teach-english.ontesol.com/communicative-tesol-are-you-teaching-or-testing-listening-skills/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608648.25/warc/CC-MAIN-20170526071051-20170526091051-00130.warc.gz",
"language": "en",
"language_score": 0.9366716742515564,
"token_count": 447,
"score": 3.828125,
"int_score": 4
}
|
Roberto M. Gonzalez, Department of Economics, UNC, Chapel Hill
An interesting paper on Cuba's Infant Mortality Rate (IMR) was presented at the 2013 meetings of the Association for the Study of the Cuban Economy by Roberto M. Gonzalez, a graduate student in Economics at the University of North Carolina. The paper is especially interesting because it focuses on one important indicator of the quality of the health system, human development and socio-economic development that has ostensibly been a major achievement for Cuba. Cuba's exceedingly low Infant Mortality Rate has been a major "logro" (achievement) of the Revolution and a source of pride since the early 1960s.
Gonzalez presents information and analysis that casts some doubt on the official IMR figures. His complete argument can be seen in the Power Point presentation that he made at the ASCE meetings here: Infant Mortality in Cuba
The essence of his argument is that Late Fetal Deaths (LFDs) or deaths of fetuses weighing at least 500 grams are abnormally high in Cuba compared to other countries while Early Neonatal Deaths (ENDs) or deaths occurring in the first week of life are abnormally low. In the chart below, Cuba’s high LFD in orange and its low END in green can quickly be seen as outliers for the countries of Europe.
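The arithmetic behind this reclassification argument can be illustrated with a small sketch. The numbers below are hypothetical, chosen only to show the mechanism; they are not Cuba's statistics, and the helper function is ours:

```python
# Sketch of how shifting early neonatal deaths (ENDs) into the late fetal
# death (LFD) category lowers the reported infant mortality rate (IMR).
# LFDs are fetal deaths, so they never enter the IMR numerator; a stillborn
# classification also removes the case from the live-birth denominator.

def imr_per_1000(infant_deaths, live_births):
    """Infant mortality rate per 1,000 live births."""
    return 1000 * infant_deaths / live_births

live_births = 100_000
early_neonatal_deaths = 400   # hypothetical: deaths in the first week of life
other_infant_deaths = 200     # hypothetical: deaths after the first week, before age 1

# Honest accounting: all infant deaths counted.
honest_imr = imr_per_1000(early_neonatal_deaths + other_infant_deaths,
                          live_births)

# Suppose half of the ENDs are recorded as LFDs instead.
reclassified = early_neonatal_deaths // 2
reported_imr = imr_per_1000(early_neonatal_deaths - reclassified + other_infant_deaths,
                            live_births - reclassified)

print(round(honest_imr, 2))    # 6.0
print(round(reported_imr, 2))  # 4.01
```

Under this hypothetical shift, the LFD count rises while the END count and the reported IMR fall, without any change in actual outcomes — which is the pattern the paper flags in the data.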
What's going on here? Perhaps it reflects an erroneous classification system, purposeful mis-reporting, or possibly late-term, mislabeled abortions (if there is any chance of infant ill health or congenital health problems).
While further work is needed to analyze this LFD-END puzzle, Gonzalez's work has certainly raised serious questions about Cuba's long-vaunted Infant Mortality Rate.
|
<urn:uuid:8ffdfbe0-a4ef-4e0a-a84c-f85e9b18fbba>
|
{
"dump": "CC-MAIN-2017-13",
"url": "https://thecubaneconomy.com/articles/authors/gonzalez-roberto-m/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187144.60/warc/CC-MAIN-20170322212947-00019-ip-10-233-31-227.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9447060823440552,
"token_count": 369,
"score": 2.5625,
"int_score": 3
}
|
RUMINATIONS AND REPLAYS
Overthinking can be connected to pride, as it reflects a desire to control every aspect of a situation or an unwillingness to let go of certain thoughts. Overthinking means dwelling on thoughts or analyzing situations to an unproductive or unhealthy extent. It can involve continuously replaying events, worrying about things you have no control over, and analyzing every detail of a situation.
What does overthinking look like?
- Rumination: replaying past events or conversations,
- Worry: feeling anxious or stressed about what might happen, or anticipating problems,
- Analysis paralysis: getting caught up in excessive analysis of options, leading to inaction or indecision.
All of these can be forms of overthinking, and they hinder problem solving and decision making. Moreover, excessive rumination leads to confusion, doubt and a lack of confidence. Anxiety, perfectionism, past traumas and low self-esteem are also culprits that give rise to overthinking.
If one has difficulty focusing on the present moment because they are constantly preoccupied with thoughts and concerns, they may be an overthinker. Managing overthinking involves strategies like mindfulness techniques, setting boundaries for thoughts, and seeking support from trusted individuals. Developing self-awareness, practicing relaxation techniques and challenging negative thought patterns can also help combat overthinking.
SCRIPTURE OF THE STRONG
“Fear you not, for I am with you, be not dismayed, for I am your Elohim; I strengthen you: yes, I help you; yes, I uphold you with My victorious right hand.” Isaiah 41:10 (KJV)
|
<urn:uuid:f3fe3d7b-e966-49ed-9855-64dc84b5b86f>
|
{
"dump": "CC-MAIN-2023-40",
"url": "https://anulifeglobal.org/anu-blog-vol-75/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506479.32/warc/CC-MAIN-20230923030601-20230923060601-00391.warc.gz",
"language": "en",
"language_score": 0.9218448996543884,
"token_count": 341,
"score": 2.640625,
"int_score": 3
}
|
By David Corrigan and Stephanie Salazar
[Video courtesy USGS-Hawaiian Volcano Observatory]
HAWAII VOLCANOES NATIONAL PARK, Hawaii: January is Volcano Awareness Month on Hawaii Island and 2012 also marks the 100th anniversary of the U.S. Geological Survey Hawaiian Volcano Observatory.
So why not share some of the latest mesmerizing lava lake video from the fiery pit of Halema`uma`u at the summit of Kilauea.
This footage shows vigorous spattering along the south margin of the Halema`uma`u lava lake. Lava is upwelling in the northern portion of the lake which is out of view, and slowly migrates to this southern margin where it sinks back into the conduit.
According to the Hawaiian Volcano Observatory's Kilauea status report for Thursday, the spattering sink on the southeastern edge of the lake continued building a small spatter rampart and feeding very small lava flows on the inner ledge. The lava level is estimated to be 260 feet below the floor of Halema`uma`u Crater.
The HVO also says the most recent sulfur dioxide emission rate measurement was 1,500 tonnes per day on January 22; new measurements must await the return of moderate trade winds, scientists say.
|
<urn:uuid:6197ed48-dbe5-4f72-94ad-b903756a3b61>
|
{
"dump": "CC-MAIN-2015-22",
"url": "http://www.bigislandvideonews.com/2012/01/28/video-kilauea-volcano-summit-lava-lake-churns/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927458.37/warc/CC-MAIN-20150521113207-00094-ip-10-180-206-219.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8617036938667297,
"token_count": 266,
"score": 2.625,
"int_score": 3
}
|
GLOSSARY OF TERMS
by W.R.F. Browning (NY: Oxford University Press, 1996)
dragon An apocalyptic monster identified with Satan in Rev. 12:9.
by Barbara Smith (Philadelphia: Westminster, 1965)
dragon. Large sea animal. Ps. 73:13; Isa. 27:1. The description in Ezek. 29:3-5 seems to fit the crocodile. In the book of The Revelation, "dragon" is a symbol of the power of evil at war with God. Rev. 12:9.
edited by Paul J. Achtemier (San Francisco: Harper and Row, 1985)
You are strongly recommended to add to your library the excellent revised edition of Harper's Bible Dictionary titled, The Harper Collins Bible Dictionary, Revised Edition, edited by Paul J. Achtemeier, with the Society of Biblical Literature (NY: Harper Collins, 1996). It is currently the best one-volume Bible dictionary in English, and it is available at Border's Books, Christian Science Reading Rooms, http://www.borders.com, or http://www.christianbook.com.
dragon, a reptilian monster well known in the mythology and iconography of the ancient Near East. In the Babylonian creation myth, Enuma elish, the dragon Tiamat is slain by the god Marduk and her supporters taken captive. In a Hattic myth, the dragon Illuyankas defeated the storm god and later was slain by him. The Ugaritic myths from Ras-Shamra refer to various monsters defeated by the storm god Baal or his sister Anat. In the Bible the dragon appears as the primeval enemy of God, killed or subjected in conjunction with creation (Pss. 74:13-14; 89:10; Isa. 51:9; Job 26:12-13), but appearing again at the end of the world, when God will finally dispose of it (Isa. 27:1, using traditional language attested in the Baal myths of Ras-Shamra). The book of Revelation takes up the latter theme. The dragon (identified now with the Devil) and its agents campaign against God and his forces but are finally defeated (Rev. 12-13; 16:13-14; 20:2-3, 7-10). For now, however, it is kept under guard (Job 7:12), its supporters lying prostrate beneath God (Job 9:13). Referred to variously as Tannin, Rahab, or Leviathan, it is usually conceived of as a sea monster, as in the Enuma elish and sometimes at Ras-Shamra. As a great opponent of God's people, Egypt was known as Rahab. The oracle of Isa. 30:7 gives Egypt the name 'Rahab [is] put down,' alluding to the dragon's defeat by God, and Ps. 87:4 simply assumes Rahab as an accepted name for Egypt. The king of Egypt was portrayed as a sea monster lurking in the Nile, whom God would catch and kill (Ezek. 29:3; 32:2). There may be no mythological allusion here, and there is certainly none when the words tannin and leviathan are used to refer to the monsters of the deep created by God (Gen. 1:21; Ps. 104:26), summoned to praise God (Ps. 148:7), and beyond human capture (Job 41:1). The apocryphal Bel and the Dragon (23-27) relates Daniel's unorthodox disposal of a dragon worshipped by the Babylonians.
For links to some other Bible-related webpages, browse http://www.bibletexts.com
|
<urn:uuid:2d05adb5-30d5-44a1-97de-57f446b4b984>
|
{
"dump": "CC-MAIN-2020-10",
"url": "http://www.bibletexts.com/glossary/dragon.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144979.91/warc/CC-MAIN-20200220131529-20200220161529-00229.warc.gz",
"language": "en",
"language_score": 0.9219335317611694,
"token_count": 794,
"score": 2.9375,
"int_score": 3
}
|
The earliest images of the Crucified Christ are quite well known - mainly because there aren't that many of them. These two images vie for the honour of being the oldest.

Actually, there is no need to compete over this; neither is the earliest known. This intaglio gemstone (left below) dates from the late second or early third century. It is generally thought to be an amulet or good luck charm. These would normally feature a pagan image, and no one seems able to work out why there is a crucifixion scene on it. The Christian church of the time, it seems, did not approve of magical amulets; the most likely theory is that a Christian thought he would have one anyway.

Perhaps contemporaneous with the amulet is this notorious graffito showing a crucified donkey and the message 'Alexamenos worships God'. It is not clear whether this was anything more than someone winding up a friend - who can tell?
The crucifixion has been the key image of Christianity for a millennium and a half - but why did it take so long to establish itself? Might the Alexamenos graffito offer a clue?
A mark of shame
It needs to be remembered that early Christian art was a means to teach and reassure believers, and 'sell' the faith to others. As the graffito points out, crucifixion was a shameful way to die. The Unique Selling Point of Christianity was the promise of resurrection and eternal life. Images of a painful death didn't square with this. Of course, the early theologians were fully aware of the significance of the Crucifixion. As Cyril of Jerusalem tells us, 'Let them not be ashamed of the Cross of our Saviour, but rather glory in it, for the word of the Cross is unto Jews a stumbling block, and unto Gentiles foolishness, but to us salvation.'* However, this was probably a far too difficult message to put over in art. Christ needed to be portrayed as a powerful teacher and healer who was still living - though not in human form. The suffering and dying human on the cross didn't fit. This also explains the lack of nativity scenes at this time - a helpless babe dependent on his mother wouldn't work either. It needs to be remembered also that such early Christian art as has survived marked a death - catacomb frescos and sarcophagi - and the focus had to be resurrection.
It should be recognised, however, that the cross itself - without the figure of Christ - was an accepted and important symbol by the end of the second century: Tertullian in De Corona Militis (204) describes the already established tradition of marking the sign of the cross on the forehead.
There is much debate on the combining of Greek letters to form monograms that became Christian symbols. The chi-rho is the most familiar (left below), formed from the letters chi (X) and rho (P) - a symbol of Christ made from the first letters of his name. Less well known is the staurogram or tau-rho (right below), formed from the letters tau (T) and rho. Larry W. Hurtado claims in The Earliest Christian Artefacts: Manuscripts and Christian Origins that this monogram is a symbol of the crucifixion itself and was used as such from the very beginnings of Christianity.
So why did things change? One answer may involve St Helena, mother of Constantine, and the legends of her journey to Palestine to discover the True Cross, as told by Eusebius in his Life of Constantine, written c. 330-339. Maybe the gradual change from symbol of shame to symbol of power and redemption began here.
*13th Catechetical Oration, c. 350
|
<urn:uuid:88c43f53-bf8d-4e1f-bb34-dbf6f8de1025>
|
{
"dump": "CC-MAIN-2017-39",
"url": "http://www.virtual.magic-nation.co.uk/passion21d.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689471.25/warc/CC-MAIN-20170923033313-20170923053313-00580.warc.gz",
"language": "en",
"language_score": 0.966969907283783,
"token_count": 803,
"score": 2.875,
"int_score": 3
}
|
Researchers at the University of Alberta have discovered a potential treatment for a deadly disease called pulmonary hypertension.
Pulmonary arterial hypertension, high blood pressure in the lungs, currently has only a few treatment options, and most cases lead to premature death.
It is caused by cancer-like excessive growth of cells in the walls of the lung blood vessels. This growth constricts the lumen, the path where blood travels, putting pressure on the right ventricle of the heart, which eventually leads to heart failure.
Evangelos Michelakis, his graduate student Gopinath Sutendra and a group of collaborators have found that this excessive cell growth can be reversed by targeting the mitochondria of the cell, which control metabolism of the cell and initiate cell death.
By using dichloroacetate (DCA) or trimetazidine (TMZ), mitochondria-targeted drugs, the activity of the mitochondria increases, which helps induce cell death and regresses pulmonary hypertension in an animal model, says Sutendra.
Current therapies only dilate the constricted vessels rather than regressing the disease, so this is a very exciting advancement for the lab.
"In the pulmonary hypertension field they're really looking for new therapies to regress the disease, it might be the wave of the future," said Sutendra.
"The other thing that is really exciting is that TMZ and DCA have been used clinically in patients so it's something that can be used right away in these patients."
|
<urn:uuid:c913f6c8-65ac-4b8c-aed8-1bf8b1d2613a>
|
{
"dump": "CC-MAIN-2019-22",
"url": "https://www.medindia.net/news/researchers-discover-potential-treatment-for-pulmonary-hypertension-72650-1.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258620.81/warc/CC-MAIN-20190526004917-20190526030917-00300.warc.gz",
"language": "en",
"language_score": 0.9400171041488647,
"token_count": 301,
"score": 3.03125,
"int_score": 3
}
|
Talvinen maisema ('Winter landscape') --- Another great idea for teaching value using shades and tints. The original poster of this project, art teacher Jessica Young, gives a great explanation of the steps and tips she used with her fourth-grade students.
Pirasta makes enormous coloring posters, chock full of fun, detailed drawings that kids—and adults—will be extra excited to explore and fill in. Hang it on a wall, or roll it over a floor, and spend hours coloring.
Use crayons to color different spots of bright colors on some paper. Color over the area with a black crayon. Cut out a shape. Use a popsicle stick to scratch out the black and reveal the colors underneath.
|
<urn:uuid:16ba231c-1728-4c46-a74d-8221a85dcbe5>
|
{
"dump": "CC-MAIN-2018-30",
"url": "https://gr.pinterest.com/bigring83/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592475.84/warc/CC-MAIN-20180721090529-20180721110529-00089.warc.gz",
"language": "en",
"language_score": 0.8674212694168091,
"token_count": 149,
"score": 3.125,
"int_score": 3
}
|
Select the image below to View Joseph in the Summer in a new, full-screen window.
The small town of Joseph Oregon, population 1200, is captured on a crystal clear morning in June.
The east side of the town's neighborhoods is shown, as well as the Wallowa Mountains that frame the town. Early settlers engaged in cattle, sheep, fruit and timber operations, later moving on to mercantile, banking and lodging. Platted in 1882, Joseph Oregon was the area's first town, established even before there was a county designated 'Wallowa.' When outside interests in the form of a Mercantile and Mining Company came to make their stake in the town, they were turned away by local businesses. That didn't stop them; they simply moved six miles north, creating the town of Enterprise, which later took the county seat.
This area was home to the Wallowa Band of the Nimi'ipuu (meaning 'we the people' or 'real people'), called the Nez Perce by whites even though no record exists of them piercing their noses. They called the Joseph area 'Hah-um-sah-pah', meaning 'big rocks lying scattered around.' 'Wallowa,' another Nimi'ipuu name, translates to 'Fish Trap' or 'Land of Winding Waters,' depending on who you ask. This was their home for hundreds if not thousands of years, used primarily as a hunting, fishing, and root- and berry-gathering area during the summer. Had it not been for the kindness of the Nez Perce in 1805, Lewis & Clark's Expedition of Discovery might never have made it. Chief Joseph, the town's namesake, was an honorable, peaceful man, by all accounts a friend of the settlers until forced from the land by a treaty he never signed. His grave (right) sits in a place he called home at the foot of Wallowa Lake.
From the left, over the road, is Bonneville Mountain; next is the large Chief Joseph Mountain, then the very snowy Hurricane Divide, followed by Twin Peaks, Sawtooth Peak, and Hurricane Point, a peak that sits as far forward as Chief Joseph Mountain. The last major mountain, the one with a cloud on the right, is Ruby Peak.
|
<urn:uuid:29643137-dd67-4f26-ab4a-aeac760182aa>
|
{
"dump": "CC-MAIN-2020-05",
"url": "https://josephoregon.com/joseph-or-overlook-summer",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250628549.43/warc/CC-MAIN-20200125011232-20200125040232-00100.warc.gz",
"language": "en",
"language_score": 0.9728124737739563,
"token_count": 470,
"score": 2.71875,
"int_score": 3
}
|
The World Health Organization's declaration of the coronavirus, or COVID-19, as a pandemic set off a string of reactions by countries throughout the world. From international travel bans to domestic social distancing, methods to curb the spread of the virus are being practiced by everyone around the globe. The panic and distress caused by this news and the adoption of preventive policies can be observed irrespective of the number of reported cases in each particular country. Many gatherings and events have been called off, including sports matches, cultural festivals and international business conferences, and the world is set to go into a temporary lockdown. This was the response received by COVID-19 within its two months of existence, in which 118,000 cases have been reported in 114 countries and 4,291 people have lost their lives.
This makes one wonder why things went south with climate change. The very first climate emergency declaration took place in December 2016, approximately three years prior to COVID-19. Climate change, which has impacted our food, air, health systems, water and environment, has an estimated death rate more significant than the virus. It is found to be responsible for an additional 250,000 deaths per year through heat stress, and a predicted net increase of 529,000 adult deaths by 2050 due to reduced food production. Yet these calculations do not include every aspect of the impact that the climate crisis imposes.
Individuals and environmental agencies across the world are still spending a considerable amount of their funds on convincing people of the reality of climate change, despite scientists constantly warning us of its existence. The reaction to the coronavirus went on to prove one thing: it is not about believing or not believing, but about what affects individuals and governments more directly.
One graph shows the number of cases caused by the coronavirus as of 16 March 2020, and another shows the calamities and impacts of global warming. One of the reasons behind the silence on climate change can be understood by analyzing these graphs.
It can be noted that, on one hand, the people affected by the virus come from every level of wealth and class; in fact, a larger proportion of the reported cases may belong to upper-class people due to their easy access to medical facilities. On the other hand, the second graph clearly shows that through global warming, currently (until 2025) only 'some' people and regions will be adversely affected, and we can even anticipate some positive impacts on markets in the short term. The 'some' here are the people and regions stricken with poverty, those who cannot buy rights and resources through money and power.
This concept is further supported by critics who question the Anthropocene. The Anthropocene, a geological epoch marking the significant impact of human activities on Earth's climate, is often critiqued for its framing of 'all humans'. Many researchers and academics argue that the human activities that impact the Earth's climate are not performed by 'all' humans, but are specific to what capitalists benefit from - for example, mining, fossil fuel exploitation and large-scale deforestation. Many indigenous groups have proved that it is possible to coexist with nature in a balanced manner and that consuming to sustain does not harm the environment.
This also makes it easier to understand why the environment has shown considerable improvement in terms of air quality and clearer water bodies. NASA air quality researcher Fei Liu said, “This is the first time I have seen such a dramatic drop-off over such a wide area for a specific event.” It is important to understand that it is not the entire human population being under lockdown that has produced this result, but capitalism coming to a halt.
Some might wonder whether it really is necessary to talk about climate change in the middle of a pandemic; they do not realise that climate change is a bigger crisis than the current COVID-19 threat. The world's response to this threat indicates that it is possible for us to amend our ways in order to survive climate change, but only when there are government efforts to support behavioural change. At the end of the day, some experts report that the reduction in pollution may have saved more lives than the death toll caused by the deadly virus in China.
Vanshika Mittal is a second year undergraduate from Ashoka University pursuing her major in Economics and minor in Environmental Sciences.
|
<urn:uuid:345b626d-b143-4200-a1f7-57692b361e47>
|
{
"dump": "CC-MAIN-2022-05",
"url": "https://nickledanddimed.com/2020/04/08/pandemic-lockdownclimate-change-ignorance/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299927.25/warc/CC-MAIN-20220129032406-20220129062406-00044.warc.gz",
"language": "en",
"language_score": 0.962397575378418,
"token_count": 912,
"score": 3.46875,
"int_score": 3
}
|
Chris Rayment Scott Sherwin Department of Aerospace and Mechanical Engineering University of Notre Dame Notre Dame, IN 46556, U.S.A. May 2, 2003
1 Preface

2 Introduction
2.1 Fuel Cell Basics
2.2 History of Fuel Cell Technology
2.3 Why are we studying Fuel Cells?
2.3.1 Why Fuel Cells are an Emerging Technology
2.3.2 What are the applications of Fuel Cells?
Fuel Cell Basics and Types
3 Open Circuit Voltage and Efficiency 3.1 Open Circuit Voltage . . . . . . . . . . . . . . . . . . . . . . 3.2 Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2.1 Efficiency Related to Pressure and Gas Concentration 3.3 Nernst Equation Analysis . . . . . . . . . . . . . . . . . . . 3.3.1 Hydrogen Partial Pressure . . . . . . . . . . . . . . . 3.3.2 Fuel and Oxidant Utilization . . . . . . . . . . . . . . 3.3.3 System Pressure . . . . . . . . . . . . . . . . . . . . . 4 Causes for Voltage Loss 4.1 Introduction . . . . . . . . . . . . . . . . 4.1.1 Common Terminology . . . . . . 4.2 General Voltage Loss Descriptions . . . . 4.2.1 Initial Theoretical Voltages . . . . 4.2.2 Description of Operational Losses 4.3 Activation Losses . . . . . . . . . . . . . 4.3.1 Tafel Equation . . . . . . . . . . 4.3.2 Maximizing the Tafel Equation . 3
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
.2. . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Combining the Losses . . . . . . . . . . . . 7.2 Anode and Cathode . . . . . . . . .7 Fuel Crossover/Internal Current Losses Ohmic Losses . . . . .1. . . of Supercharging . . .2 Static Electrolyte Alkaline Fuel Cells 5. . . . . . . . . . . . . . . . . . . . . . . . . .1 Molton Carbonate Fuel Cell Components 6. . . . . . . . . . . . . . . . . . . . . . . . .3 Rolled Electrodes . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Types of Alkaline Electrolyte Fuel Cells . . 6. . . . . . . .3. . . . . .2 MCFC research and systems . . . . . 6 Molten Carbonate Fuel Cell 6. . . . . . . . . . . . . . . . . . . . . . . . .2 Raney Metals . . . . . . . . 7 Polymer Electrolyte Fuel Cell (PEMFC) 7. .1 Introduction . . . 8. . . . . . . . . . . . . . 35 36 36 37 37 39 40 40 41 41 42 42 42 43 43 45 47 47 47 48 48 48 49 49 50 51 52 53 54 55 57 57 58 59 59 60 5 Alkaline Fuel Cells 5. . . . . . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . .3 Cathode . . . . . . .2 Description of Operation . . . 8. . . . .2 The Polymer Membrane . . . . . . . .3 General Voltage Loss Descriptions . . . . . . . . .4 Manifolding . . . . . . 5. . . . . . . . . . . . . . . .5 4. . . . 6. . . . .1 Introduction . . . . .1. . .1. . . . . . . . . . . . .1 Mathematical Understanding of the Effects 7. . . . . . . . . . . . . . . . . 6. . . . . . . . . . . . 8. . . 8. . . . . . . . . . . . . . . . . 4. . . . . . . . . . . . Conclusion . . . .1. . 7. . .1. . . . .1 Air Flow’s Contribution to Evaporation . . . . . . . . 5. . . . . . . . . . . . . . . . . . . . .2 Anodes . . . . . . . . . . . . . . . . . . . . . . .3 Water Management . . . .3. . . . . . .7. . . . . . . . . . . . . . . . . . . . . . . .1 Typical Losses . . . . . . . . . .1 Mobile Electrolyte . . . . . . . . . . . . . . . . . . . . . .3. . 7.6 4. . . . . . . . . . . . . . . . . . . . .2. . . . 5. . . .3 Operating Pressure and Temperature . . .5 Conclusion . . . 
. . . .1. . . 4 . . . . . . . . . . . . . .4 4. . . . . . . . . . .4. . . . .1. . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Sintered Nickel Powder . . . .4. . Mass Transport/Concentration Losses . . . . . . . . 5. . . . . . . . . . . . . . . . . . . . . . . . . . .1 Electrolytes . . . . . . . 5. . . . . . . . . . 6. . . . . . 5. . . . . . . . . 7. . . . . . . . . . . .2 Electrodes for Alkaline Electrolyte Fuel Cells 5. . . . 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Effects of Pressure . . . . . .3 Dissolved Fuel Alkaline Fuel Cells . . . . 8 Direct Methanol Fuel Cells (DMFC) 8. . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . .1 The Electrolyte . . . . . . . . . . . 5 . . . 9. .1 Introduction . . . . 61 63 63 64 65 65 66 66 67 69 69 70 70 71 72 74 74 74 75 75 76 77 9 Phosphoric Acid Fuel Cells 9. . . II Fuel Cell Applications and Research . .3 Cell Components . . . . . . . . . . . . 10. . . . . . . . . . . . .4 Manufacturing Techniques . . . 10. 9. . . . . . . . . . . . 10 Solid Oxide Fuel Cell (SOFC) 10. 10. 10. . . . . . . . . . . . .5 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Introduction . .6 Temperature Effects . . . . . .4 Conclusion . . . . . . . .3 Compressor Power . . . . . . . . . . . .1 Compressors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 81 81 82 83 84 85 85 85 87 87 88 11 Fuel Cell System Components 11.6 Fans and Blowers .3 The Stack . . . . . . 9. . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Configurations . .7 Research and Development . . . . . . . . . . . . . . . . . . . 10. . . . . . . . . 10. . . . . . . . . . .1 Planer . . . . . . . . . . . . .6 Conclusion . . . . . . . . . . . . . . .2 Compressor Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Effects of Temperature 10. . . . .5. . . . . . . . . . . . . . .2. . . . 11. . . . . . . . . . . . . 12.2. . . . . . . .2 Hydrogen Production from Natural Gas . 10. 9. . . . . . . . . . . 9. . . . . . 11. . . . . . . . . . . . . . . . .2 Tubular . . . . 11.5 Operating Pressure . . . . . . . . . . . 11. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10. . . . . . . . . . 10. . . . . . . . . . . . . . . . . . . . . . . . .8. . . . . . . .5. . 11. . .1 Tape Casting . . . . . . . . . . . . . . . . . . . .5 Ejector Circulators . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . 11. . . . . . . . .4 Stack Cooling and Manifolding 9. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Effects of Pressure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Turbines . . . . . . . . . . . . . . . . . . . . . . .3 Effects of Impurities .4. . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 The Electrodes and Catalysts . . . .5. . . . . . . . . . . . . . . .7 Membrane/Diaphragm Pumps 12 Fueling the Hydrogen Fuel cell 12. . . . . . . . . .
. .3. .2 Fuel Reforming . . . . . . . .2 Cryogenic Liquid . . 14 Manufacturing Methods 14.3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Conclusions . . . 16 The New fuel for a New fleet of Cars 16. . .5. . . . .3 Fuel Delivery and Crossover Prevention 15. . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . .1 Conclusions .2 Effects of Compression . . . . . . . .3. 14. . . .1 Bipolar Plate Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Introduction . . . . . . . . . . 14. . . .1 Silicon Based Microreactor . . . .5 Methods for DMFC . . . . . . . .3. . . . . . . . . . . . 14. 15. . . . . . 14. . . . . . . . .5. . . . . . . . . . . . . . . .2 Air Movement .1 PEM Simulation and Control . . . . . . . . . . .3 Hydrogen Production from Coal Gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 System Issues . . . . . . . . . . . 15. . . 13. . . . . . . . . . . 13 PEM Fuel Cells in Automotive Applications 13. . .1 Gasoline Reforming .1 Introduction . . . 16. . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . .1 Thermal Management . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . .5 System Integration . . . . . . .2 Solutions . . . . . . . . . . . . . . .2 Methanol Reforming . . . . 16. . . . . . . . . . . . . . .3. . . . . . . 15. . . 14. . . . . .6 Methods for SOFC . . .2. . . . . . . . . . 15 Portable Fuel Cells 15. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16. . .2 Carbon/Carbon Composite Bipolar Plate for PEMs . . . . . . . . . . . . . . . . . . . . . . 14. . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15. . . . . . . . . . . . . .4 Load Management . . . . . . . . . . . . . . . . 
6 90 92 93 93 95 99 99 100 101 101 102 103 104 104 106 108 109 119 119 119 120 121 121 122 123 124 125 127 127 128 128 128 129 129 130 130 . . . 15. . . . . . . . . . . .4 Introduction to SOFC and DMFC Manufacturing Methods 14. . . . . . 14.2 PEM Cost Analysis . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 MEA Thickness and Performance . . . . . 16. 16. . . . 16. . . . . . . . . . .3 Electrolyte Matrix . . . . . . . . . . . . 15. . . . . . . . . . . . .3 Fuel Storage . . 14. . . .4 Conclusion . . . . . . . . . . . . . . . . . . .12. . . . . . . . . . . . . . . 12. . . . . . . . . . . . . . . .7 Conclusion . . . . . . .4 Hydrogen Production from Bio Fuels . . . . . . . . . 14. . . 15. . . . . . . . . . . 16. . . . . . .1 Compressed Gas . . . . . . . . . . . . . .
. . . . . . . . . .2.3. . . . . . . . . . . .2 Technological Developments 18. . . . . . 18. . . . . . . . .3. . . Bibliography . . . 18. . . . .2 Cost Analysis . . . . . . . . . . . . . . . . . .2 System Integration . . . .2. . 17. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Fuel .1 Reliability . . . . . . . . . . 17. . . . . . . . . . .3 Conclusion .1 Introduction . . . . . . . . .3 Government Interaction . . .3 Technical Issues . . . . . . . . . . . . . . . . . 18. . . . . . . . . . . . .2. . . . . . . 18. . . . . . . . . . . . . 133 133 134 135 137 139 141 141 142 146 146 146 148 150 156 7 . . 17. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Optimization of a Cogeneration Plant 17. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Fuel Cell Challenges 18. . . . . . . . . . . . .17 Commercial and Industrial Use 17.1 Cost Reductions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18. . . .1 Thermodynamics . .3. . . . . . . . . . . . . . . . . .
Chapter 1: Preface

This document was produced for a directed reading class at the University of Notre Dame. The class was the result of two students, Chris Rayment and Scott Sherwin, who were interested in learning about fuel cells, and two professors, Mihir Sen and Paul McGinn, who agreed to conduct the course. The course consisted of weekly presentations on the written chapters, and this report is the product of those weekly presentations. The course outline was determined by us, Chris and Scott, and the work was evenly distributed between the two of us. The first half of the course consisted of an introduction to fuel cells and their various types, whereas the second half covered applications and current research in the fuel cell field; this is representative of the general layout of the report. The goal of this report was to produce a document showing our work for the semester and also to make it available to other students interested in fuel cells or taking an introductory fuel cell course.

We would like to thank Professor Mihir Sen and Professor Paul McGinn from the University of Notre Dame for their time and guidance in conducting this course. Their knowledge and experience in engineering were greatly beneficial to the success of this course and thus of the report.

Chris Rayment
Scott Sherwin

(c) Chris Rayment and Scott Sherwin
Chapter 2: Introduction

2.1 Fuel Cell Basics

A fuel cell is a device that uses hydrogen as a fuel to produce electrons, protons, heat and water. Fuel cell technology is based upon the simple combustion reaction given in Eq. (2.1):

    2H2 + O2 <-> 2H2O    (2.1)

A fuel cell does not require recharging the way a battery does; in theory, a fuel cell will produce electricity for as long as fuel is constantly supplied. The basic design of a fuel cell involves two electrodes on either side of an electrolyte. Hydrogen fuel is supplied to the anode (negative terminal) of the fuel cell while oxygen is supplied to the cathode (positive terminal). Hydrogen and oxygen pass over their respective electrodes and, by means of a chemical reaction, electricity, heat and water are produced. Through this chemical reaction the hydrogen is split into an electron and a proton, and each takes a different path to the cathode. The electrons can be harnessed to provide electricity in a consumable form through a simple circuit with a load, while the proton passes through the electrolyte; both are reunited at the cathode, where electron, proton and oxygen combine to form the harmless byproduct water. This process is shown in Fig. 2.1.

Figure 2.1: Basic fuel cell operation.

Because the fuel cell produces electricity through a chemical reaction rather than combustion, its emissions are significantly cleaner than those of a fuel combustion process. The hydrogen fuel can be supplied from a variety of substances if a "fuel reformer" is added to the fuel cell system; for example, hydrogen can be obtained from hydrocarbon fuels such as natural gas or methanol.

Problems arise when simple fuel cells are constructed. Simple fuel cells have a very small area of contact between the electrolyte, the electrodes and the gas fuel, and they have high resistance through the electrolyte as a result of the distance between the electrodes. Fuel cells have therefore been designed to avoid these problems. One design solution is to manufacture flat-plate electrodes with an electrolyte of very small thickness between them. This design gives the maximum area of contact between the electrodes, electrolyte and gas, thus increasing the efficiency and current of the fuel cell. A very porous electrode with a spherical microstructure is optimal, so that penetration by the electrolyte and gas can occur.

2.2 History of Fuel Cell Technology

The origin of fuel cell technology is credited to Sir William Robert Grove (1811-1896). Grove was educated at Oxford and practiced patent law while also studying chemistry. Grove developed an improved wet-cell battery in 1838, which brought him fame. Using his research and the knowledge that electrolysis used electricity to split water into hydrogen and oxygen, he concluded that the opposite reaction must be capable of producing electricity. Using this hypothesis, Grove developed a device which would combine hydrogen and oxygen to produce electricity: the world's first gas battery. It is this gas battery which has become known as the fuel cell.

Ludwig Mond (1839-1909), along with his assistant Carl Langer, conducted experiments with a hydrogen fuel cell that produced 6 amps per square foot at 0.73 volts. Mond and Langer encountered problems using liquid electrolytes. As Mond said, "we have only succeeded by using an electrolyte in a quasi-solid form soaked up by a porous non-conducting material, in a similar way as has been done in the so-called dry piles and batteries." Mond used an earthenware plate saturated with dilute sulfuric acid.

It was Friedrich Wilhelm Ostwald (1853-1932), the founder of the field of physical chemistry, who experimentally determined the relationships between the different components of the fuel cell, including the electrodes, electrolyte, oxidizing and reducing agents, anions and cations. Ostwald's work opened doors into the area of fuel cell research by supplying information to future fuel cell researchers.

Emil Baur (1873-1944) conducted extensive research into high temperature fuel cell devices which used molten silver as the electrolyte. His work was performed along with students at Braunschweig and Zurich.

During the first half of the twentieth century, Francis Thomas Bacon (1904-1992) performed research and made significant developments with high pressure fuel cells. Bacon's work continued through World War II as he tried to develop a fuel cell for use in Royal Navy submarines. Bacon was successful in developing a fuel cell that used nickel gauze electrodes and operated at pressures up to 3000 psi. In 1958, his work led to the development of an alkali cell using a stack of 10-inch diameter electrodes for Britain's National Research Development Corporation. Bacon's developments were successful enough to gain the interest of Pratt & Whitney, and his work was licensed and used in the Apollo spacecraft fuel cells. Similar technology is still being used in spacecraft.

2.3 Why are we studying Fuel Cells?

Currently there is a great deal of active research throughout the world on solving the engineering problems that prevent fuel cells from becoming commercially available. Among these problems are the high initial cost of manufacturing fuel cells, the lack of an infrastructure to deliver fuels to the cells, and the unfamiliarity of the power industry with the technology. These problems highlight three areas. The first consists of the engineering or manufacturing problems associated with each type of fuel cell: the industry must reduce the cost of producing fuel cells. The second issue is one of policy and engineering: in order to develop an infrastructure for fuel cells, a specific type of fuel cell first needs to be chosen so the infrastructure can be developed to support the specific needs of that cell. Several policy changes are also needed to account for the new source of electric power, i.e. standardization, safety codes, and regulations for the production and distribution of the fuels. The final hurdle that must be cleared before commercialization can begin is that the power industry needs to be familiarized with this emerging technology. This education of the industry will occur over time, as the technology becomes more commonplace as a form of energy generation and as the power companies themselves move toward a more hydrogen based form of electric power generation.

Thus, as an introduction to fuel cells, we need to study them for two important reasons. First, they are an emerging technology that needs to be understood, enabling the continuation of R&D and the eventual rollout of commercialization. Second, we need to learn how the presence of fuel cells will change current applications of energy dependent devices.
2.3.1 Why Fuel Cells are an Emerging Technology
As mentioned above, the major disadvantage of the fuel cell is that it is currently more expensive than other forms of power conversion. But this is a barrier that may soon be broken. Previously the application of fuel cells was limited to niche uses like the space program of the 1960s, but as R&D has progressed over the past 40 years the cost of the fuel cell has dropped dramatically; the current cost is about $1,500/kW. According to most research analysts, the cost that producers must reach is around the $400/kW range. To address this cost barrier the government has awarded $350 million in research grants to several companies to lower the initial cost of the cell to the necessary price range. The government is working in this area through a branch organization of the Department of Energy called the Solid State Energy Conversion Alliance (SECA). The SECA has distributed money and provided help to four major companies in an effort to break the cost barrier by the year 2010. Once this barrier is broken, it is widely speculated that fuel cells will become a dominant source of energy conversion.

The reason for their desirability is that they are extremely efficient and simple, have virtually no emissions, and run silently. Current fuel cells, when operated alone, have efficiencies of about 40%-55%, and when used with combined heat and power (CHP) they can reach efficiencies of 80%. This is a dramatic improvement over a current internal combustion engine, which is limited to an efficiency of about 30%. The simple design of fuel cells will also contribute greatly to their longevity: they have virtually no moving parts, and in some cases are made entirely of solids. This not only simplifies the manufacturing process but will also allow the cells to have longer operational periods. Since the output of an ideal fuel cell is pure water, the emissions are extremely low. Depending on the type of fuel cell and the fuel used, the actual emissions fall well below any current emissions standard. If the fuel cell is held to the L.A. Basin emissions requirements, it falls well below the maximums: it emits <1 ppm of NOx, 4 ppm of CO, and <1 ppm of reactive organic gases, while the standards are an order of magnitude greater for NOx, two orders of magnitude greater for reactive organic gases, and several orders of magnitude greater for CO. The final advantage, which many consumers will appreciate, is silence of operation. The cell converts energy through a chemical process, as opposed to a mechanical process as in an internal combustion engine, so the sound emissions are virtually zero. This is especially important in onsite and vehicle applications. All of these major advantages make fuel cells an excellent choice for the future of power generation.
2.3.2 What are the applications of Fuel Cells?
The applications of fuel cells vary depending on the type of fuel cell used. Since fuel cells are capable of producing power anywhere in the 1 watt to 10 megawatt range, they can be applied to almost any application that requires power. On the smaller scale they can be used in cell phones, personal computers, and other personal electronic equipment. In the 1 kW - 100 kW range a fuel cell can be used to power vehicles, both domestic and military; public transportation is also a target area for fuel cell application, along with auxiliary power unit (APU) applications. Finally, in the 1 MW - 10 MW range, fuel cells can be used to convert energy for distributed power uses (grid-quality AC). Since fuel cells can be used anywhere in the power spectrum, their development will have an immediate impact across their prospective power ranges.

One of the major applications for the fuel cell in the future will likely be domestic and public transportation. The fuel cell is well adapted to this application because its use will reduce the design complexity of a vehicle. GM has devoted much of its future planning to the incorporation of the fuel cell in its designs: it would like to create a drive-by-wire vehicle that would remove the dependence of today's cars on mechanical systems. This conversion to a totally electronic vehicle would greatly reduce the number of moving parts in the car, dramatically decreasing the likelihood of failure. In the low-power range the fuel cell has a great advantage over batteries in that it does not need to be recharged, only fueled, and it has a much higher power density than current commercialized batteries. Since they provide more power per unit area, the cells can be smaller while supplying the same power, saving considerable space. In a large-scale setting, the fuel cell can be used to increase the efficiency of a current turbine power plant: by using the hot exhaust from the fuel cell and transferring it to a turbine power cycle, the overall practical efficiency of the system can reach up to 80%.
Part I: Fuel Cell Basics and Types
Chapter 3: Open Circuit Voltage and Efficiency

3.1 Open Circuit Voltage

Fuel cell efficiency cannot be analyzed in the same way as a thermodynamic cycle using the Carnot efficiency. Unlike many electrical power generating systems, it is not obvious what form of energy is being converted into electricity in a fuel cell. The inputs and outputs of the basic fuel cell are shown in Fig. 3.1.

Figure 3.1: Basic fuel cell inputs and outputs: hydrogen and oxygen in; electricity (V*I*t), heat and water out.

The power and energy are the same as for any electrical system:

    Power = VI    and    Energy = Power x t = VIt    (3.1)

To analyze the chemical energy changes throughout the chemical process involved in the operation of a fuel cell, one must be aware of and understand "Gibbs free energy," defined as the energy available to do external work, neglecting any work done by changes in pressure and/or volume. A simple analogy can be made between chemical energy and potential energy: chemical energy has reference points from which all other system chemical states are measured, and the point of zero energy can be defined almost anywhere. When the convention of 25 C and 1 atm is used, the quantity is called the "Gibbs free energy of formation," G_f. The molar Gibbs function is defined as

    g = h - T*s    (3.2)

where g, h and s are the Gibbs function, enthalpy and entropy per mole and T is the temperature. Enthalpy is in turn defined as

    h = u + p*v    (3.3)

where u is the specific internal energy per mole, p is the pressure, and v is the specific volume. The change in the Gibbs function between the reference state and any other state is

    dg = [h(T,p) - h(T_ref,p_ref)] - [T*s(T,p) - T_ref*s(T_ref,p_ref)]    (3.4)

so the Gibbs function at a state other than the standard state is found by adding this change to the standard-state Gibbs free energy of formation:

    g(T,p) = g_f^o + dg    (3.5)

Entropy is defined through the internally reversible heat transfer over a portion of a cycle:

    S2 - S1 = Int_1^2 (dQ/T)_int rev    (3.6)

so that

    s(T,p) = s(T_ref,p_ref) + [s(T,p) - s(T_ref,p_ref)]    (3.7)

Assuming ideal gas behavior, the entropy at any temperature and pressure is determined by

    s(T,p) = s^o(T) - R ln(p/p_ref)    (3.8)

where s^o is the absolute entropy at temperature T and pressure p, given by

    s^o(T) = Int_0^T [c_p(T)/T] dT    (3.9)

Just as potential energy can change, chemical energy can also change, and it is the change in the Gibbs free energy of formation, dG_f, that determines the energy released during the chemical process:

    dG_f = G_f of products - G_f of reactants    (3.10)

A much more common and useful notation is the "per mole" form:

    dg_f = g_f of products - g_f of reactants    (3.11)

Applying this to the basic combustion reaction

    2H2 + O2 <-> 2H2O    (3.12)

which is equivalent to

    H2 + (1/2)O2 <-> H2O    (3.13)

we have

    dg_f = (g_f)_H2O - (g_f)_H2 - (1/2)(g_f)_O2    (3.14)

Two electrons are transferred for each molecule of hydrogen consumed. If we designate -e as the charge on one electron and N as Avogadro's number, then the charge produced by the reaction, per mole of hydrogen, is

    -2Ne = -2F coulombs    (3.15)

where F is the Faraday constant, the charge on one mole of electrons. The electrical work done by the fuel cell in moving two electrons around the circuit is

    Electrical work done = charge x voltage = -2FE joules    (3.16)

where E is the voltage of the fuel cell. Since the process is assumed reversible, all the Gibbs free energy is converted into electrical energy:

    dg_f = -2FE    (3.17)

which, rearranged, gives

    E = -dg_f / 2F    (3.18)

where E is the EMF, or reversible open circuit voltage, of the hydrogen fuel cell. Table 3.1 gives dg_f, and the resulting maximum EMF and efficiency limit, for a range of temperatures and water states.

Table 3.1: Gibbs free energy of formation of water, maximum EMF, and efficiency limit (HHV basis) at various temperatures and states

Form of water product | Temp (C) | dg_f (kJ/mol) | Max EMF | Efficiency limit
Liquid  |   25 | -237.2 | 1.23 V | 83%
Liquid  |   80 | -228.2 | 1.18 V | 80%
Gas     |  100 | -225.2 | 1.17 V | 79%
Gas     |  200 | -220.4 | 1.14 V | 77%
Gas     |  400 | -210.3 | 1.09 V | 74%
Gas     |  600 | -199.6 | 1.04 V | 70%
Gas     |  800 | -188.6 | 0.98 V | 66%
Gas     | 1000 | -177.4 | 0.92 V | 62%

3.2 Efficiency

The efficiency of a fuel cell is determined by the Gibbs free energy, dg_f, and the enthalpy of formation, dh_f, the heat that would be produced by burning the fuel.
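The open circuit voltage of Eq. (3.18) and the efficiency limit of Table 3.1 are easy to check numerically. The following is an illustrative sketch (not part of the original report) using the tabulated Gibbs free energies:

```python
# Reversible open-circuit voltage E = -dg_f / (2F) (Eq. 3.18) and the
# HHV efficiency limit dg_f / dh_f, for H2 + 1/2 O2 -> H2O (2 electrons).
F = 96485.0          # Faraday constant, C per mole of electrons
DH_F_HHV = -285.84   # enthalpy of formation of liquid water, kJ/mol (HHV)

def max_emf(dg_f_kj_per_mol):
    """Reversible EMF in volts from the Gibbs free energy of formation."""
    return -dg_f_kj_per_mol * 1000.0 / (2.0 * F)

def efficiency_limit(dg_f_kj_per_mol):
    """Maximum efficiency relative to the higher heating value."""
    return dg_f_kj_per_mol / DH_F_HHV

# Liquid water at 25 C: dg_f = -237.2 kJ/mol (Table 3.1)
print(round(max_emf(-237.2), 2))           # -> 1.23 (V)
print(round(efficiency_limit(-237.2), 2))  # -> 0.83
# Steam at 800 C: dg_f = -188.6 kJ/mol
print(round(max_emf(-188.6), 2))           # -> 0.98 (V)
```

Both computed values reproduce the 1.23 V / 83% (25 C) and 0.98 V (800 C) entries of Table 3.1.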
and the ¯ ¯ enthalpy of formation. corresponding to the H2 O is known as the lower heating value (LHV). ∆hf = −285. For the ¯ product H2 O in the form of steam being produced. The higher heating value is the value given when the product of the combustion is a liquid and the lower heating value is the value corresponding to when the product is in the gas form. and it is a positive number equal to the enthalpy of combustion. Eq.19). ¯ for H2 O in the form of liquid being produced.21): Maximum efficiency possible = ∆¯f g ¯ f × 100% ∆h (3. ∆hf . Some interesting points about the efficiency of a fuel cell are : • Even though a fuel cell is more efficient at lower temperatures as shown in Table 3. depends on the state of the H2 O product in the governing combustion equation. is the actual energy produced by the combustion reaction. (3. The Gibbs ¯ free energy. Therefore. the voltage losses are much less in higher temperature fuel cells. The product H2 O can be in the form of either steam or liquid. hf .1 gives the value of maximum efficiency for a range of operating temperatures. hf . The heating value is a common term applied to a fuel. The enthalpy ¯ of formation.20) where P and R correspond to the products and reactants. gf divided by the ideal energy produced by the reaction. and h is the enthalpy. The enthalpy of formation.21) where the maximum efficiency of any system is the actual energy produced by the ¯ reaction. in any general combustion equation. respectively. Table 3. Therefore. (3.83kJ/mole. The difference in the two enthalpy of formation values is due to the molar enthalpy of vaporization ¯ of water. 23 . whereas. n corresponds to the respective coefficients of the reaction equa¯ tion giving the moles of reactants and products per mole of fuel. can be ambiguous in that the enthalpy of ¯ formation.19) The efficiency equation. the maximum efficiency for a fuel cell is determined by Eq.1. 
The enthalpy of formation is easily calculated from the equation below: ¯ hRP = P ¯ ¯ ne (ho + δ h)e − f R ¯ ¯ ni (ho + δ h)i f (3. is the ideal energy that can be produced by the combustion reaction if the maximum energy was produced by the combustion reaction. ∆hf = −241. ∆hf = −285. gf .84kJ/mole.83kJ/mole. corresponding to the H2 O in the liquid state is known as the higher heating value (HHV).84kJ/mole. it is more advantageous to run a fuel cell at a higher temperature yet lower efficiency to produce higher operating voltages. ∆hf = −241.electrical energy produced per mole of fuel ∆¯f g = ¯ ¯ −∆hf ∆hf (3.
• Fuel cells operating at higher temperatures will produce more heat which can be harnessed and used in a much more efficient manner than the low heat produced by low temperature fuel cells. (3. The pressure of the fuel and the gas concentration of the fuel is vitally important in the efficiency of the fuel cell.1.” The activity is defined by: activity a= P Po (3. aH2 O .2.2) we obtain a new form of the Gibbs equation: 1 2 aH2 aO2 ∆¯f = ∆¯f − RT ln g (3. If we consider the hydrogen fuel cell reaction: 1 H2 + O ↔ H2 O 2 (3.1 MP a. Applying Eq. (3.24) go aH2 O 2 where values for ∆¯f are given in Table 3.22) where P is the partial pressure of the gas and P o is the standard pressure. (3.18) we obtain the “Nernst” equation: 1 E= −∆¯f g0 2F + RT aH2 aO2 ln 2F aH2 O 24 1 2 . and aH2 . or 0.22) to Eq. (3. Substituting Eq. A heat engine is actually more efficient at higher temperatures depending on the specific fuel cell being analyzed. • Fuel cells do not necessarily have a higher efficiency than heat engines. The altered form of the Gibbs free energy equation given in Eq. In the case of any chemical reaction the products and reactants have an associated ”activity.1 Efficiency Related to Pressure and Gas Concentration The efficiency of a fuel cell is affected by more than just temperature. 3. (3.24) will affect the voltage of a fuel cell. (3. and aO2 are the activation go energies for the products and reactants.8) and Eq.23) The activity of the products and reactants alters the Gibbs free energy equation.24) into Eq.
The voltage given in Eq. (3.25) is known as the "Nernst voltage." Applying Eq. (3.22) to Eq. (3.25), and assuming that the produced H2O steam behaves as an ideal gas, Eq. (3.25) reduces to:

E = E° + (RT/2F) ln( (P_H2 · P_O2^{1/2}) / P_H2O )   (3.26)

where E° is the EMF at standard pressure and the pressures are given in bar. The EMF is affected by the state and type of hydrogen supplied, be it in pure form or part of a mixture. The fuel and oxidant utilization and the system pressure also affect the system EMF.

3.3 Nernst Equation Analysis

There are various ways of analyzing the different forms of the Nernst equations; because they contain many variables, fuel cells are very complex to analyze and optimize. If we use the relationships

P_H2 = αP,   P_O2 = βP,   P_H2O = δP   (3.27)

where P is the pressure of the system and α, β, and δ are constants that depend on the molar masses and concentrations of H2, O2, and H2O, then applying these relationships to Eq. (3.26) we obtain:

E = E° + (RT/2F) ln( αβ^{1/2} / δ ) + (RT/4F) ln(P)   (3.28)

As we can observe from the different forms of the "Nernst" equation, there are many variables to consider in the EMF of a fuel cell.
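Equation (3.28) is easy to evaluate numerically. A sketch in Python; E°, the temperature, and the gas fractions α, β, δ below are illustrative assumptions rather than values from the text:

```python
import math

R = 8.314    # J/(mol K), ideal gas constant
F = 96485.0  # C/mol, Faraday's constant

def nernst_emf(E0, T, alpha, beta, delta, P):
    """Eq. (3.28): E = E0 + (RT/2F) ln(alpha*sqrt(beta)/delta) + (RT/4F) ln(P)."""
    return (E0
            + (R * T / (2 * F)) * math.log(alpha * math.sqrt(beta) / delta)
            + (R * T / (4 * F)) * math.log(P))

# Illustrative operating point: an 80 C cell fed with humidified air.
E = nernst_emf(E0=1.2, T=353.0, alpha=0.9, beta=0.19, delta=0.3, P=1.0)
print(round(E, 4))
```

Raising any of the reactant fractions, or the system pressure P, raises the predicted EMF, which is the behavior examined term by term in the subsections that follow.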
3.3.1 Hydrogen Partial Pressure

The partial pressure of the fuel decreases as the fuel is utilized in the fuel cell. This value varies throughout the fuel cell, and the greatest losses occur at the exit of the fuel cell, where most of the fuel has been used. The voltage drop can be determined if we assume that P_O2 and P_H2O are unchanged and that the hydrogen partial pressure changes from P1 to P2. Substituting the partial pressures into Eq. (3.28), isolating the hydrogen term, and taking the difference in EMF, we obtain the voltage drop:

ΔV = (RT/2F) ln(P2) − (RT/2F) ln(P1) = (RT/2F) ln(P2/P1)   (3.29)

A relevant example that has been tested in the laboratory is the phosphoric acid fuel cell, operating at a temperature of T = 200 °C. Using Eq. (3.29) and substituting the proper values for R, T, and F, we get:

ΔV = 0.02 ln(P2/P1) volts   (3.30)

This value gives good agreement with the experimental results reported by Hirschenhofer, which were approximately 0.024, whereas we calculated 0.02. The correlation is affected by the concentration of hydrogen and therefore does not hold as closely at differing concentration levels.

3.3.2 Fuel and Oxidant Utilization

As the fuel cell operates, oxygen is used, and as a result of conservation of mass the partial pressure of the oxygen is reduced. Likewise, if oxygen and hydrogen are being used to produce H2O, then the partial pressure of H2O should increase. Analyzing Eq. (3.28) in light of these changes in partial pressure, it is clear that α and β decrease as δ increases. The change in partial pressures results in a smaller value of the term:

(RT/2F) ln( αβ^{1/2} / δ )   (3.31)

The result of this decrease is a decrease in EMF. The same arguments can be applied to the system pressure and to the various temperatures appearing in Eq. (3.28).
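The 0.02 coefficient of Eq. (3.30) follows directly from RT/2F at 200 °C, and is easy to verify:

```python
import math

R = 8.314    # J/(mol K)
F = 96485.0  # C/mol
T = 473.15   # K: the 200 C phosphoric acid cell quoted above

coeff = R * T / (2 * F)  # prefactor of ln(P2/P1) in Eq. (3.29), in volts
print(round(coeff, 4))   # ~0.0204 V, the "0.02" of Eq. (3.30)

# Voltage change when the hydrogen partial pressure halves:
print(round(coeff * math.log(0.5), 4))
```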
3.3.3 System Pressure

As the Nernst equations show, the voltage of a fuel cell is determined by the concentration of the reactants and the pressure of the fuel being supplied to the fuel cell. The system pressure P enters Eq. (3.28) through the term:

(RT/4F) ln(P)   (3.32)

Therefore a change in voltage will be obtained with a change in pressure from P1 to P2, as given by:

ΔV = (RT/4F) ln(P2/P1)   (3.33)

As we have shown, the EMF of the fuel cell will increase with an increase in system pressure.
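The practical size of this effect is small; a quick sketch of Eq. (3.33), with the operating temperature assumed:

```python
import math

R = 8.314    # J/(mol K)
F = 96485.0  # C/mol

def pressure_gain(P1, P2, T):
    """Eq. (3.33): EMF change when the system pressure moves from P1 to P2."""
    return R * T / (4 * F) * math.log(P2 / P1)

# For an assumed 80 C cell, doubling the system pressure buys only millivolts:
print(round(pressure_gain(1.0, 2.0, 353.15) * 1000, 2), "mV")
```

This is why pressurization is usually justified by its effect on reaction kinetics (discussed in the next chapter) rather than by the Nernst term alone.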
Chapter 4

Causes for Voltage Loss

4.1 Introduction

The discussion of the last chapter served to derive the theoretical output voltage, or EMF, of a fuel cell. This voltage is never truly realized; sometimes it is not even reached when the open circuit voltage is measured. Since the cell does not operate at the theoretically described EMF, we must investigate the sources of the losses in the system and modify the theoretical equation so that it models real life. This chapter will describe why there are voltage losses across the cathode and anode, and how these losses can be minimized.

As Fig. 4.1 shows, the EMF was approximately 1.2 Volts for a low temperature fuel cell (40 °C) and about 1 Volt for a high temperature fuel cell (800 °C). There are also some noticeable differences between Fig. 4.1 and Fig. 4.2. First, the high temperature graph does not display any large initial voltage loss, nor any initial rapid fall in voltage. In contrast, the low temperature fuel cell loses 0.3 V at open circuit and another 0.2 V in the low current density range, 0-100 mA/cm². Although these two fuel cells act very differently at low current densities, their graphs are very similar in character in the mid and high current density ranges. Both the low and high temperature fuel cells show a linear decline until they reach high current density; at this point both cells rapidly lose voltage with respect to increasing current density. This sudden fall in voltage occurs at about 900 mA/cm² in both the low and high temperature cells.

4.1.1 Common Terminology

Since many different groups of people are interested in fuel cells, and the field of engineering has different terminology for similar ideas, it is important to understand the language as well as the engineering system. One of the areas that offers the most
confusion is the topic of voltage differentials. The first necessary term describing the voltage difference is "over voltage" or "over potential." This is typically the electrochemist's view of the potential difference generated at the surface of an electrode. This voltage is superimposed on the ideal (reversible) voltage, but in the case of fuel cells the over voltage actually reduces the reversible voltage. Another common term is "irreversibility." This term has its origins in thermodynamics, where it is often applied to heat loss in a Carnot cycle; in a fuel cell system, due to the chemical conversion of energy, the irreversibilities are extremely low.

Figure 4.1: Low Temperature Fuel Cell Losses

4.2 General Voltage Loss Descriptions

4.2.1 Initial Theoretical Voltages

The last chapter showed that the theoretical voltage of a fuel cell is derived to be:

E = −Δḡ_f / (2F)   (4.1)
Figure 4.2: High Temperature Fuel Cell Losses

This voltage is not the operational voltage of the fuel cell, as Fig. 4.1 and Fig. 4.2 have shown us. Our first task is to show why the initial theoretical voltages themselves are different. From Eq. (4.1) alone we can see that the EMF of the fuel cell is directly dependent on the desired operational temperature; this is the case because we saw in the last chapter that Δḡ_f is directly dependent on temperature. Fig. 4.3 shows the application of Eq. (4.1) to temperatures in the 200 K to 1200 K range, and it shows us why the low temperature fuel cells have a theoretical EMF of 1.2 V and why the high temperature fuel cells have a theoretical EMF of 1.0 V.

Figure 4.3: Temperature Dependence

4.2.2 Description of Operational Losses

The losses that were evident in Fig. 4.1 and Fig. 4.2 can be broken up into four different types: activation losses, fuel crossover/internal current losses, ohmic losses, and mass transport/concentration losses. These losses each have a different effect on the theoretical voltage of the fuel cell.

The activation loss occurs because the chemical process initially has not begun; activation energy is necessary to ensure that the reaction tends toward the formation of water and electricity, as opposed to the reverse. This loss only occurs at low current densities in low temperature fuel cells.

The second source of loss is the fuel crossover/internal current loss. This loss is associated with losses that occur through the electrolyte. It can occur in two ways: either fuel leaking through the electrolyte or electrons leaking through the electrolyte. Fuel leakage causes most of the problems in this category, and just like the activation loss, this loss only has a significant effect at low temperatures.

The most common source of loss in any electrical device is also present in the fuel cell: Ohmic loss. This type of loss occurs because of the resistance to the flow of electrons in the interconnect, the anode and the cathode. This loss, like all Ohmic losses, is directly proportional to the current, and it occurs in both low and high temperature fuel cells.

The final type of loss is the mass transport/concentration loss. It is the result of losing a high concentration of either fuel or oxygen at the anode and cathode, respectively; it essentially occurs because the fuel cell is using fuel or oxygen faster than it can be supplied. It appears as a major source of loss in both the low and high temperature fuel cells, but is only prevalent at high current densities. All of these losses contribute in their own way to form the operational voltages that can be seen in fuel cells today.
4.3 Activation Losses

Activation losses are those losses associated with the initial dramatic voltage drop in low temperature fuel cells. These losses are basically representative of a loss of overall voltage at the expense of forcing the reaction to completion: forcing the hydrogen to split into electrons and protons, the protons to travel through the electrolyte, and the protons and returning electrons to combine with the oxygen. This loss is often termed over potential, and is essentially the voltage difference between the two terminals.

4.3.1 Tafel Equation

Through experimentation Tafel produced figures that showed a direct correlation between the current density and the output voltage, and he was able to mathematically describe these losses. The Tafel plot is displayed in natural logarithm form to simplify the analysis. The corresponding equation for the experiment, simplified to apply specifically to a hydrogen fuel cell, is:

V = A ln( i / i0 )   (4.2)

The constant A is higher for those reactions that are slower, and the constant i0 is larger for faster reactions; i0 is the value on the Tafel plot where the current begins to move away from zero. The value i0 is called the exchange current, and it becomes important later when we discuss the exchange current losses. For the value of A in the experimental Tafel equation, work has been done to show what a theoretical value would be:

A = RT / (2αF)   (4.3)

In this equation R is the ideal gas constant, T is the temperature in Kelvin or Rankine, and F is Faraday's constant. The value α is known as the charge transfer coefficient and is unitless. It describes the proportion of the electrical energy applied that is harnessed in changing the rate of the electrochemical reaction, and it is this value that differs from one material to another. For typically used materials the value lies in a very narrow range: it is about α = 0.5 for the hydrogen electrode, and it ranges from about α = 0.1 to α = 0.5 for the cathode. Thus the overall value of A is simply a function of the material properties.
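Equations (4.2) and (4.3) can be explored numerically. A minimal sketch in Python; the operating temperature, current density, and charge transfer coefficient below are assumed for illustration, not taken from the text:

```python
import math

R = 8.314    # J/(mol K), ideal gas constant
F = 96485.0  # C/mol, Faraday's constant

def tafel_A(T, alpha):
    """Eq. (4.3): Tafel slope A = RT / (2 alpha F), in volts."""
    return R * T / (2 * alpha * F)

def activation_loss(i, i0, T, alpha=0.5):
    """Eq. (4.2): V = A ln(i / i0), valid for i > i0 (i, i0 in mA/cm^2)."""
    return tafel_A(T, alpha) * math.log(i / i0)

# Same operating point (100 mA/cm^2, 40 C), widely different exchange
# current densities: the predicted loss shrinks as i0 grows.
for i0 in (1e-3, 1e-1, 1e1):
    print(i0, round(activation_loss(100.0, i0, T=313.0), 3))
```

Because i0 sits inside the logarithm, each order-of-magnitude improvement in the exchange current buys the same fixed reduction in activation loss, which is the lever exploited in the next section.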
These minor variations make experimenting with different materials to dramatically change the voltage predicted by the Tafel equation not a very productive endeavor. The following graph illustrates the effects of the constants in the Tafel equation.

Figure 4.4: Tafel Plot

4.3.2 Maximizing the Tafel Equation

The goal of fuel cell design, as in all design processes, is to produce the most efficient product; in this specific case that means that we have to keep losses at a minimum. In order to minimize the loss of voltage due to activation losses, there are several things that can be done. At this point it is useful to look back at Eq. (4.2): as we can see from the form of the equation, the only constant that we can change is i0. The exchange current density constant varies over a range of four orders of magnitude, and thus has a dramatic effect on the performance of fuel cells at low current densities. To gain a more quantifiable understanding of how this value affects the voltage, the Tafel equation was plotted for several values of i0 (in mA/cm²); it was seen that if a low end value of i0 (0.001 mA/cm²) is used, the voltage steadies around 0.6 V, whereas if a high end value of i0 (100 mA/cm²) is used, the steady voltage is 1.1 V. This dramatic effect illustrates why it is vital to design fuel cells with high exchange current densities. This can be accomplished in three effective ways. First, we can increase the operational temperature.
This is due to the fact that the value of i0 increases almost two orders of magnitude over the span of temperatures typically used; as Fig. 4.1 and Fig. 4.2 show, the activation loss at high temperatures is minimal. Secondly, we can increase the catalytic presence. This can be accomplished in three effective ways. First, we can use rougher catalysts; this allows for more area of contact and thus allows the reaction to proceed faster. Secondly, to increase the catalytic effects we can increase the operational pressure. This is true since the higher the pressure at the cathode, the quicker the reaction will be "forced" to take place; we have also seen in Chapter 3 that the increase of pressure is helpful in increasing the voltage of the cell. The final way to increase the catalytic effect is to use a more effective material. The use of a more valuable catalyst can be very rewarding, since the value of i0 associated with commonly used metals ranges from 10⁻¹³ up to 10⁻³ mA/cm²; as mentioned above, these values will all increase as the temperature and pressure are increased. Today, mainly in the interest of cost, Nickel or Platinum is used. The following table illustrates the range of i0 values for a variety of materials at STP.

Metal    i0 (mA/cm²)
Pb       2.5x10⁻¹³
Zn       3.0x10⁻¹¹
Ag       4.0x10⁻⁷
Ni       6.0x10⁻⁶
Pt       5.0x10⁻⁴
Pd       4.0x10⁻³

Table 4.1: Common i0 values for selected metals

4.4 Fuel Crossover/Internal Current Losses

The next portion of the graph that needs to be investigated is the cause of low temperature fuel cells having a lower initial voltage; that is, their OCV is 0.3 V less than what theory predicts. This initial loss of voltage is due to fuel crossover and internal currents. These two sources of voltage loss are grouped together because, despite being different modes of loss, they contribute to the loss in the same amount, and they both occur due to the inability to produce a perfect electrolyte. The electrolyte, as described in Chapter 2, is the region of a fuel cell that separates the anode from the cathode and provides the means for proton transfer; it is either a solid or a liquid, and is made of different materials depending on the type of fuel cell. The electrolyte is porous, which is necessary to allow proton transfer, and is also
slightly conductive; as a result it is possible for un-reacted fuel and electrons to cross over to the cathode, the electrons being no longer forced to travel externally. Since in both of these processes two electrons are wasted, the losses are similar in source and the same in result. Fuel leakage causes most of the problems in this category. In order to model this phenomenon, the Tafel equation that was used earlier, Eq. (4.2), can be modified by the addition of an internal current density term in (in mA/cm²):

V = A ln( (i + in) / i0 )   (4.4)

This new form of the Tafel equation will now account for the initial loss of voltage in low temperature fuel cells; in is usually less than 10 mA/cm². The modification is written in terms of current density, since most cells are rated in terms of the current density, which allows for ease of use in evaluating the performance of the cell. This loss is not prevalent in high temperature fuel cells because the small value of in does not significantly change the ratio in the natural logarithm.

4.5 Ohmic Losses

Ohmic losses are prevalent in every electronic device, and fuel cells are not an exception to this rule. These losses simply occur due to the resistance to electron flow in the bipolar plates. They are of the standard Ohmic form, but usually written in terms of current density and area-specific resistance:

V = i r   (4.5)

where i is the current density and r is the area-specific resistance. Thus, to reduce the value of the Ohmic resistance it is necessary to use electrodes with extremely high conductivities, or to reduce the distance that the electrons must travel, since resistance is proportional to distance. Another way to reduce the resistance is to use well-designed bipolar plates, which have high conductivities and short lengths. The final way to reduce the resistance associated with Ohmic losses is to create a thin electrolyte, thus giving the protons a shorter distance to travel before they can combine with the oxygen and electrons.

4.6 Mass Transport/Concentration Losses

The losses that occur due to mass transport/concentration problems are directly related to the pressure issues that were discussed in Chapter 3. If the hydrogen is being used at a very vigorous rate at the anode, then the partial pressure of the
hydrogen drops, thus slowing the reaction rate. This is also the case that occurs at the cathode with oxygen. This acts as a current ceiling, since there will be no more fuel to advance the current density. To mathematically model this situation we must modify the voltage relationship that we developed in Chapter 3:

ΔV = (RT/2F) ln( P2 / P1 )   (4.6)

This relationship is the change-in-voltage relationship for the hydrogen. To adapt this equation we assume a limiting current density il at which the fuel is used up at a rate equal to its maximum supply speed, that is to say, at which the pressure of excess hydrogen will be zero. If we define Pl as this pressure, and then assume a linear current density that runs from the current at no pressure down to the limiting pressure at maximum current, the following relationship can be applied:

P2 = P1 ( 1 − i / il )   (4.7)

To fully develop the model, this equation must be plugged into the voltage relationship developed earlier. This produces the final relationship:

ΔV = (RT/2F) ln( 1 − i / il )   (4.8)

Eq. (4.8) shows us that most of the loss occurs near the limiting value il. This type of loss is also considered a Nernstian loss, since it uses the Nernst equation to determine the change in voltage.

4.7 Conclusion

4.7.1 Combining the Losses

If all the losses that we have looked at (activation, fuel crossover, mass transport, and ohmic losses) are combined, then the actual operational graph of a fuel cell is produced, following the figures shown at the beginning of the chapter. The curve in Fig. 4.5 below is the curve that is used to determine whether a specific fuel cell is operating at a high standard. The equation for this curve is given in Eq. (4.9).
V = E − (i − in) r − A ln( (i + in) / i0 ) + B ln( 1 − (i + in) / il )   (4.9)

Figure 4.5: Operational Fuel Cell Plot for Voltage vs. Current Density
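Equation (4.9) can be turned into a small numerical sketch of the whole polarization curve. All constants below (E, r, in, i0, il, A, B) are illustrative assumptions chosen to mimic a low temperature cell, not values fitted to the figures:

```python
import math

def cell_voltage(i, E=1.2, r=2.0e-4, i_n=2.0, i0=0.05, i_l=900.0,
                 A=0.06, B=0.05):
    """Eq. (4.9): operational voltage including ohmic, activation,
    internal-current and concentration losses.
    Current densities in mA/cm^2; r in V per (mA/cm^2)."""
    return (E
            - (i - i_n) * r                          # ohmic loss
            - A * math.log((i + i_n) / i0)           # activation + internal current
            + B * math.log(1.0 - (i + i_n) / i_l))   # concentration loss

# The internal current i_n drags even the open circuit voltage below E:
print(round(cell_voltage(0.0), 3))
# Voltage falls steadily with current density, then collapses near i_l:
for i in (100.0, 400.0, 800.0):
    print(i, round(cell_voltage(i), 3))
```

The three regimes of Figs. 4.1 and 4.2 fall out directly: a sharp logarithmic drop at low current density, a nearly linear ohmic region, and a rapid collapse as i approaches il.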
Chapter 5

Alkaline Fuel Cells

Alkaline fuel cells differ from other types of fuel cells in their chemical reaction and operating temperature. The basic schematic of an alkaline fuel cell is given in Fig. 5.1.

Figure 5.1: Basic Alkaline Fuel Cell (porous carbon anode and cathode with Pt catalyst; pure H2 and pure O2 feeds; 35-50% KOH electrolyte; temperature 60-90 °C; efficiency 40% at present, 50% projected; product H2O plus waste heat)

The chemical reaction that occurs at the anode is:
2H2 + 4OH⁻ → 4H2O + 4e⁻   (5.1)

The reaction at the cathode occurs when the electrons pass around an external circuit and react to form hydroxide ions, OH⁻, as shown:

O2 + 4e⁻ + 2H2O → 4OH⁻   (5.2)

It was not until the 1940s that F.T. Bacon at Cambridge, England proved alkaline fuel cells to be a viable source of power. Alkaline fuel cells were used in the Apollo spacecraft that took the first men to the moon. Due to the success of the alkaline fuel cell in the Apollo program, Bacon was able to perform much more research on alkaline fuel cells. They were tested in many different applications, including agricultural tractors and cars, and they provided power to offshore navigation equipment and boats. Alkaline fuel cells encountered many problems, including cost, reliability, ease of use, durability, and safety, which were not easily solved; attempts at solving these problems proved to be uneconomical given the other sources of energy at the time. Proton exchange membrane fuel cells became very successful, and alkaline fuel cells were therefore given far fewer development resources. The space program remained an important researcher of alkaline fuel cells and has since improved them.

Alkaline fuel cells have some major advantages over other types of fuel cells. The first is that the activation overvoltage at the cathode is usually less than with an acid electrolyte fuel cell. The second advantage is that the electrodes do not have to be made of precious metals. The one major commonality between all alkaline fuel cells is the use of a potassium hydroxide solution as the electrolyte.

5.1 Types of Alkaline Electrolyte Fuel Cells

Alkaline fuel cells are categorized by their pressure, temperature and electrode structure, which vary widely between the different designs.

5.1.1 Mobile Electrolyte

The mobile electrolyte fuel cell uses pure hydrogen, H2, as the fuel at the anode and air for the reaction at the cathode. The electrolyte is pumped around an external circuit. The hydrogen must be circulated to extract the water produced, by means of a condenser; this is necessary because the hydrogen evaporates the water product. One of the major problems that the mobile electrolyte fuel cell faces is the chemical
reaction between the potassium hydroxide electrolyte, KOH, and the carbon dioxide, CO2, that is present in the supplied air. This is unfavorable because the efficiency and performance of the fuel cell depend on keeping the potassium hydroxide in its pure form. The problem is shown in the following reaction:

2KOH + CO2 → K2CO3 + H2O   (5.3)

We can see that the potassium hydroxide is slowly converted to potassium carbonate in the presence of carbon dioxide, as in Eq. (5.3), and becomes unusable. To avoid this problem, a carbon dioxide scrubber is used to remove as much carbon dioxide as possible from the supplied air. The carbon dioxide problem is the reason the astronauts in the movie Apollo 13 had to build their own carbon dioxide scrubbers to keep power supplied to the spacecraft.

The major advantages of the mobile electrolyte fuel cell are:

• The fuel cell can be easily cooled by using the circulated hydrogen.

• The circulated potassium hydroxide helps to keep the produced water from becoming saturated in the potassium hydroxide and therefore solidifying.

• The setup makes it very easy to completely replace all the electrolyte if the need arises, such as in the case that the electrolyte reacts with carbon dioxide as in Eq. (5.3).

An individual cooling system is needed to keep the fuel cell within an operational temperature range.

5.1.2 Static Electrolyte Alkaline Fuel Cells

Static electrolyte fuel cells differ from mobile electrolyte fuel cells in that the electrolyte is held in a matrix material and therefore does not circulate as in the mobile fuel cell. This type of fuel cell was used in the Apollo spacecraft. Static fuel cells usually use pure oxygen as the reactant on the cathode side, though it does not have to be in pure form. The hydrogen is circulated to remove the gaseous water product.

5.1.3 Dissolved Fuel Alkaline Fuel Cells

The dissolved fuel alkaline fuel cell is the simplest alkaline fuel cell to manufacture, and it exemplifies how simple such a fuel cell can be. Potassium hydroxide, KOH, is used as the electrolyte, along with a fuel such as hydrazine or ammonia combined with it. This type of fuel cell has very high fuel crossover problems, but this is not of great importance since the cathode catalyst is not platinum. Dissolved fuel alkaline fuel cells do not work well for large power generation applications.
The optimum fuel for this type of fuel cell is hydrazine, H2NNH2, because it dissociates into hydrogen and nitrogen on the surface of a fuel cell electrode; the resulting hydrogen can then be used as the fuel. It is nevertheless unfavorable to use this fuel because hydrazine is toxic, a carcinogen, and explosive. As a result of the problems which arise when using hydrazine, an alternative fuel such as methanol can in theory be used. The methanol reaction at the anode is:

CH3OH + 6OH⁻ → 5H2O + CO2 + 6e⁻   (5.4)

but as we can see, the produced CO2 will react with the KOH as shown in Eq. (5.3), producing carbonate, which is unfavorable. Since the carbon dioxide cannot be easily removed from the system, the use of methanol is impractical, even though fuel cells using this fuel might be simple. Even though acid electrolyte fuel cells could be used with the dissolved fuel principle, it is very difficult to "make an active catalyst in a low temperature acid electrolyte fuel cell that does not use precious metals," and a precious-metal catalyst would oxidize the fuel. As a result, they are rarely used.

5.2 Electrodes for Alkaline Electrolyte Fuel Cells

Alkaline electrolyte fuel cells operate at a wide range of temperatures and pressures, and their application is very limited. Therefore, there are different types of electrodes in use; some of the various types are explained below.

5.2.1 Sintered Nickel Powder

Sintered nickel powder was used by F.T. Bacon in his first fuel cell because of the low cost and simplicity of the material. The powder form of the nickel makes it much more porous and therefore more advantageous for fuel cells, since the porosity increases the surface area on which the chemical reactions take place. Two different sizes of nickel powder are used to give the optimum porosity for the liquid and gas fuels: a smaller pore size is better for the liquid, and a larger one for the gas. The sintering is used to make the powder a rigid structure.

5.2.2 Raney Metals

Raney metals are a good solution for achieving the activity and porosity needed in an electrode. A Raney metal is formed by mixing an active metal needed for the electrode with an inactive metal such as aluminum. The inactive metal is then removed from the mixture by dissolving it with a strong alkali. The remaining structure is a highly porous structure made entirely of active metal. An advantage of this process is that the pore size can be easily changed for the desired application by simply altering the mixture ratio of active to inactive metal. Raney metals are often used for the anode, the negative side of the fuel cell, with silver for the cathode, the positive side of the fuel cell. Raney metals were also incorporated into the fuel cell technology used on submarines by Siemens in the early 1990s.

5.2.3 Rolled Electrodes

Carbon supported catalysts are commonly used in the current production of electrodes. They are mixed with polytetrafluoroethylene (PTFE), and this combination is then rolled onto a sheet of nickel. The purpose of the PTFE is to act both as a binder and to control porosity in the mixture. Due to its strength-to-weight ratio and its conductivity, carbon fiber is often added to increase the strength, conductivity and roughness of the mixture. Rolled electrode manufacturing can be performed on an altered paper machine, which makes these electrodes easy to manufacture at a relatively low cost; the cost for such electrodes is approximately $0.01/cm², or about $10/ft². The use of non-platinum electrodes greatly reduces the cost of producing the electrodes, but it causes much lower current densities, which is unfavorable.

There are problems associated with rolled electrodes. The electrode has a layer of PTFE, which is nonconductive, and therefore a bipolar plate is unusable for connecting multiple cells. The problem of carbon dioxide, explained by Eq. (5.3), is also present when carbon supported catalysts are used in the electrode. The carbon dioxide can be removed, which increases the lifetime of the fuel cell. Guzlow (1996) used an anode based on granules of Raney nickel mixed with PTFE, which does not use carbon supported catalysts, to try to solve this problem. Apparently this type of electrode does not react with the CO2, making it highly favorable for use in this type of application.

5.3 Operating Pressure and Temperature

Alkaline electrolyte fuel cells generally operate at pressures and temperatures much higher than those of the environment in which they operate. As we can see from Chapter 3, the open circuit voltage of a fuel cell depends on the temperature and pressure, and it increases with increasing pressure and temperature. The actual increase in voltage is much higher than the Nernst term alone suggests, given that the pressure increase raises the exchange current density, which reduces the activation overvoltage on the cathode.

Other problems arise with high pressure storage systems and cryogenic storage systems. There is an increased cost to manufacture systems that ensure no leakage in the high pressure systems, and it is essential in this type of storage device to ensure there are no leaks, because of the high flammability of pure hydrogen and oxygen. One solution is to encase the fuel cell inside a pressure vessel filled with an inert gas, such as the nitrogen used by Siemens, at a higher pressure than that of the fuel cell. This ensures that any leaks do not escape the fuel cell; instead the inert gas flows into the fuel cell, preventing any leakage of combustible fuels.
Chapter 6

Molten Carbonate Fuel Cell

The defining characteristic of a molten carbonate fuel cell (MCFC) is the material used for the electrolyte. The electrolyte is usually a binary mixture of lithium and potassium carbonates, or lithium and sodium carbonates, held in a ceramic matrix of LiAlO2. A highly conductive molten salt is formed by the carbonates at very high temperatures (approximately 600-700 °C). The carbon dioxide and oxygen are essential to react and form the carbonate ions by which the charge is carried between the cathode and anode. The overall reaction is given by Eq. (6.1):

H2 + (1/2) O2 + CO2 (cathode) → H2O + CO2 (anode)   (6.1)

Note that there are two moles of electrons and one mole of CO2 transferred from the cathode to the anode. As described in Ch. 2, we can determine the Nernst reversible potential for a molten carbonate fuel cell as given below:

E = E° + (RT/2F) ln( (P_H2 · P_O2^{1/2}) / P_H2O ) + (RT/2F) ln( P_CO2,c / P_CO2,a )   (6.2)

where a and c correspond to the anode and cathode gas supplies, respectively. Note that, unlike in the alkaline fuel cell, carbon dioxide must be supplied to the cathode instead of being extracted from the supply. The CO2 produced at the anode is commonly recycled and used by the cathode; this allows the reactant air to be preheated while burning unused fuel, and the waste heat can be used for alternate purposes as necessary. This configuration also allows the CO2 to be supplied externally from a pure CO2 source. A general schematic of a molten carbonate fuel cell is shown in Fig. 6.1. Another advantage of
molten carbonate fuel cells using this setup is that they do not require noble metals for the electrodes.

Figure 6.1: General schematic of reactions within a molten carbonate fuel cell

Molten carbonate fuel cells operate at relatively high temperatures, which allows them to attain high efficiencies. Their high operating temperature also allows them to utilize different fuels. An alternative to supplying hydrogen to the anode is to supply carbon monoxide as the fuel. The difference is that twice the amount of carbon dioxide is produced at the anode instead of water, while the same amount of electrons is produced. The reversible open circuit voltage can be determined, and it is identical to that given in Chapter 3:

E = −Δḡ_f / (2F)   (6.3)

Some values for the Gibbs free energy and open circuit voltage are given in Table 6.1.
Table 6.1: Values of Δgf and E for hydrogen and carbon monoxide fuel cells at 650 °C

Fuel    Δgf (kJ/mol)    E (V)
H2      -197            1.02
CO      -201            1.04
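Equations (6.2) and (6.3) are straightforward to evaluate numerically. The following sketch (Python) first recovers the open circuit voltages of Table 6.1 from Eq. (6.3), then applies the Nernst correction of Eq. (6.2); the partial pressures used are illustrative assumptions, not values from the text.

```python
import math

R = 8.314   # gas constant, J/(mol K)
F = 96485   # Faraday constant, C/mol

def ocv(delta_gf_kj, n=2):
    """Reversible open circuit voltage, Eq. (6.3): E = -Δgf / (nF)."""
    return -delta_gf_kj * 1000 / (n * F)

def mcfc_nernst(E0, T, p_h2, p_o2, p_h2o, p_co2_c, p_co2_a):
    """Nernst reversible potential for an MCFC, Eq. (6.2).

    T in kelvin; pressures in bar (only the ratios matter).
    Subscripts c and a denote the cathode and anode CO2 supplies.
    """
    fuel_term = (R * T / (2 * F)) * math.log(p_h2 * math.sqrt(p_o2) / p_h2o)
    co2_term = (R * T / (2 * F)) * math.log(p_co2_c / p_co2_a)
    return E0 + fuel_term + co2_term

# Recover the Table 6.1 values (650 °C)
for fuel, dg in [("H2", -197), ("CO", -201)]:
    print(f"{fuel}: E = {ocv(dg):.2f} V")

# Illustrative (assumed) gas composition at 650 °C = 923.15 K
print(f"Nernst: {mcfc_nernst(1.02, 923.15, 0.8, 0.2, 0.2, 0.2, 0.2):.3f} V")
```

With equal CO2 partial pressures at the two electrodes the second logarithm vanishes, so only the fuel-side term shifts the potential.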
Molten Carbonate Fuel Cell Components
Current molten carbonate fuel cells contain approximately 60 wt% carbonate in a matrix of 40 wt% LiAlO2, the LiAlO2 being made of fibers less than approximately 1 µm in diameter. The matrix is produced using tape casting, similar to the processes used in the ceramics and electronics industries. The ceramic materials are dispersed in a solvent during the manufacturing process, and a thin film is formed on a smooth surface with an adjustable blade device. The material is then heated and the organic binding agents are burned out. The thin sheets are then stacked on top of each other. The operating voltage is largely determined by the ohmic resistance of the electrolyte; the most significant factor in the ohmic losses is the thickness of the electrolyte, described by:

ΔV = 0.533 t    (6.4)

where t is the thickness of the electrolyte in cm. Using tape casting, the electrolyte thickness can currently be reduced to approximately 0.25-0.5 mm.
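As a quick illustration of Eq. (6.4), the loss for the electrolyte thicknesses currently achievable by tape casting can be computed directly (t must be converted to cm):

```python
def electrolyte_ohmic_loss(t_cm):
    """Ohmic voltage loss of the electrolyte, Eq. (6.4): ΔV = 0.533 t, t in cm."""
    return 0.533 * t_cm

# 0.25 mm and 0.5 mm thick tape-cast electrolytes
for t_mm in (0.25, 0.5):
    dv = electrolyte_ohmic_loss(t_mm / 10)  # mm -> cm
    print(f"t = {t_mm} mm -> ΔV = {dv * 1000:.1f} mV")
```

Halving the electrolyte thickness halves this particular loss, which is why tape casting's thin films matter.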
Molten carbonate fuel cell anodes are made of porous sintered Ni-Cr/Ni-Al alloy. The anodes can be made to a thickness of 0.4-0.8 mm, and they are manufactured by hot pressing fine powder or by tape casting, as used for the electrolytes. Chromium is commonly added to reduce sintering of the nickel, but it causes problems such as increased pore size, loss of surface area, and mechanical deformation under compressive load in the stack. These problems can be reduced by adding aluminum to the anode. The reactions that take place at the face of the anode are relatively fast at high temperatures, so a large surface area is not needed; this reduces the porosity required in the structure and allows partial flooding of the anode with molten carbonate without problems. Tape casting allows the anodes to be manufactured with varying porosity through their thickness, which is favorable since larger pores near the fuel gas channel are optimal.
The cathode of a molten carbonate fuel cell is made of nickel oxide. The problem with nickel oxide is its solubility in molten carbonates: nickel ions diffuse into the electrolyte toward the anode, where metallic nickel can precipitate out in the electrolyte. This precipitation causes internal short circuits within the fuel cell, which cause electrical problems. The formation of nickel ions is described by the equation:

NiO + CO2 → Ni2+ + CO3^2-    (6.5)
The formation of nickel ions can be reduced by using a more basic carbonate in the electrolyte. In summary, nickel dissolution can be reduced by (a) using a basic carbonate, (b) reducing the pressure at the cathode and operating at atmospheric pressure, and (c) increasing the thickness of the electrolyte to increase the time and distance it takes for Ni2+ to reach the anode.
Molten carbonate fuel cells need manifolds to supply the gases for operation; this is done with external or internal manifolds. With external manifolds the bipolar plates are approximately the same size as the electrodes. External manifolds were described in the explanation of fuel cells in Chapter 2. External manifolds are a very simple design: they allow a low pressure drop in the manifold and efficient flow through it. A drawback is the temperature gradients caused by the flow of gases perpendicular to each other, and external manifolds can also have leakage problems. Internal manifolds distribute gas internally, within the stacks themselves, by penetrating the separating plate. Internal manifolds allow much more diversity in the direction of gas flow, which minimizes temperature gradients, and they allow a high degree of variation in the stack design. The electrolyte matrix is used as the sealant in internal manifolding.
MCFC Research and Systems
As of the printing of Larminie and Dicks' book on fuel cells, there were two corporations within the U.S. conducting research toward the commercialization of molten carbonate fuel cells: Fuel Cell Energy (formerly Energy Research Corporation) and M-C Power Corporation. Fuel Cell Energy has demonstrated a 2 MW molten carbonate fuel cell, and M-C Power has demonstrated a 250 kW fuel cell. Japan currently has a 1 MW MCFC plant running. MCFCs should be close to commercialization within the next few years.
Chapter 7 Polymer Electrolyte Fuel Cell (PEMFC)
Polymer electrolyte fuel cells have the ability to operate at very low temperatures; this is the main attraction of the PEM. Since they can deliver high power densities at these temperatures, they can be made smaller, which reduces overall weight, production cost, and specific volume. Since the PEM has an immobilized electrolyte membrane, the production process is simplified, which in turn reduces corrosion and provides for longer stack life. This immobilized proton membrane is really just a solid-state cation transfer medium. Various groups call these types of cells SPFCs, for solid polymer fuel cells; this is usually in a government setting, since the federal government more generously funds solid-state energy. The PEM, like all fuel cells, consists of three basic parts: the anode, the cathode, and the membrane. These three areas are often manufactured from separate "sheets," and the PEM is no exception. The electrodes and the membrane are often formed together, making a membrane electrode assembly (MEA).

The PEM has been in use for some time by the government, making its debut on Gemini, where its life was only 500 hours. After NASA's decision to use alkaline fuel cells on subsequent missions, the popularity of the PEM fell dramatically, until recent advances made it more economical to develop and research, with the development of new membranes, such as Nafion, and the reduction in Pt use (from 28 mg/cm2 in 1967 down to 0.2 mg/cm2 today). PEMs are being actively pursued for use in automobiles, buses, portable applications, and even some CHP applications. Today industry has great hope for the PEM, some even citing that it has exceeded all other electrical energy generating technologies in the breadth of scope of its possible applications. The PEM is potentially the most important fuel cell being researched today.
Figure 7.1: PEM: A General Cell
The Polymer Membrane
The PEM is named for the solid-state exchange membrane that separates its electrodes. William Grubb discovered in 1959 that, even without the presence of strong acids in his membrane, he was still able to transfer cations (protons) to the cathode. This discovery was capitalized on by NASA and is still being used today. The membrane is simply a hydrated solid that promotes the conduction of protons. Although many different types of membranes are used, by far the most common is Nafion, produced by DuPont. Other types of membranes being researched are polymer-zeolite nanocomposite proton exchange membranes, sulfonated polyphosphazene-based membranes, and phosphoric acid-doped poly(bisbenzoxazole) high-temperature ion-conducting membranes. The Nafion membrane is so commonly used that it is considered the industry standard, and all new membranes are compared to it.

The Nafion layer is essentially a carbon chain with a layer of fluorine atoms attached to it; this backbone is essentially Teflon. Side chains, also made of carbon atoms surrounded by fluorine, branch off of this basic chain, and the frequency of these side chains is reflected in the different types of Nafion. The side chains increase the hydrophilic effect of the material, and this property allows the Nafion to absorb water up to a 50% increase in dry weight. This is ideal, since the membrane needs to remain hydrated to promote high levels of proton transfer. All of this results in five important properties of the material:
• they are highly chemically resistant
• they are mechanically strong (it is possible to machine them as thin as 50 µm)
• they are acidic
• they are very absorptive of water
• they are good proton (H+) conductors if well hydrated

The membrane allows for the transfer of protons and thus permits the general fuel cell process. Hydrogen at the anode is separated into electrons and protons, freeing them to travel through the fuel cell. The electron travels externally, while the proton travels through the conductive membrane to the cathode. The protons are conducted by being solvated in water molecules, about 1 to 2.5 molecules per proton; the water transfers the protons through the membrane. The electron and proton then meet at the cathode where, in the presence of oxygen, water is formed. Since high temperatures are not necessary to hydrate the membrane, the PEM can be run at very low temperatures, typically 80 °C or lower.

7.3 Water Management

The main concern that has to be considered in the polymer electrolyte membrane is the management of water, and several complications arise. The water that is formed at the cathode must be regulated by removal or retention techniques. Because the product water is a liquid, as opposed to steam, it is important not to flood the electrolytes: flooding of the electrodes causes a decrease in the surface area over which the separation of hydrogen or the formation of water takes place. At the same time, the water cannot simply be removed, since, as mentioned above, the membrane needs to be hydrated. The trick is not to over-hydrate the cell, nor, in the detrimental case, dry the cell out; a balance must be achieved.

Figure 7.2: Schematic of the Cell

The first complication is that the water naturally moves toward the cathode, dragged along by the solvated protons. This "electro-osmotic drag" is problematic at high current densities, because all the water can be removed from the anode, thus drying out the membrane and the anode and causing an abrupt loss in fuel pressure, since no water will be present to transfer new protons (this is a form of mass transport loss). Since the membrane is very thin (50 µm), it is possible for water to leak back to the anode, which would be the ideal situation if exactly the right amount were to migrate. A further problem in water management is the susceptibility of the cell to having the air dry the water out at high temperatures; to solve this it is necessary to add water to the system to keep everything hydrated. To date, all types of water management problems that have a major impact on cell performance have been solved. Although these problems have been solved, it is still quite necessary to understand them, since the design of a cell is critically based on water management.

7.3.1 Air Flow's Contribution to Evaporation

Air is supplied to the cell to provide oxygen to the cathode. The presence of the air also provides a vehicle by which excess water can be removed from the system. In order not to remove too much water from the cathode, it is necessary to have the correct airflow; studies have shown that at temperatures over 60 °C the air dries out the cathode. Since the drying effect is highly non-linear with respect to temperature, we must define a few special terms that allow us to qualitatively describe the necessary water conditions in the cell:

• Humidity ratio: ω = mw/ma, where mw is the mass of water present in the sample of the mixture and ma is the mass of the dry air.
• Relative humidity: θ = Pw/Psat, where Pw is the partial pressure of the water and Psat is the saturated vapor pressure. These values are typically in the range of 30% to 70%.

The exit air flow rate is given by Eq. (7.1), which is derived from the definition of O2 usage in the cell:

Air flow rate (cathode) = 3.57 × 10^-7 × λ × Pe/Vc    (7.1)

Here λ represents the stoichiometric ratio (in the case of the PEM, λ = 2), Pe is the power of the cell, and Vc is the voltage of the cell. By using the humidity ratio, the relative humidity, and the exit air flow rate equation, we arrive at the pressure relationship for the PEM:

Pw = 0.421 Pt / (λ + 0.188)    (7.2)

where Pt is the operating pressure. Eq. (7.2) simply establishes that the vapor pressure at the exit is a function of the air properties and the operating pressure of the cell. To complete the description we must add the fact that the temperature, typically 60 °C, plays a very important role: adding the temperature to the equation results in a decaying exponential, see Fig. 7.3. This curve is maximized in the region where the cathode will be neither too dry nor too wet.

Figure 7.3: Temperature Dependence of the Cell

7.4 Effects of Pressure

The advantages of operating PEMs at elevated pressures are often debated; the question of pressurization arises only for larger PEMs (10 kW or greater). A simple example of a pressurized fuel cell would be one fed by a pressurized hydrogen container. In such a system a motor would be powered by the fuel cell to compress the intake air, which is necessary to supply an adequate amount of O2 and satisfy water concerns. Pressurizing the system has certain costs (monetary, size, weight, etc.), but there are also benefits; the major benefit is to supercharge the system, that is, to get a higher power rating out of a smaller device. Does the effect of increasing the power, via the rate of the process, outweigh the cost of compressing the gas? To gain a more factual understanding of the answer to this question we must turn to a mathematical model: an evaluation of the pressurized system can be made and then compared to the un-pressurized system.

7.4.1 Mathematical Understanding of the Effects of Supercharging

A change in pressure was seen earlier in Chapter 4, when we discussed general fuel cells; in that case we were only referring to the pressure change from the cathode to the anode. In our specific case some modifications are necessary, but by starting with the general relationship of Eq. (7.3) we will better be able to understand the problem:

ΔV = (RT/2F) ln(P2/P1)    (7.3)

We must recall that this relationship was derived from a logarithmic principle; thus we can modify it into a power equation and say:

Pgain = C ln(P2/P1)    (7.4)

Unfortunately this is not the only power associated with the system; there is also the power lost due to the need to compress the gases. By adding this term we arrive at the total power loss expression:

Plost = cp (T1/(ηm ηc)) ((P2/P1)^((γ-1)/γ) - 1) ṁ    (7.5)

where γ is the ratio of specific heats, ṁ is the mass flow rate, ηm is the efficiency of the motor, and ηc is the efficiency of the compressor. It was shown before that ṁ is derived from Eq. (7.1). By inserting known values and applying the definition of power we can solve for the total change in voltage:

Vloss = 3.58 × 10^-4 × (T1/(ηm ηc)) × ((P2/P1)^0.286 - 1) × λ    (7.6)

The interpretation of what the values of the constants in Eq. (7.6) should be is the cause of much discrepancy as to the right answer to the question of the desired pressure. If "optimistic" values are chosen, then peak performance arrives at a pressure ratio of 3, which gives a 0.015 V increase per cell. If a more "pessimistic" interpretation of the numbers is used, then we find that as the ratio increases the loss is consistently greater, thus determining that pressurization has a negative effect on the cell. Fig. 7.4 shows the overall performance of a PEM operating at standard pressure and 60 °C.

Figure 7.4: Overall Performance

7.5 Conclusion

The PEM offers a perfect stepping stone into the commercialization of fuel cells. It offers a great balance between power and size/operating temperature, and it can be operated at low temperatures, allowing it to compete in the same market as batteries. Since the membrane is a solid-state material, the cells can easily be stacked, so long as proper bipolar plate designs are used, and they can be scaled up for larger projects, such as the Ballard Power Systems bus. PEMs will likely be the first cells commercialized on a large scale.
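The airflow and pressurization relations of this chapter, Eqs. (7.1), (7.2), (7.4), and (7.6), can be collected into a small sketch. The gain coefficient C and the motor and compressor efficiencies below are assumed values chosen for illustration (the text itself notes that the choice of constants is disputed), and the units of Eq. (7.1) are assumed to be kg/s:

```python
import math

def cathode_air_flow(power_w, v_cell, lam=2.0):
    """Exit air flow rate, Eq. (7.1): 3.57e-7 * λ * Pe / Vc (assumed kg/s)."""
    return 3.57e-7 * lam * power_w / v_cell

def exit_water_vapor_pressure(p_total, lam=2.0):
    """Exit water vapor partial pressure, Eq. (7.2): Pw = 0.421 Pt / (λ + 0.188)."""
    return 0.421 * p_total / (lam + 0.188)

def pressure_voltage_gain(ratio, c=0.06):
    """Per-cell gain from pressurization in the form of Eq. (7.4), C ln(P2/P1).

    C = 0.06 V is an assumed illustrative coefficient, not a value from the text.
    """
    return c * math.log(ratio)

def compression_voltage_loss(t1_k, ratio, eta_m=0.9, eta_c=0.7, lam=2.0):
    """Per-cell cost of compressing the intake air, Eq. (7.6)."""
    return 3.58e-4 * (t1_k / (eta_m * eta_c)) * (ratio ** 0.286 - 1) * lam

# Net effect of a pressure ratio of 3 with intake air at 25 °C
ratio = 3.0
gain = pressure_voltage_gain(ratio)
loss = compression_voltage_loss(298.15, ratio)
print(f"gain = {gain * 1000:.1f} mV, loss = {loss * 1000:.1f} mV, "
      f"net = {(gain - loss) * 1000:.1f} mV per cell")
```

With these particular constants the compression cost exceeds the logarithmic gain, the "pessimistic" outcome described above; a more optimistic choice of C, ηm, and ηc tips the balance the other way.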
Chapter 8 Direct Methanol Fuel Cells (DMFC)

8.1 Introduction

The use of pure hydrogen is not the only way to convert hydrogen into useful electric energy in fuel cells. A variety of reactions can produce hydrogen indirectly, since hydrogen can be drawn from virtually any hydrocarbon (fossil fuel or renewable fuel). Since the storage, distribution, and production of hydrogen are tasks that have not yet been worked out, some members of industry and government have called for alternate sources of hydrogen. Although many options are possible, many hold high hopes for the direct use of methanol. Since methanol is a liquid at STP (it boils at 65 °C at 1 atm), it can easily be stored in a container, as gasoline is, and it can be manufactured from a variety of carbon-based feedstocks such as natural gas, coal, and biomass (e.g., wood and landfill gas). All of these characteristics make it a very attractive choice for use in a fuel cell system.

The direct methanol fuel cell system also has design advantages over pure hydrogen fuel cells. It eliminates the fuel vaporizer and all the heat sources associated with it, mainly a consequence of the fact that methanol boils at a low temperature. Complex humidification and thermal management systems are also eliminated, again a consequence of the low operational temperature. Since the fuel is a methanol and water combination (typically about 5% water), the design of the fuel storage in the cell also provides an on-board coolant: the coolant comes in the form of the fuel itself. And finally, the dramatically lower size and weight of the overall system is an advantage.
8.2 Description of Operation

The operation of the whole DMFC system is similar to the operation of the PEM in terms of the physical manufacture of the cell; the major difference is in the fuel supply. The fuel is a mixture of water and methanol, and it reacts directly at the anode according to:

CH3OH + H2O → 6H+ + 6e- + CO2    (8.1)

As mentioned above, the boiling point of methanol at atmospheric pressure is 65 °C; thus the cell requires an operating temperature around 70 °C (to avoid too high a vapor pressure). The reaction mechanism is much more complex than Eq. (8.1) suggests, with the appearance of adsorbed species as well as HCOH and HCOOH. If one considers the reaction on a Pt/Ru catalyst, it can be represented by the following stages:

CH3OH + xPt → Ptx-CH2OH + H+ + e-    (8.2)
Ptx-CHOH → Ptx-CO + 2H+ + 2e-    (8.3)

The adsorbed compounds Ptx-CHOH and Ptx-CO are poisons for the platinum, and after research it was found that the addition of ruthenium makes it possible to clean the Pt and prevent poisoning:

Ru + H2O → Ru-OH + H+ + e-    (8.4)
Ptx-CHOH + Pt-OH → HCOOH + H+ + e- + Pt    (8.5)
Ptx-CO + Ru-OH → CO2 + H+ + e- + xPt + Ru    (8.6)

Eq. (8.4) through Eq. (8.6) give the production of the hydrogen ions, which can in turn be used by the cathode. The cathode undergoes the typical fuel cell reaction, with hydrogen combining with oxygen. The total DMFC equation, representing only the initial and final products for both the cathode and the anode, is:

CH3OH + 1.5 O2 → 2H2O + CO2    (8.7)

This corresponds to a theoretical voltage of 1.21 V at STP, obtained from the Gibbs free energy statement. This voltage is dependent on current density, and the cell is subject to the same inefficiencies as a hydrogen-fed fuel cell, as discussed previously in Chapter 4. Fig. 8.1 below is a diagram of the fuel cell and its components.
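The 1.21 V figure follows from the Gibbs free energy through E = -ΔG/(nF), with the n = 6 electrons of Eq. (8.1). A minimal check, assuming ΔG ≈ -702 kJ/mol for complete oxidation of methanol to CO2 and liquid water (a commonly quoted literature value, not given in the text):

```python
F = 96485  # Faraday constant, C/mol

def dmfc_ocv(delta_g_kj=-702.0, n=6):
    """Theoretical DMFC cell voltage, E = -ΔG / (nF).

    The default ΔG of -702 kJ/mol is an assumed literature value for
    CH3OH + 1.5 O2 -> CO2 + 2 H2O (liquid) at STP.
    """
    return -delta_g_kj * 1000 / (n * F)

print(f"E = {dmfc_ocv():.2f} V")
```

The six-electron transfer is what makes the per-molecule energy of methanol attractive despite the sluggish anode kinetics discussed below.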
Figure 8.1: Direct Methanol Fuel Cell

8.3 General Voltage Loss Descriptions

8.3.1 Typical Losses

The voltage losses of a DMFC are similar to those associated with a hydrogen-fueled fuel cell: there are activation losses, Ohmic losses, mass transport losses, and fuel crossover losses. Fig. 8.2 below shows a typical voltage vs. current density plot for the DMFC. The graph shows that fuel crossover and Ohmic losses are the two major contributors to irreversibilities in the DMFC. These losses are the direct result of several sources of inefficiency:

• The oxidation reaction is not always complete; there can be formation of acid (HCOOH) or formaldehyde (HCOH).
• The potentials of the electrodes are very different from theory because of significant overvoltage (around 0.3 V at the anode and 0.4 V at the cathode).
Figure 8.2: Direct Methanol Fuel Cell: Voltage vs. Current Density Plot

The fuel crossover loss occurs for the same reasons that fuel crossover occurs in all types of fuel cells (see Chapter 4): the inability of a perfect insulator to remain an effective cation transfer medium. Since it is possible for the fuel to leak through the polymer electrolyte and directly combine with the O2, there is a constant voltage drop, even at zero current. These losses are considered acceptable in comparison with other fuel cells. The Ohmic losses that are also present in the methanol fuel cell are mainly due to the resistances associated with the bipolar plates and the interconnections of the cell stack.

8.3.2 Anode and Cathode

There are several issues that make the DMFC a less attractive option than the pure hydrogen fuel cell, especially at projected useful current densities. The main concern of developers is with the anode and the cathode: these problems are associated with the inability to get full potential out of the electrodes. Three identified problems are as follows:

• Acid electrolytes must be used, because carbonate formation is a serious problem in alkaline solution; the corrosive tendency of the acid causes slow kinetics at the cathode.
• There have been marked problems with the cathode and the anode having the same electro-catalysts. This results in a situation where it is possible to have "chemical short circuits," which results in further inefficiencies.
• The catalysts are typically high in Pt content, making them highly susceptible to poisoning (CO).

8.4 Conclusion

These anode and cathode problems have made industry wary of the DMFC. Several companies, such as JPL, IFC, and Ballard, are trying to solve these problems and thus bring the cells closer to commercialization, but the problems listed above have not been completely solved yet, and methanol is not in full commercialization. The successes of DMFCs are nonetheless varied, the most important of which is the transcontinental trip by the first DMFC car, completed in June 2002. Other than cars, the DMFC will likely be used in smaller electronics, as shown below in Fig. 8.3.

Figure 8.3: Direct Methanol Fuel Cell: Potential Uses
Chapter 9 Phosphoric Acid Fuel Cells

Phosphoric acid fuel cells (PAFCs) are much like proton exchange membrane fuel cells: they use a proton-conducting electrolyte. An inorganic acid, concentrated phosphoric acid, is used as the electrolyte, hence the name of the fuel cell. Phosphoric acid does not react with CO2 to form carbonate ions, as is the case with alkaline fuel cells, so carbonate formation is not a problem in phosphoric acid fuel cells. The phosphoric acid fuel cell operates at approximately 180-200 °C, which is high compared to the electrolyte materials used in other fuel cells. Phosphoric acid has a freezing point of 42 °C; as the electrolyte freezes and expands it causes internal stresses in the containment system, so to avoid the potential problems associated with these stresses the fuel cell electrolyte is kept at a temperature above 42 °C. Small amounts of the acid electrolyte are lost during operation, so either excess acid should be put into the fuel cell initially or the acid should be replenishable.

9.1 The Electrolyte

Phosphoric acid is used as the electrolyte because it is the only inorganic acid that exhibits the required thermal stability, chemical and electrochemical stability, and low enough volatility to be used effectively. The phosphoric acid is uniquely contained in a silicon carbide particle matrix by capillary action. The silicon carbide matrix that holds the electrolyte is produced with particles approximately 1 micron in size, allowing the matrix to be about 0.1-0.2 mm thick. This thickness allows considerably low ohmic losses, while the structural matrix is thick enough to prevent crossover of the reactant gases from the anode to the cathode. The chemical reactions use highly dispersed electrocatalyst particles within carbon black; the electrode material is generally platinum.
Figure 9.1: Phosphoric Acid Fuel Cell

9.2 The Electrodes and Catalysts

The phosphoric acid fuel cell uses gas diffusion electrodes. The primary catalyst of choice is platinum on carbon: carbon is bonded with polytetrafluoroethylene (PTFE) to create the catalyst structure. The carbon has some major functions:
• to disperse the Pt catalyst to ensure good utilization of the catalytic metal
• to provide micropores in the electrode for maximum gas diffusion to the catalyst and the electrode/electrolyte interface
• to increase the electrical conductivity of the catalyst
The carbon has allowed a reduction in platinum loading in the development of PAFCs; current platinum loadings are approximately 0.10 mg/cm2 in the anode and about 0.50 mg/cm2 in the cathode.
9.3 The Stack

A stack is used in the structure of a phosphoric acid fuel cell. The stack consists of many cells, each containing a ribbed bipolar plate, the anode, the electrolyte matrix, and the cathode. The bipolar plate allows the cells to be connected in series and allows gas to be supplied to the anode and the cathode. It is common for a stack to consist of approximately 50 or more cells, which will produce a usable voltage. Bipolar plates used to be made from graphite with machined gas channels on each side. Multi-component bipolar plates are currently being used in phosphoric acid fuel cells; these plates are constructed from layers, which allows them to be manufactured more easily and cheaply than by previous methods. The multi-component plates use a thin carbon plate to separate the reactant gases in neighboring cells and separate porous ribbed plates for gas distribution. This forms a ribbed substrate structure. Advantages of the ribbed substrate are:
• flat surfaces between the catalyst layer and the substrate promote better and more uniform gas diffusion to the electrode
• it is amenable to a continuous manufacturing process, since the ribs on each substrate run in only one direction
• phosphoric acid can be stored in the substrate, thereby increasing the lifetime of the stack

9.4 Stack Cooling and Manifolding

It is essential to remove the heat created in the stacks during operation. Two types of cooling can be used: one is cooling by a liquid, and the other is by using gas. The preferred method of cooling is by using a liquid, because the thermal conductivity of water is approximately twenty times higher than that of air, so water will extract heat from a system much more effectively. Liquid water cooling can be performed using either boiling water or pressurized water; boiling water is quite effective at creating uniform temperatures throughout the stack, and this temperature uniformity increases the cell efficiency. As with any liquid-water-cooled system, however, problems arise that must be designed around: water treatment is required to filter and purify the water to minimize corrosion, wear, and buildup in the piping system. The addition of a water filtration system increases the cost of the fuel cell, so water filtration is generally only used in fuel cells greater than 100 kW.

Manifolding is important to optimize the gas supply to the cells. Generally, external manifolds attached to the outside of the stacks are used. Manifold design allows the fuel gas to be supplied uniformly to each cell.

9.5 Operating Pressure

As with all fuel cells discussed thus far, the cell performance is determined by the pressure, temperature, reactant gas composition, and utilization. As was shown in Ch. 2, the performance of a fuel cell increases with increasing pressure. The voltage increases with pressure due to the relationship given in Chapter 3:

ΔV = (RT/4F) ln(P2/P1)    (9.1)

But it has been shown that the Nernst voltage given in Eq. (9.1) does not fully describe the voltage gain at higher pressures. The increased pressure decreases the activation polarization at the cathode, because of the increased oxygen and product water partial pressures. If the partial pressure of the water is allowed to increase, a lower phosphoric acid concentration will result; this increases ionic conductivity and in turn increases current density, and the increased current density reduces ohmic losses. As a result of the lower phosphoric acid concentration, the actual voltage gain is much higher than that described by the Nernst voltage equation. According to Hirschenhofer the actual voltage gain is:

ΔV = 63.5 ln(P2/P1) mV    (9.2)

Just as with alkaline fuel cells, if the phosphoric acid fuel cell is operated at high pressure, the whole fuel cell is placed within a vessel of nitrogen at a higher pressure than that of the fuel cell, to ensure that no gases leaking out of the fuel cell could cause harm or safety risks.

9.6 Temperature Effects

As has been shown for the previous fuel cells, the reversible voltage decreases as the temperature increases; the maximum decrease is approximately 0.27 mV/°C for phosphoric acid fuel cells. However, as shown in Chapter 4, an increase in temperature has a beneficial effect on cell performance, because activation polarization, mass transfer polarization, and ohmic losses are reduced. Hirschenhofer has shown that at a mid-range operating load (250 mA/cm2) the voltage gain with increasing temperature, on pure hydrogen and air, is given by:

ΔVT = 1.15 (T2 - T1) mV    (9.3)

This is reasonable for a temperature range of 180 < T < 250 °C. Reducing temperature gradients will reduce thermal stresses and therefore increase the lifetime of the fuel cell.
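The two empirical corrections of Eqs. (9.2) and (9.3) can be sketched together; both return millivolts:

```python
import math

def pafc_pressure_gain_mv(p2, p1):
    """Hirschenhofer's pressure gain, Eq. (9.2): ΔV = 63.5 ln(P2/P1), in mV."""
    return 63.5 * math.log(p2 / p1)

def pafc_temperature_gain_mv(t2_c, t1_c):
    """Temperature gain at ~250 mA/cm2, Eq. (9.3): ΔVT = 1.15 (T2 - T1), in mV.

    Reasonable for the range 180 < T < 250 °C.
    """
    return 1.15 * (t2_c - t1_c)

print(f"Tripling the pressure: +{pafc_pressure_gain_mv(3, 1):.0f} mV")
print(f"Heating 180 -> 200 °C: +{pafc_temperature_gain_mv(200, 180):.0f} mV")
```

Both gains are per cell, so over a 50-cell stack even tens of millivolts per cell amount to a noticeable change in stack voltage.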
Research and Development
Phosphoric acid fuel cells were the first commercially available fuel cells. Many PAFCs have operated for years, from which much knowledge has been gained and many technological improvements made. The reliability of the stack and the quality of the power produced have been greatly improved. Currently there is a total of approximately 65 MW of phosphoric acid fuel cells in use or being tested. International Fuel Cells and Toshiba, for Tokyo Electric Power, have created the largest phosphoric acid fuel cell power plant, capable of supplying 11 MW of grid-quality AC power. Unfortunately, the cost of the technology is still too high to be economically competitive with alternative power generation systems. Research is being directed at increasing the power density of the cells and reducing costs, both of which affect each other.
Chapter 10 Solid Oxide Fuel Cell (SOFC)
Solid oxide fuel cells have become a valid option for energy generation due to their attractive features. These cells, part of the broader category of ceramic fuel cells, are attractive due to their solid-state components: unlike the high-temperature MCFC, they have a solid-state electrolyte, as well as a solid-state anode, cathode, and cell interconnects. The government has become very interested in this type of fuel cell, since it conforms well to its plan for a solid-state energy source fueled by hydrogen, as described by the SECA. Besides fitting well into the government's energy plan, the SOFC has many positive features in its own right. Due to its high operating temperature, around 1000 °C, it is an effective source of byproduct heat, which can be used for cogeneration. The SOFC can also be manufactured in any manner of configurations, a consequence of its solid-state design. Due to the high temperature of operation it does not require precious metals on the anode or cathode; Ni ceramics and lanthanum manganite are typically used as the anode and cathode, respectively. And finally, the ability to use a wide variety of fuels makes the SOFC a promising technology: since the SOFC operates at high temperatures it is able to use pure H2 and CO, and since the materials are not easily poisoned it is also possible to use methane, diesel, gasoline, or coal gas. Although its operating temperature makes it impractical for smaller applications, i.e., hand-held devices, the SOFC in general is a very promising fuel cell for use in the distributed or centralized power generation industry.
10.2 Configurations

Since the SOFC is made entirely of solid-state components it is possible to manufacture it in a wide variety of configurations. Three geometries are being researched today: the planar, the tubular, and the bell and spigot. Each geometric configuration has its advantages and disadvantages, and although the geometry of the cell varies by company, all types of cells operate with the same common components.

10.2.1 Planar

The planar SOFC was the first geometry attempted. The design is very advantageous in that it uses simple cell interconnects and is easily stackable, and its fabrication techniques are well-understood processes, since they are widely used for the PAFC and MCFC. The planar design also offers improved power density relative to the tubular and bell and spigot designs, but at the price of high-temperature gas seals.

The planar cell soon ran into dramatic limitations in the manufacturing process. At the time it was very difficult to form large flat plates that could adequately seal gas, a problem made worse by the high temperature of operation. Planar cells have overlapping components that must be joined by a gas-tight seal, typically a compressive thermal seal. Such seals are possible to manufacture, and are relatively cheap and easy to produce, but the non-uniform stress induced by the compressive seal is seen as a barrier to the design: since the ceramics are very weak in tension, it has been shown that the cells break down from thermal fatigue after several cycles, as well as from thermal mismatches between components. Some research has also indicated that the compressive seal may limit the height of the overall stack, since as the stack increases in size there is a greater probability that the thermal stress induced between any two cells will exceed tolerances.

The planar fuel cell has nevertheless become more popular recently due to advances in manufacturing processes, which are discussed later in Section 10.4. To relieve the sealing problems, the industry also decided to experiment with other geometric configurations that remove the issue of adequate gas seals entirely. Of these, the tubular SOFC, whose development is partially motivated by government funding, has proven to be the most advanced and efficient design.
10.2.2 Tubular

The tubular configuration of the SOFC is the most advanced of the major geometries being researched right now. Its unique configuration eliminates the need to design gas seals: the cathode tube is closed at one end, and the fuel gas flows on the exterior of the tubes, in a co-flow direction with the air, so fuel and oxidant remain geometrically separated without a seal (see Fig. 10.1). The Siemens design is formed by extruding the cathode and then building the other layers around it through a variety of manufacturing processes. The cathode thus serves not only as the structural base of the cell but also as the site where oxygen from the air stream is reduced. This approach greatly simplifies the manufacture of large cells; currently the tubes are 150 cm in length.

Figure 10.1: Fuel and oxygen flow in a tubular SOFC

The solution that fixed the problem of gas seals at high temperature created a separate problem, however: increased Ohmic loss. The tubular design forces the electrons to travel along a much longer path than in most other fuel cells, so the losses due to interconnect resistance are significant (see Fig. 10.2). Ohmic losses occur in all fuel cells, but they are especially apparent in the tubular SOFC because of this long current path. The issue is being attacked as both a materials and a design problem; as the cells become smaller and better materials are used, the loss due to the long current path will drop.

Figure 10.2: Path of the electron in a tubular SOFC, Siemens Westinghouse design

Siemens Westinghouse currently has multiple units being tested in the field. The design is still in testing, not all the material issues have been worked out yet, and as a result the cells currently being tested are yielding lower than expected performances; the design is not yet ready for commercialization. Tubular SOFCs are technically sound and feasible sources of energy generation, but unfortunately they are hampered by high capital cost and are dramatically more costly than competing technologies. Currently the raw materials for the tubular design cost about $7/kW, whereas manufacturing costs are nearly $700/kW. When these prices are compared with the total cost of an operational internal combustion engine (ICE), about $60/kW, it is obvious that improvements need to be made in order for the SOFC to have any significant market penetration.

10.3 Cell Components

Although there are several configurations of the SOFC, the planar and tubular types are dramatically more advanced than the others. Table 10.1 lists the advances made in cell materials since the beginnings of SOFC research.

Component         | 1965                   | 1975              | 2000
Anode             | Porous Pt              | Ni/ZrO2 cermet    | Ni/YSZ cermet
Cathode           | Porous Pt              | Stabilized ZrO2   | Doped lanthanum manganite
Electrolyte       | Yttria-stabilized ZrO2 | YSZ               | YSZ
Cell interconnect | Pt                     | Mn (cobalt chromate) | Doped lanthanum chromate

Table 10.1: The evolution of cell components (specifications from Siemens Westinghouse)

The electrolyte of the fuel cell is currently yttria-stabilized zirconia (YSZ). Current work on the cell materials is aimed at developing mechanically tough components; one drive is to add Al2O3 to the electrode matrix in the hope of adding strength. A tough component of this kind would also provide a base upon which planar fuel cells could be built, a very important concept that needs to be designed and developed if the planar SOFC is to be a viable option. Such a component would also have to have an equivalent thermal expansion coefficient; if one is found, it will greatly reduce the problems caused by the high and uneven thermal and mechanical stresses.

Recently (December 2002), Lawrence Berkeley National Laboratory scientists developed a solid oxide fuel cell that promises to generate electricity as cheaply as the most efficient gas turbine. Their innovation lies in replacing ceramic electrodes with stainless-steel-supported electrodes that are stronger, cheaper and, most importantly, easier to manufacture. The increase in strength is approximately from 300 MPa to 1200 MPa. The change did increase the general resistivity of the cell stack, but the loss of potential was not severe. This latter advance marks a turning point in the push to develop commercially viable fuel cells: the final design of the fuel cell system is not complete yet, but the engineers involved say the cost will be close to $400/kW, which would break the $400/kW mark set by the SECA. The final and most important step in reducing SOFC cost will be reducing manufacturing costs. The manufacturing processes described below are currently being heavily researched by Siemens, who hope to be able to manufacture all components by physical vapor deposition (PVD), a cheap and effective manufacturing process.
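As a quick sanity check on the tubular cost figures quoted in Section 10.2.2, the cost gap can be tallied directly. This is only a sketch using the dollar figures from the text; the variable names are mine:

```python
# Rough cost-gap check for the tubular SOFC figures quoted in the text:
# raw materials ~$7/kW, manufacturing ~$700/kW, versus ~$60/kW for an ICE
# and the $400/kW SECA cost goal.

sofc_materials = 7.0        # $/kW, raw materials (tubular design)
sofc_manufacturing = 700.0  # $/kW, manufacturing
ice_total = 60.0            # $/kW, operational internal combustion engine
seca_target = 400.0         # $/kW, SECA cost goal

sofc_total = sofc_materials + sofc_manufacturing
print(f"SOFC total:            ${sofc_total:.0f}/kW")
print(f"vs ICE:                {sofc_total / ice_total:.1f}x more expensive")
print(f"reduction to hit SECA: {(1 - seca_target / sofc_total) * 100:.0f}%")
```

The arithmetic makes the scale of the problem plain: manufacturing, not materials, dominates the cost, which is why process improvements such as PVD matter most.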
10.4 Manufacturing Techniques

10.4.1 Tape Casting

Tape casting is a suitable processing technique to produce thin ceramic sheets with smooth surfaces and very precise dimensional tolerances. The casting is performed by applying a ceramic slip to a temporary support, which is cut by a sharp blade as it is extruded. The ceramic is then dried, after which it can be shaped into its final form. A self-supporting tape has a thickness of about 200 µm and costs about $1/kg to cast; if a non-self-supporting application is acceptable, tapes can be formed as thin as 2 µm.

10.5 Performance

Figure 10.3: Basic chemistry associated with a SOFC

The SOFC, like the MCFC, is a high-temperature fuel cell, but unlike the MCFC it is not susceptible to a wide variety of contaminants. Since the cell operates at high temperature it has almost no activation losses, but this comes at a cost: it has a lower open-circuit voltage than the MCFC. The overall losses of the SOFC are mainly those associated with Ohmic losses; the cell is not normally operated in the range of mass-transport losses.

The basic fuel cell equations derived in Chapters 2 and 3 apply to the SOFC. The most important of these are listed below. The Nernst equation is

    E = E0 + (RT/2F) ln( P_H2 · P_O2^(1/2) / P_H2O )                    (10.1)

and the electrode and overall reactions are

    1/2 O2 + 2e- → O=                                                   (10.2)
    H2 + O= → H2O + 2e-                                                 (10.3)
    H2 + 1/2 O2 → H2O                                                   (10.4)

Ohmic losses are a very big concern in the SOFC. According to information gathered by the Department of Energy, the Ohmic breakdown is as follows: 45% cathode, 18% anode, 12% electrolyte and 25% interconnect. The cathode supplies so much resistance mainly because of the long electron path (shown in Fig. 10.2); when the length of the electron path and the resistivity of the material are both considered, almost 80% of the loss is in the cathode.

10.5.1 Effects of Pressure

As seen from Eq. (10.1), a change in pressure will have a dramatic effect on the SOFC. It has been shown that

    ΔV (mV) = 59 log( P2 / P1 )                                         (10.5)

is a reasonable expression for the effect of an increased pressure, although it should be noted that there are conflicting views about the pressure-induced constant. Siemens demonstrated that their tubular design can operate at 0.65 V and 500 mA/cm² at a pressure of 10 atm; this is a dramatic increase from 1 atm, where the cell only operates at 0.47 V at the same current density.

10.5.2 Effects of Temperature

As the Nernst equation for the SOFC indicates, an increase in temperature is also significant to maximizing the potential of the cell. First and foremost, the cell must operate at a high temperature or else the solid-state ceramic will not conduct oxygen ions through the electrolyte. Once the cell is at a sufficient temperature to allow transport, around 800 °C, the effects of a further temperature increase are dominated by the Nernst equation and the current density. Like the effect of pressure, the effect of temperature is not completely understood, so the following equation is only a current conjecture:

    ΔV (mV) = K (T2 − T1) × J                                           (10.6)

where K is defined in Table 10.2 and J is the operational current density.

Temp (°C) | K
800       | 0.009
850       | 0.068
900       | 0.014
950       | 0.003
1000      | 0.008
1050      | 0.006

Table 10.2: K values for temperature dependence

10.5.3 Effects of Impurities

As mentioned in the introduction to SOFCs, one of the advantages of the SOFC is its resistance to impurities. This is very important for a fuel cell that will most likely be used in a non-portable application, since coal gas is readily available and can be used in the SOFC. The three impurities of potential concern are ammonia (NH3), hydrogen chloride (HCl), and hydrogen sulfide (H2S); these are all common impurities that have to be scrubbed out for other fuel cells such as the MCFC. In testing it was found that 5000 ppm of NH3 had no effect on the fuel cell. A source of 1 ppm of HCl was then added, and again no loss was detected. With the addition of 1 ppm of H2S, however, the system dramatically lost potential (close to 0.11 V initially); after the initial loss, the potential continued to decline linearly with a slope of 0.0054 V per 400 hrs of operation. A follow-up to this experiment determined that 0.5 ppm was an acceptable level, so a SOFC fuel stream would have to be scrubbed of H2S down to that point.

Figure 10.4: Path of the electron in a tubular SOFC, Siemens Westinghouse design

10.6 Conclusion

The SOFC is a very technically sound design that has the potential to deliver power to a multitude of stationary locations. It has the ability to supply excess heat to a boiler and thus achieve an even greater overall efficiency. The variety of fuels that can be used with the SOFC is very wide, and will no doubt contribute to its attractiveness on the open market.
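The pressure and temperature corrections of Section 10.5, Eqs. (10.5) and (10.6), are simple enough to evaluate directly. The sketch below uses the K values of Table 10.2; the helper names are mine, and the units of J follow the text (current density in mA/cm²):

```python
import math

# Eq. (10.5): voltage gain (mV) from raising cell pressure P1 -> P2.
def delta_v_pressure(p1_atm, p2_atm):
    return 59.0 * math.log10(p2_atm / p1_atm)

# Eq. (10.6): voltage change (mV) for a temperature change T1 -> T2 at
# operational current density j, using K from Table 10.2 (keyed by T1 in C).
K = {800: 0.009, 850: 0.068, 900: 0.014, 950: 0.003, 1000: 0.008, 1050: 0.006}

def delta_v_temperature(t1_c, t2_c, j):
    return K[t1_c] * (t2_c - t1_c) * j

print(f"1 -> 10 atm:  {delta_v_pressure(1, 10):.0f} mV gain")   # 59 mV
print(f"900 -> 950 C: {delta_v_temperature(900, 950, 500):.0f} mV at j = 500")
```

As the text cautions, the constant in Eq. (10.5) and the K values are still debated, so these numbers should be read as first-order estimates only.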
Part II Fuel Cell Applications and Research
Chapter 11 Fuel Cell System Components

For fuel cells to function efficiently, air must be circulated through the fuel cell for cooling and for the cathode reactant supply. To accomplish this, compressors, turbines, ejectors, fans, blowers and pumps are used. The exhaust gas from fuel cells can also be fed to turbines to create additional usable power and therefore increase the efficiency of the system. Each of these components has its own advantages and disadvantages for the various fuel cell types.

11.1 Compressors

There are four major types of compressors that can be utilized in a fuel cell system: the Roots compressor, the Lysholm or screw compressor, the centrifugal or radial compressor, and the axial flow compressor.

The Roots compressor is cheap to produce and works over a wide range of flowrates. Its disadvantage is that it gives useful efficiencies only over a small pressure difference. Improvements in this type of compressor have been developed by the Eaton Corporation, but the Roots compressor is still only useful for small pressure changes, about a factor of 1.8 increase in pressure, which is quite small.

The Lysholm or screw compressor works by two counter-rotating screws which drive the gas between them forward and thereby compress it. There are two configurations. In the first, an external motor drives only one rotor and the second rotor is turned by the first; the contact between the screws must be lubricated with oil, which is unfavorable for fuel cells because the oil can enter the fuel cell. The second configuration uses a synchronizing gear to connect the two screws, so the screws do not come in contact with each other; no oil is needed, and this type of compressor is therefore favorable for fuel cell systems. The advantage of the Lysholm compressor is that it can provide a wide range of compression ratios, increasing the pressure up to eight times the input pressure. The drawback is cost: these compressors are quite expensive to manufacture.

The centrifugal or radial compressor uses kinetic energy to create a pressure increase. It is the most common type of compressor and is commonly found on engine turbocharging systems; it is relatively low cost and the technology is mature. This type of compressor is available to suit a wide range of flowrates, yet its efficiency is high over only a very limited range of flowrates, and in particular it cannot be operated at low flowrates. Another drawback is its operational speed, typically around 80,000 rpm.

The axial flow compressor uses large blades to push air through a device with a decreasing cross-sectional area. One can think of this compressor as the inverse of a turbine. Axial flow compressors are expensive to manufacture, so they will most likely only be used for systems above a few MW, where they become worthwhile.

11.2 Compressor Efficiency

Compressor efficiency is important to define because it plays an important role in the overall efficiency of the fuel cell system. The efficiency of the compressor is found from the ratio of the ideal work that would be performed if the process were reversible (isentropic) to the actual work done to raise the pressure from P1 to P2. To find the actual and isentropic work, some general assumptions are made:

• The heat flow from the compressor is negligible.
• The kinetic energy of the gas as it flows into and out of the compressor is negligible, or at least its change is negligible.
• The gas is a perfect gas, so the specific heat at constant pressure, cp, is constant.

In a reversible adiabatic process the pressure change of the gas is related to the temperature change through the relationship

    T2' = T1 (P2/P1)^((γ−1)/γ)                                          (11.1)

where T2' is the isentropic exit temperature and γ = cp/cv is the ratio of the specific heat capacities of the gas.
The actual work done by the system is

    W = cp (T2 − T1) m                                                  (11.2)

where m is the mass of the gas compressed, T1 and T2 are the inlet and actual exit temperatures respectively, and cp is the specific heat at constant pressure. The isentropic work done by the system is

    W' = cp (T2' − T1) m                                                (11.3)

where T2' is the isentropic temperature given by Eq. (11.1). The efficiency is the ratio of these two quantities of work:

    ηc = isentropic work / real work
       = cp (T2' − T1) m / [ cp (T2 − T1) m ]
       = (T2' − T1) / (T2 − T1)                                         (11.4)

Substituting Eq. (11.1) into the above equation we get

    ηc = T1 [ (P2/P1)^((γ−1)/γ) − 1 ] / (T2 − T1)                       (11.5)

The actual change in temperature can then be found from the above equation:

    ΔT = T2 − T1 = (T1/ηc) [ (P2/P1)^((γ−1)/γ) − 1 ]                    (11.6)

There are also losses due to friction within the bearings of the rotating shaft which must be accounted for. The total efficiency is therefore the compressor efficiency times the mechanical efficiency of the shaft:

    ηT = ηm × ηc                                                        (11.7)

11.3 Compressor Power

We can use Eq. (11.2) to determine the power required to raise the temperature of the gas. Since power is work per unit time,

    Power = Ẇ = cp ΔT ṁ                                                 (11.8)
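The chain from Eq. (11.1) through the power relation is easy to exercise numerically. Below is a sketch for air (γ ≈ 1.4, cp ≈ 1004 J/kg·K); the function names and the example operating point are illustrative, not from the text:

```python
# Compressor exit temperature rise and drive power, per Eqs. (11.1), (11.6), (11.8).
GAMMA = 1.4     # ratio of specific heats for air
CP = 1004.0     # J/(kg K), specific heat of air at constant pressure

def isentropic_exit_temp(t1_k, pr):
    """Eq. (11.1): ideal exit temperature for pressure ratio pr = P2/P1."""
    return t1_k * pr ** ((GAMMA - 1.0) / GAMMA)

def actual_temp_rise(t1_k, pr, eta_c):
    """Eq. (11.6): real temperature rise for isentropic efficiency eta_c."""
    return (t1_k / eta_c) * (pr ** ((GAMMA - 1.0) / GAMMA) - 1.0)

def compressor_power(t1_k, pr, eta_c, mdot):
    """Eq. (11.8): shaft power in watts for mass flow mdot (kg/s)."""
    return CP * actual_temp_rise(t1_k, pr, eta_c) * mdot

# Example: 2:1 pressure ratio, 75% isentropic efficiency, 0.1 kg/s of air at 298 K.
print(f"temperature rise: {actual_temp_rise(298.0, 2.0, 0.75):.0f} K")
print(f"shaft power:      {compressor_power(298.0, 2.0, 0.75, 0.1) / 1000:.1f} kW")
```

Dividing the result by the mechanical efficiency ηm, per Eq. (11.7), gives the power that the motor or turbine must actually supply.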
Substituting Eq. (11.6) into the above equation we arrive at

    Power = cp (T1/ηc) [ (P2/P1)^((γ−1)/γ) − 1 ] ṁ                      (11.9)

The isentropic efficiency, ηc, is readily found from efficiency charts. One must remember to take into account the mechanical efficiency of the compressor when finding the power needed from the motor or turbine to drive it; generally a value of 0.9 is used.

11.4 Turbines

The hot gas exhausted by fuel cells can be harnessed into mechanical work through the use of turbines, and the efficiency of a turbine determines whether or not it is economically viable to incorporate one into the fuel cell system. The efficiency of a turbine is defined similarly to that of the compressor, except that, noting that the isentropic exit temperature T2' is actually lower than the real exit temperature, it is inverted:

    ηt = actual work done / isentropic work                             (11.11)

Using Eq. (11.1) for T2' and substituting in the proper equations as we did for the compressor, the efficiency becomes

    ηt = (T1 − T2) / (T1 − T2')
       = (T1 − T2) / { T1 [ 1 − (P2/P1)^((γ−1)/γ) ] }                   (11.12)

and the temperature change can be found to be

    ΔT = T2 − T1 = ηt T1 [ (P2/P1)^((γ−1)/γ) − 1 ]                      (11.13)

As with the compressor, the power is

    Power = Ẇ = cp ΔT ṁ = cp ηt T1 [ (P2/P1)^((γ−1)/γ) − 1 ] ṁ          (11.14)

For a turbine P2 < P1, so ΔT and the power are negative, i.e. work is extracted from the gas. Again, the power available to drive an external load is found by multiplying the above power by ηm, the mechanical efficiency.

11.5 Ejector Circulators

The ejector is the simplest pump: it has no mechanically moving parts. Ejector circulators use the mechanical energy stored in a pressurized gas to circulate the fuel around the cell. Hydrogen fuel cells that use pressurized stored hydrogen use these types of pumps.

11.6 Fans and Blowers

An easy and economical way to cool a fuel cell is to use fans and blowers. A common fan such as the axial fan is effective in moving air over parts, but is not effective across large pressure differences: the typical back pressure for this type of fan is approximately 0.5 cm of water, which is very low. Axial fans are therefore suitable only for a few very open designs of PEM fuel cells. Greater pressure differences can be obtained using the centrifugal fan, which draws air through its center and forces it outward, creating a pressure rise. Centrifugal fans are mainly used for circulating cooling air through small to medium sized PEM fuel cells. Even though centrifugal fans can create greater pressure than the axial fan, the pressure is still only 3 to 10 cm of water, which is quite low. It is helpful to note that the effectiveness of a cooling system is

    Cooling system effectiveness = rate of heat removal / electrical power consumed    (11.15)

11.7 Membrane/Diaphragm Pumps

As we have seen in the discussion of PEM fuel cells, small to medium sized PEMs are expected to be a substantial market for portable power systems. The problem with cooling these PEMs is that they use closed-system cooling, which in small to medium sized PEMs creates a back pressure of about 10 kPa, or 1 m of water. This is obviously too high for the axial or centrifugal fans discussed earlier, and diaphragm pumps are ideal for such situations: the larger diaphragm pumps can operate against a back pressure of one to two meters of water, which suits small to medium sized PEMs. These pumps are readily available, being widely used in gas sampling equipment, small-scale chemical processing, and fish tank aerators. Their major features are low cost, silent and reliable operation, a variety of sizes, and good efficiency.
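The turbine relations of Section 11.4 can be sketched numerically in the same way as the compressor. The example below again assumes air-like exhaust properties (γ ≈ 1.4, cp ≈ 1004 J/kg·K) and an illustrative operating point:

```python
# Turbine shaft power per Eqs. (11.13)-(11.14). For an expansion P2 < P1,
# the temperature change is negative and work is extracted from the gas.
GAMMA = 1.4
CP = 1004.0  # J/(kg K)

def turbine_power(t1_k, p2_over_p1, eta_t, mdot):
    """Power extracted in watts (returned positive) when gas at t1_k (K)
    expands through pressure ratio p2_over_p1 < 1 at efficiency eta_t."""
    dT = eta_t * t1_k * (p2_over_p1 ** ((GAMMA - 1.0) / GAMMA) - 1.0)
    return -CP * dT * mdot  # dT < 0 for expansion; sign flipped to report output

# Example: SOFC-like exhaust at 1273 K expanding 3:1 at 85% efficiency, 0.1 kg/s.
print(f"recovered power: {turbine_power(1273.0, 1.0 / 3.0, 0.85, 0.1) / 1000:.1f} kW")
```

The recovered power must still be multiplied by the mechanical efficiency ηm before being credited to the system, as the text notes.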
Chapter 12 Fueling the Hydrogen Fuel Cell

12.1 Introduction

All of the fuel cells previously discussed use hydrogen as their source of fuel. Although there has been some discussion about the use of methane and CO, these two sources are simply hydrogen carriers: through reactions within the fuel cell system they are converted into the necessary hydrogen. Thus the necessary question becomes: where do we get the hydrogen?

The sources of hydrogen are vast, hydrogen being the most abundant element in the universe, yet despite this abundance it does not appear naturally in a useful form. The useful carriers are either natural sources of hydrogen or are produced through a variety of industrial processes; Table 12.1 lists the common and useful properties of hydrogen-rich carriers. Currently, most hydrogen in the United States, and about half of the world's hydrogen supply, is produced through the steam reforming of natural gas. This is still an expensive and pollution-creating process, so research is being conducted to develop alternate methods of hydrogen production (see Fig. 12.1).

The economic barriers to hydrogen production are formidable. As noted by the Department of Energy, today's electrically produced hydrogen costs around $30 per million British thermal units (Btu); by comparison, natural gas costs about $3 per million Btu and gasoline about $9 per million Btu. In the short term, the next 20 to 40 years, hydrogen will likely be produced from fossil fuel sources, with natural gas likely providing the earliest affordable feedstock. The long-term solution will likely be production through biological, nuclear, or biomass sources.
Figure 12.1: Methods of possible hydrogen production

12.2 Hydrogen Production from Natural Gas

Since hydrogen is already produced in industry for a variety of reasons, and occasionally as a byproduct of other processes, there is a relevant body of knowledge on how to obtain it. One such process is the steam reforming of natural gas. In this process the hydrocarbon and steam are run through a catalytic cycle in which hydrogen and carbon oxides are released. This method of hydrogen formation is most efficiently used with light hydrocarbons such as methane and naphtha. The steps of the process, outlined in general form, are:

• synthesis-gas generation
• water-gas shift
• gas purification

A simple block diagram of the process is shown in Fig. 12.2. Note that the diagram includes a desulphurization step. This is a requirement of the fuel cells, since, as noted for many of the cells, sulphur in the form of H2S is a major inhibitor of performance.
Property                        | H2      | CH4    | NH3   | CH3OH | C2H5OH | C8H18
Molecular weight                | 2.016   | 16.04  | 17.03 | 32.04 | 46.07  | 114.2
Freezing point (°C)             | −259.2  | −182.5 | −77.7 | −97.5 | −117.3 | −56.8
Boiling point (°C)              | −252.77 | −161.5 | −33.4 | 64.7  | 78.5   | 125.7
Heat of vaporisation (kJ/kg)    | 445.6   | 510    | 1371  | 1100  | 855    | 368
Liquid density (kg/m³)          | 77      | 425    | 674   | 792   | 789    | 702
Enthalpy at 25 °C (kJ/mol)      | 241.8   | 802.5  | 316.8 | 638.5 | 1275.9 | 5512.3

Table 12.1: Properties of hydrogen-rich fuels

Figure 12.2: Steam reforming cycle

The overall ideal reformer process is governed by the following equations:

    CnHm + n H2O → n CO + (n + m/2) H2                                  (12.1)
    CO + H2O → CO2 + H2                                                 (12.2)
    CO + 3H2 → CH4 + H2O                                                (12.3)

The values of n and m in Eq. (12.1) are the subscripts of the carbon-hydrogen fuels in Table 12.1. By summing the enthalpies of these reactions we find that the overall process is endothermic, thus requiring external energy to be supplied to the system; to supply this heat a reforming furnace must be used. By running the process at about 800 °C, the conversion of methane is about 98% and the hydrogen yield is about 72%.

There are three major types of reforming furnace used in industry. Each type has certain advantages, and they vary in cost depending on the quality of hydrogen required and the quality of the exit gas.
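For methane (n = 1, m = 4), Eqs. (12.1) and (12.2) together predict 4 mol of H2 per mol of CH4 in the ideal limit. A small sketch of this stoichiometry, applying the ~72% hydrogen yield quoted above (the function name is illustrative, not from the text):

```python
def ideal_h2_per_fuel(n, m):
    """Moles of H2 per mole of CnHm if Eq. (12.1) runs to completion and all
    of the CO is then shifted via Eq. (12.2): (n + m/2) from reforming,
    plus n more from the water-gas shift."""
    return (n + m / 2) + n

# Methane (n=1, m=4): 3 H2 from reforming + 1 from the water-gas shift.
print(ideal_h2_per_fuel(1, 4))   # 4.0
# Octane (n=8, m=18): 17 from reforming + 8 from the shift.
print(ideal_h2_per_fuel(8, 18))  # 25.0

# At the ~72% hydrogen yield quoted in the text, a real methane reformer
# recovers roughly:
print(f"{0.72 * ideal_h2_per_fuel(1, 4):.1f} mol H2 per mol CH4")
```

The methanation side reaction, Eq. (12.3), runs opposite to the desired direction, which is one reason real yields fall short of the stoichiometric limit.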
The three furnace types are the top-fired furnace, the bottom-fired furnace, and the side-fired furnace.

The top-fired furnace is a good option for most uses, since it is smaller in size than the other two types. Its operation is such that there is concurrent flow of the inlet combustion gas (providing the necessary heat) and the natural gas (typically methane). This configuration allows for the greatest heat fluxes, and thus heat transfer, where the endothermic reaction is taking place.

The second type, the bottom-fired furnace, is characterized by counter-current flow. Since the gas increases in temperature along the tube, yet is still being heated at a constant rate, the gas temperature is hottest at the exit; because the natural gas sees the highest temperature as it leaves the reformer, this design extracts the most H2. This is desirable when high levels of hydrogen are to be extracted from the natural gas, but it brings heating design considerations: the uneven heating distribution causes high temperatures in the material and can crack the layer separating the gas streams, unless ceramic layers are inserted to protect against the high temperature.

The final type is the side-fired furnace. Whereas the two previous types have only one burner, either at the bottom or the top, this design has multiple burners all along the reforming tube, maximizing the area of heat transfer. It offers the advantage of even heat fluxes, at the expense of multiple burners; although the heat flux is consistent, it again has heating design considerations, and the designer must use materials that can handle the high temperatures.

12.3 Hydrogen Production from Coal Gas

Since the MCFC and SOFC operate at very high temperatures (up to 1000 °C), it is natural to fuel them with hydrogen produced by coal gasification. Coal is a nonrenewable resource, but it is very abundant, has well-known properties, can be mined cheaply, and is economically affordable. Coal gasification is also a potentially huge market, considering that the US government, which owns a large amount of coal, has decided to take a clean-coal approach to energy conversion.

In coal gasification, coal is burned and the reactant gases are combined with steam. This mixture goes through a series of chemical reactions to produce hydrogen and carbon dioxide. The overall reaction is endothermic, with an enthalpy of 131.3 kJ/mol, and requires a very high temperature, supplied by the burning coal, for the rate of reaction to be sufficient. Table 12.2 lists the overall reaction along with the intermediate reactions that occur in a typical coal gasifier.
Reaction                        | Enthalpy (kJ/mol)
C + 1/2 O2 → CO                 | −110.5
C + O2 → CO2                    | −393.5
C + H2O → CO + H2               | +131.3
C + 2H2O → CO2 + 2H2            | +90.1
3C + 2H2O → 2CO + CH4           | +187.8
2C + 2H2O → CO2 + CH4           | +15.3
Overall: C + H2O → CO + H2      | +131.3

Table 12.2: Chemical processes in an ideal coal gasifier

Since coal is not pure carbon, real gasifiers deviate from this ideal process; the deviation depends on where the coal came from and on the quality of the mined coal. The presence of ash, the sulphur content, and the tendency to agglomerate make the coal gasification process very difficult and complex. Just as with natural gas reforming, there are several industrial processes for coal gasification. The basic types of gasifier used are: fixed bed, fluidized bed, fast fluidization, and molten bath. Various companies that produce hydrogen use one of these four processes; each has certain advantages, and the choice is directly related to the coal feed.

Regardless of the process used to create the hydrogen, there are some general problems that must be overcome. The most significant is desulphurisation, since coal often has a high sulphur content and all fuel cell types are extremely sensitive to sulphur compounds, especially the acid H2S. (Sulphur removal is important for natural gas reforming as well.) Although there are a variety of processes to remove H2S, the most common is the use of zinc oxide, in which the exit gas is run over a zinc oxide surface:

    H2S + ZnO → ZnS + H2O                                               (12.4)

This process is typically run at 400 °C in order to optimize the rate of reaction. A second technique is to inject an absorbent into the exit gas stream inside the gasifier; this internal removal process uses limestone (CaCO3), which is relatively inexpensive. Although internal removal is possible, it does not typically yield a lower H2S count than external removal with zinc oxide.
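The enthalpies in Table 12.2 can be checked from standard enthalpies of formation (ΔHf° at 25 °C, gas-phase water), a useful sanity check when working with gasifier energy balances:

```python
# Standard enthalpies of formation at 25 C, kJ/mol (elements in their
# reference state are zero). Values are standard thermochemical data.
HF = {"C": 0.0, "O2": 0.0, "H2": 0.0,
      "CO": -110.5, "CO2": -393.5, "H2O": -241.8, "CH4": -74.8}

def reaction_dh(reactants, products):
    """Hess's law: sum over products minus sum over reactants,
    each side given as a dict of species -> moles."""
    total = lambda side: sum(n * HF[s] for s, n in side.items())
    return total(products) - total(reactants)

# Overall gasification reaction, C + H2O -> CO + H2:
dh = reaction_dh({"C": 1, "H2O": 1}, {"CO": 1, "H2": 1})
print(f"{dh:+.1f} kJ/mol")  # +131.3, endothermic as stated in the text
```

The positive sign confirms why the burning coal itself must supply heat to drive the steam-carbon reaction.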
12.4 Hydrogen Production from Bio Fuels

A bio fuel is any fuel derived from a natural organic material; among the most important and abundant sources are wood, plant and vegetable mass, algae, animal waste, animal tissue, and municipal waste (landfills). Biomass can be converted into energy in several ways: direct combustion, conversion to biogas, conversion to ethanol or methanol, and conversion to liquid hydrocarbons.

There are several developing processes for effectively using biomass to produce hydrogen. The two major processes are anaerobic digesters and pyrolysis gasifiers, the former being useful in the kW range and the latter in the MW range. An anaerobic digester (AD) converts complex animal matter (manure) into simpler gases (methane); farmers typically use an AD to reduce their pollution of the water system. Pyrolysis gasifiers use thermal decomposition to produce gases (methane); this route requires high-nitrogen-content fuels and is only efficient in large-scale production.

There are also several fringe processes for creating hydrogen. One such topic, recently reported in Nature, is the use of glucose to separate hydrogen from water. The researchers reported producing 11.6 hydrogen molecules for every glucose molecule in the substrate, 97% of the maximum stoichiometric yield of 12 hydrogen molecules per glucose molecule. This is the highest yield of hydrogen ever obtained from glucose by a biological process, and further development of this technology could be very beneficial to the fuel cell industry.
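The 97% figure quoted for the glucose process can be checked against the stoichiometric limit (glucose fully reformed with water, C6H12O6 + 6H2O → 6CO2 + 12H2, gives 12 H2 per glucose):

```python
# Yield check for the glucose-to-hydrogen result quoted above.
MAX_H2_PER_GLUCOSE = 12  # stoichiometric limit: C6H12O6 + 6H2O -> 6CO2 + 12H2
reported = 11.6          # H2 molecules per glucose molecule, as reported

yield_fraction = reported / MAX_H2_PER_GLUCOSE
print(f"{yield_fraction:.1%}")  # 96.7%, i.e. the ~97% quoted in the text
```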
Chapter 13 PEM Fuel Cells in Automotive Applications

Fuel cell applications for vehicles have certain requirements, including available space and fast power response and start-up times. PEMs are currently the most widely tested and used fuel cells for non-hybrid vehicle propulsion. They are favored due to their fast start-up and response times, though, as with any fuel cell, there are difficulties in implementing them. PEMs pose the problems involved with the fuel supply. Storing hydrogen fuel has its own problems too, including the high combustibility of hydrogen and hydrogen embrittlement. The liquid form of hydrogen has a very high energy density, yet it is expensive to produce and difficult to obtain. Gasoline-supplied vehicles are undergoing research to assess their usefulness, though gasoline reforming raises problems of fuel reformation and response time. Other fuel cells have been tested: DMFCs are a viable option for vehicle propulsion, yet they must be developed further in order to achieve higher power densities and better stability.

13.1 PEM Simulation and Control

Vehicle simulations have become an important analysis tool for improving and optimizing vehicle systems. A vehicle performance simulator, VP-SIM, developed at The Ohio State University, is able to simulate a variety of designs using a modular, scalable modeling approach for all power train components. Recently a fuel cell system has been added to this simulation tool. The model contains a fuel cell stack and models of the auxiliary components required by the fuel cell stack. The simulation determines the efficiency of the fuel cell by a series of simple equations.
The PEM fuel cell performance can be determined once the voltage, current, and power are known, which together give the exergetic efficiency:

    e_fc = W_fc / (m_fc x LHV)    (13.1)

where W_fc = fuel cell power produced [kW], m_fc = mass flow rate of fuel consumed in the fuel cell reaction [kg/s], and LHV = fuel lower heating value [kJ/kg]. The fuel cell power produced can be determined from the voltage and current:

    W_fc = (V x I) / 1000    (13.2)

where V = fuel cell voltage [V] and I = fuel cell current [A]. The simulator can scale the size of the fuel cell and determine the effects of size based on the current density:

    I = i x A    (13.3)

where i is the current density in A/cm2 and A is the fuel cell active area in cm2. Applying this to Eq. (13.1) we obtain:

    e_fc = (V x i / 1000) / ((m_fc / A) x LHV)    (13.4)

The use of this scaled equation allows any size fuel cell to be modeled by specifying the fuel cell active area. The simulation tool can model voltage-current density relationships, power density, and the effects of cathode pressure and fuel cell operating temperature on fuel cell voltage. With the above equations, calculations conducted by the VP-SIM include predicting the ability of a powertrain to meet a desired vehicle driving cycle, estimating fuel economy, and implementing a supervisory control strategy.

As with any power generation system, a fuel cell needs auxiliary components to support the operation of the fuel cell stack. Therefore, the auxiliary components must be considered when analysing the performance of a fuel cell stack in automotive applications. The expressions for auxiliary component power density are derived from an energy balance for each component. Fig. 13.2 gives the auxiliary component power per fuel cell active area equations, which are all needed to successfully model the fuel cell system and determine the total efficiency. A schematic of a fuel cell system designed for use in an automobile is shown in Fig. 13.1.
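Eqs. (13.1)-(13.4) can be exercised numerically. The sketch below is illustrative only: the operating point (active area, current density, voltage) is an assumption rather than VP-SIM data, and the hydrogen consumption is estimated from Faraday's law, which is not stated in the text but is consistent with the stack electrochemistry.

```python
# Illustrative evaluation of the fuel cell power and exergetic efficiency relations.
def fc_power_kw(V, I):
    """Eq. 13.2: stack power [kW] from voltage [V] and current [A]."""
    return V * I / 1000.0

def exergetic_efficiency(V, i, mdot_per_area, LHV):
    """Eq. 13.4: efficiency from voltage [V], current density i [A/cm2],
    fuel flow per active area [kg/(s*cm2)], and fuel LHV [kJ/kg]."""
    return (V * i / 1000.0) / (mdot_per_area * LHV)

A = 500.0            # active area [cm2]            (assumed)
i = 0.6              # current density [A/cm2]      (assumed)
V = 0.65             # cell voltage at that point   (assumed)
LHV_H2 = 120_000.0   # lower heating value of hydrogen [kJ/kg]

I = i * A                                 # Eq. 13.3
# H2 consumption per cell from Faraday's law: mdot = I * M / (2F)
F, M_H2 = 96485.0, 2.016e-3               # C/mol, kg/mol
mdot = I * M_H2 / (2 * F)                 # kg/s

print(f"Power     : {fc_power_kw(V, I):.3f} kW")                          # ~0.195 kW
print(f"Efficiency: {exergetic_efficiency(V, i, mdot / A, LHV_H2):.1%}")  # ~52%
```

Note that with Faraday-law fuel flow the efficiency reduces to V divided by the LHV-equivalent voltage (about 1.25 V for hydrogen), so higher cell voltage at a given current density directly means higher exergetic efficiency.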
As shown in previous chapters, the simulation showed that for a given current density, increasing cathode pressure or increasing fuel cell operating temperature generally results in higher voltage, higher power density, and higher exergetic efficiency.
13.Figure 13. Tests have been performed using the VP-SIM by varying the fuel cell stack current density request from 0 to 0.13. the cost of fuel cells will decrease once high volume production begins.2 PEM Cost Analysis The cost of fuel cells are currently too high to allow them to become a economically effective alternative to current energy generation methods such as coal and gasoline burning. As with any commercially available product.1: PEM fuel cell system schematic There are other similar simulators such as National Renewable Energy Laboratory’s ADVISOR or Argonne National Laboratory’s PSAT. 95 . The fuel economy is cruicial in determining if fuel cells are more economically viable than gasoline combustion engines.1 A/cm2 .4. The results of current density vs power density is given in Fig. Below is table giving the simulated fuel economy for various fuel cell system configurations.
Figure 13.2: Auxiliary component power per fuel cell active area
Figure 13.3: Fuel cell and fuel cell system power density versus current density given by the VP-SIM

Figure 13.4: Warm Start FHDS fuel economies and fuel usage for various air control and system cases
Chapter 14 Manufacturing Methods

14.1 Bipolar Plate Manufacturing

The function of a bipolar plate is to supply reactant gases to the gas diffusion electrodes via a flow field in the surface, to provide series electrical connections between the individual cells, and to effectively remove product water. Bipolar plates are a significant cost of a fuel cell; therefore, advances in their development are critical. Currently there are three major types being researched: stainless steel, titanium, and carbon/graphite composites. Manufacturing methods for bipolar plates differ depending on the material. Graphite composites are generally compression or injection molded, whereas metals are machined; for metal bipolar plates, flow channels are manufactured by machining, etching, or embossing. Metal bipolar plates offer a high potential to reduce costs and enhance power density versus carbon composites. One type of metal used, stainless steels, form passive oxide surface layers which have a high ohmic resistance under PEM fuel cell operating conditions. Therefore, direct use leads to a voltage drop in the fuel cell, which makes the output and efficiency too low for a commercial application. To reduce the chance of contamination and reduce the contact resistance of the metallic plates, various types of coatings and surface treatments have been investigated and applied to the plates. Usually oxide and nitride (metallic based) coatings are applied by electroplating, evaporation, sputtering, and chemical vapor deposition.

The following discussion concerns stainless steel bipolar plates and their effectiveness when coated with oxides. Fig. 14.1 shows that formation of the oxide layer and nickel dissolution from the bipolar plate can be avoided by using applied coatings. The cell voltage as a function of time is shown in the graphs below. The graphs show that the coated layers are stable for at least 1000 h, which is a substantial amount
of time for a fuel cell.

Figure 14.1: Contamination of the MEA in single cells

Significant ohmic losses are encountered across the bipolar plate/GDE interface, which reduces the efficiency. Sample cells were tested in an endurance testing device and then the change in surface resistance was measured. Over the experimental period, there was a difference in the surface resistance between the used and the original samples, as shown in Fig. 14.4. Fig. 14.6 shows the polarization response for the various materials. The polarization response was shown to vary depending on the bipolar plate alloy composition, which affects the performance of the fuel cell.

14.2 Carbon/Carbon Composite Bipolar Plate for PEMs

High-density graphite with machined flow channels was tested. The manufacturing method was a low-cost slurry molding process to produce a carbon-fiber preform (120 x 140 x 1.5 mm). The surface of the preform is sealed using a chemical vapor infiltration technique in which carbon is deposited on the surface material in sufficient
Fig. must be free from cracks to prevent gas crossover. so costs should be greatly reduced.7.1 Conclusions The proposed carbon/carbon-composite bipolar plate material has very promising fabrication. and performance characteristics It is very low weight (half the weight of other materials used) which is highly beneficial. 14.3 Electrolyte Matrix The electrolyte retaining matrix is a porous material that gives structural integrity to the stack and contains the electrolyte The electrolyte matrix must be wettable to an extent to provide good ionic conduction.8 is a picture of the bipolar plate used in the evaluation and the second picture shows a cross section of the bipolar plate with carbon deposited on it. material. High electronic conductivity and low cell resistance which increases efficiency lends itself to continuous process fabrication and economies of scale.Figure 14. and must have good structural integrity. 14. 101 . 14.2. 14.2: Lifetime curve for a single cell with different coated bipolar plates quantity to make it hermetic as shown in Fig.
Figure 14.3: Lifetime curve for a single cell with a gold coated bipolar plate

Silicon carbide with a binder is the best matrix material for use in PAFCs. Silicon carbide slurry is prepared by the procedure below and applied by the doctor blade method with a slurry thickness of 0.040 mm. The particle-size distribution curves of as-received powder and the slurries prepared by the ball-milling and mechanical-stirring methods are shown in Fig. 14.12. The particle size prepared with ball-milling approaches a bi-modal type, which is more advantageous; the particle size in the mechanically-stirred slurry exhibited a tri-modal distribution with a well-separated particle-size profile. Fig. 14.11 shows the zeta potential for ball milling vs. mechanical stirring. The matrix layer prepared by the ball-milling procedure exhibits a more uniform structure, but the mechanical-stirring process produces larger pores in the matrix layer, as shown in Fig. 14.14. Cell performance improves with increased milling time and reaches a maximum at 24 hrs.

14.3.1 Conclusions

Particles in slurry prepared by the ball-milling procedure exhibit high absolute zeta potential values, which give good dispersion. There is less electrolyte movement in the ball-milled matrix, which results in extended cell life. Optimum dispersion is achieved after 24
hrs of milling time. The cell with a matrix prepared by the ball-milling method displays better performance and lifetime than one with a matrix obtained by the mechanical-stirring method.

Figure 14.4: Resistance vs. compaction force before and after endurance testing

14.4 Introduction to SOFC and DMFC Manufacturing Methods

Although the performance parameters and the reliability are major areas of interest in making fuel cell technology more acceptable for common application, the current price of fuel cells is a major limiting factor for overall market acceptance. As has been mentioned several times in this book, price is still the major hurdle. This cost is something that the industry and the government have pin-pointed as an area that is critical to reduce if the technology is ever to become a widespread reality. In order to bring this price down, several areas of the technology will have to be improved. Two important issues are the production and manufacturing techniques used to create the electrolyte components. For the SOFC, as broken down in the SOFC chapter, 80% of the cell's cost is tied to the manufacturing and production of the fuel cell. This cost of manufacturing and production touches on many of the hurdles facing the fuel cell, and improving it will lead to a price reduction (the overall goal). To better understand the issues related to this technical hurdle, a case study of current work on SOFC and DMFC is explained below. These cells were chosen since both are seen as the most technically feasible and easiest to implement into today's current energy market.
Figure 14.5: Cell potential vs. compaction force before and after endurance testing

14.5 Methods for DMFC

The use of a DMFC is an attractive choice due to its simplified technological hurdles. DMFCs do not require complicated humidification systems, fuel reforming, or stack cooling. They are also more adaptable to today's fuel storage infrastructure. For more specific information on the DMFC see the DMFC chapter.

14.5.1 MEA Thickness and Performance

An important design consideration for the DMFC is the optimization of the component thickness. The idea is to reduce the fuel crossover while keeping the internal resistance of the cell at its lowest possible value. This study set out to optimize this ratio while using inexpensive manufacturing techniques; the approach was to use Nafion 112 membrane in an MEA that would operate at STP, without additional humidification. The MEAs are constructed of three parts. The first part is the structurally necessary section: a layer of 0.26 mm wet-proofed carbon paper was used as the backing material in this specific test. This material is necessary so there is a layer on which to build the substrate surface. The middle section of the MEA is a micro porous layer. This layer serves the purpose of providing an avenue for transport of the methanol or the reacted water, and it is also important for the connection of one cell to the next, through the bipolar plates. The micro porous layer was formed in a two-stage process. The first stage was to create a slurry of Teflon and carbon black; this slurry was then spread across the surface in a screen-printing type of operation.
Once the substrate dried, it was sintered at a temperature of 360 C. These two processes resulted in a layer carrying 2 mg/cm2 of Teflon. The final stage was to add the catalyst layer. This layer was made from a 5 wt% Nafion solution formed on a Pt-Ru support structure, and was manufactured using a slurry coating technique, as discussed in previous chapters. The goal of this technique was to develop a thin membrane with two criteria in mind: the first was to reduce the cost of manufacturing and the second was to improve performance; the manufacturing costs were dealt with by using common manufacturing techniques. The resulting catalyst layer loadings are listed in Table 14.1, and a graphical representation of the anode and cathode electrodes is given in Fig. 14.15.

Table 14.1: Loading of the anode and cathode in a DMFC

              Anode        Cathode
    Pt-Ru     4 mg/cm2     1.3 mg/cm2
    Nafion    1 mg/cm2     1 mg/cm2
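To put the loadings of Table 14.1 (4 mg/cm2 Pt-Ru on the anode, 1.3 mg/cm2 on the cathode) in perspective, the sketch below estimates the precious-metal content of a single MEA. The MEA area and the metal price are assumed placeholder values for illustration, not figures from the study.

```python
# Rough precious-metal estimate from the catalyst loadings in Table 14.1.
anode_loading = 4.0     # mg/cm2 Pt-Ru on the anode   (Table 14.1)
cathode_loading = 1.3   # mg/cm2 Pt-Ru on the cathode (Table 14.1)
area = 100.0            # cm2 active area per MEA     (assumed for illustration)
price_per_g = 30.0      # USD per gram of catalyst metal (assumed placeholder)

metal_mg = (anode_loading + cathode_loading) * area
print(f"Catalyst metal per MEA : {metal_mg / 1000:.2f} g")               # 0.53 g
print(f"Catalyst cost estimate : ${metal_mg / 1000 * price_per_g:.0f}")  # ~$16
```

Even at these rough numbers, the catalyst metal is a meaningful per-cell cost, which is why reducing loading while keeping performance is a recurring manufacturing goal.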
14. the necessity to keep gas leaks down. see Fig.Figure 14.5.7: Chemical Vapor Infiltration apparatus conditions and the results are shown below. or potentially even degrade performance. thus improving the number of cites for which the reaction can be performed. This study seeks to find the limit in which the cell can handle before the compressive load is no longer beneficial. respectively. If two materials are pressed together then they will have greater surface area contact.16. and an important parameter to understand in all fuel cell manufacturing techniques. Several of theses are the importance to have small size. Fig. The 106 .17. is the determination of the allowable compression of the cell members. shows the effect of two different compressive loads. 3 3 The cells that were originally manufactured were about 300 µm in thickness. The two compressive loads tested in this scenario were for mildly compressed and highly compressed cells.14. These were defined as being compressed to 1 and 2 of its original width for highly and mildly compressed cells. and finally to improve performance. This is an important design and manufacturing characteristic. 14. Often cells components are held together under high compressive loads for several reasons. The following figure.2 Effects of Compression The next phase of the study.
reason for the dramatic drop-off in the voltage, see Fig. 14.17, is mainly the inability to support the reaction. Although the system can support the flow rate of fuel, the cell cannot process the fuel, thus causing the same effect as not having the fuel. This can be likened to the concentration loss discussed in Chapter 4: in that scenario, the voltage rapidly dropped off because there was not enough fuel to keep up with the reaction. A very similar phenomenon is happening in this case as well. The electrolytes become damaged in the compressive loading process and have only a limited number of remaining sites for the reaction to take place on. This can be thought of in the same way one thinks about contact surfaces for purposes of friction: although a surface appears to have a large contact area, the actual area of contact is dramatically less than the apparent contact area. The anode and cathode develop regions of cliffs and valleys, and the regions between these two cannot support the reaction, thus dramatically limiting the usable surface area. Thus it was found that over-compression of the cells is not a beneficial technique, since the cell dramatically loses performance.

Figure 14.8: Picture of a Carbon Bipolar Plate
Figure 14.9: Cross Section of a Carbon Bipolar Plate

14.6 Methods for SOFC

Current solid oxide fuel cell cell-fabrication techniques require high sintering temperatures (1400 C) to obtain yttria-stabilized zirconia (YSZ) electrolytes on anode substrates, which produces 5-20 um size agglomerates. This manufacturing process causes permanent damage to the components of the cell and thus reduces the overall performance of the cell. This study has found that by reducing the sintering temperature of the YSZ it is possible to create smaller agglomerates; the process produces 0.3-0.5 um agglomerates. This results in several important cell properties. The first is improved power densities; the next is the ability to produce thinner electrolytes. The two following images, Fig. 14.18 and Fig. 14.19, illustrate the effects of the new manufacturing process.

The second stage of the study was to replace the common Ni-YSZ components in a SOFC with a compound better suited to the use of methanol. This was seen as an important feature, especially for a vehicular system, since methanol is an easy fuel to integrate into a fuel cell system. The component chosen was Cu/CeO2/YSZ, selected because of its ability to directly oxidize hydrocarbon fuels. In order to manufacture the electrolytes, a tape casting process
was used. This is a well-understood, easy, and effective technology that is proven; it is also fairly inexpensive, thus improving the cost and time of production in a mass-production setting and reducing overall system costs. Fig. 14.20, shown below, illustrates this process and how it was specifically applied to the Cu/CeO2/YSZ membrane. This process is not necessarily the best mass-production technique, and would most likely be replaced by CVD, which is cheaper and quicker, thus allowing for the immediate production of the cells. Fig. 14.21 shows the results of the cell under the new sintering process and the Cu/CeO2/YSZ membrane, which was developed for hydrocarbons. This figure was produced with a liquid synthetic diesel fuel; these results would likely be better if gaseous methane were used instead.

14.7 Conclusion

The work of a wide variety of groups, along with the work by the two groups discussed earlier, is helping to improve the characteristics of the fuel cell. They are seeking to decrease
the system cost while improving the performance. Once these two parameters can be realized, fuel cell systems will come to dominate the energy conversion market.

Figure 14.11: Zeta Potential
Figure 14.12: Particle Size Distribution
Figure 14.13: Milling Time
Figure 14.14: SEM of particle size for the machine milling (top) and ball milling (bottom)
Figure 14.15: The MEA for the anode (a) and the cathode (b)
Figure 14.16: Cell performance, given the new membrane

Figure 14.17: Effects of Pressure on the Cell
Figure 14.18: Cracking of the agglomerates at lower temperatures
Figure 14.19: The reduction in size of larger agglomerates
Figure 14.20: Tape Casting of SOFC

Figure 14.21: Performance of the SOFC using the new manufacturing and design techniques
Chapter 15 Portable Fuel Cells

15.1 Introduction

As technology advances and our dependence on portable electronics increases, so does the demand for powering these devices. The functionality, operating speed, and lifetime of portable devices are often constrained by the available power supply. To accommodate this growth we must improve our ability to power portable consumer electronics. Fuel cells are able to provide this needed increase in power: fuel cells potentially offer 5-10 times greater energy densities than rechargeable batteries.

15.2 Solutions

Research is currently being pursued to create microreactors with two main approaches: (1) using silicon processing technology to create an integrated structure, and (2) miniaturizing the individual components and assembling them into a reactor. The first approach is being pursued by a group at the Massachusetts Institute of Technology. Advantages of the integrated approach are the ability to control thin-film properties and their interfaces, enabling optimized reactor performance. Another advantage is that process sensors and control logic can also be built into the same substrate, facilitating on-board closed-loop control. The other form of research for microreactors, miniaturizing the individual components, has been pursued by many groups, including the Institut fur Mikrotechnik of Mainz, Germany (IMM), from which have come many excellent examples and demonstrations. They have developed a partial oxidation reactor, as seen in Fig. 15.1, with features such as miniaturized heat transfer manifolds. This approach is advantageous when materials used in silicon processing are not compatible with the reaction. Another advantage is that it is
possible to integrate sensors, such as thermocouples and optical fibers, into the miniaturized elements, even though this is not common practice.

Figure 15.1: Integrated microreactor showing inlet ports, exit port, fluid bed reactor, thermocouples, and catalyst retaining structures

15.2.1 Silicon Based Microreactor

Two alternative designs of the silicon-based microreactor are examined in the following discussion. One involves a bipolar design using separate Si wafers for the anode and cathode that are sandwiched together, and the second is a monolithic design integrating the anode and cathode onto a single Si surface. The bipolar design is shown in Fig. 15.2; the process by which the bipolar fuel cell design is manufactured, analogous to IC or MEMS manufacturing, is shown in Fig. 15.3. The alternative, monolithic design is shown in Fig. 15.4. The monolithic design is unlike the previous design in that it is coplanar rather than stacked: the anode and cathode are on the same substrate. This design also allows the humidification control of the membrane to be separated from other control circuits, e.g. reactant flow and temperature stability. The coplanar structure does, however, reduce the produced power density by 50%, which is a significant amount when comparing this power density to that of batteries. Another problem is that the current must now be pulled out by the metal lines, requiring them to be relatively large to minimize ohmic impedance.
Figure 15.2: Cross-sectional schematic of bipolar fuel cell design

15.3 System Issues

A portable power system must be capable of delivering reliable, well-regulated voltage while being able to respond quickly to the fluctuating power demand. In order to successfully design a portable fuel cell for electronic devices, the whole system must be taken into consideration.

15.3.1 Thermal Management

The formation and dissipation of heat in small electronic devices such as laptop computers is a major concern and a limiting aspect of the power supply. The majority of energy consumed by portable electronic devices is released as heat; therefore, it is difficult to manage the thermal load within portable devices. Some electronic devices use fans to carry away heat, while other devices, which operate at peak power for short periods of time, use materials with high specific heat to absorb the excess heat. Including such materials does not reduce the maximum steady-state temperature, but it extends the time that the device stays relatively cool. Fuel cells, though, make this problem of thermal management even worse. Oxygen reduction has very poor kinetics; this is shown in Fig. 15.5. As can be seen from the figure, a H2/O2 fuel cell operating near its maximum power density is only 50% efficient. This means that for every 1 W of electrical energy produced, the fuel cell will create 1 W of heat; therefore, a device rated for 1 W must transfer a total of 2 W. As the figure also shows, the heat generated per watt by a methanol/O2 fuel cell is greater than that of the H2/O2 fuel cell.
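The observation above — that added heat capacity delays warming but does not lower the steady-state temperature — can be illustrated with a simple lumped thermal model. All parameter values below are assumptions chosen for illustration, not data from the text.

```python
# Lumped thermal model: m*c*dT/dt = Q - h*A*(T - T_amb).
# The steady state depends only on Q and h*A; the thermal mass m*c
# only sets how long the warm-up transient lasts.
import math

Q = 2.0        # total heat load [W] (e.g. 1 W electric plus ~1 W fuel cell waste heat)
hA = 0.10      # effective convective conductance to ambient [W/K] (assumed)
T_amb = 25.0   # ambient temperature [C]

def temperature(t, mc):
    """Device temperature [C] after t seconds for thermal mass mc [J/K]."""
    T_ss = T_amb + Q / hA          # steady-state temperature, independent of mc
    tau = mc / hA                  # time constant grows with thermal mass
    return T_ss - (T_ss - T_amb) * math.exp(-t / tau)

for mc in (50.0, 200.0):           # J/K: lightweight device vs one with added thermal mass
    print(f"mc = {mc:>5} J/K: T(10 min) = {temperature(600, mc):.1f} C "
          f"(steady state {T_amb + Q / hA:.0f} C)")   # ~39 C vs ~30 C, same 45 C limit
```

The larger thermal mass keeps the device roughly 9 C cooler after ten minutes, but both cases approach the same 45 C steady state, which is why heat-capacity buffering only helps devices used in short bursts.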
15. 122 .3.3: Process flow to form one side of the bipolar device greater than that of the H2 /O2 fuel cell. requires approximately 31 cm2 to create 20 W .2 Air Movement The issue of gas transport to the anode and cathode is an important topic for portable fuel cells. For every 1 W of electricity generated by a methanol fuel cell. and during short time use. Longer durations of use will decrease the available oxygen if no air convection is present which increases the path length for oxygen diffusion and decreases the power output of the fuel cell. approximately 2W waste heat is generated requiring total thermal dissipation of 3 W. This also requires that only the fuel needs to be stored with the device. the excess oxygen will remain which is beneficial.8 bar air and 80o C.Figure 15. Fig. A cell operating on H2 .6 shows a schematic depiction of a possibility for the required cathode area needed on a laptop computer to ensure adequate oxygen supply. The transport of oxygen to the cathode is very important to ensure the fuel cell continues to function. The easiest way to obtain the oxygen is through the air.6 is the required surface area for a laptop computer operating at peak power of 20 W. 15. 15. 4. The easiest way to achieve air contact with the cathode is expose the entire surface of the cathode to the open air environment. but a lower efficiency methanol fuel cell increases the required area to approximately 133 cm2 if operated at the most favorable conditions for methanol. Some comparisons of different types of fuel are interesting to note. Fig. This solution provides excess oxygen to the cathode.
Figure 15.4: Cross-sectional schematic of monolithic fuel cell design

If the same methanol fuel cell is operated at 70 C and at atmospheric pressure, the required area increases to 400 cm2, basically the whole surface area of a typical laptop computer.

15.3.3 Fuel Delivery and Crossover Prevention

In order for a portable fuel cell device to compete with battery-powered devices, it must contain both the fuel cell and the means of fuel storage and delivery. Without a fuel reformer, which adds considerable complexity and cost, there are two options for portable fuel: hydrogen and methanol. Liquid methanol has a greater volumetric energy density than hydrogen and is therefore a more suitable fuel for portable devices; the higher energy density allows longer operating times between refueling. The disadvantage of using methanol is the increased cell stack size, because the kinetics of methanol oxidation are worse than those of hydrogen. Another problem arises because of the boiling point of methanol, 65 C, which is in the range of a normally operated fuel cell, 60-100 C. Therefore, a slight back pressure must be applied on the anode to prevent the feed stream from boiling. Methanol crossover is another issue to be aware of when using methanol as the fuel. Two main approaches to address this issue are: (1) regulating the methanol feed concentration, or (2) inserting a barrier layer within the fuel cell. The first approach works by simply maintaining a feed concentration just above the minimum necessary to provide methanol to the anode, which will considerably
reduce the crossover rate. This approach is advantageous if the load profile is either constant or known before application; power spikes, however, are not handled well, and the feedback monitoring device is complicated and costly. The second approach uses a barrier that is generally permeable to hydrogen but rejects methanol. The disadvantage comes from an additional interface and the inherent increase in resistance; the increased resistance decreases the power that can be delivered per unit cross-sectional area.

Figure 15.5: Schematic polarization curves for (a) H2/O2 and (b) methanol/O2 fuel cells

15.3.4 Load Management

The ability of the system to react to different load requirements is crucial to the effectiveness of portable fuel cells in electronic devices. As a result of the speed at which electronic devices operate, a fuel cell must be able to respond very quickly to changes in the load, and it must be capable of doing this from a cold start. Since a fuel cell operates at its peak performance at an ideal operating temperature, there is a transient as the fuel cell takes time to reach this operating temperature; this becomes a problem in extreme cold environments. The fuel cell also has reduced performance from a dry start, before the membrane has become sufficiently hydrated: it can take several seconds for the water produced at the cathode to diffuse across the membrane and provide it with sufficient hydration to produce power at peak levels. Solutions to these problems are to either make the fuel cell larger so that it can supply the required power at start-up, or to add an auxiliary power supply for the start-up requirements.
Figure 15.6: Relying on passive air supply requires a large area of the fuel cell to be exposed to air. This problem can be mitigated by active air supply, but with higher complexity and cost.

15.3.5 System Integration

By leveraging silicon manufacturing technology, it is hoped that a complete solution can be developed that includes the fuel cell, the fuel and oxidant distribution network, and the monitoring and control electronics. The system must take fuel and air as inputs and reject water and heat, and the protocol for handling these inputs and outputs must be carefully considered.
Chapter 16 The New Fuel for a New Fleet of Cars

16.1 Introduction

If fuel cell vehicles are to have any significant long-term impact in the auto industry, automakers and suppliers must successfully address the commercialization challenges that loom over the auto fuel cell industry, stated a recent report by Roland-Berger Strategy Consultants. The report identified the most difficult among the challenges as low-cost infrastructure; other challenges include cost reduction, component integration, range, and power density.

The topic of fueling the fuel cell is so important because it is the fuel itself that has motivated much of the recent drive to develop hydrogen fuel cell technology. Gasoline, the most common fuel in use now, is not environmentally friendly and is a potential security risk for a nation without vast sources of it. Since the fuel cell operates on hydrogen, it is seen as using a cleaner fuel, and one that everyone has access to: hydrogen is the most abundant element in the world. As mentioned previously, there are several compelling advantages to a hydrogen economy. But according to a DOE study, unless hydrogen is comparable in cost to current technologies, it will not gain vast market acceptance. The study found that despite the other benefits of a hydrogen economy, price is the only driving factor in the success or failure of the technology. The question that needs to be answered by proponents of fuel cell technology is how to get, and efficiently use, this new fuel, all at a lower cost than gasoline. To use pure hydrogen in a fuel cell it must be produced, or reformed, from other compounds or processes. One of the more important findings of the study was the necessity for uniformity and standardization of how the vehicles of the future will be fueled. These differences in production process and origin of
hydrogen are what need to be standardized by the industry before automotive fuel cells become commercially possible.

16.2 Fuel Reforming

16.2.1 Gasoline Reforming

Since the infrastructure for a gasoline-driven vehicle fleet already exists, it seems logical that gasoline would be one of the more popular choices for providing a source of hydrogen. Because an infrastructure is already set up for delivering gasoline and storing it on board vehicles, it is only necessary to overcome the problems of reforming the fuel for acceptable use in a fuel cell. Eq. (16.1) describes the reforming of gasoline (C8H18):

C8H18 + 4O2 --Ni--> 9H2 + 8CO   (16.1)

The reforming of gasoline must be run at an elevated temperature (1100-1500 C). If only pure gasoline is present, then Eq. (16.1) proceeds as a partial oxidation: part of the gasoline is oxidized (burnt). To achieve high fuel-conversion efficiency, a nickel catalyst is used to promote full reforming of the gas. Since gasoline often contains a variety of impurities, a CO and sulfur cleanup stage is also necessary before the gas is fed into the fuel cell. (These detoxification processes are described in an earlier section.) The reforming can be done either on board or remotely; if the process is remote, then storage of pure hydrogen will be needed on board (see the next section for details on hydrogen storage). The reforming of gasoline, however, is not pollution free, and its hydrogen-to-weight ratio is lower than that of other potential options. Gasoline also perpetuates dependence on a nonrenewable fossil fuel that is not evenly dispersed through the world.

16.2.2 Methanol Reforming

Methanol is seen as another viable source for hydrogen production. Since methanol reforming produces less CO, it is seen as a possibly better choice for fueling fuel cells, especially when it can be produced from renewable sources. The partial oxidation of methanol results in the following equation:

CH3OH + 0.5O2 --> 2H2 + CO2   (16.2)

The partial oxidation (POX) reformer is run at temperatures of 800-1000 C; part of the methanol is first partially oxidized and then the full catalytic oxidation occurs. The POX, partial catalytic oxidation, and full catalytic zones are separated to ensure that a pure stream of gas flows to the reformer.
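A quick stoichiometric check of Eqs. (16.1) and (16.2) shows how much hydrogen each fuel yields per kilogram reformed. The equations are the ones given above; the molar masses are standard values.

```python
# Molar masses in g/mol (standard values)
M = {"H2": 2.016, "C8H18": 114.23, "CH3OH": 32.04}

def h2_yield(fuel, mol_h2_per_mol_fuel):
    """kg of H2 produced per kg of fuel, from the reaction stoichiometry."""
    return mol_h2_per_mol_fuel * M["H2"] / M[fuel]

gasoline = h2_yield("C8H18", 9)   # Eq. (16.1): 9 mol H2 per mol octane
methanol = h2_yield("CH3OH", 2)   # Eq. (16.2): 2 mol H2 per mol methanol

print(f"gasoline POX: {gasoline:.3f} kg H2 / kg fuel")
print(f"methanol POX: {methanol:.3f} kg H2 / kg fuel")
```

On a pure stoichiometric basis gasoline yields more hydrogen per kilogram, which is one reason the lower CO output, rather than the hydrogen yield, is the argument made for methanol.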
The necessity for cleanup in a methanol system is less stringent than in a gasoline reformer, since sources of methanol do not contain a high sulfur content, so the removal of sulfur is rarely a problem. Often the CO present is converted into CO2 by using some of the produced hydrogen; although this process is possible, it is not very efficient. Figure 16.1 shows the differences between the autothermal reforming process and the steam reforming process described earlier.

Figure 16.1: Comparison of fuel reforming processes

16.3 Fuel Storage

Since hydrogen has such a low density, it is very difficult to hold.

16.3.1 Compressed Gas

Compressed gas is a simple and easy approach to hydrogen storage, and currently the only practical way to store large amounts of hydrogen. In this scenario the gas is typically held in containers at pressures near 200 bar; some go even higher (1200 bar is the burst pressure of one tank). The technical problems are widely understood, and thus the process is mostly optimized and cannot be greatly improved upon. The drawback is weight: in a typical steel cylinder at 200 bar, only 0.036 kg of hydrogen is stored per 3.0 kg of tank mass. To ensure that the tank does not leak hydrogen, it is also necessary to choose an appropriate material, since the hydrogen molecule is very small and can escape through the lattice of some metals, even under these high pressures. The material must also be resistant to hydrogen embrittlement, which occurs when hydrogen finds its way into internal blisters and then promotes crack propagation. Since this is an already developed technology, it is most likely not a good choice for the long-range outlook on hydrogen storage, but it remains a convenient stepping stone.

16.3.2 Cryogenic Liquid

Another stepping-stone technology for the storage of hydrogen is cryogenic freezing of the gas, converting it into a liquid state. Although this is the most efficient way to store larger quantities of hydrogen, it is a costly operation, since the hydrogen must be not only pressurized but also held at 22 K. This low temperature also means that all surfaces must be insulated to prevent the liquid from boiling. The insulation is likewise necessary because liquid air, which can form if the LH2 comes into contact with ambient air, is very combustible. Several safety issues must be considered with the use of liquid hydrogen. The first is the possibility of severe frostbite: since the liquid is supercooled at 22 K, any contact with bare skin will cause the skin to tear and freeze. In addition, because liquid fuel cannot be used directly in a fuel cell, the hydrogen must be preheated, usually by a heat exchanger, before it is fed into the fuel cell.

Cryogenic storage has been explored by several German companies for possible use in cars. BMW has developed a hydrogen internal combustion engine that uses liquid hydrogen; it stores 120 L (8.5 kg) of LH2 onboard in a 50 kg container. BMW also operates several company cars on liquid hydrogen stations, proving that it is possible to build an infrastructure on LH2, if so far only as a small-scale operation. The technology is also being used in a fleet of 15 buses being deployed throughout the EU: several tanks will be placed on the roof of the buses and will feed fuel cells in the rear. These demonstrations make cryogenic liquid a valid choice as a stepping-stone application of hydrogen storage.

16.4 Conclusion

Although there are other technologies for the storage of hydrogen, such as metal hydrides and nanotubes, they are not seen as viable in the near term. The metal hydrides are simply too heavy, and the nanotube technology is too new; some evidence even suggests that it is faulty. The storage of hydrogen in tanks is possible in the automotive industry, and although technology may be able to make the system light enough to be carried onboard, it is possible that users would rather have shorter ranges on their cars and pay less than the opposite. The other option is onboard reforming of fuels, since it will be easier to adapt our current infrastructure to handle a liquid-based fuel source. Reforming gasoline or methanol is a valid option for fueling vehicles, but it will add system weight and complexity. Figure 16.2 is an artist's representation of how an onboard-reforming fuel cell car might be designed.

Figure 16.2: FCV: Methanol Reforming On-board
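The two storage examples in this section can be compared directly as gravimetric fractions, that is, kilograms of hydrogen per kilogram of full system. The figures below are the ones quoted in the text.

```python
def mass_fraction(kg_h2, kg_container):
    """Fraction of the full system mass that is actually hydrogen."""
    return kg_h2 / (kg_h2 + kg_container)

compressed = mass_fraction(0.036, 3.0)   # 200 bar steel cylinder (text)
cryogenic = mass_fraction(8.5, 50.0)     # BMW LH2 container (text)

print(f"compressed gas: {100 * compressed:.1f} wt% H2")
print(f"cryogenic LH2:  {100 * cryogenic:.1f} wt% H2")
```

The roughly tenfold difference in weight fraction is why cryogenic storage, despite its cost and safety issues, is the more attractive stepping stone for vehicles.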
Chapter 17: Commercial and Industrial Use

17.1 Introduction

Stationary power is one of the most mature applications for fuel cells. Stationary fuel cell units are used for backup power, power for remote locations, distributed generation for buildings, stand-alone power plants for towns and cities, and co-generation. Polymer electrolyte membrane (PEM) fuel cells fueled with natural gas or hydrogen are the primary design used for the smaller systems; it is estimated that more than a thousand of the smaller stationary fuel cells (less than 10 kilowatts) have been built to power homes and provide backup power.

Until recently, economies of scale were orienting power production systems toward large centralized units located away from urban areas, and in such systems a considerable amount of primary energy is wasted into the atmosphere. When independent energy conversion units (FC systems) are instead used to provide the various energy forms required by an urban infrastructure, such as heating, cooling and power, the efficiency of the system increases because all by-products of the fuel cell system can be used effectively, thus reducing the total cost of energy. These FC systems can either supplement grid power or be used as stand-alone units; it is possible, for example, to have the fuel cell active only during peak times. A second auxiliary use for fuel cells is in the area of backup: if there is a need for backup or redundant power, a distributed energy solution is needed, and the fuel cell is a very efficient form of reliable backup. Figure 17.1 shows the performance of a typical SOFC stack. This move from centralized power to distributed power is an area that will continue to see growth and development in the next century.
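The cogeneration argument above can be made concrete with a primary-energy comparison: meeting an electricity and heat demand separately versus with one FC unit whose waste heat is recovered. All efficiencies here are assumed round numbers for illustration, not data from the text.

```python
def separate_fuel(elec_kwh, heat_kwh, eta_grid=0.35, eta_boiler=0.85):
    """Primary energy (kWh) when grid electricity and a boiler are used separately."""
    return elec_kwh / eta_grid + heat_kwh / eta_boiler

def chp_fuel(elec_kwh, heat_kwh, eta_elec=0.40, heat_recovery=0.45):
    """Primary energy (kWh) for one CHP unit making both products.

    Assumes the recovered fraction of waste heat covers the heat demand.
    """
    fuel = elec_kwh / eta_elec
    assert fuel * heat_recovery >= heat_kwh, "CHP heat output short of demand"
    return fuel

sep = separate_fuel(100.0, 110.0)
chp = chp_fuel(100.0, 110.0)
print(f"separate: {sep:.0f} kWh fuel, CHP: {chp:.0f} kWh fuel")
```

Under these assumed numbers the CHP route uses roughly 40% less primary energy, which is the "all by-products used effectively" point made above.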
Figure 17.1: Overall Performance of Individual Cell

17.2 Optimization of a Cogeneration Plant

A thermo-economic optimization of the design and operation of a district heating, cooling and power generation unit, composed of an FC-GT combined cycle associated with a heat pump, a compression chiller and/or an absorption chiller, and an additional gas boiler, has been undertaken with regard to cost and CO2 emissions. The simulation of the plant considers a superstructure including a solid oxide fuel cell-gas turbine (FC-GT) combined cycle, a compression heat pump, a compression chiller and/or an absorption chiller, and an additional gas boiler. The optimization seeks the best trade-off among all components and thus yields an overall efficiency and an overall power cost; these two quantities are the determining factors in evaluating whether the system is viable for commercial use.
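A minimal sketch of why the FC-GT combination is attractive: the gas turbine recovers work from fuel energy the stack does not convert. The component efficiencies below are assumed round numbers chosen only to land near the electric efficiency this study reports for its FC-GT; they are not the study's data.

```python
def combined_cycle_eff(eta_fc, eta_gt):
    """Electrical efficiency when a GT converts a fraction eta_gt of the
    fuel energy remaining in the SOFC exhaust."""
    return eta_fc + (1.0 - eta_fc) * eta_gt

eta = combined_cycle_eff(0.52, 0.33)   # assumed SOFC and GT efficiencies
print(f"combined electrical efficiency: {eta:.3f}")
```

Note how a modest 33% bottoming turbine lifts a 52% stack into the high-60s, well beyond what either machine achieves alone.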
17.2.1 Thermodynamics

The optimal fuel cell for this arrangement is the SOFC, since it has a very high overall efficiency and produces high-quality exhaust heat that can be used to raise the system efficiency even higher. Solid oxide fuel cells (SOFCs) can provide highly effective energy conversion systems when their high-temperature exhaust gases are expanded within a gas turbine, and as mentioned above, this combination increases the overall efficiency of the system to close to 70%; efficiency can then be increased further with an appropriate integration into a CHP system. This system employs SOFCs and a gas turbine; Figure 17.2 shows the thermodynamic cycle used in this case study.

Figure 17.2: Thermodynamic Cycle of the entire FC system

The other components of interest are the AC-DC inverter, a transformer, pre-heating heat exchangers, air/fuel compressors, piping, insulation materials, inlet-outlet ports, and sub-assemblies such as the control panel, the reduction gear, a generator, a combustion GT, and piping for the GT system.

The return line of the district heating network first enters the condenser of the compression heat pump. This choice is obvious, since the coefficient of performance (COP) of the heat pump benefits from the low temperature difference between a cold source at a fairly stable temperature (a local body of water) and the return temperature from the users. Since the temperature of the network can be adapted to the season, the COP of the heat pump should reach its highest value in summer, when the network temperature corresponds only to domestic hot water requirements. The water is then driven to the heat recovery exchanger of the combined cycle in order to recover part of the remaining heat in the exhaust gases of the gas turbine; the amount of heat recovered is derived from the water and gas enthalpies at the entrance of the heat exchanger, and the water exiting the heat recovery device is required to be in the liquid state. Finally, the water is driven to the additional boiler, which upgrades its temperature to the level of the supply line. After pre-heating the air, the exhaust gases of the GT are driven to a last heat recovery device in order to recover part of the remaining heat for use in the district heating water or in the absorption chiller's de-absorber loop. If part of the cooling load remains uncovered by the absorption unit, an additional compression chiller is introduced.

Figure 17.3, like the graphs displayed in chapters two and three, illustrates the high dependence of the system on its operating pressure. These effects are also felt by the components of the system.

Figure 17.3: Electrical efficiency of the FC-GT cycle (left) and ratio between electric power generated GT/FC (right)

17.2.2 Cost Analysis

The cost analysis for this system is a very important result of this study: if the system is efficient but not cost effective, it is not a reasonable system to implement. Since the system will operate in a congested environment, it must also have extremely low emissions, and the amount of pollutants emitted, in the form of CO2, is another critical issue. The cost-to-benefit ratio of supercharging the system was not considered in this analysis; that will have to be completed in a more thorough analysis. Figure 17.4 shows some of the possible configurations, and Figure 17.5 shows five of these configurations and their resulting costs.

Figure 17.4: Cluster Options

Cluster A and its
single individual A1(1) can be seen as a reference, since all of the heating load is covered by a natural gas boiler, all of the cooling load by a compression chiller, and the totality of the required power is imported from the grid.

Figure 17.5: Cost Analysis for several configurations

This reference option is the one that requires the lowest initial investment, about $3.3 million, but it also induces the highest yearly operating cost, about US $2.4 million. Its yearly emissions rate is 9200 tons of CO2 per year.

A jump to cluster B corresponds to the introduction of a small-capacity (500-700 kW) FC-GT combined cycle as a co-generation unit. It induces an increase of the investment cost by about 25%, but reduces the yearly operating cost by about 10%. This is explained by: (1) a much higher efficiency of the FC system in summer, due to its high electrical efficiency, and (2) a high price of electricity in summer (which is avoided by the FC). Note that the efficiency and emissions in winter remain the same, since the FC-GT is shut down during this period. Higher emission rates than the reference case are explained by the fact that although the CO2 emission rate per unit of power produced is very low compared to other thermal power units, it remains of the same order as that of the grid mix, since part of the grid power is non-fossil based. The FC-GT is designed for the midseason internal power requirements, with power outputs from 1.18 to 1.22 MW; operating temperature and pressure remain the same along the curve, at 700 C and 4 bar. The electric efficiency of the FC-GT is 67.7%, with a corresponding specific investment cost of 1,615.00 $/kW.

If a heat pump is introduced instead of the FC-GT unit (cluster C), the CO2 emissions rate can be significantly reduced, up to 47% lower, thanks to a high global efficiency, but with a 170% higher investment cost (the case of individual C1(4) with an 8 MW heat pump). Although the operating cost is reduced, the total annual cost is much higher due to the much larger investment. An option that at the same time reduces the annual total cost and the annual emission rate is obtained with the association of a heat pump and an FC-GT unit; when high electricity and natural gas prices are encountered, such a combination can cut CO2 emissions by half compared to current business as usual. Cluster D corresponds to such a choice, with the introduction of a simple-effect absorption chiller, and cluster E adds the option of an absorption chiller alongside a compression chiller. Since power exportation is not allowed, the electricity produced has to be used for driving a compression chiller, and this induces investment in two different chillers rather than a single one of larger capacity; the solution with a single compression chiller therefore leads to a lower total annual cost, and cluster E is not recommended here. Clusters D and E nevertheless provide an excellent avenue for reducing the creation of CO2.

17.3 Conclusion

This cost analysis illustrates the fundamental principles that would accompany the design process necessary before implementing a co-generation plant. Although these systems are still more expensive than a traditional system, they are technically feasible and within the range of current technology, and these advanced integrated energy systems represent a promising option in the near future, both economically and environmentally. Once advances are made and the price of the system falls, co-generation plants will not only become an economically viable option but in some cases will be vastly superior to current technology; they will then become the best option for implementation. Other fundamental advances in the system, such as increased power density and decreased size of the fuel cell, will soon follow.
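The cluster trade-off just analyzed, higher investment against lower operating cost, can be sketched by annualizing the investment with a capital recovery factor. The reference operating cost and the cluster B percentage changes follow the text; the $3.3M reference investment figure, the 6% discount rate, and the 20-year lifetime are assumptions of this sketch.

```python
def crf(rate, years):
    """Capital recovery factor: converts an upfront investment to an
    equivalent constant annual payment."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def total_annual_cost(investment, operating, rate=0.06, years=20):
    """Annualized investment plus yearly operating cost, in dollars/year."""
    return investment * crf(rate, years) + operating

ref = total_annual_cost(3.3e6, 2.4e6)                 # cluster A reference
b = total_annual_cost(3.3e6 * 1.25, 2.4e6 * 0.90)     # cluster B: +25% invest, -10% operating
print(f"A: ${ref / 1e6:.2f}M/yr, B: ${b / 1e6:.2f}M/yr")
```

Under these assumptions the co-generation cluster comes out ahead on total annual cost, which is the kind of result that drives the cluster comparisons above.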
Chapter 18: Fuel Cell Challenges

Fuel cells face a number of problems that must be solved before economical implementation into society can be achieved. Among these challenges are cost reductions, reliability, and system integration. Cost reductions are vital to make fuel cells comparable in cost to other methods of energy generation. The reliability of fuel cells must be great enough to extend their life to match that of other energy sources. Finally, system integration must be capable of gaining the public's interest by showing real examples of fuel cells in use and the results of such implementation.

18.1 Cost Reductions

Cost reductions are vital to the success of fuel cells because the public will not accept and use technology that is not economically advantageous. The high capital cost of fuel cells is the most important limiting factor in their widespread implementation. Both capital and installed cost (the cost per kilowatt required to purchase and install a power system) must be reduced in order for fuel cells to compete with contemporary energy generation methods. In the stationary power market, fuel cells could become competitive if they reach an installed cost of $1,500 or less per kilowatt; currently, the cost is in the $4,000+ range per kilowatt. In the automobile sector, a competitive cost is on the order of $60-$100 per kilowatt, a much more stringent criterion, and until fuel cells can decrease their price per kW, the public will continue to use internal combustion engines to power automobiles. Due to the high capital cost on a dollar-per-kW basis, significant resources have been put toward reducing the costs associated with fuel cells. Specific areas in which cost reductions are being investigated include:

• Material reduction and exploration of lower-cost material alternatives
• Reducing the complexity of an integrated system
• Minimizing temperature constraints (which add complexity and cost to the system)
• Streamlining manufacturing processes
• Increasing power density (footprint reduction)
• Scaling up production to gain the benefit of economies of scale (volume) through increased market penetration

The fuel cell cost migration path is shown in Fig. 18.1.

Figure 18.1: Fuel cell cost migration path over the next 10 years

18.2 System Integration

Two key systems integration issues for the success of fuel cells are: (1) the development and demonstration of integrated systems in grid-connected and transportation applications, and (2) the development and demonstration of hybrid systems for achieving very high efficiencies.
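Before turning to the demonstrations, it is worth putting the Section 18.1 cost figures in perspective as reduction factors. The numbers are the ones stated in the text; since no current automotive cost is given there, the comparison against today's stationary cost is only a stand-in assumption.

```python
current_stationary = 4000.0   # $/kW installed today (from the text)
target_stationary = 1500.0    # $/kW competitive installed cost (from the text)
target_auto_high = 100.0      # $/kW upper end of the automotive target (from the text)

stationary_gap = current_stationary / target_stationary
auto_gap = current_stationary / target_auto_high
print(f"stationary: {stationary_gap:.1f}x reduction; automotive: {auto_gap:.0f}x")
```

The stationary market needs roughly a threefold cost reduction, while the automotive target is more than an order of magnitude further away, which is why the text calls it the more stringent criterion.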
Integrated fuel cell systems must be developed and demonstrated in order to minimize the cost of electricity. For most applications, this requires that the fundamental processes be integrated into an efficient plant whose capital costs are kept as low as possible. Specific systems and system integration R&D occurring today includes: (1) power inverters; (2) power conditioners; (3) hybrid system designs; (4) hybrid system integration and testing; (5) operation and maintenance issues; and (6) robust controls for integrated systems.

Fuel cells have been integrated into many parts of society to test the various results. One such integration is M-C Power's molten carbonate fuel cell power plant, shown in Fig. 18.2. M-C Power Corporation has tested a commercial-scale power generator, using molten carbonate fuel cells, installed at Marine Corps Air Station Miramar in San Diego, California. The San Diego test unit consisted of a fully integrated system including a newly designed reformer and a stack with 250 cells, each with an 11 ft2 active area. The unit reached 210 kW capacity and cogenerated up to 350 lbs/hr of steam used for heating buildings on the air station; total output was 158 megawatt-hours of electricity and 346,000 lbs of steam over 2350 hrs of operation. In the current program, the Miramar facility is being modified to conduct performance verification testing of advanced stack designs and other improvements, the next generation of fuel cell technology, prior to building prototype units for commercial demonstrations at several sites by early 2001.

Figure 18.2: M-C Power's molten carbonate fuel cell power plant in San Diego, California, 1997

The world's first hydrogen and electricity co-production facility opened in Las Vegas, Nevada, in November 2002. The facility (built by Air Products and Chemicals, Inc., in partnership with Plug Power Inc., the U.S. Department of Energy, and the City of Las Vegas) will serve as a "learning" demonstration of hydrogen as a safe and clean energy alternative for vehicle refueling. The facility includes small-scale, on-site hydrogen production technologies, a hydrogen/compressed natural gas blend refueling facility, and a 50 kW PEM fuel cell system that supplies electricity to the grid. The fueling station and power plant are located at the existing City of Las Vegas Fleet & Transportation Western Service Center. Two different types of electrolyzers, one supplied by a photovoltaic grid and the other by a natural gas reformer, produce hydrogen on-site. A hydrogen fuel cell produced at this facility is shown in Fig. 18.3. Other partners include NRG Technologies in Reno, Nevada, which is modifying a bus to burn hydrogen in an internal combustion engine and to store the hydrogen in a compressed tank, and the University of Las Vegas, which is retrofitting the 6 buses donated by the City of Las Vegas that will be refueled at this station.

Figure 18.3: A hydrogen fuel cell produced at the first hydrogen and electricity co-production facility

The Sunline Services Group hosts what has been called the world's most complex hydrogen demonstration project to date, in California's Coachella Valley. Buses running on hydrogen and hydrogen/natural gas mixtures are used for public transport and filled at Sunline's public-access fueling island, shown in Fig. 18.4. More than 5,000 visitors a year from around the world have toured Sunline's facilities, and a full training program, including a curriculum for the local community college, has been developed. Sunline's experience and leadership is instrumental in establishing a knowledge base and developing codes and standards for hydrogen production and use.

Figure 18.4: Sunline's demonstration of a hydrogen refueling center

Underground mining is a promising commercial application for fuel cell powered vehicles, since conventional underground traction power technologies cannot economically meet the increasingly strict mining regulations regarding the safety and health of mine workers. In this application, a fuel cell powered underground vehicle offers lower recurring costs, reduced ventilation costs, and higher productivity than conventional technologies. In a project that is a collaboration between Vehicle Projects LLC, the Fuelcell Propulsion Institute, and Sandia National Laboratories, a four-ton locomotive powered by proton exchange membrane fuel cell stacks, coupled with reversible metal hydride storage, has undergone safety risk assessment and preliminary performance evaluations at a surface rail site in Reno, Nevada. This is shown in Fig. 18.5.

One possible commercial application for several-hundred-watt fuel cells is in powering personal mobility vehicles, such as wheelchairs and the three-wheeled electric "scooters" often used by the elderly or infirm. The users of these types of vehicles are often located in environments, such as nursing homes and hospitals, where a hydrogen supply could easily be established. A Victory personal mobility vehicle, a scooter manufactured by Pride Mobility Products Corp. (Exeter, PA), was modified to accept a midrange fuel cell system built by researchers at Los Alamos National Laboratory. An onboard electronics package protects the fuel cell and stores information that can be used to optimize its operation. The scooter is operational and will be compared to a conventionally equipped scooter of the same model.

The California Fuel Cell Partnership is also evaluating fuel cells in light-duty vehicles, looking at a variety of feedstock fuels for the hydrogen required. One of the tasks of the Partnership is to evaluate fuel cell-powered electric buses in "real world" applications. Three transit agencies, AC Transit (in the San Francisco Bay Area), SunLine Transit Agency (in the Palm Springs area), and Santa Clara Valley Transportation Authority (in the South Bay Area), have joined as Associate Partners to serve as initial test sites for the Bus Program. Up to 20 buses using hydrogen fuel will be placed on the road beginning in 2004 for a two-year demonstration. Two fuel cell suppliers, International UTC Fuel Cells and Ballard Power Systems, will develop the transit bus engines.
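The on-site electrolysis used at stations like the Las Vegas facility can be sized with a simple energy balance. The 39.4 kWh/kg figure is the higher heating value of hydrogen; the 70% electrolyzer efficiency and the 12 kg/day demand are assumed example values, not data from the text.

```python
HHV_KWH_PER_KG = 39.4   # higher heating value of hydrogen, kWh per kg

def electrolysis_energy(kg_h2_per_day, efficiency=0.70):
    """Daily electrical energy (kWh) needed to electrolyze the given
    mass of hydrogen, at the stated electrolyzer efficiency."""
    return kg_h2_per_day * HHV_KWH_PER_KG / efficiency

daily = electrolysis_energy(12.0)   # e.g. a few light-duty vehicle fills per day
print(f"{daily:.0f} kWh of electricity per day")
```

Even this modest demand is several hundred kilowatt-hours a day, which is why the choice between a photovoltaic supply and a natural gas reformer matters for station economics.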
18.2.1 Reliability

Fuel cells have the potential to be a source of premium power if demonstrated to have superior reliability, and they can provide high-quality power, which could be a very advantageous marketing factor for certain applications. Power quality along with increased reliability could greatly advance the implementation of fuel cell technology. Although fuel cells have been shown to provide electricity at high efficiencies and with exceptional environmental sensitivity, the long-term performance and reliability of certain fuel cell systems has not been sufficiently demonstrated to the market; they must also be shown capable of providing power for long continuous periods of time. Research, development and demonstration of fuel cell systems that will enhance the endurance and reliability of fuel cells are currently underway. The specific R&D issues in this category include: (1) endurance and longevity; (2) thermal cycling capability; (3) durability in the installed environment (seismic, transportation effects, etc.); and (4) grid connection performance.

18.3 Technical Issues

18.3.1 Fuel

The two major issues concerning fuel for the fuel cell revolution are the storage of hydrogen and the availability of hydrogen. Both must be solved before the fuel cell can achieve widespread market acceptance. The fuel cell requires pure hydrogen, or some hydrogen-rich fuel, in order to drive its reaction, and the ability to provide this hydrogen is critical. The two options for providing hydrogen are the storage of pure hydrogen or the on-board reforming of hydrogen; both have advantages and serious technical hurdles.

Figure 18.5: An underground mine locomotive

The first option is to use an onboard hydrogen storage technique. Fuel cells are more energy-efficient than internal combustion engines in terms of the amount of energy used per weight of fuel and the amount of fuel used versus the amount wasted. However, hydrogen gas is very diffuse, and only a small amount (in terms of weight) can be stored in onboard fuel tanks of a reasonable size. It is therefore currently difficult to store enough hydrogen onboard a FCV to allow it to travel as far as a conventional vehicle on a full tank of fuel. This can be overcome by increasing the pressure under which the hydrogen is stored or through the development of chemical or metal hydride storage options; researchers are developing high-pressure tanks and hydride systems that will store hydrogen more effectively and safely. The diffuseness of hydrogen also makes fuel cells difficult to implement in small handheld devices, especially due to size requirements. This issue is being addressed through the development of micro fuel cells and micro fuel cell reformers, which will integrate the fuel cell system using the knowledge and techniques of microprocessor technology. The second option is a system for onboard reforming of hydrogen; such a system adds weight and complexity to a design.

The second issue for fueling the fuel cell is the ability to get hydrogen to the user. This means the development of a hydrogen infrastructure, which would cost millions of dollars and require a huge commitment by government and industry.
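The onboard-storage problem described above can be put in numbers with a back-of-envelope range estimate. The 33.3 kWh/kg figure is the lower heating value of hydrogen; the fuel cell efficiency and the vehicle's energy demand at the wheels are assumed round numbers, not data from the text.

```python
LHV_KWH_PER_KG = 33.3   # lower heating value of hydrogen, kWh per kg

def range_km(kg_h2, fc_efficiency=0.5, kwh_per_km_at_wheels=0.2):
    """Driving range from the stored hydrogen mass, the fuel cell's
    conversion efficiency, and the vehicle's demand at the wheels."""
    usable_kwh = kg_h2 * LHV_KWH_PER_KG * fc_efficiency
    return usable_kwh / kwh_per_km_at_wheels

print(f"4 kg of H2: {range_km(4.0):.0f} km")
print(f"1 kg of H2: {range_km(1.0):.0f} km")
```

A few kilograms of hydrogen suffice for a conventional-car range; the difficulty, as the text notes, is that storing even those few kilograms at reasonable tank size and weight is hard.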
Figure 18.6: A mobile vehicle for handicapped people powered by a fuel cell

The second area of potential fuel cell reforming is the FCV project, where onboard reforming will be necessary if the storage of pure H2 becomes difficult or expensive. Reforming will most likely be used in the FCV project since methane has a higher energy density than pure H2. The reforming process will add to the system weight, complexity, and size, but it will provide the user with a longer range of operation and will relieve many of the new safety issues that arise if pure H2 is used in FCVs.

The other major hurdle for industry and government will be how to get hydrogen to consumers. Since the production of H2 is already possible on a large scale, it is only a matter of developing the infrastructure to produce and deliver the fuel. The current problems in this area are not technically dominated: although technical advances are always a benefit, the short-term issues are those of policy, money, and time. New facilities and systems will be required, because the extensive system used to deliver gasoline from refineries to local filling stations cannot be used for transporting and storing pure hydrogen, and building its replacement will take significant time and money. This major shift in energy policy will be necessary if the benefits of a hydrogen society are to be truly appreciated.

18.3.2 Technological Developments

Fuel cells need a few breakthroughs in technological development to become competitive with other advanced power generation technologies. These breakthroughs will likely occur directly through support of innovative concepts by national labs, universities, or industry. These innovative concepts must be well grounded in science, but they can differ from traditional fuel cell RD&D in that they investigate the balance of plant and other aspects of fuel cell technology that have not been previously examined. Innovative and fruitful concepts might be found in these areas:

• New fuel cell types
• Contaminant tolerance (CO, sulfur)
• New fuel cell materials (electrolyte, catalyst, anode and cathode)
• New balance of plant (BOP) concepts (reformers, gas clean-up, water handling, controls, etc.)

Figure 18.7: A refueling station designed for the California Fuel Cell Partnership

In order to accomplish these goals, the government has taken steps to improve the technological state of the industry. It has initiated modeling to simulate the kinetics of oxidation of hydrogen and other constituents of the anode exhaust gas, and the formation of pollutant species at the catalytic spent gas burner. With an inlet temperature of 323 K (50 C) the oxidation of hydrogen proceeds slowly, but with increasing inlet temperatures it proceeds much more rapidly (at 350 K). This modeling activity will be validated using experimental data and used to minimize emissions of regulated and unregulated trace pollutants; future modeling will examine the oxidation of other species, including methane and other hydrocarbons.

Conventional low-temperature copper-zinc oxide catalysts for
as well as catalyst supports with high surface areas. Second the range of many military vehicles could be greatly enhanced by the benefits of a hydrogen economy. thermally stable. the program is targeting higher-risk development of high-temperature. advanced oxygen catalysts could reduce or eliminate the need for air compressors. thus they will be able to financially help in the conversion to a hydrogen economy. Its vision is to have affordable full-function cars and trucks that are free of foreign oil and harmful emissions. fuel cell stack operating temperatures are limited to 80 0 C. This membrane would enhance CO tolerance and reduce heat rejection permitting a dramatic reduction in the size of the condenser and radiator. The first reason is that the auto industry has deep pockets. the catalyst must be protected from exposure to ambient air to prevent re-oxidation and must operate at less than 2500 C to avoid degradation of the catalyst’s activity. inexpensive membranes. Although it feels that a completely hydrogen based economy will begin with a hydrogen based fleet of vehicles. Once activated. The development of improved oxygen reduction electro catalysts with enhanced kinetics would be beneficial because the most significant contributor to cell voltage loss is polarization. higher operating voltages are required to meet efficiency targets for fuel cell systems. Third the amount of petroleum and the pollutants caused by vehicular traffic are becoming greater all the time. Rugged. and freedom of vehicle choice.3. Conventional high temperature iron-chromium catalysts also require activation through pre-reduction and lose activity upon exposure to air. 18. has targeted this area as a first step in developing the technology. The governments initiative is under the FreedomCAR program.3 Government Interaction The government is very dedicated to the use of hydrogen as a fuel for the American public. and oxygen catalysts. 
By using hydrogen fuel cells it would be possible to reduce both of these problems. Typically. freedom of mobility. for several reasons. in fuel cell systems. The government. The governments main pillars for the program are listed below. shift catalysts with equal or better kinetics are needed that do not require activation nor lose activity upon exposure to air.the water-gas shift reaction must initially be activated by reducing the copper oxide to elemental copper. This reaction is exothermic and must proceed under carefully controlled conditions to avoid sintering of the catalyst. without sacrificing safety. used for supercharging. high-temperature membrane operating at 100-1500C that sustains current densities comparable to today’s membranes and does not require significant humidification. In the longer term. Key advantages would be obtained from the development of an inexpensive. As mentioned earlier. Additionally. • Freedom from petroleum dependence 150 .
Altering our petroleum consumption patterns will require a multi-tiered approach. when you want • Freedom to obtain fuel affordably and conveniently The following figure. by preserving and sustaining America’s transportation freedoms. The government and industry research partners. 18. recognize that the steady growth of imported oil to meet our demand for petroleum products is problematic and not sustainable in the long term. The transportation sector 151 . illustrates one of the governments concerns.• Freedom from pollutant emissions • Freedom to choose the vehicle you want • Freedom to drive where you want. Fig.8: Petroleum use by vehicles in the USA They feel that this development will have tremendous national benefits. Figure 18. This is an idea base upon independence and security made available through technology. including policy and research programs. across every end use zone of our economy. which are national labs. Some of which are to ensure the nation’s transportation energy and environmental future. and industry.8. universities. No single effort limited to one economic sector can successfully change this trend.
Continue support for other technologies to dramatically reduce oil consumption and environmental impacts. Cost targets are: • Internal combustion systems that cost $30/kW. • Fuel cell systems.50 per gallon (2001 dollars). • 60% peak energy-efficient. Instead of single-vehicle goals. given time and resources. but the government feels that they can be overcome. They would like to ensure reliable systems for future fuel cell power trains with costs comparable with conventional internal combustion engine/automatic transmission systems.has a significant role to play in addressing this challenge. To meet these goals: • Cost of energy from hydrogen equivalent to gasoline at market price. and success resulting from the FreedomCAR research initiatives will help accomplish the broader national goals and objectives that are being pursued . the goal is to develop an electric drive train energy storage with 15-year life at 300 W with discharge power of 25 kW for 18 seconds at a cost of $20/kW. the idea is to develop technologies applicable across a wide range of passenger vehicles. The technological hurdles are present. the goals are: • Electric propulsion system with a 15-year life capable of delivering at least 55 kW for 18 seconds and 30 kW continuous at a system cost of $12/kW peak. To enable reliable hybrid electric vehicles that are durable and affordable. durable fuel cell power system (including hydrogen storage) that achieves a 325 W/kg power density and 220 W/L operating on pure hydrogen. and meet or exceed emissions standards. that have a peak brake engine efficiency of 45% and meet or exceed emissions standards with a cost target of $45/kW by 2010 and $30/kW in 2015. To enable this transition to a hydrogen economy it is going to be necessary to ensure widespread availability of hydrogen fuels and retain the functional characteristics of current vehicles. thus enabling the industry to interface the new technology with all vehicles. assumed to be $1. 
including a fuel reformer. have a peak brake engine efficiency of 45%. as opposed to single new vehicles. 152 . Their strategic approach is to develop technologies to enable mass production of affordable hydrogen-powered fuel cell vehicles and ensure the hydrogen infrastructure to support them. such as other hydrogen based goods and services.
affordability. have a peak brake engine efficiency of 45%. specific energy of 2000 Wkg . and meet or exceed emissions standards.• Hydrogen storage systems demonstrating an available capacity of 6 wt% hydro−h −h gen. and increased use of recyclable/renewable materials. • Internal combustion systems operating on hydrogen that meet cost targets of $45/kW by 2010 and $30/kW in 2015. and energy density of 1100 WL at a cost of $5/(kW − h). 153 . the goal is material and manufacturing technologies for high-volume production vehicles that enable and support the simultaneous attainment of 50% reduction in the weight of vehicle structure and subsystems. The industry and the supporting groups are pursuing all of these goals/standards in order to help get FCV on the road as soon as possible. To improve the manufacturing base. Once these technical and regulatory barriers are achieved it will be possible to begin the development of full scale FCV integration into our society.
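The hydrogen storage targets can be cross-checked with simple arithmetic. Assuming the specific-energy target refers to the chemical energy of the stored hydrogen at its lower heating value, roughly 33.3 kW·h per kg of H2 (an assumption, not stated in the text), the 6 wt% and 2000 W·h/kg targets turn out to be two statements of the same requirement:

```python
# Consistency check for the hydrogen-storage targets above.
# Assumption (not from the text): "specific energy" counts the chemical
# energy of the stored hydrogen at its lower heating value (LHV).

LHV_H2_WH_PER_KG = 33_300       # ~33.3 kWh of energy per kg of hydrogen (LHV)

weight_fraction = 0.06          # 6 wt% available hydrogen capacity
target_specific_energy = 2000   # target, in Wh per kg of total system mass

# If 6% of the system mass is usable hydrogen, each kg of system stores:
implied_specific_energy = weight_fraction * LHV_H2_WH_PER_KG

print(f"Implied specific energy: {implied_specific_energy:.0f} Wh/kg")  # ~1998 Wh/kg
```

The two targets agree to within about 1%, a useful sanity check when comparing candidate storage technologies against the table of goals.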
Adonis is a small asteroid whose orbit crosses the orbit of Earth. NASA JPL has classified Adonis as a "Potentially Hazardous Asteroid" due to its predicted close pass(es) with Earth.
Adonis orbits the sun every 937 days (2.57 years), coming as close as 0.44 AU and reaching as far as 3.31 AU from the sun. Its orbit is highly elliptical. Adonis is about 0.6 kilometers in diameter, making it larger than roughly 97% of known asteroids but still small compared to the largest ones, comparable in size to the Golden Gate Bridge.
Adonis's orbit comes within 0.01 AU of Earth's orbit at its closest point. This small minimum distance between the two orbits is one reason its close approaches to Earth are tracked.
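The quoted orbital figures are internally consistent. Taking the semi-major axis as the mean of the closest and farthest solar distances, Kepler's third law (P² = a³ with P in years and a in AU) reproduces the 937-day period; the sketch below assumes only a 365.25-day year:

```python
# Check the quoted orbital period of Adonis against Kepler's third law.
# For orbits around the Sun: P[years]**2 = a[AU]**3.

perihelion_au = 0.44   # closest distance to the Sun (from the text)
aphelion_au = 3.31     # farthest distance to the Sun (from the text)

a = (perihelion_au + aphelion_au) / 2   # semi-major axis in AU
period_years = a ** 1.5                 # Kepler's third law
period_days = period_years * 365.25

print(f"a = {a:.3f} AU, period = {period_days:.0f} days")  # ~938 days vs. quoted 937
```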
Adonis has 10 close approaches predicted in the coming decades:
| Date | Distance from Earth (km) | Velocity (km/s) |
| --- | --- | --- |
| Feb. 7, 2036 | 5,340,468 | 23.771 |
| July 16, 2043 | 22,323,674 | 20.407 |
| July 1, 2061 | 18,584,531 | 28.391 |
| Feb. 16, 2077 | 26,335,795 | 30.117 |
| Feb. 1, 2095 | 22,973,141 | 20.170 |
| July 10, 2102 | 2,920,728 | 24.576 |
| Feb. 3, 2136 | 20,684,354 | 20.641 |
| July 10, 2143 | 3,317,590 | 24.359 |
| Feb. 9, 2177 | 3,364,291 | 24.104 |
| July 14, 2184 | 13,498,351 | 22.196 |
Adonis's orbit is determined by observations dating back to Feb. 21, 1936. It was last officially observed on June 9, 2020. The IAU Minor Planet Center records 115 observations used to determine its orbit.
We’re big. They’re tiny. They’re just learning our rules and expectations for appropriate behavior. They have a developmental need to express their will, and they have very little (if any) impulse control. With these complicated, powerful dynamics in play, why would we take our toddler’s hitting, biting, resistance or refusal to cooperate personally?
We get triggered and become angry, frustrated or scared. We might lose perspective and find ourselves stooping to our child’s level, going at it head-to-head with a tot who’s only a fraction of our size. We might be compelled to lash out, even hit or bite back(!), or attempt to regain control by sternly laying down the law, shaming or punishing our toddler in the name of “teaching a lesson”.
Or, perhaps we go the opposite direction. Fearful of confronting our child’s rage or our own, we back down. We give in to our child, hesitate, waffle or tippy-toe around the behavior. Perhaps we plead or cry so that our child feels sorry for us.
While these responses might seem effective in dealing with undesirable behavior in the moment, they end up making matters worse. Our intensity (which is always very apparent to children — so don’t ever think they don’t feel it) can turn a momentary experiment or impulsive act into a chronic behavioral issue. Children sense it when the leaders they count on have lost control, and that makes them feel less safe and too powerful. Punishments create fear, resentment, distrust. Alternatively, our reluctance to set a definitive boundary also causes discomfort, insecurity and more testing. Our vulnerability creates guilt.
Ultimately, these responses fail because they don’t address the need all children are expressing through their misbehavior: Help. When young children act out they need our help. It’s as simple as that. But how do we help them?
Perspective and attitude
If we can perceive our child's unpleasant actions as her being temporarily "out of her mind" (a young one's request for help), our role and our response become much clearer. As experienced, mature adults, this means rising above the fray (rather than getting caught in it) and providing assistance.
When we remind ourselves repeatedly that challenging behavior is a little lost child’s call for help, we begin to see the ridiculousness of taking this behavior personally. We recognize the absurdity of reactions like, “How could you treat me like this after all I do for you?! Why don’t you listen?” Perspective gives us the patience, confidence and the calm demeanor we need to be able to help.
Then we communicate and follow through. “You’re having a hard time not hitting, so I will help by holding your hands”. This is our thought process and might also be the words we say to our child. Or we might say, “I won’t let you hit. You’re so upset that I had to put my phone away when you wanted to play with it. I know.”
“I won’t let you bite me. That hurts. I’m going to have to put you down and get something you can bite safely.”
“Can you come indoors yourself or do you need my help? Looks like you need help, so I’m going to pick you up.”
We help our child and then allow for emotional explosions in response, because children need help with those, too. The assistance they need is an anchor — our patient presence and empathy while they safely ride this wave out. When the wave passes, they need us to acknowledge their feelings, forgive, understand and let go so they can, too. After all, how can we hold a grudge against a person whose impulses are bigger than they are?
This idea was brought home for me recently when walking down our hall at 10:45 PM to remind my teenager it was bedtime. I was startled to see my ten year old son (who had gone to sleep at 9 PM) striding towards me. First I thought he might be headed to the bathroom, but then he said something I couldn’t make out, “Mumble, mumble… watch TV.”
“What?” It then hit me that he was sleepwalking. For as long as any of us remember, he’s had a nightly ritual of talking or shouting in his sleep, much to the amusement of his sisters who sleep in adjacent rooms. He often sits up in bed while spouting a phrase or two, but only occasionally does he embark on a nighttime stroll.
“Give me watch TV,” he said again. This time I understood… sort of. He looked bewildered and deadpanned, “That makes no sense”. Then he lurched toward the stairs.
“Ohhhh, no…you’re going back to bed.” He fought me while I tried to hold him off. We tussled. He’s a strong, muscular little guy, a hardy opponent even in his sleep, but I finally managed to wrestle him back to his room and onto his bed where he was immediately calm and quiet again.
So, what does a ten year old sleepwalker have to do with a toddler acting out? Toddlers are very conscious and aware, but their behavior isn’t. They have about as much self-control as my boy does when he’s sleepwalking, and like my son, they need us to handle their escapades confidently without getting angry.
A mom I’ve had the pleasure of consulting with over the phone recently shared her appreciation for a word I’ve used: ‘unruffled’. She thinks “unruffled” whenever her toddler’s behavior challenges her. Since she had a new baby and her toddler needed to adjust to this tremendous change in his life, she needed to imagine unruffled a lot, but she doesn’t so much anymore, because her unruffled responses have helped her boy pass through this difficult stage quickly.
We can’t fake unruffled. Like good actors, parents have to believe. And we acquire this belief when we maintain a realistic perspective and adopt the attitude that we’re big and on top of things, our child is little, and discipline equals “help”.
Another mom’s note made me smile:
My 16-month-old son Jamie has taken to hitting – hitting me, specifically. He seems to be acting out of pure joy. Meaning – he isn’t hungry, tired or frustrated. On the contrary – he seems thrilled by the exclamation “OW!” and wants to provoke it. He cheerfully chirps “OW! OW! OW!” adorably as he tries to punch me in the face, smiling and laughing the entire time. So far I have tried many times: I’m not going to let you do that, and No, and gently stopped his hands. Also I blank my face, so I’m not smiling back, but I’m not getting emotional or upset.
He probably hasn’t developed empathy yet, but he is still repeatedly hitting me and now trying it on our 19-year-old cat.
Plus, he got me in the eye last week – it’s challenging to not be upset when it hurts. Any advice?
Like many perceptive toddlers, Jamie is as acutely aware of a subpar performance as a mini Roger Ebert. He's not buying the "blank face". He heard "OW!" once and that's all he needed. He knows it's still in there somewhere. He's getting to his mom and it's exciting.
Jennifer has to believe this is not a big deal at all. She has to think "booooring!" while she gently but firmly stops Jamie from hitting her. She has to rise way above this being a serious problem and perceive her little guy's behavior as totally nonthreatening for it to cease. Right now, she's getting caught up in the drama a bit (which is admittedly hard not to do with such a captivating toddler).
The beauty of an unruffled, helpful attitude is that it allows our child to relax knowing her parents ‘have her back’. She knows we won’t get too flustered by her mischief. She’s assured she has anchors — patient teachers capable of handling anything she tosses their way with relative ease.
With the knowledge that their parents will always help them handle the behaviors they can’t handle themselves, children feel safe to struggle, make mistakes, grow and learn with confidence.
“Toddlers test limits to find out about themselves and other people. By stopping children in a firm, but respectful way when they push our limits, we’re helping them to figure out their world and to feel safe.” -Irene Van der Zande, with Santa Cruz Infant Toddler Staff, 1, 2, 3…The Toddler Years.
I offer a complete guide to understanding and addressing common behavior issues in my new book:
For specifics about biting, I also recommend Toddler Bites by Lisa Sunbury, Regarding Baby
The first two episodes have been primarily about two parks, Yellowstone & Yosemite, and men like John Muir who set the stage for them to become the first National Parks in the world. While I'm watching this, seeing the beautiful landscapes awash with large game, beautiful mountain vistas overlooking vast canyons, towering waterfalls emptying into cool valley streams, all I kept thinking was, "Damn, that looks like a great place to do some fishing."
So if you're like me, here's an overview of fishing for both (courtesy of the National Park Service):
Yosemite Fishing Regulations
Fishing regulations for Yosemite National Park follow those set by the State of California, including the requirement that people 16 or older have a valid California fishing license. The season for stream and river fishing begins on the last Saturday in April and continues through November 15.
The only exception is Frog Creek near Lake Eleanor, where fishing season does not open until June 15 to protect spawning rainbow trout. The late opening includes the first 1/2 mile of Frog Creek up to the first waterfall, including the pool below this waterfall. The late opening also extends 200 feet from the mouth of Frog Creek out onto the surface of Lake Eleanor and along its shore for a distance of 200 feet from the creek's mouth. Otherwise, all lakes and reservoirs are open to fishing year-round. Six native fish species occur in the Merced River in Yosemite National Park, from Yosemite Valley to El Portal.
Of these, only rainbow trout and Sacramento sucker occur as high in elevation as Yosemite Valley. Waterfalls created by Pleistocene glaciation blocked fish from populating the Merced River above Yosemite Valley and the Tuolumne River inside the park boundary.
Native Fish Species
- Rainbow trout
- California roach
- Sacramento pikeminnow
- Hardhead — California Species of Concern
- Sacramento sucker
- Riffle sculpin
Rainbow trout, although native to lower elevations, are non-native to waters higher in elevation than Yosemite Valley. Arctic grayling, Dolly Varden and Piute cutthroat trout are believed to be extirpated (no longer existing) in the park.
Nonnative Fish Species
- Smallmouth bass
- Arctic grayling
- Brook trout
- Dolly Varden
- Brown trout
- Lahontan cutthroat trout — Federally threatened
- Piute cutthroat trout — Federally threatened
- Golden trout
- Rainbow trout
- Rainbow-golden hybrid trout
Yellowstone Fishing Regulations
The fishing season begins the Saturday of Memorial Day weekend (usually the last weekend in May) and extends through and includes the first Sunday in November. Exceptions are noted within the Exceptions to General Regulations table within the Fishing Regulations handbook. Also note that there are areas within the park that are permanently closed to human entry and disturbance, have seasonal area and trail closures, off-trail travel and daylight hour limitations, and party size recommendations. In addition, some streams may be temporarily closed to fishing on short notice to protect fish populations in mid-summer due to low water levels and high water temperatures.
Native cutthroat trout are the most ecologically important fish of the park and the most prized, and highly regarded by visiting anglers. Several factors, mostly related to exotic species introductions, are threatening the persistence of these fish. The Yellowstone Fisheries Program strives to use best available science in addressing these threats, with a focus on direct, aggressive intervention, and welcomed assistance by visiting anglers.
Native Fish Species
- Arctic Grayling
- Westslope Cutthroat Trout
- Yellowstone Cutthroat Trout

Nonnative Fish Species
- Brook Trout
- Brown Trout
- Lake Trout
- Rainbow Trout
Jackie Robinson Week: Middle school students learn about race, influence and the physics of baseball
Seventh graders at Prairie Wind Middle School recently spent a week studying America’s favorite pastime – baseball.
In language arts, students read Jackie Robinson’s biography, “American Hero.” Many were aware that Jackie Robinson was the first black man to play in the major leagues, but most were surprised at the details behind his story and the others around him who helped in his success. Students also spent time discussing how to define the word ‘hero.’
Audrey Swanson said, "A hero is someone who is able to help others and make a difference." Jackie Robinson helped pave the way for other minorities to participate in baseball. Emily Martinson said she believes that, "A hero is not afraid to stand up for their beliefs, no matter how difficult." Students were surprised to learn that in addition to Jackie Robinson, there were others around him that showed great character. "Branch Rickey, the Dodger's manager, didn't care what other people thought," said Anna Wientjes. "Pee Wee Reese showed tremendous character by supporting Jackie."

In social studies, students took a look back at 1947 and discussed racial issues of the time, including segregation and racism. They examined the influence Jackie Robinson had beyond baseball, and his positive impact on America.

The seventh grade science class studied the physics of baseball. Through a video clip from the University of Berkeley, students learned about the forces involved when pitching a fastball. The class then headed outside to calculate the momentum of a pitch when using their whole body versus just an arm. Using velocity data, students converted their measurements from meters per second to miles per hour. They then compared their data with that of other pitchers their age and beyond. To verify results, the class headed to the gym with a radar gun. Nick Lindberg recorded the top speed, with a fastball thrown 65 miles per hour. Students also located the "sweet spot" of a bat and learned the science behind it. As they found out, it's all about finding "nodes."

Finally, in math class, students looked at statistics of current baseball players. They calculated the different averages for how many times a player got a single, double, triple or a home run. The kids were each assigned a player and then used the averages for that specific player to play a game of baseball using only the averages.

"Jackie Robinson taught me what real courage is," said Tanner Knutson.
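The meters-per-second to miles-per-hour conversion the students performed takes only a few lines; the 65 mph figure is from the article, and the conversion factor follows from 1 mile = 1609.344 m:

```python
# Convert pitch speeds between meters per second and miles per hour.
MPH_PER_MPS = 3600 / 1609.344   # 1 m/s ~ 2.2369 mph (1 mile = 1609.344 m)

def mps_to_mph(mps: float) -> float:
    """Meters per second to miles per hour."""
    return mps * MPH_PER_MPS

def mph_to_mps(mph: float) -> float:
    """Miles per hour to meters per second."""
    return mph / MPH_PER_MPS

# Nick Lindberg's 65 mph fastball, expressed in meters per second:
print(f"{mph_to_mps(65):.1f} m/s")  # ~29.1 m/s
```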
Jace Kovash added, “No matter how much racism he endured or how much hate he received, Jackie never quit. He was a hero to us all.”The 549 Foundation provided funds to help with expenses for this unit.
Objective Story Throughline: The Objective view of To Kill A Mockingbird sees the town of Maycomb with its horns locked in various attitudes over the rape trial of Tom Robinson. Due-process has taken over, however many people think this case should never see trial. As the trial comes to fruition, the people of the town argue back and forth about how the defense lawyer ought to behave and what role people should take in response to this alleged atrocity.
Main Character Throughline: The Main Character of To Kill A Mockingbird is Scout and her throughline describes her personal experiences in this story. Scout is a young tom-boy who wants things in her life to remain as simple as they’ve always been. Going to school, however, and seeing the town’s reaction to her father’s work introduces her to a new world of emotional complexity. She learns that there is much more to people than what you can see.
Obstacle Character Throughline: The Obstacle Character point of view in To Kill A Mockingbird is presented through Boo Radley, the reclusive and much talked about boy living next door to Scout. The mystique surrounding this boy, fueled by the town’s ignorance and fear, make everyone wonder what he is really like and if he’s really as crazy as they say.
Subjective Story Throughline: The Subjective Story view of To Kill A Mockingbird sees the relationship between Scout and Boo Radley. This throughline explores what it’s like for these two characters to live next door to each other and never get to know one another. It seems any friendship they might have is doomed from the start because Boo will always be locked away in his father’s house. The real problem, however, turns out to be one of Scout’s prejudice against Boo’s mysterious life. Boo has been constantly active in Scout’s life, protecting her from the background. When Scout finally realizes this she becomes a changed person who no longer judges people without first trying to stand in their shoes.
From the Dramatica Theory Book
Claiming Sacramento lacks concern for issues that affect them, voters in Northern California's Siskiyou County voted in support of leaving the state, as several counties in Colorado consider a similar effort of their own.
Voters in some Colorado counties are also considering secession. The issue is on the ballot in at least three counties.
The sentiments behind the effort dates back to before WWII and were put on hold when war broke out in the 1940s.
The movement became popular, especially in Siskiyou County, where residents have long felt that their concerns are overshadowed by more populated parts of California. It was shelved after the attack at Pearl Harbor, though its spirit lives on today.
California’s Riverside County entertained a similar initiative just two years ago. As many Americans continue to feel more and more disconnected from their political leadership, many wouldn’t be surprised to see such efforts grow.
Over 100 individuals attended the Siskiyou County vote, all but one of them being in favor of leaving the current state government structure behind.
The thirteenth-century accomplishments of Chinggis Khaan in conquering a swath of the world from modern-day Korea to southern Russia and in invading deep into Europe, and the cultural achievements of his grandson, Khublai Khan, in China are well known in world history.
During the late 12th century, a tribal chief named Temujin united the Mongol tribes. In 1206, Temujin took the title Chinggis Khan and established the Mongol Empire. His realm, the largest contiguous land empire in history and the second largest overall after the British Empire, launched numerous wars and military campaigns across Asia. When Chinggis Khan died in 1227, the Mongol Empire was divided into four kingdoms. His grandson Khublai Khan ruled one of them, comprising China and the Mongol homeland. He established his capital in modern Beijing, but after a century his dynasty was overthrown by the Ming Dynasty in 1368. Though famous for its ruthlessness toward enemies, the Mongol Empire was known to be very tolerant of the different beliefs of the societies it occupied. It is said that at the court of the Mongol khans, Buddhist, Muslim, Christian, Jewish, Confucian, and other religious leaders used to sit and exchange ideas with one another and with the local shamans and healers. After the decline of the empire, in the 17th century, Mongolia came under the rule of the Manchu Qing Dynasty. Ironically, the Manchus never conquered Mongolia by force; the Mongols themselves invited the Manchus to protect them from attacks initiated by western clans.
In 1911, Mongolia proclaimed independence from the Qing Dynasty. A brief period of Russian influence followed, through the leadership of the "Bloody" Baron Ungern and the religious leader Bogd Khan. In 1924, the Mongolian People's Republic was declared, and for the next 70 years Mongolia was a satellite state of the Soviet Union. Between 1930 and 1940, at least one third of the male population of Mongolia was slaughtered by order of the communist party in Moscow. On the other hand, the Soviet period, with its massive resources, also brought Mongolia infrastructure for transportation and communication, and civil services such as education and health.
|
<urn:uuid:c6565016-a06f-4078-825a-82fcd5c2cc11>
|
{
"dump": "CC-MAIN-2018-17",
"url": "http://juulchinworld.mn/about_mongolia/mongolian-history.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945942.19/warc/CC-MAIN-20180423110009-20180423130009-00524.warc.gz",
"language": "en",
"language_score": 0.9647405743598938,
"token_count": 443,
"score": 3.6875,
"int_score": 4
}
|
Evelyn Sarah June 8, 2021 Worksheets
A comprehensive set of worksheets covering a variety of subjects can be used to expand your child‘s learning experience. A worksheet about shapes can be used as part of a game to find shapes around the house, counting worksheets can be used to count things you see in the grocery store and so on. Almost everything you do with your child can be turned into an opportunity to learn - and worksheets can give you the guidance you need to find those opportunities. Worksheets that include topics such as social and natural science will help to expand your child‘s horizons, teaching them about their environment and how things work, while improving their vocabulary at the same time. A worksheet about farm animals can initiate a visit to the farm area at the zoo, or to a real farm, where your child can explore and learn even more.
When you are a parent and you want to teach your kids ahead of time, before they go to school, you can use free online worksheets. There are lots of them available, and you can let your kids learn online. Through this, your kids will be ready for school. These online materials are readily downloadable and can be printed for use. The good thing about this is that you can produce as many copies as you want, until your kid learns and perfects the craft of writing. There are also teachers who use these kinds of techniques to teach in a more animated manner. The idea is to keep children interested, because without their attention it is difficult to make them absorb what you are trying to teach.
Schools use worksheets for everything from printing letters to cursive writing of words. There is also online help to show children exactly how to form a letter or word. After showing the students the way of writing, you can print the worksheets and give them practice in writing exactly the right way. Children will be interested in the activity because they had fun watching the software you showed them. Worksheets are not just for practice: teachers can also have their students do a group activity through worksheets, so students learn how to bond and work with their classmates as one team. Teachers may also turn worksheet activities into a contest; the prizes at hand will inspire and motivate students to perform well and learn their lessons.
If you home school your children, you will quickly realize how important printable homeschool worksheets can be. If you are trying to develop a curriculum for your home-schooled child, you may be able to save a lot of time and money by using free online home school worksheets. However, while they can be a helpful tool and seem like an attractive alternative to a homeschool, they do have a number of limitations. There are numerous online resources that offer online worksheets that you can download and use for your children’s homeschooling for free. They cover practically all subjects under the sun. Different homeschool worksheets are available that are suitable for all types of curriculums, and they can help enhance what you are teaching. Aside from helping you assess your child’s comprehension of a subject matter, printable home school worksheets also provide something for your child to do while you work on other things. This means that you can be free to run your home while teaching your child at the same time, because the worksheet simplifies the homeschooling job for you.
Worksheets have been used in our day to day lives. More and more people use these to help in teaching and learning a certain task. There are many kinds or worksheets often used in schools nowadays. The common worksheets used in schools are for writing letters and numbers, and connect the dots activities. These are used to teach the students under kindergarten. The letter writing involves alphabets and words. These worksheets illustrate the different strokes that must be used to create a certain letter or number. Aside from this, such worksheets can also illustrate how to draw shapes, and distinguish them from one another. Teachers use printable writing paper sheets. They let their students trace the numbers, letters, words or dots as this is the perfect way for a child to practice the controlled movements of his fingers and wrist. With continued practice or tracing, he will soon be able to write more legibly and clearly.
Are you the parent of a toddler? If you are, you may be looking to prepare your child for preschool from home. If you are, you will soon find that there are a number of different approaches that you can take. For instance, you can prepare your child for social interaction by setting up play dates with other children, you can have arts and crafts sessions, and so much more. Preschool places a relatively large focus on education; therefore, you may want to do the same. This is easy with preschool worksheets. When it comes to using preschool worksheets, you will find that you have a number of different options. For instance, you can purchase preschool workbooks for your child. Preschool workbooks are nice, as they are a relatively large collection of individual preschool worksheets. You also have the option of using printable preschool worksheets. These printable preschool worksheets can be ones that you find available online or ones that you make on your computer yourself.
|
<urn:uuid:2a825b84-ccdd-44eb-84cd-4e9c86441ea7>
|
{
"dump": "CC-MAIN-2021-39",
"url": "https://oxfordschoolofgymnastics.com/j1O3Y80f4/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055775.1/warc/CC-MAIN-20210917181500-20210917211500-00606.warc.gz",
"language": "en",
"language_score": 0.9528286457061768,
"token_count": 1252,
"score": 3.40625,
"int_score": 3
}
|
Mice with memory loss were able to regain cognitive function
Though all mouse studies should be viewed with tempered excitement, a new Yale School of Medicine study shows that scientists were able to reverse Alzheimer’s disease with a single dose of a drug compound.
The researchers gave mice with Alzheimer’s a compound called TC-2153, which inhibits a protein called STEP (STriatal-Enriched tyrosine Phosphatase) that interferes with the brain’s ability to learn and make memories. Synapses in the brain need to strengthen so that the brain can turn short-term memories into long-term memories, but STEP prevents synapses from doing so, and this can lead to neurological disorders including Alzheimer’s.
The mice with memory loss that were given the compound were able to recover their cognitive function, and the researchers say they were indistinguishable from normal control mice. The researchers, who published their recent work in the journal PLOS Biology, are now testing the compound in other animals.
It will still be a long time before a compound like this is tested in humans, but the preliminary finding is encouraging since very few experiments have actually been able to reverse the disease, which currently affects about five million Americans and is expected to grow dramatically in coming years.
|
<urn:uuid:b0d055eb-ddcb-4955-960e-fcc7198942a1>
|
{
"dump": "CC-MAIN-2015-11",
"url": "http://time.com/3086480/alzheimers-disease-has-been-reversed-in-mice/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936464123.82/warc/CC-MAIN-20150226074104-00238-ip-10-28-5-156.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9719361066818237,
"token_count": 265,
"score": 3.109375,
"int_score": 3
}
|
Tripura Medical College & Dr. B.R. Ambedkar Memorial Teaching HospitalIndia
The world is facing the dual challenge of the deadly COVID-19 pandemic and economic instability, even with its best healthcare facilities and advanced science and technology. We need to support and protect our physically and economically vulnerable populations, such as geriatric or elderly people, during this difficult time. India has nearly 120 million elderly people with various physical, mental, social, economic, and spiritual problems. The Ministry of Health has created geriatric centers and geriatric clinics in most of the states. Routine care clinics cannot handle the burden of the geriatric population and its co-morbidities. Rapid training of healthcare professionals of various disciplines in geriatric care and home nursing is now of the utmost importance. The government must provide financial support to nongovernmental organizations (NGOs) and other agencies to help the geriatric population by providing affordable healthcare.
Part of the book: Update in Geriatrics
|
<urn:uuid:8276be3f-29fb-430c-a2e1-afedaac678bd>
|
{
"dump": "CC-MAIN-2021-21",
"url": "https://www.intechopen.com/profiles/328711/anjan-datta",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992159.64/warc/CC-MAIN-20210517084550-20210517114550-00517.warc.gz",
"language": "en",
"language_score": 0.9380471110343933,
"token_count": 191,
"score": 2.625,
"int_score": 3
}
|
The widely publicized discovery of a trio of Earth-like planets earlier this year just passed a crucial fact-check. In research published September 13 in The Astrophysical Journal Letters, astronomers determined that the TRAPPIST-1 star exists alone, with no companion star — meaning that the visual interruptions in its light output are almost certainly accounted for by the family of exoplanets instead of a previously undocumented second star.
The team of scientists behind the research used Chile’s eight-meter Gemini South telescope, along with a special high-resolution camera, to determine that TRAPPIST-1 does indeed stand alone. Dr. Steve Howell, a Scientist at Large at NASA Ames Research Center and lead researcher on the paper, said the observation itself took just 30 minutes.
“That’s one of the real powers of this technique, it doesn’t take long,” Howell told Inverse. “We point the telescope at a star [and do something] called Electron Multiplying CCD — like what people have on their cell phone cameras are also CCDs, just crappier than what we use. With this extra-special multiplication part we take thousands and thousands of very short exposures.”
The instrument itself is called the Differential Speckle Survey Instrument (DSSI). At 60 milliseconds apiece, each frame it produces doesn’t look much like a star on its own — more like a speckled spider web. (“That’s why it’s called that,” Howell said. “It looks like a bunch of speckles.”) Some 100,000 images are then stitched together mathematically using Fourier methods. The effect of the super-short exposures is essentially to freeze the atmosphere in each image, making it perfectly static and removing the blurring quality of the atmosphere. When subsequently recombined, the micro-images form a picture of what the region would look like with no atmosphere: scientists get a resolution as clear as if the telescope were actually in space.
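The recombination step described above — many atmosphere-freezing exposures combined in Fourier space — can be sketched in a few lines. This is a minimal illustration of the classic speckle-interferometry idea, not the actual DSSI pipeline; the function name and use of NumPy are assumptions.

```python
import numpy as np

def mean_power_spectrum(frames):
    """Combine many short exposures in Fourier space.

    Each short frame effectively freezes the atmosphere, so averaging
    the frames' Fourier power spectra preserves fine spatial detail
    that atmospheric blurring would wash out of one long exposure.
    """
    frames = np.asarray(frames, dtype=float)
    # Power spectrum of each frame, then average over the stack.
    spectra = np.abs(np.fft.fft2(frames, axes=(-2, -1))) ** 2
    return spectra.mean(axis=0)
```

In a real pipeline the averaged spectrum is further calibrated against a reference star before the image is reconstructed; this sketch only shows why short exposures defeat atmospheric blurring.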
The three potentially habitable worlds were first announced in May following their discovery in orbit around the ultra-cool dwarf star. TRAPPIST-1 was named for the TRAPPIST project itself, which is short for the TRansiting Planets and PlanetesImals Small Telescope. The announcement received extraordinary publicity because of how near to Earth the TRAPPIST-1 star and its planets sit — just 40 light-years, far closer than other potentially habitable planets.
Still, it will take more research to determine how habitable these planets really are — and better technology for us to actually see them for ourselves.
|
<urn:uuid:2c5a3665-9276-4bf4-83bb-5e00970b652d>
|
{
"dump": "CC-MAIN-2020-50",
"url": "https://www.inverse.com/article/20875-nearby-exo-earth-family-withstands-extreme-scrutiny",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141681524.75/warc/CC-MAIN-20201201200611-20201201230611-00102.warc.gz",
"language": "en",
"language_score": 0.9429348707199097,
"token_count": 548,
"score": 3.859375,
"int_score": 4
}
|
Internet content filtering systems prevent or block users’ access to unsuitable material online. When the filtering system is turned on, users cannot open or link to sites that the filtering system recognises as unsuitable. Various methods for filtering content are available, such as keyword matching or blocking, site blocking, or protocol-blocking systems. Many filtering systems also provide facilities to filter or block applications, such as email and file-sharing applications.
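The keyword-matching method mentioned above can be sketched as a toy function. This is an illustration only, not any real filtering product; the function name and blocklist are made up.

```python
def is_blocked(page_text, blocklist):
    """Return True if any blocked keyword appears in the page text.

    Real filtering systems combine keyword matching with site lists,
    protocol blocking, and allow-lists to reduce false positives.
    """
    text = page_text.lower()
    return any(keyword.lower() in text for keyword in blocklist)
```

Naive substring matching like this is prone to over-blocking (the well-known "Scunthorpe problem"), which is one reason production filters layer several techniques.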
The main aim of content filtering is to minimise the risk of users accessing inappropriate or illegal content online. Some categories of content are clearly inappropriate for all children and young people – for example, material promoting pornography, violence, racism or criminal activity.
Found at http://schools.becta.org.uk/index.php?section=is&catcode=ss_to_es_pys_fc_03
|
<urn:uuid:7a9d8a20-313b-4b76-ac81-e44c4e89857a>
|
{
"dump": "CC-MAIN-2017-22",
"url": "http://eict350filtering.blogspot.com/2010/01/filtering-internet-in-schools.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463613796.37/warc/CC-MAIN-20170530051312-20170530071312-00385.warc.gz",
"language": "en",
"language_score": 0.8575807213783264,
"token_count": 181,
"score": 3.140625,
"int_score": 3
}
|
Confusion spread, tempers flared and disaster loomed at two reactors of the Fukushima No. 1 nuclear plant, nearly two weeks after the crisis started in March 2011, footage of the plant operator’s teleconferences showed.
At issue were whether to vent the No. 1 reactor after its internal pressure neared the limit and how to keep the temperature in the No. 5 reactor under 100 degrees.
Tokyo Electric Power Co. on Jan. 23 released to reporters 312 hours of videos of teleconferences between March 23 and 30 and April 6 and 12, 2011.
At 11:20 a.m. on March 23, 2011, Masao Yoshida, then chief of the Fukushima No. 1 plant, asked the Tokyo head office to confirm procedures for venting a containment vessel.
“It will be an extremely major issue, and we need to coordinate with the head office,” Yoshida said.
At that time, pressure at the No. 1 reactor was climbing to the maximum level the reactor was designed to withstand. The pressure did not fall until the night of March 24.
At 8:58 p.m. on March 23, another serious problem was revealed when a report said temperature could rise again in the No. 5 reactor. The temperature had fallen below 100 degrees three days earlier.
A cooling system stopped working when its power was being switched from a temporary-use diesel generator to a permanent outside source.
“Something may be wrong with a motor or a power source of a pump,” an official said. “We are trying to determine the cause.”
If the temperature rises above 100 degrees, the pressure and water levels in the reactor must be readjusted.
Yoshida was furious because he had not received the report immediately.
“This is a very, very important issue,” Yoshida said. “If there is something abnormal, tell us without delay. It is the most basic of basic actions.”
At a meeting in the morning of March 24, another report said the No. 5 reactor would be repaired by noon. But the repairs were not completed until past 4 p.m.
“The pump rotated at 4:14 p.m.,” an official said. “Water temperature remains low at 99 degrees.”
The videos also showed that TEPCO officials failed to take effective measures for two weeks after receiving a report that highly radioactive water could be flowing into an ordinary drain at the plant.
Plant officials reported to the Tokyo head office on March 25 that water was apparently flowing out of the No. 2 reactor building via a hatch for large equipment.
Officials detected radiation levels of 40 millisieverts per hour, four times higher than surrounding areas.
But traces of water were again found at the same location 13 days later. They appeared to show that water went through the hatch and fell into the drain.
The water had almost all evaporated, but radiation levels were 50 millisieverts per hour.
A plant official said the space under the hatch’s door would be sealed with concrete or through other measures the following day.
In the videos released on Jan. 23, TEPCO beeped out audio in 1,133 cases and blurred images in 347 cases.
The videos released by TEPCO so far cover the initial one month after the Great East Japan Earthquake and tsunami crippled the nuclear plant on March 11, 2011.
TEPCO said it is considering whether to release videos for the following period.
(This article was compiled from reports by Takashi Sugimoto and Toshihiro Okuyama.)
|
<urn:uuid:8cfbb0db-b6a0-4758-bf9b-decc237f9f83>
|
{
"dump": "CC-MAIN-2014-35",
"url": "http://ajw.asahi.com/article/0311disaster/fukushima/AJ201301240077",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835699.86/warc/CC-MAIN-20140820021355-00311-ip-10-180-136-8.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9697012901306152,
"token_count": 763,
"score": 2.625,
"int_score": 3
}
|
What can hunt prey and move, but is not an animal?
Some protists, like these Paramecium, act much like animals. Notice the tiny hair-like cilia that help them move. The food vacuoles, where they digest their prey, are colored in orange.
Animal-like protists are called protozoa. Protozoa are single-celled eukaryotes that share some traits with animals. Like animals, they can move, and they are heterotrophs. That means they eat things outside of themselves instead of producing their own food.
Animal-like protists are very small, measuring only about 0.01–0.5 mm. Animal-like protists include the flagellates, the ciliates, and the sporozoans.
Different Kinds of Animal-like Protists
There are many different types of animal-like protists. They are different because they move in different ways.
- Flagellates have long flagella, or tails. Flagella rotate in a propeller-like fashion, pushing the protist through its environment (Figure below). An example of a flagellate is Trypanosoma, which causes African sleeping sickness.
- Other protists have what are called transient pseudopodia, which are like temporary feet. The cell surface extends out to form feet-like structures that propel the cell forward. An example of a protist with pseudopodia is the amoeba.
- The ciliates are protists that move by using cilia. Cilia are thin, very small tail-like projections that extend outward from the cell body. Cilia beat back and forth, moving the protist along. Paramecium has cilia that propel it.
- The sporozoans are protists that produce spores, such as Toxoplasma. These protists do not move at all as adults. The spores develop into new protists.
These flagellates all cause diseases in humans.
A video of the animal-like amoeba can be viewed at: http://commons.wikimedia.org/wiki/File:Amoeba_engulfing_diatom.ogg.
cilia: Very small tail-like projections that beat back and forth to allow movement.
ciliate: Type of protozoa, such as Paramecium, that moves with cilia.
flagellate: Type of protozoa, such as Giardia, that moves with flagella.
heterotroph: Organism that must eat food to obtain energy.
protozoa: Single-celled eukaryotes that share many traits with animals.
pseudopodia: Temporary feet where the cell surface extends out.
sporozoa (singular, sporozoan): Type of protozoa that cannot move as adults.
- Protozoa are single-celled eukaryotes that share some traits with animals.
- Protozoa can move by flagella, cilia, or pseudopodia, or they may not move at all.
Use the resource below to answer the questions that follow.
- What are the characteristics of Sarcomastigophora? Where can they be found?
- What are the characteristics of Sporozoa? Where can they be found?
- What are the characteristics of Cnidospora? Where can they be found?
- What are the characteristics of Ciliophora? Where can they be found?
- What features describe the protozoa?
- How can animal-like protists move?
|
<urn:uuid:4d45148f-5ac1-49d5-aad2-824914e9dd47>
|
{
"dump": "CC-MAIN-2016-30",
"url": "http://www.ck12.org/book/CK-12-Life-Science-Concepts-For-Middle-School/r15/section/6.3/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257829972.49/warc/CC-MAIN-20160723071029-00214-ip-10-185-27-174.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9068753123283386,
"token_count": 763,
"score": 3.5625,
"int_score": 4
}
|
Exploring the Healthcare System in Taiwan
In this article, you will gain valuable insights into the healthcare system in Taiwan, known for its exceptional quality and accessibility. With comprehensive coverage, advanced medical technology, and an emphasis on preventive care, Taiwan has established itself as a leading example of healthcare provision globally. We will explore the key features of the Taiwanese healthcare system, including its national health insurance program, efficient electronic medical records, and efforts to promote public health. By delving into the intricacies of this well-regarded system, you will come away with a deeper understanding of Taiwan’s commitment to the health and well-being of its citizens.
Overview of the Taiwan Healthcare System
Taiwan’s healthcare system is known for its excellence in providing comprehensive and affordable healthcare services to its citizens. With a rich history and a strong emphasis on accessibility and quality, the Taiwan Healthcare System has become a benchmark for many countries worldwide.
Brief history of the healthcare system
The foundation of Taiwan’s healthcare system can be traced back to the establishment of the Bureau of Public Health in 1920 during the Japanese colonial era. After Taiwan’s return to Chinese sovereignty in 1945, the government invested heavily in the healthcare sector, resulting in significant improvements in healthcare infrastructure and services. In 1995, Taiwan introduced its landmark National Health Insurance (NHI) program, which revolutionized the accessibility and affordability of healthcare for its citizens.
Key features and principles of the healthcare system
The Taiwan Healthcare System is built on key features and principles that set it apart from other systems around the world. Universal coverage is one of the cornerstones, ensuring that every citizen, regardless of their socioeconomic status, has access to healthcare services. The system also emphasizes preventive care, disease management, and health education to promote a healthy population and reduce the burden on the healthcare system. Furthermore, the healthcare system is based on social solidarity, with individuals contributing to the National Health Insurance (NHI) program based on their income level.
Healthcare Infrastructure
Taiwan’s healthcare infrastructure consists of a well-developed network of public and private healthcare facilities, pharmacies, and a highly skilled and professional healthcare workforce.
Public healthcare facilities
Public healthcare facilities in Taiwan include community clinics, district hospitals, and regional hospitals. These facilities are strategically located across the country, ensuring that healthcare services are easily accessible to all citizens. Public healthcare facilities offer a wide range of services, including primary care, specialist care, emergency care, and rehabilitation.
Private healthcare facilities
In addition to public healthcare facilities, Taiwan has a robust private healthcare sector. Private hospitals and medical centers in Taiwan are known for their state-of-the-art facilities and advanced medical technologies. These facilities offer a wide range of specialized services, including complex surgeries, high-level diagnostics, and specialized treatments not available in public healthcare facilities.
Pharmacies and medication distribution
Pharmacies play a vital role in Taiwan’s healthcare system, ensuring the availability and distribution of medications to the population. Pharmacies are located throughout the country, making it convenient for individuals to access necessary medications. The distribution of prescription medications is strictly regulated to ensure patient safety and prevent abuse or misuse.
Healthcare workforce
Taiwan boasts a highly skilled and trained healthcare workforce, consisting of doctors, nurses, pharmacists, and other healthcare professionals. Healthcare professionals in Taiwan undergo rigorous education and training programs to ensure their competence and proficiency in delivering high-quality care. The healthcare system also emphasizes continuous professional development to keep up with advancements in medical knowledge and technology.
The National Health Insurance (NHI) program is the backbone of Taiwan’s healthcare system, providing comprehensive coverage and benefits to all citizens and legal residents.
National Health Insurance (NHI)
The National Health Insurance (NHI) program in Taiwan is a mandatory and single-payer system that covers a broad range of healthcare services, including hospitalization, outpatient care, prescription medications, preventive services, and rehabilitation. The NHI program is financed through a combination of government contributions, employer contributions, and individual premiums based on income levels.
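The income-proportional financing described above can be illustrated with a toy calculation. The rate and the employer/government shares below are invented for illustration and are not actual NHI figures.

```python
def monthly_premium(monthly_income, rate=0.05,
                    employer_share=0.6, government_share=0.1):
    """Illustrative income-based premium split (hypothetical rates).

    The total premium is a fixed percentage of income, divided among
    the individual, the employer, and the government.
    """
    total = monthly_income * rate
    employer = total * employer_share
    government = total * government_share
    individual = total - employer - government
    return {"individual": round(individual, 2),
            "employer": round(employer, 2),
            "government": round(government, 2)}
```

The point of the sketch is only that contributions scale with income and are shared among several parties, which is how the program distributes the cost burden equitably.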
Coverage and benefits
The NHI program offers extensive coverage and benefits to its beneficiaries, ensuring that they have access to essential healthcare services without financial burden. The program covers hospital stays, surgeries, physician visits, laboratory tests, imaging exams, prescribed medications, and preventive services, among others. The coverage is comprehensive and designed to meet the healthcare needs of the population.
Eligibility and enrollment process
All citizens and legal residents of Taiwan are eligible for the National Health Insurance (NHI) program. Enrollment into the program is automatic for citizens, while legal residents need to apply for enrollment within the specified timeframe. The enrollment process is straightforward, requiring individuals to provide necessary identification documents and complete the registration process. Once enrolled, individuals receive a National Health Insurance card that allows them to access healthcare services.
Primary Care Services
Primary care services form the foundation of Taiwan’s healthcare system, focusing on preventive care, community clinics, and family medicine.
Community clinics
Community clinics are the first point of contact for individuals seeking primary care services. These clinics are located in local communities and offer a wide range of primary care services, including routine check-ups, vaccinations, health screenings, and management of chronic conditions. Community clinics play a crucial role in promoting preventive care, early detection of diseases, and health education.
Family medicine practitioners
Family medicine practitioners play a vital role in providing primary care services, as they serve as the primary healthcare providers for individuals and families. These practitioners provide comprehensive and continuous care, managing both acute and chronic conditions, and addressing the overall health and well-being of their patients. Family medicine emphasizes a holistic approach to healthcare, focusing on preventive care, health promotion, and disease prevention.
Preventive care and health education
Preventive care and health education are core components of Taiwan’s primary care services. Public health campaigns are conducted regularly to raise awareness about various health issues, encourage healthy lifestyle choices, and promote disease prevention. Health education programs are implemented in schools, workplaces, and community settings to equip individuals with the knowledge and skills to make informed decisions about their health.
Specialized Care Services
Taiwan’s healthcare system is equipped with a comprehensive network of specialized care facilities, catering to the diverse healthcare needs of the population.
Specialist hospitals
Specialist hospitals in Taiwan focus on providing specialized care and advanced medical treatments for specific medical conditions. These hospitals are staffed with highly skilled specialists, surgeons, and medical teams who have extensive expertise in their respective fields. Specialist hospitals often collaborate with other healthcare facilities to ensure seamless and efficient patient care.
Regional hospitals
Regional hospitals serve as secondary care facilities, offering a wide range of specialized services and treatments. These hospitals have specialized departments and units for various medical specialties, such as cardiology, orthopedics, oncology, and neurology. Regional hospitals play a crucial role in providing specialized care to individuals who require advanced treatments or procedures.
Medical centers
Medical centers in Taiwan are renowned for their expertise in delivering cutting-edge medical care and innovative treatments. These centers are typically affiliated with academic institutions and research facilities, allowing them to stay at the forefront of medical advancements. Medical centers offer a wide range of specialized services and house advanced medical technologies, making them a go-to choice for complex and high-risk cases.
Academic hospitals in Taiwan serve as training grounds for healthcare professionals, providing them with hands-on clinical experience and exposure. These hospitals are affiliated with medical schools and research institutions, fostering a culture of innovation, research, and education. Academic hospitals often collaborate with other healthcare facilities to ensure the dissemination of knowledge and the continuous improvement of healthcare practices.
Affordability of Healthcare Services
The Taiwan Healthcare System takes great strides in ensuring that healthcare services remain affordable, minimizing financial barriers for individuals and families.
Cost-sharing mechanisms
The National Health Insurance (NHI) program in Taiwan operates on cost-sharing mechanisms, where individuals contribute to the program based on their income level. This ensures that the burden of healthcare costs is distributed equitably among the population. The cost-sharing mechanisms are designed to be fair and affordable, considering the financial capabilities of individuals and families.
Co-payments and fees
Under the NHI program, individuals are required to make co-payments for certain healthcare services. These co-payments are minimal and are designed to prevent overutilization of healthcare services. The fees for services, including hospital stays, physician visits, and medications, are regulated to prevent excessive charges and ensure affordability for patients.
Price control and regulation
The Taiwan Healthcare System employs price control and regulation measures to keep healthcare costs reasonable and transparent. The government plays an active role in negotiating prices with healthcare providers, pharmaceutical companies, and medical suppliers to ensure fair pricing. This helps prevent price gouging and ensures that healthcare remains affordable for all.
Accessibility of Healthcare Services
Accessibility to healthcare services is a top priority in the Taiwan Healthcare System, with measures in place to ensure that individuals can easily access the care they need.
Geographical distribution of healthcare facilities
Healthcare facilities in Taiwan are strategically distributed across the country, ensuring that individuals have access to healthcare services regardless of their location. From urban areas to remote rural regions, healthcare facilities are available within a reasonable distance, reducing travel burdens and increasing accessibility.
Accessibility in rural areas
To address healthcare disparities in rural areas, Taiwan has implemented various initiatives and programs. These include mobile medical services, telemedicine, and incentives for healthcare professionals to serve in rural communities. Mobile medical services bring essential healthcare services directly to rural areas, ensuring that individuals can receive necessary care without traveling long distances.
Transportation and medical tourism
Transportation infrastructure in Taiwan is well-developed, making it convenient for individuals to access healthcare services. The availability of public transportation options, such as buses and trains, ensures that individuals can travel to healthcare facilities easily. Taiwan also attracts visitors from abroad who seek medical treatments, establishing itself as a destination for medical tourism due to its high-quality healthcare services and affordable costs.
Quality and Safety of Care
Quality and safety of care are paramount in the Taiwan Healthcare System, with rigorous standards and measures in place to ensure the highest level of care for patients.
Accreditation and certification
Healthcare facilities in Taiwan undergo a stringent accreditation and certification process to ensure that they meet national standards of quality and safety. Various organizations, such as the Joint Commission International (JCI) and Taiwan’s Ministry of Health and Welfare, conduct regular assessments and inspections to evaluate healthcare facilities’ compliance with quality standards.
Patient safety initiatives
Patient safety is a top priority in Taiwan’s healthcare system, and multiple initiatives are in place to mitigate potential risks and errors. These initiatives include standardized protocols, patient identification measures, medication safety programs, infection control measures, and continuous quality improvement processes. The focus on patient safety helps prevent adverse events and ensures that patients receive the best care possible.
Healthcare quality improvement measures
Continuous quality improvement is ingrained in Taiwan’s healthcare system, with various initiatives and programs aimed at enhancing the quality of care. These include clinical guidelines, evidence-based practice, professional education, performance measurement and benchmarking, and feedback mechanisms for healthcare providers. The emphasis on quality improvement enables healthcare facilities and professionals to continually adapt and improve their services.
Health Information Technology
Health Information Technology (HIT) plays a vital role in Taiwan’s healthcare system, facilitating efficient and secure information exchange, enhancing patient care, and supporting healthcare decision-making processes.
Electronic health records (EHR)
Taiwan has implemented a nationwide Electronic Health Records (EHR) system, ensuring that patient health information is readily accessible and securely stored. EHRs allow healthcare providers to access and share patient information seamlessly, improving care coordination, reducing medical errors, and enhancing the overall patient experience. The EHR system also supports population health management and research initiatives.
Telemedicine and digital health
Taiwan has embraced telemedicine and digital health solutions to improve healthcare accessibility and efficiency. Telemedicine services enable individuals to consult with healthcare professionals remotely, reducing the need for physical visits, especially in rural areas. Digital health solutions, such as mobile applications and wearable devices, empower individuals to manage their health and wellness actively, promoting self-care and preventive measures.
Healthcare data privacy and security
Healthcare data privacy and security are paramount concerns in Taiwan’s healthcare system. Strict regulations and protocols are in place to safeguard patient information and prevent unauthorized access or misuse. The government and healthcare institutions continually invest in robust data encryption, authentication measures, and staff training to ensure the privacy and security of healthcare data.
Future Challenges and Innovations
While Taiwan’s healthcare system has achieved remarkable success, it continues to face challenges and is actively pursuing innovative solutions to address these issues.
Aging population and long-term care
One of the significant challenges facing Taiwan’s healthcare system is the rapid aging population and the increasing demand for long-term care services. To tackle this challenge, Taiwan is implementing measures to strengthen its long-term care infrastructure, including expanding geriatric services, promoting home care and community-based care models, and providing financial support for long-term care services.
Healthcare workforce shortages
Like many countries worldwide, Taiwan faces healthcare workforce shortages, especially in rural areas and certain medical specialties. To address this issue, Taiwan is implementing initiatives to attract and retain healthcare professionals, such as offering incentives, scholarships, and special training programs. The government also actively encourages medical students to pursue careers in primary care, rural medicine, and underserved communities.
Healthcare innovation and research
Taiwan recognizes the importance of healthcare innovation and research in advancing its healthcare system. The government supports research and development activities, encourages collaboration between academia and healthcare institutions, and provides funding for innovative projects. By fostering a culture of innovation and research, Taiwan aims to continuously improve healthcare practices, technologies, and outcomes.
In conclusion, Taiwan’s healthcare system is a model of excellence and inclusivity, providing comprehensive and affordable healthcare services to its citizens. With its emphasis on accessibility, quality, and affordability, Taiwan has achieved impressive healthcare outcomes, earning its place among the top healthcare systems globally. As the system continues to evolve and face new challenges, it remains committed to innovation, research, and continuous improvement to ensure the well-being and health of its population.
|
<urn:uuid:6b8099cf-8955-4e05-8320-62aba016f269>
|
{
"dump": "CC-MAIN-2023-50",
"url": "https://www.healthylifevitality.com/exploring-the-healthcare-system-in-taiwan/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102637.84/warc/CC-MAIN-20231210190744-20231210220744-00734.warc.gz",
"language": "en",
"language_score": 0.940929114818573,
"token_count": 2949,
"score": 2.71875,
"int_score": 3
}
|
Published August 2006, February 2011.
This article describes a particular strategy useful in solving Sudoku puzzles known as "the naked pair". The discussion is based on the Sudoku shown in Figure 1, which can also be downloaded here.
In solving any puzzle, the first thing we can do is find all the available single candidates in the rows, columns and boxes. That is, individual cells that are the only place where particular numbers can be put.
To explain this a little more, let's look at Figure 3 where we have listed all the possible numbers that can fit into each cell (taking into account what already appears in the rows, columns and boxes). We can now find four single candidates by the method of "force".
These single candidates are:
6 in cell (2,4) because there is nowhere else in this block (and column) that the 6 can go,
9 in cell (5,4) because this is the only cell in this block (and row) where the 9 can be placed,
4 in cell (6,9), the only cell in this block, row and column in which 4 can be placed and, similarly,
4 in cell (8,5).
We can now remove all the other 6's in row 2 and all the other 9's in column 4 (see Figure 4).
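The candidate-listing step shown in Figure 3 is mechanical enough to express in code. Below is a minimal sketch in Python (the grid is a 9x9 list of lists with 0 for an empty cell; all names are mine, not the article's). Note that it detects cells with exactly one candidate ("naked singles"); the article's "force" method additionally finds numbers with only one possible home in a row, column or box, which can be scanned with the same machinery.

```python
def candidates(grid, r, c):
    """Digits that can legally go in cell (r, c); empty set if the cell is filled."""
    if grid[r][c] != 0:
        return set()
    used = set(grid[r])                                   # digits already in the row
    used |= {grid[i][c] for i in range(9)}                # ... in the column
    br, bc = 3 * (r // 3), 3 * (c // 3)                   # top-left corner of the box
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return set(range(1, 10)) - used

def single_candidates(grid):
    """Cells whose candidate list has exactly one entry."""
    return {(r, c): cand.pop()
            for r in range(9) for c in range(9)
            if len(cand := candidates(grid, r, c)) == 1}
```

Running single_candidates repeatedly and filling in each result is enough to solve the very easiest puzzles on its own.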
After finding all the available single candidates, we can start tackling all the naked pairs.
Looking at Figure 4, the cells (4,1) and (5,1) in column 1 have the same two candidates, 2 and 6, forming a naked pair. This means that these two cells must be the only places for 2 and 6 in that column and box. As a result, the options 2 and 6 can be removed from the candidates of the other cells in the same column and the same box: 2 can be removed from cell (1,1), and 2 and 6 can be removed from cell (8,1). As cells (4,1) and (5,1) belong to box 4, the candidates 2 and 6 can also be removed from cells (5,2) and (5,3). Likewise, the candidate 2 can be removed from cell (6,2).
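The elimination just described can be sketched in code for a single unit (a row, column or box). Here cands maps each cell to its current candidate set; this is my own formulation, not code from the article:

```python
from collections import defaultdict

def naked_pair_eliminations(unit_cells, cands):
    """(cell, digit) removals implied by naked pairs in one unit: when exactly
    two cells share the same two candidates, those digits belong to them alone."""
    by_pair = defaultdict(list)
    for cell in unit_cells:
        if len(cands[cell]) == 2:
            by_pair[frozenset(cands[cell])].append(cell)
    elims = []
    for pair, cells in by_pair.items():
        if len(cells) == 2:                    # exactly two such cells: a naked pair
            for cell in unit_cells:
                if cell not in cells:
                    elims.extend((cell, d) for d in sorted(pair & cands[cell]))
    return elims
```

Applied to column 1 of Figure 4, the pair {2, 6} in cells (4,1) and (5,1) yields the removals described above; the same call handles the box by passing the box's nine cells as the unit.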
The naked pair with the candidate numbers 2 and 6 was easy to spot. However, a naked pair can often be found hiding as a "hidden pair" among other redundant candidate numbers. There is one such example in Figure 4: the numbers 1 and 7 appear in box 7 and box 9 and in both of the last two rows. This means that no empty cell in the last three rows, other than cells (7,4) and (7,5), can contain the naked pair of 1 and 7 as a solution.
When we are in a hurry, especially during a competition, we tend to pencil in redundant options. For example in Figure 3, the two cells (7,4) and (7,5) contain 2, 3 and 9 as well as 1 and 7 which, as shown, are the only two candidate numbers. This results in a change from 1 and 7 being a naked pair to a hidden pair. Hence it is a good practice to make a note, for example by circling the redundant
clue numbers (Figure 4). The same rule can be applied to instances of more than two cells, e.g. "naked triplets" and "naked quads".
In row 6, the only position possible for a 2 is (6,4). See the paragraph above Figure 4 if you cannot see why the 2 in (6,2) cannot be used. This means that the three cells (6,2), (6,6) and (6,8) in row 6 form a naked triplet with the candidate numbers 5, 7 and 8.
The three cells (7,7), (7,9) and (9,9) in box 9 form another naked triplet with the candidate numbers 2, 3 and 9. Hence the redundant options 2 and 3 can be removed from cell (9,7). Similarly, the redundant options 3 and 9 can be removed from cell (8,8). As a result, the cells (9,7) and (8,8) form a new naked pair with the candidate numbers 5 and 6.
Finally, the three cells (1,8), (2,8) and (3,9) in box 3 form a naked triplet with the candidate numbers 1, 3 and 9. This means the redundant option 3 can be removed from cells (1,7) and (2,7) forming a naked pair with the candidate numbers 4 and 8.
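Naked triplets and quads obey the same principle: k cells whose candidates together cover exactly k digits reserve those digits, so they can be removed from every other cell in the unit. A generalized sketch (again my own naming; k=2 reproduces the naked-pair rule):

```python
from itertools import combinations

def naked_subset_eliminations(unit_cells, cands, k):
    """(cell, digit) removals from naked subsets of size k in one unit.
    Cells in the subset need not each hold all k digits, only a subset."""
    empties = [c for c in unit_cells if cands[c]]
    elims = []
    for group in combinations(empties, k):
        digits = set().union(*(cands[c] for c in group))
        if len(digits) == k:                   # k cells covering k digits: naked subset
            for cell in empties:
                if cell not in group:
                    elims.extend((cell, d) for d in sorted(digits & cands[cell]))
    return elims
```

With candidate sets matching the box 9 example, k=3 recovers the removals of 2 and 3 from (9,7) and of 3 and 9 from (8,8).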
A puzzle consisting of only single candidates and naked pairs should be classified in the easy category. After all the redundant candidates in the empty cells are removed by the naked-pair technique, new single candidates begin to appear in the puzzle.
The rest of the puzzle can be easily solved by basic techniques. I leave the rest of the solution to the readers.
A second sudoku article can be found here.
|
<urn:uuid:fc855c9b-26a8-4db1-bed3-f55d35797237>
|
{
"dump": "CC-MAIN-2017-17",
"url": "http://nrich.maths.org/5007",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122886.86/warc/CC-MAIN-20170423031202-00381-ip-10-145-167-34.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9180900454521179,
"token_count": 1138,
"score": 2.59375,
"int_score": 3
}
|
If you’re looking for books for your daughter try a few of these books about inspiring women out. They’re educational and sure to inspire your little one.
By Emily Arnold McCully
Margaret E. Knight has been called "the most famous 19th-century woman inventor." In this book we learn stories of invention and about a woman who stood up for herself and learned from her mistakes. Knight invented the machine that makes flat-bottomed paper bags that stand upright, and she was among the first women to be awarded a United States patent.
By David A. Adler
When the American Civil War began, Tubman worked as a cook and nurse for the Union Army at first, and then as an armed scout and spy. This book highlights Harriet Tubman and her involvement in the Underground Railroad as well as her incredible story of strength and leadership. This book is truly about a real American hero.
By Nikki Giovanni
This is not a Rosa Parks biography, but it is most definitely a historical account of one woman who changed a nation. We all know the story of Rosa Parks, but no matter how well you know the facts, Rosa Parks was not your typical heroine; she was a seamstress, a person just like everyone else. That is imperative to the message that any person, big or small, can stand up for what is right, for what they believe in, and make big changes.
By David A. Adler
This book successfully paints a picture of early 20th-century North America and how women were treated. Amelia's whole life is covered, and the book even touches on the conspiracy theories about her death. Amelia's independent spirit comes through in the quotes that the author shares. There's an extra interesting layer as well: Earhart's mother was actually the first woman to summit Pikes Peak, which underscores the importance of parental role models.
|
<urn:uuid:635345e0-3334-456c-a5e1-30d75d005056>
|
{
"dump": "CC-MAIN-2022-40",
"url": "https://itsysparks.com/books-about-inspiring-women/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337490.6/warc/CC-MAIN-20221004085909-20221004115909-00623.warc.gz",
"language": "en",
"language_score": 0.9733515381813049,
"token_count": 395,
"score": 3.015625,
"int_score": 3
}
|
Now that we’ve seen the importance of including fruits and vegetables in our diet, how can we make sure that our family will eat them? By playing a game called trick or treat, with the emphasis on the word trick. Vegetables can be a treat if they are part of a favorite meal, for example: tacos stuffed with veggies, pizza with vegetable toppings, hamburgers with sliced tomatoes and lettuce, raw vegetable sticks with a favorite dip, cauliflower with cheese sauce, and so on. But we may likely rely heavily on the trick aspect of introducing disliked vegetables.
If you take the time to learn what healthy foods your children do love, it will make your job easier. Print the Favorite Healthy Foods and Meals template in the Schedules and Templates section of this book and assign a sheet to each child and/or husband. Write their favorite healthy foods on their assigned sheets. You may be surprised to learn that they actually enjoy quite a few healthy foods. Once you’ve learned what their favorite foods are, you can serve these foods openly. The next step will be to secretly add disliked vegetables into their diet. Here are ways to trick your family into eating nutritious food.
~ Most children will eat vegetables if they are a part of homemade soup.
~ Run cooked vegetables through a food processor and add to hamburger patties, meatballs or meatloaf.
~ Finely grate zucchini or carrots and add to pancake batter.
~ Add finely chopped cooked vegetables to canned or packaged soup.
~ Add freshly juiced carrot juice to canned vegetable or tomato juice.
~ Add grated zucchini to square or muffin mixes.
~ Puree vegetables and add to chili or spaghetti sauce.
~ Add grated carrots to tuna or chicken salad.
~ Hide veggies in casseroles and main dishes.
~ Mix fat-free sour cream into a favorite salad dressing.
~ Serve raw vegetables with a favorite dip.
~ Mix regular peanut butter with freshly ground peanuts.
~ Use whole grain bread for grilled cheese sandwiches ~ the toasting will hide the color of the bread.
~ Go from white bread to 60% whole wheat for one month, then introduce whole-grain bread. You can make a sandwich using one slice of the 60% bread and one slice of the whole-grain bread. Serve with the lighter bread slice facing up.
~ Most children will eat a meal that they helped to prepare.
~ Let them make cookies with you. Use whole wheat and carob chips and they won’t know the difference, especially if they are the ones making the cookies. There aren’t too many children who will not eat their own baking.
~ You can create a desire to eat healthier treats by designating a new healthy treat as, mommy’s treat. You can say something like, "these are mommy’s very special yummy cookies, and you can’t have any, okay?" You can even place the cookies in a fancy cookie jar to increase the appeal. Let a couple days go by before ‘reluctantly giving in’ to their requests.
~ Sneak some whole-grain cookies into a bag of favorite mixed cookies, and eventually replace unhealthy cookies with healthier cookies.
~ Use cookie cutters to make fun sandwiches with whole-grain bread.
~ Mix whole-grain noodles into regular spaghetti or macaroni and cheese dishes.
~ Mix soaked soy bits in the ground beef. Slowly increase the soy/ground beef ratio in meals over time and they won’t notice that they are eating soy bits instead of ground beef.
|
<urn:uuid:09729953-4899-469e-a190-0657e8ceaf78>
|
{
"dump": "CC-MAIN-2016-07",
"url": "http://healthrecipes.com/nutrition_in_foods.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701151789.24/warc/CC-MAIN-20160205193911-00028-ip-10-236-182-209.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9279599785804749,
"token_count": 752,
"score": 3.203125,
"int_score": 3
}
|
Color Key to Presentation of Understandable Scientific Data
February 14, 2003
Denver: The scientific establishment is drowning in data. Whether it is census data or the vast amounts of satellite and computer-generated information created every day, visual representation and the use of color can help scientists understand and extract important patterns from this deluge, according to a Penn State cartographer.
"The smarter we are at mapping data, the more likely it is that we will see relationships," says Dr. Cynthia Brewer, associate professor of geography. "We can look at complicated data using color intelligently and see these relationships and generate hypotheses," she told attendees today (Feb. 14) at the annual meeting of the American Association for the Advancement of Science in Denver.
Presenting more than one variable is sometimes difficult, but careful color and texture choices can make representations much clearer. One option is to use a color scheme for the actual data and a hatched texture to indicate that the data is uncertain.
"Sometimes, it is important to overlay something about uncertainty to prod the viewers into thinking about what the data means," says Brewer. "The color says here is the data, but the hatching says this data is suspect."
No more than three types of data can be shown using color; confusion sets in at the fourth variable. In topographic data, for example, color hues could show the direction of slopes, while saturation of the colors could indicate the steepness of slope. Lightness represents the third variable -- shape -- and is the illumination on the relief produced by an arbitrary sun above the map. With these three variables, the map takes on shape and conveys information. With only the first two, it is difficult to read.
Brewer warns that there are mistakes that can be made when choosing colors for maps. If ordered data is being represented, choosing a group of random color hues obliterates any patterns. Also, if vivid colors are used for values that are unimportant, the color pops off the page and causes the viewer to give more weight to that variable than it warrants.
"For diverging spectral color schemes, make sure the central value, often bright yellow, is actually matched with an important data range," Brewer says. "If mapping areas of drought versus flooding, use the central yellow as the normal precipitation level, moving to orange-brown in one direction and blues in the other."
Another error is designing a map without considering those who are color blind. Most color blind people are red-green color blind and schemes that contain versions of red, orange, yellow and green with the same lightness can be impossible to read. Often, simply removing the green from a spectral scheme will improve readability for the color blind. But avoiding unreadable schemes and finding ones that are pleasant and best show the data is not as difficult as it seems.
Brewer and Mark Harrower, assistant professor of geography, University of Wisconsin-Madison, developed an online web tool that provides pre-designed color palettes. ColorBrewer (www.ColorBrewer.org) lets you identify the number of classes you will have on the map and the type of data being represented (sequential, diverging or qualitative), and then choose a color scheme. Schemes are marked to indicate color-blind-friendly choices and their usefulness for projection, on a computer screen, for photocopying or for printing. The program also gives the color specifications for commercial printing, computer screens or web pages.
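Brewer's advice about pinning the neutral middle color of a diverging scheme to the "normal" data value is easy to implement. The sketch below interpolates a three-class diverging ramp; the endpoint colors are my recollection of ColorBrewer's 3-class RdYlBu palette, so verify the exact values at ColorBrewer.org before relying on them:

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB triples, 0 <= t <= 1."""
    return tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))

def diverging_color(value, vmin, vmax,
                    low=(252, 141, 89),     # orange      (#fc8d59)
                    mid=(255, 255, 191),    # pale yellow (#ffffbf)
                    high=(145, 191, 219)):  # light blue  (#91bfdb)
    """Map value in [vmin, vmax] to an RGB color on a diverging ramp,
    pinning the neutral middle color to the midpoint of the range."""
    t = (value - vmin) / (vmax - vmin)
    if t <= 0.5:
        return lerp(low, mid, t * 2)
    return lerp(mid, high, (t - 0.5) * 2)
```

For a precipitation map, choosing vmin and vmax symmetrically around the normal level puts the pale yellow exactly on "normal", with oranges for drought and blues for wet anomalies.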
For those who need to do full four-color printing, a paper soon to be published in the January issue of Cartography and GIS will provide color swatches and color-separation ink codes for a large variety of color schemes, said Brewer.
With the advent of color monitors for computers, scientists in every discipline embraced the use of color to clarify and illuminate data, but back then, monitors could only display the eight basic colors. Consequently, many scientists adopted a simple rainbow color scheme for their work.
"Now, computers can show millions of colors and researchers are still using just eight," says Brewer. "There is a whole world of subtlety and shade available now."
|
<urn:uuid:a5e83e9c-b120-4fd3-928f-c9200f384fba>
|
{
"dump": "CC-MAIN-2016-50",
"url": "http://www.psu.edu/ur/2003/mapping.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542686.84/warc/CC-MAIN-20161202170902-00164-ip-10-31-129-80.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9127145409584045,
"token_count": 840,
"score": 3.03125,
"int_score": 3
}
|
How a small active minority can change the world.
By David Robson
Future Now / BBC (5/14/19)
In 1986, millions of Filipinos took to the streets of Manila in peaceful protest and prayer in the People Power movement. The Marcos regime folded on the fourth day.
In 2003, the people of Georgia ousted Eduard Shevardnadze through the bloodless Rose Revolution, in which protestors stormed the parliament building holding the flowers in their hands.
Earlier this year, the presidents of Sudan and Algeria both announced they would step aside after decades in office, thanks to peaceful campaigns of resistance.
In each case, civil resistance by ordinary members of the public trumped the political elite to achieve radical change.
“There weren’t any campaigns that had failed after they had achieved 3.5% participation during a peak event,” says Chenoweth – a phenomenon she has called the “3.5% rule”.
There are, of course, many ethical reasons to use nonviolent strategies. But compelling research by Erica Chenoweth, a political scientist at Harvard University, confirms that civil disobedience is not only the moral choice; it is also the most powerful way of shaping world politics – by a long way.
Looking at hundreds of campaigns over the last century, Chenoweth found that nonviolent campaigns are twice as likely to achieve their goals as violent campaigns. And although the exact dynamics will depend on many factors, she has shown it takes around 3.5% of the population actively participating in the protests to ensure serious political change.
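To get a sense of the absolute numbers the 3.5% figure implies, a trivial calculation suffices. The population values below are rough, illustrative figures of my own, not data from Chenoweth's study:

```python
# Rough, illustrative populations -- not authoritative figures.
populations = {
    "Philippines (1986)": 55_000_000,
    "United Kingdom": 67_000_000,
    "United States": 331_000_000,
}

def threshold(population, share=0.035):
    """Active participants needed under the 3.5% rule."""
    return population * share

for country, pop in populations.items():
    print(f"{country}: ~{threshold(pop) / 1e6:.1f} million participants")
```

Even 3.5% amounts to millions of people in a mid-sized country, which underlines how demanding "peak participation" really is.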
Chenoweth’s influence can be seen in the recent Extinction Rebellion protests, whose founders say they have been directly inspired by her findings. So just how did she come to these conclusions?
A long tradition of change
Needless to say, Chenoweth’s research builds on the philosophies of many influential figures throughout history. The African-American abolitionist Sojourner Truth, the suffrage campaigner Susan B Anthony, the Indian independence activist Mahatma Gandhi and the US civil rights campaigner Martin Luther King have all convincingly argued for the power of peaceful protest.
Yet Chenoweth admits that when she first began her research in the mid-2000s, she was initially rather cynical of the idea that nonviolent actions could be more powerful than armed conflict in most situations. As a PhD student at the University of Colorado, she had spent years studying the factors contributing to the rise of terrorism when she was asked to attend an academic workshop organised by the International Center on Nonviolent Conflict (ICNC), a non-profit organisation based in Washington DC. The workshop presented many compelling examples of peaceful protests bringing about lasting political change – including, for instance, the People Power protests in the Philippines.
But Chenoweth was surprised to find that no-one had comprehensively compared the success rates of nonviolent versus violent protests; perhaps the case studies were simply chosen through some kind of confirmation bias. “I was really motivated by some scepticism that nonviolent resistance could be an effective method for achieving major transformations in society,” she says.
Working with Maria Stephan, a researcher at the ICNC, Chenoweth performed an extensive review of the literature on civil resistance and social movements from 1900 to 2006 – a data set then corroborated with other experts in the field. They primarily considered attempts to bring about regime change. A movement was considered a success if it fully achieved its goals both within a year of its peak engagement and as a direct result of its activities. A regime change resulting from foreign military intervention would not be considered a success, for instance. A campaign was considered violent, meanwhile, if it involved bombings, kidnappings, the destruction of infrastructure – or any other physical harm to people or property. …
|
<urn:uuid:fe81ad32-c2a5-4282-83af-cc35a620f654>
|
{
"dump": "CC-MAIN-2021-49",
"url": "https://www.thecommonercall.org/2019/09/09/one-of-most-encouraging-articles-you-will-read-this-week/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363309.86/warc/CC-MAIN-20211206163944-20211206193944-00312.warc.gz",
"language": "en",
"language_score": 0.9607760310173035,
"token_count": 781,
"score": 2.875,
"int_score": 3
}
|
This week the library added an interesting new book to the collection that shows how information gathered by studying internet use can tell a great deal about how people really feel.
Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are is written by Seth Stephens-Davidowitz, a Harvard-trained economist, former Google data scientist, and New York Times writer. He argues that much of what we think about people is wrong because almost everyone will lie on things like surveys. He says that what they do on the internet is much more valuable in determining what they really think. Every time we do an internet search or click on an ad or link, it tells researchers something about us. And because we think that we are doing this in private, we are more likely to divulge our true feelings and thoughts. Scientists and researchers are able to use this information to analyze people’s views on all sorts of topics from religion and politics to what types of things they really enjoy.
The author says that studying data like this is a whole new way of studying the human mind. He uses personal anecdotes and stories, along with the results of actual studies that have yielded surprising results. He says that there is almost no limit to what can be learned about human nature from this data and it will change the way we view the world.
|
<urn:uuid:049abc1c-b740-44d8-abda-6b2ed42adde3>
|
{
"dump": "CC-MAIN-2017-34",
"url": "http://library.beau.org/?article=201705201400",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107490.42/warc/CC-MAIN-20170821041654-20170821061654-00050.warc.gz",
"language": "en",
"language_score": 0.9606930613517761,
"token_count": 273,
"score": 2.6875,
"int_score": 3
}
|
Aboriginal human rights essay
The rights and freedoms of aboriginals have improved aboriginal rights essay indigenous peoples should have the same human rights. Equality rights of a group protected by canadian human rights legislation and international conventions report on equality rights of aboriginal people. Works for new analysis of canada aboriginal people essay - indigenous children s non-indigenous person for human rights and urban aboriginal / canada: indigenous. This essay focuses on the rights for freedom for the aboriginal australians who have lived in australia for at least 40,000 years the arrival. The aboriginal struggle for justice and land rights of the indigenous rights movement was to contribute voice committed to human and civil rights.
Human rights in australia have largely been activists like sir douglas nicholls were commencing their campaigns for aboriginal rights within the established. Essay aboriginal rights of manners essay essayer jeux gratuit feedback and self reflective essay research paper on the natural selection of human culture the. Aboriginal rights essay - professionally crafted and custom academic papers get an a+ aid even for the most urgent assignments spend a little time and money to. About indigenous peoples and human rights in canada anti-discrimination legislation exists to protect and advocate for the human rights of aboriginal peoples.
Aboriginal rights essay rather than 100 at indigenous people have title written teen aboriginal rights of human rights in but aboriginal. Essays related to aboriginal rights 1 legacy of racism has been the destruction of the aboriginal's dignity and basic human rights aboriginal rights. Originally drafted in 1985 by the working group on indigenous populations, the world’s largest human rights forum, the draft declaration was adopted by the. Open document below is an essay on indigenous rights in canada from anti essays, your source for research papers, essays, and term paper examples.
Human rights are the inborn and universal rights of every human being regardless of religion, class, gender, culture, age, ability or nationality, that ensure basic. Aboriginal rights essay report about aboriginal people over human rights information is the rights in examples on indigenous children lecture book exists you. Aboriginal rights essay on law aboriginal human rights of aboriginal women releases by filed under aboriginal rights cases from aboriginal title and freedoms essay.
View indigenous education and human rights research papers on academiaedu for free.
- Indigenous rights 2016 marked the 25th australia raises human rights concerns in other countries, but does so very selectively essays the lost years.
- View and download aboriginal essays examples also discover topics, titles, outlines history of human rights: aboriginal residential schools in canada.
- Indigenous people’s rights: indigenous peoples across the world have had their rights stripped for centuries, living as second-class citizens.
- Indigenous people are recognized as being vulnerable, marginalized, and disadvantaged among the world's population.
- Essay on the rights and freedoms of indigenous australians by crunk in types school work and essay rights and freedoms.
Aboriginal land rights essay 1057 words | 5 pages it also found it was out of step with international human rights and that aboriginals had been dispossessed of. Australian indigenous rights essay getting government to listen: a guide to the international human rights system for indigenous australians east sydney.
|
<urn:uuid:6a763ad5-9685-41c6-b78b-0ab399713548>
|
{
"dump": "CC-MAIN-2018-43",
"url": "http://ascourseworkisoa.thehealthcopywriter.me/aboriginal-human-rights-essay.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514314.87/warc/CC-MAIN-20181021181851-20181021203351-00450.warc.gz",
"language": "en",
"language_score": 0.9270109534263611,
"token_count": 686,
"score": 3.390625,
"int_score": 3
}
|
The participants plan, carry out, and analyze the results of a biological field project. The process includes searching the relevant scientific literature, establishing testable hypotheses, and evaluating the results obtained in relation to existing knowledge in the field. Specific research topics are chosen among pertinent biological issues in the natural environments that surround the field stations of the Institute of Biology, where the practical part of the course takes place.
The participants will gain insight into the different stages of the research process in general, and into the principles of and methods used in biological fieldwork in particular.
In relation to the competence profile of the degree it is the explicit focus of the course to:
- Give the competence to implement new biological solutions and independently develop and realise an experimental research project
- Give skills relevant for experimental field investigations
- Give knowledge and understanding of experimental investigations
The following main topics are contained in the course:
- Theory and planning: A research topic is selected and working hypotheses and a detailed research plan are elaborated based on a screening of relevant literature and discussions of the theory behind the topic.
- Field course: The practical part is carried out as a combination of field and laboratory studies at either the Biological Institute’s field station Svanninge Bjerge or the Research Center for Marine Biology in Kerteminde. Accommodation at the field stations is mandatory. The results are analyzed and summarized in a report written in the style of a scientific paper.
The learning objectives of the course are that the student demonstrates the ability to:
- Search and critically select scientific literature pertaining to a particular biological topic.
- Use scientific articles and other sources to formulate working hypotheses for biological field projects.
- Design biological field investigations to test the hypotheses established.
- Carry out biological investigations and experiments in the field.
- Analyze the results gathered, and structure and describe these in a scientific style.
- Discuss own results in relation to previous investigations.
- Define directions for interesting research in the future.
Students taking the course are expected to:
- Have knowledge of the curriculum in biology or similar
- Be able to handle basic software (Word, Excel, PowerPoint)
|
<urn:uuid:a7e191ae-e2fd-4272-8a65-201609ef9fc5>
|
{
"dump": "CC-MAIN-2021-10",
"url": "https://marinetraining.eu/node/4281",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375274.88/warc/CC-MAIN-20210306162308-20210306192308-00135.warc.gz",
"language": "en",
"language_score": 0.8802877068519592,
"token_count": 445,
"score": 2.953125,
"int_score": 3
}
|
You know the water cycle from grade school: water evaporates from the oceans and rivers, condenses into clouds, falls to the ground as precipitation, and runs off into the oceans and rivers to start the process again. Except—it's a bit more complicated than that. Not all of the water makes it all the way around the cycle. As much as 40 percent of the water that falls from clouds re-evaporates before it ever touches down, and of the rain that does reach Earth, a lot never hits the ground in the first place.
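Those fractions compose multiplicatively, which a few lines of arithmetic can illustrate. In the sketch below, the 40 percent re-evaporation figure comes from the text, while the 20 percent interception figure (water caught by vegetation and surfaces before reaching the soil) is a purely hypothetical placeholder, not a number from the source:

```python
def fraction_reaching_ground(re_evaporated=0.40, intercepted=0.20):
    """Fraction of cloud water that survives re-evaporation and interception.

    re_evaporated: the ~40% figure from the text.
    intercepted: a HYPOTHETICAL illustration of "a lot never hits the ground".
    """
    survives_fall = 1.0 - re_evaporated          # leaves the cloud and lands
    return survives_fall * (1.0 - intercepted)   # ...and actually reaches soil

if __name__ == "__main__":
    f = fraction_reaching_ground()
    print(f"About {f:.0%} of cloud water reaches the ground in this sketch")
```

Under these assumed numbers, barely half the water that leaves a cloud completes the trip.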
|
<urn:uuid:3d61ee76-0abf-403e-b9d2-9a1256f422b6>
|
{
"dump": "CC-MAIN-2017-39",
"url": "https://curiosity.com/topics/most-rain-never-reaches-the-ground-curiosity",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693240.90/warc/CC-MAIN-20170925182814-20170925202814-00451.warc.gz",
"language": "en",
"language_score": 0.972383439540863,
"token_count": 122,
"score": 3.109375,
"int_score": 3
}
|
It May Be a TMJ Disorder
You need it for talking, chewing, smiling, yawning, laughing and singing. It’s the jaw joint—technically known as the temporomandibular joint (TMJ)—one of the hardest working and most complex joints in your body. You usually don’t give it a second thought, and you usually don’t need to. But if something goes wrong, your TMJ can cause nagging pain and limit the flexibility of your jaw. In extreme cases, the pain can be long-lasting and debilitating.
More than 10 million Americans have TMJ disorders, according to some estimates. They’re usually first noticed as a pain in the chewing muscles or jaw joint. Other symptoms may include stiffness or locking of the jaw; painful clicking, popping or grating in the jaw joint; or a change in the way the upper and lower teeth fit together. The symptoms usually go away by themselves. But occasionally, the pain and limited jaw movement may persist for a long time.
Jaw injuries sometimes cause TMJ disorders. But usually the underlying trigger is unknown. Many people believe that stress and tooth grinding are major causes of the condition. However, the research is still unclear. Some studies even suggest that TMJ disorders themselves lead to stress, rather than the other way around. Research also disputes the common belief that a bad bite or orthodontic braces can lead to TMJ disorders. And there’s no scientific proof that clicking sounds in the jaw joint cause serious TMJ problems.
With so much uncertainty about the causes, there’s also little certainty about treatment. The most widely used therapy is a plastic guard—sometimes called a stabilization splint or bite guard—that fits over the upper or lower teeth. But studies of its ability to relieve TMJ pain have been inconclusive.
To get some definitive answers about the cause of TMJ disorders, NIH is now funding the largest study of its kind. Researchers are following more than 3,000 healthy adults for 3-5 years to see who will develop the disorders. This study will hopefully allow scientists to tease out the factors that can cause the TMJ to malfunction. Their preliminary findings suggest that genes can play a role. They found that people with certain genes are less sensitive to pain and much less likely to develop TMJ disorders.
As researchers learn more about what causes TMJ problems, they’ll also find better ways to diagnose, treat and prevent the condition. But until that happens, experts recommend taking simple steps to relieve pain and avoiding procedures, like surgery, that can permanently change your bite or jaw.
Easing the Symptoms of TMJ Disorders
The most common jaw joint and muscle problems are temporary. Try simple steps to relieve jaw discomfort, and talk with your doctor or dentist if the problems persist.
|
<urn:uuid:4074891d-0c82-444a-a0e6-370ae812a8f1>
|
{
"dump": "CC-MAIN-2018-17",
"url": "http://www.onlymyhealth.com/the-cause-achy-jaws-1285264645",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945552.45/warc/CC-MAIN-20180422080558-20180422100558-00101.warc.gz",
"language": "en",
"language_score": 0.9413415193557739,
"token_count": 673,
"score": 3,
"int_score": 3
}
|
Mechanical systems are becoming increasingly sophisticated and continually require greater precision, improved reliability, and extended life. To meet the demand for advanced mechanisms and systems, present and future engineers must understand not only the fundamental mechanical components, but also the principles of vibrations, stability, and balance and the use of Newton's laws, Lagrange's equations, and Kane's methods. Dynamics of Mechanical Systems provides a vehicle for mastering all of this. Focusing on the fundamental procedures behind dynamic analyses, the authors take a vector-oriented approach and lead readers methodically from simple concepts and systems through the analysis of complex robotic and bio-systems. A careful presentation that balances theory, methods, and applications gives readers a working knowledge of configuration graphs, Euler parameters, partial velocities and partial angular velocities, generalized speeds and forces, lower body arrays, and Kane's equations.
Evolving from more than three decades of teaching upper-level engineering courses, Dynamics of Mechanical Systems enables readers to obtain and refine skills ranging from the ability to perform insightful hand analyses to developing algorithms for numerical/computer analyses. Ultimately, it prepares them to solve real-world problems and make future advances in mechanisms, manipulators, and robotics.
INTRODUCTION
REVIEW OF VECTOR ALGEBRA: Equality of Vectors, Fixed and Free Vectors; Vector Addition; Vector Components; Angle Between Two Vectors; Vector Multiplication: Scalar Product; Vector Multiplication: Vector Product; Vector Multiplication: Triple Products; Use of the Index Summation Convention; Review of Matrix Procedures; Reference Frames and Unit Vector Sets
KINEMATICS OF A PARTICLE: Vector Differentiation; Position, Velocity, and Acceleration; Relative Velocity and Relative Acceleration; Differentiation of Rotating Unit Vectors; Geometric Interpretation of Acceleration; Motion on a Circle; Motion in a Plane
KINEMATICS OF A RIGID BODY: Orientation of Rigid Bodies; Configuration Graphs; Simple Angular Velocity and Simple Angular Acceleration; General Angular Velocity; Differentiation in Different Reference Frames; Addition Theorem for Angular Velocity; Angular Acceleration; Relative Velocity and Relative Acceleration of Two Points on a Rigid Body; Points Moving on a Rigid Body; Rolling Bodies; The Rolling Disk and Rolling Wheel; A Conical Thrust Bearing
PLANAR MOTION OF RIGID BODIES - METHODS OF ANALYSIS: Coordinates, Constraints, Degrees of Freedom; Planar Motion of a Rigid Body; Instant Center, Points of Zero Velocity; Illustrative Example: A Four-Bar Linkage; Chains of Bodies; Instant Center, Analytical Considerations; Instant Center of Zero Acceleration
FORCES AND FORCE SYSTEMS: Forces and Moments; Systems of Forces; Zero Force Systems and Couples; Equivalent Force Systems; Wrenches; Physical Forces: Applied (Active) Forces; Mass Center; Physical Forces: Inertia (Passive) Forces
(Each chapter also contains an Introduction.)
INERTIA, SECOND MOMENT VECTORS, MOMENTS AND PRODUCTS OF INERTIA, INERTIA DYADICS: Second Moment Vectors; Moments and Products of Inertia; Inertia Dyadics; Transformation Rules; Parallel Axis Theorems; Principal Axes, Principal Moments of Inertia: Concepts, Example, and Discussion; Maximum and Minimum Moments and Products of Inertia; Inertia Ellipsoid; Application: Inertia Torques
PRINCIPLES OF DYNAMICS: NEWTON'S LAWS AND D'ALEMBERT'S PRINCIPLE: Principles of Dynamics; D'Alembert's Principle; The Simple Pendulum; A Smooth Particle Moving Inside a Vertical Rotating Tube; Inertia Forces on a Rigid Body; Projectile Motion; A Rotating Circular Disk; The Rod Pendulum; The Double-Rod Pendulum; The Triple-Rod and N-Rod Pendulums; A Rotating Pinned Rod; The Rolling Circular Disk
PRINCIPLES OF IMPULSE AND MOMENTUM: Impulse; Linear Momentum; Angular Momentum; Principle of Linear Impulse and Momentum; Principle of Angular Impulse and Momentum; Conservation of Momentum Principles; Examples; Additional Examples: Conservation of Momentum; Impact: Coefficient of Restitution; Oblique Impact; Seizure of a Spinning, Diagonally Supported Square Plate
INTRODUCTION TO ENERGY METHODS: Work; Work Done by a Couple; Power; Kinetic Energy; Work-Energy Principles; Elementary Examples: A Falling Object, The Simple Pendulum, A Mass-Spring System; Skidding Vehicle Speeds: Accident Reconstruction Analysis; A Wheel Rolling over a Step; The Spinning Diagonally Supported Square Plate
GENERALIZED DYNAMICS: KINEMATICS AND KINETICS: Coordinates, Constraints, and Degrees of Freedom; Holonomic and Nonholonomic Constraints; Vector Function, Partial Velocity, and Partial Angular Velocity; Generalized Forces: Applied (Active) Forces; Generalized Forces: Gravity and Spring Forces; Example: Spring-Supported Particles in a Rotating Tube; Forces That Do Not Contribute to the Generalized Forces; Generalized Forces: Inertia (Passive) Forces; Examples; Potential Energy; Use of Kinetic Energy to Obtain Generalized Inertia Forces
GENERALIZED DYNAMICS: KANE'S EQUATIONS AND LAGRANGE'S EQUATIONS: Kane's Equations; Lagrange's Equations; The Triple-Rod Pendulum; The N-Rod Pendulum
INTRODUCTION TO VIBRATIONS: Solutions of Second-Order Differential Equations; The Undamped Linear Oscillator; Forced Vibration of an Undamped Oscillator; Damped Linear Oscillator; Forced Vibration of a Damped Linear Oscillator; Systems with Several Degrees of Freedom; Analysis and Discussion of Three-Particle Movement: Modes of Vibration; Nonlinear Vibrations; The Method of Krylov and Bogoliuboff
STABILITY: Infinitesimal Stability; A Particle Moving in a Vertical Rotating Tube; A Freely Rotating Body; The Rolling/Pivoting Circular Disk; Pivoting Disk with a Concentrated Mass on the Rim; Rim Mass in the Uppermost Position; Rim Mass in the Lowermost Position; Discussion: Routh-Hurwitz Criteria
BALANCING: Static Balancing; Dynamic Balancing: A Rotating Shaft; Dynamic Balancing: The General Case; Application: Balancing of Reciprocating Machines; Lanchester Balancing Mechanism; Balancing of Multicylinder Engines; Four-Stroke Cycle Engines; Balancing of Four-Cylinder Engines; Eight-Cylinder Engines: The Straight-Eight and the V-8
MECHANICAL COMPONENTS: CAMS: A Survey of Cam Pair Types; Nomenclature and Terminology for Typical Rotating Radial Cams with Translating Followers; Graphical Constructions; Comments on Graphical Construction of Cam Profiles; Analytical Construction of Cam Profiles; Dwell and Linear Rise of the Follower; Use of Singularity Functions; Parabolic Rise Function; Sinusoidal Rise Function; Cycloidal Rise Function; Summary: Listing of Follower Rise Functions
MECHANICAL COMPONENTS: GEARS: Preliminary and Fundamental Concepts: Rolling Wheels, Conjugate Action, Involute Curve Geometry; Spur Gear Nomenclature; Kinematics of Meshing Involute Spur Gear Teeth; Kinetics of Meshing Involute Spur Gear Teeth; Sliding and Rubbing Between Contacting Involute Spur Gear Teeth; The Involute Rack; Gear Drives and Gear Trains; Helical, Bevel, Spiral Bevel, and Worm Gears
INTRODUCTION TO MULTIBODY DYNAMICS: Connection Configuration: Lower Body Arrays; A Pair of Typical Adjoining Bodies: Transformation Matrices; Transformation Matrix Derivatives; Euler Parameters; Rotation Dyadics; Transformation Matrices, Angular Velocity Components, and Euler Parameters; Degrees of Freedom, Coordinates, and Generalized Speeds; Transformation Between Absolute and Relative Coordinates; Angular Velocity; Angular Acceleration; Joint and Mass Center Positions; Mass Center Velocities; Mass Center Accelerations; Kinetics: Applied Forces; Kinetics: Inertia Forces; Multibody Dynamics
INTRODUCTION TO ROBOT DYNAMICS: Geometry, Configuration, and Degrees of Freedom; Transformation Matrices and Configuration Graphs; Angular Velocity of Robot Links; Partial Angular Velocities; Transformation Matrix Derivatives; Angular Acceleration of the Robot Links; Joint and Mass Center Position; Mass Center Velocities, Partial Velocities, and Acceleration; End Effector Kinematics; Kinetics: Applied Forces; Kinetics: Passive Forces; Dynamics: Equations of Motion; Redundant Robots; Constraint Equations and Constraint Forces; Governing Equation Reduction and Solution: Use of Orthogonal Complement Arrays
APPLICATION WITH BIOSYSTEMS, HUMAN BODY DYNAMICS: Human Body Modeling; A Whole-Body Model: Preliminary Considerations; Kinematics: Coordinates; Kinematics: Velocities and Acceleration; Kinetics: Active Forces; Kinetics: Muscle and Joint Forces; Kinetics: Inertia Forces; Dynamics: Equations of Motion; Constrained Motion; Solutions of the Governing Equations; Discussion: Application and Future Development
APPENDICES
INDEX
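The worked examples in the contents (the simple pendulum, the rod pendulum, and so on) all reduce to equations of motion that can be integrated numerically. The sketch below is not from the book: it takes the simple pendulum, whose equation of motion from Lagrange's equation is theta'' = -(g/L) sin(theta), integrates it with a classical fourth-order Runge-Kutta step, and checks the result by monitoring total mechanical energy. All parameter values are illustrative assumptions.

```python
import math

# Simple pendulum: theta'' = -(g/L) sin(theta), from Lagrange's equation
# for L = T - V. Parameter values are illustrative, not from the book.
G, LEN = 9.81, 1.0  # gravity (m/s^2), pendulum length (m)

def deriv(state):
    theta, omega = state
    return (omega, -(G / LEN) * math.sin(theta))

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step for the pendulum state."""
    k1 = deriv(state)
    k2 = deriv((state[0] + 0.5*dt*k1[0], state[1] + 0.5*dt*k1[1]))
    k3 = deriv((state[0] + 0.5*dt*k2[0], state[1] + 0.5*dt*k2[1]))
    k4 = deriv((state[0] + dt*k3[0], state[1] + dt*k3[1]))
    return (state[0] + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            state[1] + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

def energy(state):
    """Total mechanical energy per unit mass: should stay constant."""
    theta, omega = state
    return 0.5 * (LEN * omega)**2 + G * LEN * (1 - math.cos(theta))

def simulate(theta0=0.5, steps=10000, dt=0.001):
    """Release the pendulum from rest at theta0 and integrate for steps*dt."""
    state = (theta0, 0.0)
    for _ in range(steps):
        state = rk4_step(state, dt)
    return state

if __name__ == "__main__":
    s0, s = (0.5, 0.0), simulate()
    print("energy drift after 10 s:", abs(energy(s) - energy(s0)))
```

Because RK4 is fourth-order accurate, the energy drift over ten simulated seconds is negligible at this step size, which is the usual sanity check for a conservative system.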
|
<urn:uuid:8c9d1e91-1eb3-40f7-8afc-6c20f5ffd4d2>
|
{
"dump": "CC-MAIN-2017-30",
"url": "http://webcatplus.nii.ac.jp/webcatplus/details/book/8976683.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424884.51/warc/CC-MAIN-20170724142232-20170724162232-00236.warc.gz",
"language": "en",
"language_score": 0.7468425631523132,
"token_count": 1846,
"score": 3.234375,
"int_score": 3
}
|
Two Basic Methods for Turning With a Lathe
One way is to use the headstock and the tailstock to suspend a piece of wood between the two and turn along the length of the piece. This is referred to as spindle turning and is the type of woodturning done to create long, ornate table legs and other long turnings. Some woodturners like to use a very small spindle lathe to turn ornate pens or bottle stoppers.
The Other Method
The other basic method for turning on a lathe is to forgo the tailstock and connect a piece of wood solely to the headstock, which is driven by the motor. The most common project turned this way is the wooden bowl. In this configuration, both the inside and the outside of the bowl can be turned without removing the wood from the headstock. Of course, there are some considerably different techniques in bowl turning compared to spindle turning, but the basic premise is the same.
Freshly Cut Green Wood
The best way to start is to find a large chunk of wood that you wish to turn into a bowl. Freshly cut green wood works great for this type of woodturning because the moisture it retains makes it cut quite easily. The easiest way to get started is with a blank that has been roughly cut into a rounded shape, equidistant from the center point, using other tools such as a circular saw or a band saw.
Once the blank is in a generally round shape, punch a hole in the center point using an awl and then mount the blank to the chuck on the headstock, tightening it with the chuck's wood screw. The first task is to complete the rounding of the shape, for which you may want to connect the tailstock to the center point opposite the mounting point on the headstock for stability.
Position the tool rest parallel to the two center points, and about 1/8 of an inch from the highest protruding point on the stock, while rotating the piece by hand. Turn on the lathe at a low speed and begin rounding the blank using a roughing gouge until the blank is smoothly and consistently rounded to the desired diameter.
Remove the Tailstock and Re-Position the Tool
Next, remove the tailstock and re-position the tool rest so that it is parallel to the face of the blank (that was previously connected to the tailstock). Turn on the lathe slowly and begin turning the outer face of the bowl using a rounding gouge or a bowl gouge. Continue turning until the outer shape of the bowl is complete.
Then, you'll need to cut a recess into the bottom of the bowl to accommodate the bowl chuck that came with your lathe. Check the instructions on your bowl chuck to determine how deep and at what diameter to cut the recess. Once you are confident that you've cut the recess properly, remove the blank from the headstock, attach the bowl chuck to the blank and install it into the headstock. Rotate the blank by hand to make sure that it is spinning freely.
How to Hollow out the Bowl
To hollow out the bowl, position the tool rest parallel to the face of the blank and turn on the lathe so that the blank is rotating slowly. Use two hands on a bowl gouge and gradually begin making light cuts to start hollowing out the center of the bowl. Make very gradual cuts to remove the center material, focusing on developing an inner shape to the bowl that matches the outer shape of the bowl until you have the desired, consistent thickness of wood between the inner and outer shapes.
Finally, use your bowl gouge or a scraper to create a consistent lip of the bowl, whether that be a rounded shape transitioning from the inner to the outer portions of the bowl, or more of a squared-off shape. Make very shallow cuts on the lip, as any cracks in the blank can easily catch on the edge of the cutting tool and gouge the piece.
|
<urn:uuid:0c984555-8a02-42a7-b402-0fc99d5e0ab0>
|
{
"dump": "CC-MAIN-2021-17",
"url": "https://www.thesprucecrafts.com/basics-wooden-bowl-turning-on-lathe-3956070",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057476.6/warc/CC-MAIN-20210410181215-20210410211215-00394.warc.gz",
"language": "en",
"language_score": 0.9312244057655334,
"token_count": 833,
"score": 2.59375,
"int_score": 3
}
|
Vacuum Bubble® Technology Science
The Advanced Aeration Group's Vacuum Bubble® Aerators are built upon sound scientific principles and engineering practices. The bubbles are created under a partial vacuum; consequently, as they enter the water, the higher water pressure surrounding them causes them to collapse further. There are no other documented aerators with bubbles as small as 0.25 mm in diameter.
This technology addresses biochemical oxygen demand (BOD) and waste treatment problems by improving the performance of aerobic bacteria. As the aerobic bacteria oxidize waste and consume oxygen, the Vacuum Bubble® Technology Aerator makes additional oxygen readily available by producing very large populations of micro Vacuum Bubbles®. The process is proven to be highly energy- and cost-efficient across a wide range of applications.
Vacuum Bubble® Technology Aeration
The Vacuum Bubble® Technology Aerator, instead of using the compressed air/fine pore system, uses a patented process to create bubbles in a partial vacuum. The micro Vacuum Bubbles® created in this process average 0.25 mm in diameter. The small size of the bubble and the low-pressure gas it holds create a small buoyancy force (The phenomenon which makes bubbles rise in a liquid). This buoyancy force is so small that it is less than the surrounding surface tension of the water. The bubble, in fact, does not rise to the surface, but remains suspended in the fluid. This makes all of the oxygen in the bubble available to be dissolved in the liquid as needed.
Most aeration equipment in use today utilizes compressed air systems. These introduce bubbles of air into the water by forcing the compressed air through a fine pore diffuser, similar to the aerators commonly found in most home fish tanks. Experimental results with these systems have shown that the minimum bubble sizes generated are greater than 3 to 4 millimeters in diameter. Bubbles of this size quickly rise to the surface and are lost to the atmosphere. They do not remain in the water long enough to transfer an appreciable amount of oxygen.
Determining the Average Size of a Micro Vacuum Bubble®
A VBT™ 100 Aerator was installed in a 55 gallon glass tank with 50 gallons of tap water. A watertight plastic rectangle large enough to hold a Polaroid camera was mounted with a clear plastic grid of 9 mm squares. The aerator was placed 24 inches below the surface of the water. The aerator plate was 2 inches in diameter and had ten 2.38 mm orifices.
As the propeller withdrew water from the 12 inch column, a partial vacuum interface formed between the plate and propeller causing water to collapse in the partial vacuum, thus forming the micro Vacuum Bubbles®. The aerator was activated for 60 seconds, shut-off and a Polaroid negative taken of the bubbles accumulated on the surface of the submerged 9 mm square grids.
Advantages of Bubbles Created by VBT™
- Compressed air diffusers produce bubbles 20mm or greater in diameter.
- Fine-pore diffusers produce bubbles between 2mm and 4mm in diameter.
- VBT™ creates bubbles 0.25 mm in diameter.
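The practical consequence of those diameters can be illustrated with Archimedes' principle, F = rho * g * (4/3) * pi * r^3: buoyancy grows with the cube of the radius, so a 0.25 mm bubble feels a far smaller lift than a 3 mm or 20 mm bubble. The sketch below illustrates that cubic scaling only; it is not a calculation published by the manufacturer, and it does not model the surface-tension effects the text describes.

```python
import math

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def buoyancy_force(diameter_m):
    """Archimedes buoyancy on a spherical bubble: weight of displaced water."""
    r = diameter_m / 2.0
    return RHO_WATER * G * (4.0 / 3.0) * math.pi * r**3

# Diameters from the text: compressed-air, fine-pore, and Vacuum Bubble(R).
for label, d in [("20 mm", 0.020), ("3 mm", 0.003), ("0.25 mm", 0.00025)]:
    print(f"{label:>8}: {buoyancy_force(d):.3e} N")

# Cubic scaling: a 3 mm bubble feels (3 / 0.25)^3 = 1728x the lift
# of a 0.25 mm bubble of the same shape.
ratio = buoyancy_force(0.003) / buoyancy_force(0.00025)
```

A twelvefold reduction in diameter thus cuts the buoyant force by a factor of over a thousand, which is consistent with the text's claim that the smallest bubbles can remain suspended.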
Vacuum Bubble® Technology and Aerobic Bacteria
Conventional Waste Treatment
It is expected that in the conventional septic tank or waste lagoon the organic waste contained in them is digested. This is not the case – the organic waste builds up over time and the tanks and lagoons are nothing more than containers for sedimentation and sludge storage. The bacteria in normal septic digestion are anaerobic and are accompanied by odorous gases and groundwater contaminating pathogens.
By supplying enough oxygen, an aerobic condition is developed. Bacteria that obtain their energy aerobically are much more efficient. The same organic waste food supply supports a much larger bacterial flora by aerobiosis than anaerobiosis, and therefore, aerobic decomposition of organic matter is much more rapid. Aerobiosis in activated sludge is substantially complete in six to eight hours, whereas conventional septic digestion of sewage sludge requires about 60 days.
The usual end products of anaerobic decomposition are carbon dioxide, methane, ammonia, and hydrogen sulfide. The end products of aerobic bacteria are carbon dioxide, ammonia, water, and sulfates. The ammonia is not given off as a gas; it is nitrified by the aerobes Nitrosomonas, which oxidize the ammonia into nitrite, and Nitrobacter, which oxidize the nitrite into non-toxic nitrate. Nitrates are directly plant-usable and will not harm fish. The only gas given off by aerobic bacteria is odorless carbon dioxide, thereby eliminating any offensive odor.
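The time figures quoted above imply a large speed advantage for aerobic digestion, which one line of arithmetic makes concrete. The 7-hour figure below is the midpoint of the stated six-to-eight-hour range, an interpolation rather than a number from the source:

```python
# Aerobic activated-sludge digestion: ~6-8 hours (7 h midpoint assumed here).
# Conventional anaerobic septic digestion: ~60 days. Both figures from the text.
aerobic_hours = 7.0
anaerobic_hours = 60 * 24.0
speedup = anaerobic_hours / aerobic_hours
print(f"Aerobic digestion is roughly {speedup:.0f}x faster")  # ~206x
```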
|
<urn:uuid:d1a32d14-4757-4f43-8c34-72b19dd6427c>
|
{
"dump": "CC-MAIN-2020-45",
"url": "https://www.advancedaeration.com/science/?print=print",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107893011.54/warc/CC-MAIN-20201027023251-20201027053251-00197.warc.gz",
"language": "en",
"language_score": 0.9200587868690491,
"token_count": 965,
"score": 3.640625,
"int_score": 4
}
|
Language - English
About the eBook
This is one of Poe's stories about nature and a quaint cottage that the narrator happens upon. It is very descriptive about his surroundings.
Edgar Allan Poe was an American writer, editor, and literary critic. Poe is best known for his poetry and short stories, particularly his tales of mystery and the macabre. He is widely regarded as a central figure of Romanticism in the United States and of American literature as a whole, and he was one of the country's earliest practitioners of the short story. He is generally considered the inventor of the detective fiction genre and is further credited with contributing to the emerging genre of science fiction. He was the first well-known American writer to earn a living through writing alone, resulting in a financially difficult life and career.
Genre: Language - English
Size: 150 Pages
Filesize: 156.7 KB
Published: May 21, 2019
|
<urn:uuid:32602df6-6dcb-4072-a774-64d2cdfd8218>
|
{
"dump": "CC-MAIN-2023-40",
"url": "https://www.readfy.com/en/ebooks/315348-landors-cottage/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233509023.57/warc/CC-MAIN-20230925151539-20230925181539-00849.warc.gz",
"language": "en",
"language_score": 0.967503011226654,
"token_count": 232,
"score": 2.859375,
"int_score": 3
}
|
Why does this happen and how do I prevent it?
Disease-causing bacteria and fungi are present in every pond environment (although specific species may be introduced by new fish or wildlife) but will rarely cause a problem where water quality is good and fish are happy and healthy. Avoiding stress and maintaining good water quality and low levels of waste/sludge in your pond is key to preventing and treating fish infections. Test your water for abnormalities in ammonia, nitrate, nitrite and pH, and correct if necessary.
Fungus and Finrot are often secondary infections of wounds on the fins or skin caused by :
- aggression from other fish in the pond
- damage caused by handling e.g. netting fish
- parasite infections cause skin irritation on the fish causing them to scratch and flick resulting in abrasions
Using Pond Guardian Pond Salt alongside your treatment will help your fish recover, as it eases the essential processes fish rely on to maintain the stable internal salt content they need to stay alive.
You can also follow your initial treatment of Fungus and Bacteria with a dose of Stress Away to help the fish recover fully (refer to the product label for specific details).
Key areas to consider and regularly monitor to prevent outbreaks and aid recovery:
- complete regular maintenance
- regularly test water for abnormalities
- consider a quarantine before introducing new fish to an established community
- be aware of pollutants entering the pond through heavy rainfall and run-off, wildlife, and garden chemicals such as pesticides
What treatment should I use?
Use with any of the above
|
<urn:uuid:feda248f-92f9-4ddf-872c-ca71cd7ad7ae>
|
{
"dump": "CC-MAIN-2019-26",
"url": "https://pondaquariumproblemsolver.co.uk/problem/blagdon/fungus-and-bacteria-2/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999709.4/warc/CC-MAIN-20190624191239-20190624213239-00455.warc.gz",
"language": "en",
"language_score": 0.9158615469932556,
"token_count": 320,
"score": 2.96875,
"int_score": 3
}
|
Tracheal Bronchitis Lungs: SEER Training
Exchange of gases between the air in the lungs and the blood in the capillaries happens across the walls of the alveolar ducts and alveoli. The two lungs, which comprise all the components of the bronchial tree beyond the primary bronchi, occupy most of the space in the thoracic cavity. The lungs are spongy and soft because they are largely air spaces surrounded by alveolar cells and elastic connective tissue.
Due to the effect tracheal tumors may have on the windpipe, breathing problems tend to be the first indication of an issue, whether the tumour is benign or malignant (cancerous). However, breathing problems may also result from chronic obstructive pulmonary disease (COPD), so your physician will look for these symptoms as well. The most common tracheal tumor, squamous cell carcinoma, is considered to be caused by smoking. It is strongly recommended that you talk with your physician if you experience any of the symptoms listed above, if only to rule out a tumor as the cause.
Lung Trachea & Bronchial Tree Diagram & Function
Structurally similar to the trachea, the two primary bronchi are inside the lungs. Together, the two main bronchi and the trachea are known as the bronchial tree. The tubes which make up the bronchial tree perform the exact same function as the trachea: they distribute air to the lungs.
Individuals with tracheal and bronchial tumors may experience a range of symptoms. Those with more advanced disease may experience trouble swallowing (dysphagia) and hoarseness, which typically signals the cancer has grown beyond the trachea. Some tracheal and bronchial tumours develop when cancer in another part of the body metastasizes (spreads) to the trachea or bronchi. Several types of cancerous bronchial and tracheal tumours include: Squamous Cell Carcinoma This is the most common sort of tracheal tumor.
Adenoid Cystic Carcinoma These slow-growing tumours close off the airway as they progress, but are less likely to penetrate the wall of the trachea. Kinds of noncancerous tumors include: Papillomas The most common type of benign tracheal tumor in children, papillomas are cauliflower-like tumours believed to be caused by the human papillomavirus (HPV). Another sort of benign tracheal tumor involves an abnormal buildup of blood vessels in the trachea.
TRACHEA, BRONCHI, and LUNGS Flashcards
Describe four components of the aspiration of foreign items. - It is not unusual for a child to aspirate a small object like a peanut -these typically enter the right main bronchus due to its wide, short, vertical arrangement -the carina is covered with sensitive mucous membrane. It represents the lowest point in the tracheobronchial tree where the cough reflex is initiated -once the carina is passed, coughing stops, but chemical bronchitis or atelectasis may ensue.
Chest x-rays - Smoker's Disease, COPD - unusual radiographic feature -Sabre Sheath Trachea
Describing usual and unusual features of COPD.
- The trachea, often called the windpipe, is a tube about 4 inches long and less than an inch in diameter in most people.
- The trachea then splits into two smaller tubes called bronchi: one bronchus for each lung.
- The trachea is made up of about 20 rings of tough cartilage.
- Damp, smooth tissue called mucosa lines the inside of the trachea.
Infectious bronchitis typically starts with a runny nose, sore throat, tiredness, and chilliness. When bronchitis is severe, fever may be somewhat higher, at 101 to 102 F (38 to 39 C), and may last for 3 to 5 days, but higher fevers are uncommon unless the bronchitis is caused by flu. Airway hyperreactivity, which is a short-term narrowing of the airways that reduces or limits the amount of air flowing into and out of the lungs, is common in acute bronchitis. The impairment of airflow may be triggered by common exposures, like inhaling mild irritants (for example, cologne, strong smells, or exhaust fumes) or cold air. Older people may have unusual bronchitis symptoms, like confusion or accelerated respiration, rather than fever and cough.
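The Fahrenheit figures above can be checked against their Celsius equivalents with the standard conversion C = (F - 32) × 5/9; a minimal sketch of the arithmetic:

```python
def f_to_c(f):
    """Convert a temperature from degrees Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

# The fever range quoted in the text: 101-102 F is roughly 38-39 C.
print(round(f_to_c(101), 1))  # 38.3
print(round(f_to_c(102), 1))  # 38.9
```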
|
<urn:uuid:f2055395-e65b-4852-a613-70a7686f6ed2>
|
{
"dump": "CC-MAIN-2020-24",
"url": "http://www.alissaadress.com/kkwr.aspx",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347392057.6/warc/CC-MAIN-20200527013445-20200527043445-00010.warc.gz",
"language": "en",
"language_score": 0.9166350364685059,
"token_count": 975,
"score": 3.375,
"int_score": 3
}
|
Pre-Númenórean is a term used to refer to several Mannish tongues and dialects in Gondor that predate Númenórean settlement. Its speakers were the Pre-Númenóreans, including the folk of Agar, the Drúedain, the Men of Dunharrow, the Dunlendings and several other inhabitants of Lamedon and Anórien.
Pre-Númenórean languages derived from settlers who were kin of the House of Haleth, separated when their forebears stayed behind in Eriador while the Edain migrated to the west during the First Age.
Their language was unrelated to Taliska, the language of the other two tribes; as a result, the languages that evolved from it became distinctly alien to Adûnaic, remaining closer to the tongue of the Haladin.
The only pre-Númenórean tongues that survived into the Third Age as active languages were Dunlendish and the Drúedain tongue. For the former, it was the result of a long enmity and a reluctance to speak Westron, while in the latter case it was isolation.
The pre-Númenórean tongue of the Ethir and Pelargir merged with Adûnaic into Westron. Under the Númenórean dominion, the language only survived in placenames and personal names, such as Eilenach, Rimmon and Forlong.
- ↑ J.R.R. Tolkien, Christopher Tolkien (ed.), The Peoples of Middle-earth, "XVII. Tal-Elmar"
- ↑ 2.0 2.1 2.2 2.3 2.4 2.5 J.R.R. Tolkien, The Lord of the Rings, Appendix F, "The Languages and Peoples of the Third Age", "Of Men"
- ↑ 3.0 3.1 J.R.R. Tolkien, Christopher Tolkien (ed.), Unfinished Tales, "Cirion and Eorl and the Friendship of Gondor and Rohan", note 51
- ↑ 4.0 4.1 4.2 J.R.R. Tolkien, Christopher Tolkien (ed.), Unfinished Tales, "The Drúedain"
- ↑ J.R.R. Tolkien, The Lord of the Rings, The Return of the King, "The Ride of the Rohirrim"
- ↑ J.R.R. Tolkien, The Lord of the Rings, The Two Towers, "Helm's Deep"
|
<urn:uuid:0f7b3236-2512-47c9-a5ec-7ad68e404095>
|
{
"dump": "CC-MAIN-2023-14",
"url": "https://tolkiengateway.net/wiki/Pre-N%C3%BAmen%C3%B3rean",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00063.warc.gz",
"language": "en",
"language_score": 0.8854384422302246,
"token_count": 586,
"score": 3.359375,
"int_score": 3
}
|
Negativity bias … refers to the notion that, even when of equal intensity, things of a more negative nature (e.g. unpleasant thoughts, emotions, or social interactions; harmful/traumatic events) have a greater effect on one’s psychological state and processes than neutral or positive things. – Wikipedia
Our brains are wired for negative thoughts.
Here’s Amit Sood, M.D., author of The Mayo Clinic Handbook for Happiness: A 4-Step Plan For Resilient Living:
When it’s bored, the brain sulks in its default mode. Its attention wanders, thinking about something other than what you’re currently doing or wanting to think about. A wandering mind costs you nothing, but it’s very expensive. It causes stress, depression and anxiety, and takes away happiness.
Your brain spends more than 50 percent of its time in “default” mode.
Our mind’s operations are dominated by stimuli produced by the lower (“reptilian”) brain, what Dr. Sood refers to as the “default” mode. Our default mode produces neutral or negative thoughts and is often experienced as mind wandering. Consequently, we tend not to be happy when we spend too much time in default mode.
Unfortunately, we spend more time in default mode than in focused mode, something that is evident by the amount of mind-wandering we engage in.
Default Mode = Toxic Thoughts.
Have you ever noticed the amount of garbage that your brain produces when left to its own devices? The Japanese call this “monkey mind,” as our brain hops from one useless thought to the next without inhibition, similar to how a monkey hops around without thinking about where it’s going!
Of course, such “mind hopping” invites toxic thoughts to take hold. This is due to our brain’s innate negativity bias. That is, our mind has a tendency to pay more attention to things that are negative than positive or neutral topics.
Recognizing Toxic Thoughts
In this article, we’re going to talk about 10 toxic thoughts and thought patterns that are common to many. It’s essential to notice when our thoughts turn toxic, as this enables us to switch to a more positive way of thinking.
First, here are 10 toxic thoughts that many people have and don’t recognize:
1. “I’m a loser.”
Feeling unworthy impairs our ability to function, period. Destructive and self-limiting beliefs about one’s self can lead to the development of anxiety, depression, and even suicidal thoughts. Constructing a healthy mindset involves challenging this limiting belief.
2. “Someone else will take care of me.”
A sense of entitlement is a poisonous and dangerous state of mind. Poisonous because it will ruin our relationships, self-confidence, and self-worth. Dangerous because it sets us up for an existence where we depend on someone else for everything – and remain vulnerable to their whims.
3. “I’m always right.”
People who insist on being right all of the time risk living a life of stagnation. Without a willingness to admit when we don’t know something, it’s impossible for us to make mental and spiritual progress.
4. “I’ll do it tomorrow.”
Will you really do it tomorrow? Or will tomorrow become next week, next month, or never? Whatever you decide, know this: procrastination is a leading cause of dissatisfaction, and it produces unnecessary stress and anxiety. Taking action, even if it’s just a small step, can quickly render these negative feelings moot.
5. “I’ll be happy when…”
“I get that new job,” “I have a million bucks,” “My house is paid for,” “College is finally over.”
No, no, no, and no. Multiple studies have shown that happiness is not dependent upon income, education, or career. Studies have also shown that happiness comes from working on one’s life purpose and from the ability to enjoy the present moment.
6. “It’s their fault.”
As you might have come to realize already, adults are not immune to immature thinking. Failing to take responsibility and blaming someone else for our problems can lead to a life of dissatisfaction. Own up to your choices in life and refuse to entertain self-made excuses.
7. “I can’t screw up.”
Mistakes are part of being human. This may sound cliché and overhyped, but expecting perfection will breed disappointment. Worse, fear of making mistakes manifests in procrastination, low self-esteem, and overthinking. Take action and let the chips fall where they may!
8. “It’s so unfair.”
Let’s not kid ourselves: life can plain suck at times. While this may tempt us to ruminate on the unfairness of life, to do so would only compound whatever crapfest is being thrown in our honor. Instead, face whatever it is head-on and try to make the best of things!
9. “I don’t want to put in the effort.”
In the 1920s, the Soviet Union hung posters that read, “He who does not work, neither shall he eat.” People had to take whatever work they could for the mere hope of being able to feed their children. We all get lazy from time to time, but when we look at it rationally (in this case, historically!), we have no excuse for living a lazy life.
10. “I’ll try.”
Do you mean “I’ll do my best”? If so, good for you. The problem is when the words “I’ll try” project a self-defeating attitude that inspires no one. Whether or not we realize it, self-talk impacts our daily life – so pay attention to what your unconscious mind is telling you!
The “Focused Mode” of Thoughts
Fortunately, our secondary way of thinking – the “focused” mode – can forcefully repel toxic thinking. Here’s Dr. Sood once again:
“You are in focused mode when you’re paying attention to something interesting and meaningful, often in the external world … Intentionally choosing productive, purposeful thoughts also engages your focused mode.”
In short: where we direct our attention largely determines the emotions we experience. When we allow the mind to fixate on negative thoughts and sensations, such as those mentioned above, we will experience negative emotions. When we focus our attention on something stimulating and meaningful, or choose to produce good thoughts, we feel good as a result!
Final Thoughts on the Importance of Banishing Toxic Thoughts
Sharpening your attention is one of the most beneficial things you can do with your time. One book that this writer can recommend is Focus by Daniel Goleman. The book is written with a casual and empathetic, yet instructional, tone.
It is worth mentioning that every area of our life can be improved with enhanced focus and concentration. You will notice these benefits as your attention sharpens:
- More quality time with family and friends
- Better sleep
- Increased sense of purpose
- Stronger drive
- Fewer mood swings
- Strengthened relationships at work
- More opportunities
To end this article, here is an excellent quote by the late Steve Jobs:
“That’s been one of my mantras – focus and simplicity. Simple can be harder than complex. You have to work hard to get your thinking clean to make it simple. But it’s worth it in the end because once you get there, you can move mountains.”
|
<urn:uuid:586f2c43-e0a2-4907-b0f4-15fb05f1d21b>
|
{
"dump": "CC-MAIN-2020-40",
"url": "https://www.powerofpositivity.com/toxic-thoughts-people-have/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402132335.99/warc/CC-MAIN-20201001210429-20201002000429-00513.warc.gz",
"language": "en",
"language_score": 0.9283543229103088,
"token_count": 1688,
"score": 3.078125,
"int_score": 3
}
|
With flu season in full swing, help is available for needle-phobes.
Nobody likes getting the flu. It can knock students out of commission for days or weeks, making it nearly impossible to study or get homework done – and heaven forbid it catches up with you during midterms. The H1N1 strain going around this year is a particularly nasty one. At the same time, for some folks the prospect of getting vaccinated is the more terrifying option. What’s a needle-phobe to do?
Fear of needles falls into the broader category of “Blood-Injury-Injection” phobia. While some people may experience mild anxiety or nervousness prior to an injection, others may feel a more intense dread or terror. This particular type of phobia is unique because – unlike other common fears such as spiders, clowns, or heights – sufferers’ blood pressure can drop, sometimes causing “vasovagal syncope”, a fancy way to say fainting. (Although other types of phobias can make a person *feel* like they’re about to faint, in those cases, their blood pressure actually increases, making fainting highly unlikely.) Fear of needles also tends to run in families; psychologists still aren’t sure why!
If you’re afraid of needles but want to protect yourself from the flu this year, here are some tips that can help!
1. Bring a friend. Having someone along can help you feel more comfortable and distract you when it’s time. Student Health Center staff say that students often come in pairs for just that reason. A friendly conversation or a hand to hold can go a long way!
2. If you have a history of fainting at the sight of needles or blood, let the staff know and ask if you can lie down. Having your legs up should prevent the fainting response.
3. An alternative to lying down is the “applied tension” technique, which consists of tensing and relaxing your muscles to help you maintain your blood pressure. You’ll find it most effective if you practice this basic technique ahead of time. For this specific phobia, applied tension works better than “typical” relaxation techniques like deep breathing, which lower your blood pressure and could actually make you more likely to faint.
4. Learn to challenge your worries by replacing them with balanced thoughts. For example, you might find it helpful to replace the thought “I absolutely can’t bear having to get a shot,” with the more balanced, “Getting a shot makes me really anxious but it will be over quickly, and I know some strategies to deal with it.” Learning to challenge anxious thoughts takes some time and practice. CAPS counselors are a great resource to help in this process.
5. If all else fails, consider getting a nasal spray flu vaccine. Local pharmacies are out of this option for the current season, but if you plan ahead, you can take advantage of this alternative next year.
Flu vaccines are available at the Student Health Center pharmacy all season, and are free with your SHIP insurance. Stop by today – your body will thank you, and so will your partner, roommates, and friends.
|
<urn:uuid:4554cc74-708f-4006-a905-81211334324b>
|
{
"dump": "CC-MAIN-2017-43",
"url": "https://ucsccaps.wordpress.com/2014/01/22/scared-of-needles/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825399.73/warc/CC-MAIN-20171022165927-20171022185927-00041.warc.gz",
"language": "en",
"language_score": 0.9373076558113098,
"token_count": 684,
"score": 2.9375,
"int_score": 3
}
|
Move over, low-fat diets. More and more experts are recommending plant-based diets to reduce the risk of heart disease and other chronic conditions such as diabetes and cancer. But are all plant-based diets equally beneficial? And must they be all-or-none eating strategies, or is there a role for a semi-vegetarian or “flexitarian” approach?
The term plant-based diet often conjures up images of vegetarian or vegan fare. But it really means a diet that emphasizes foods from plants — vegetables, fruits, grains, nuts, seeds, and the like — not one that necessarily excludes non-plant foods.
The results of studies on […]
A plant-based vegetarian diet is associated with total cholesterol that is 29.2 mg/dL lower, according to a new meta-analysis in Nutrition Reviews. WASHINGTON–(BUSINESS WIRE)–A new dietary review of 49 observational and controlled studies finds plant-based vegetarian diets, especially vegan diets, are associated with lower levels of total cholesterol, including lower levels of HDL and LDL cholesterol, compared to omnivorous diets. The meta-analysis appears as an online advance in Nutrition Reviews on Aug. 22, 2017. The study […]
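The 29.2 mg/dL figure is a pooled estimate across the 49 studies. A common way meta-analyses pool mean differences is inverse-variance (fixed-effect) weighting, sketched below with invented study numbers purely for illustration (these are not data from the actual review):

```python
# Hypothetical per-study mean differences in total cholesterol (mg/dL)
# and their standard errors - invented values for illustration only.
studies = [(-25.0, 6.0), (-32.0, 8.0), (-30.0, 5.0)]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2,
# so more precise studies contribute more to the pooled estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

print(round(pooled, 1))  # -28.7 with these toy inputs
```

A random-effects model would add a between-study variance term, which matters when studies disagree more than sampling error alone would explain.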
Bridget Malcolm talks Victoria’s Secret
Life is just a bowl of cherry tomatoes for Victoria’s Secret model Bridget Malcolm.
Proving her model physique isn’t without sacrifice, the Australian supermodel, who eats a vegan diet, revealed the plain truth about her daily eating habits in a blog entry yesterday.
It includes snacking on cherry tomatoes, drinking five litres of water a day and eating a meal-replacement shake with a spoon. “I always carry cherry tomatoes with me,” Malcolm wrote. “If I am hungry and food is far away, or flying, or in a non-vegan friendly environment, I make sure I have some tomatoes to […]
Whether it’s for ethical, health-related, and environmental reasons, or simply for pleasure, it appears the world is finally cottoning on to the importance of recognising vegan and vegetarian lifestyles and diets.
According to a poll carried out by Ipsos MORI for the Vegan Society last year, close to half of all vegans are aged 15–34 (42 per cent), with the number of vegans in Britain having risen by more than 360 per cent over the past decade.
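A rise of “more than 360 per cent” means the final count is more than 4.6 times the starting count, since the increase is added on top of the original. A quick sketch of the arithmetic, using an invented starting figure rather than the Vegan Society’s actual counts:

```python
def percent_increase(old, new):
    """Percentage increase going from old to new."""
    return (new - old) / old * 100

old = 150_000                      # invented illustrative starting count
new = old * (100 + 360) // 100     # integer arithmetic avoids float rounding
print(new)                         # 690000
print(round(percent_increase(old, new)))  # 360
```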
Meanwhile, it is estimated more than 1.2 million people aged over 15 in the UK now profess to being vegetarian. Just last year, lunchtime favourite […]
The concept of eating a “plant-based” diet is tossed around frequently, but it’s a label that can be confusing. Some people shy away from the notion because they assume that plant-based is code for vegan. On the other hand, it’s easy to think that eating all plants and no animals guarantees that your diet is healthful and nutritious. But does it?
The research in support of plant-based diets is bountiful, which is likely because of what they include — vitamins, minerals, phytonutrients and fiber — as much as what they don’t — excess saturated fat. But one limitation of much […]
|
<urn:uuid:165c490e-dae0-45f3-a632-192184d4ab49>
|
{
"dump": "CC-MAIN-2017-34",
"url": "http://www.veganfoodinfo.com/category/vegan-diet/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120573.0/warc/CC-MAIN-20170823132736-20170823152736-00229.warc.gz",
"language": "en",
"language_score": 0.931373655796051,
"token_count": 707,
"score": 2.65625,
"int_score": 3
}
|
The foliage that comes with autumn is breathtaking. Vibrant colors make the season a favorite among many of us. But after such colorful beauty drops from the branches, we are often left for several consecutive months of winter’s dreary, colorless and frigid effects.
Such effects are good reason to plant conifers among your landscaping trees. When the beautiful autumnal display has come to an end and the leaves have made their way to the ground below, deciduous branches remain bare until spring rolls around. Conifers, on the other hand, keep their foliage throughout the year, even during the frequently frigid winter months. There are several advantages to this.
First, the leaves conifers continue to produce provide an element of color to a vastly colorless winter world. Some conifers also produce cones or colorful berries. The presence of such textures and vibrancy is often a welcome sight for listless eyes.
Because conifers produce cones, needles and berries, it is quite common to catch glimpses of wildlife, such as birds and deer, in close proximity to coniferous trees. Not only does planting conifers provide color by way of their cones and berries, but it also brings color through the wildlife that seek them out. The beauty of a brightly-colored cardinal on a snow-covered branch easily parallels the magnificence of autumn’s bold, rich colors.
Since conifers continue to produce throughout the year, they are largely responsible for supporting wildlife during the frequent scarcity of winter. Berries, nuts, cones and even needles are essential to wildlife. Not only do conifers produce sources of food for wildlife, but they also support them by producing shelter. Leaves or needles provide a degree of protection from winter’s chill. Birds take refuge among the needles still affixed to a pine’s branches, while those needles that have been shed make great bedding, and provide warmth and protection, for wildlife such as deer. And it goes without saying that, as wildlife are enjoying the sustenance conifers provide during winter, we enjoy such things as bird watching and tracking deer.
If you notice any branches on your landscaping conifers that appear unable to support wildlife, or that may even put wildlife in danger, contact an Austin tree trimming specialist immediately for assistance in appropriately pruning the branches.
And just as conifers provide a means of protection for wildlife, they also provide a level of protection for us, too. This is because, planted in groups or within close proximity of one another, they serve as a windbreak. By blocking winds, not only is winter’s cold lessened, but heating bills may be reduced, too.
If you have questions, concerns or other ideas relative to planting and making conifers a part of your landscape, an Austin tree care professional can certainly speak to the advantages of conifers, particularly during the winter months.
About the Author: Andrew Johnson is the owner of Central Texas Tree Care, a leading provider of Austin tree service in Central Texas. Certified ISA Austin arborist services including: tree trimming, tree removal, tree care and oak wilt treatment. For more information on Austin tree service, please visit https://www.centraltexastreecare.com.
|
<urn:uuid:10139662-0860-4ec0-bd77-90e25a44567b>
|
{
"dump": "CC-MAIN-2019-43",
"url": "https://centraltexastreecare.com/tag/austin-tree-trimming/page/2/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986693979.65/warc/CC-MAIN-20191019114429-20191019141929-00479.warc.gz",
"language": "en",
"language_score": 0.9541968703269958,
"token_count": 694,
"score": 3.28125,
"int_score": 3
}
|
It has been asked what supporting evidence there is for the Christian Identity teaching, that White Europeans are the tribes of Israel. It has been suggested that it may have just been a convenient idea to serve the imperial ambitions of Britain. Anyone who thinks this cannot have properly investigated it, as the supporting evidence is all around us. Besides the actual wording of the scriptures themselves, which are obviously racially exclusive and quite definitive about Christ only coming for the lost sheep of the house of Israel, there are also many secular proofs to corroborate it.
Wherever the Hebrews went they left witnesses to show that they had been there, in the form of megalithic structures. Some were erected in memory of a vow or a battle and others were intended as way marks, to confirm to those following that they were going in the right direction to meet with more. There are many verses in the Bible referring to these stone structures: some of them were just piles of rocks, others were standing stones arranged in circles or rows and others were tombs, which took the form of the long barrows that are dotted all around Britain. A long barrow, once the earth has been removed from it, is called a cromlech or a dolmen, and these are found as far afield as Japan, showing that our White ancestors once trekked all the way to the Orient. One particular long barrow that gets mentioned quite frequently in the Bible is the one that was built in a field for Abraham.
The megaliths that are found around Britain also conform to the instruction given in the Bible for the building of Solomon’s temple, that no iron tools should be used in the making of them. Although many of the structures were built well into the iron age, they have clearly had no metal tools used in shaping them. There have also never been any idols found anywhere in Britain that were not Roman, unlike the many idols that are regularly dug up around the ancient cities of the Middle East. Again, the Bible stipulates that no idols should be made or worshipped by the Israelites.
Besides the witnesses to our heritage that any Briton can find for themselves on an OS map, there are also the royal family trees of Europe, which can be worked back through Brutus, Aeneas or Cecrops through to Zarah, the son of Judah. All the heraldry of Europe can be traced to an origin in the various ensigns that were given by Jacob Israel to his twelve sons on his death bed. The twelve tribes are described as pitching camp in the shape of a swastika made up of three tribes on each side, with their flags flying and the tabernacle and Levites in the middle. Perhaps the most striking example of the heraldic links is with the flag of Ulster, which is a red hand on a hexagram, the hexagram symbolising the six counties in the kingdom and the red hand symbolising the red cord that was tied around Zarah’s hand when he put it out of the womb first, although it was his twin Pharez that eventually arrived first. Obviously the legend of Zarah has evolved over time as all myths do, but even the later ones still point back to the original, with the most popular myth claiming that a warrior chief cut his hand off and threw it to the finishing line in a race, to ensure that he won.
Along with the proofs in Europe, there are proofs in the Middle East too. In Persia there is a giant rock inscription commemorating the conquests of Darius, carved into the face of a mountain around 515 BC. It was inscribed in three different languages, Old Persian, Akkadian and Elamite, and talks about the conquered tribes that were vassals to him. One of these tribes was the Sakka (Isaaca), who later became known as the Scythians. In the Akkadian version of the inscription the word is ‘Gimiri’, which is where we get the words Cimmerian and Cymru from. So the Scythians and the Cimmerians were originally one and the same people. The word ‘Gimiri’ can be shown to have originated in the name of the Hebrew king ‘Omri’, and the area this conquered tribe is recorded as being in is the same area the Bible tells us the Israelites were deported to by the Assyrians.
From before that time, we have the letters excavated at Tel-el-Armarna, that were missives written by the Canaanites to the Egyptian king, asking for his help as they were being invaded by a people named as both ‘Habiru’ and ‘Saga’, the same word as ‘Sakka’ on the Behistun rock. The letters clearly show that Sidon and Tyre were conquered by the Hebrews, with one of the writers lamenting the fact that all the cities which the Pharaoh had given him had fallen to the Saga and another stating that the ruler of Sidon had surrendered to them. It was after this conquest that the Hebrew occupiers became known as ‘Phoenicians’ to history and gained their reputation as colonisers, leaving traces of themselves all over Europe, particularly in Britain.
It is often wondered why the Persian King Cyrus allowed the remnant of the tribe of Judah that were held in Babylon to go back to Jerusalem and rebuild their temple at his expense. According to an inscription on the Cylinder of Cyrus, he was a ‘King of Anshan’ which designates him as being an Elamite, descended from Elam who was another son of Shem. The kings of Anshan had been conquered by the Medes at the time the tribe of Judah were taken, but once they were back in control again they naturally helped their brothers in the tribe of Judah to rebuild the temple to their God, the God of Shem.
The eventual infiltration and adulteration of this remnant of Judah by Arab (mixed-race) tribes such as the Edomites, Cuthites and Sephervaim, is dealt with in the Bible and the Apocrypha, along with the works of the historian Flavius Josephus and is confirmed in the modern Jews encyclopaedia.
All the relevant inscriptions that have been unearthed from Nineveh and elsewhere describe the Sakka/Iskuza/Scythians as being in the same place that the Gomri, Khomri, Cymru were, which is the exact place that other records state the ten tribes of the house of Israel were. Rather than having three nations all living in the same territory, it is much more likely that these were all just different names for the same people. Herodotus records the Scythians as abhorring swine, refusing to sacrifice them or even to touch them, which is also in line with the biblical commandments for the Israelites.
The Scythian Israelites moved west through the Israel pass in the Caucasus mountains, leaving grave stones in the Crimea giving dates going back to their original exodus from Egypt. The Irish have records of the prophet Jeremiah arriving with the Hebrew daughter of the last king of Israel, who married into Zarah’s line that were already in Europe. Again, there is a stone to witness this, the Bethel stone that commemorates Jacob’s dream of the ladder with angels ascending and descending, this stone going on to be used at every British coronation since. The Scots talked of when they first left the Egyptian captivity in the declaration of Arbroath and the early church fathers used to refer to Christians as Israel regularly. This tradition continued in Britain right up until a few centuries ago, not because Jews were in the country, but as a trace memory of the British people’s true heritage as the sons of Jacob Israel. In fact the ancient coronation ceremony of England is full of references to the Biblical coronations, right down to the cheer of ‘God save the King’.
The evidence that the White Europeans are the children of Israel is overwhelming and constantly being further reinforced by new archaeological discoveries. The Library at Nineveh was not discovered until the 19th century and the Dead Sea scrolls only uncovered in the 20th, complete with detailed physical descriptions of people like Noah and Abraham’s wife Sarah, as being 100% White. The Jews clearly do not want Europeans discovering this as it strips them of their claim to being a chosen people. But if you think about it logically, they never could have been a chosen people. Only the White Europeans have had all the prophecies come true for them, not the Jews. The knowledge that the Europeans were of Israel must have been known at the time of the early church, or else the Apostles would not have brought Christianity to us in the first place. They were told only to go to the lost sheep of the house of Israel, the dispersed nations that grew from the displaced peoples of the Assyrian deportations.
There are many more proofs of the identity of the European people than I have listed, including all the references by our earliest historians, who all wrote that Europe was uninhabited except for the coast and that the original founders of Greece and Rome started out as captives in Egypt. There is also our mythology, which can be shown to have the same origin as our Bible. Our languages are all derived from paleo-Hebrew and our oldest European alphabet is uncannily similar to it. We have so many geographic names that originate in the Bible it is impossible to think that this was not once common knowledge. Names such as Denmark, the Danube, Zaragossa, the Hebrides, we even have places named ‘Zion’ all over Europe that have never once had any Jews living in them! Our druidic priesthood was hereditary just like the Levites’ was and they even dressed the same, as well as offered up the same sin offerings.
It really is the only logical explanation for the zeal with which our ancestors took to Christianity. They knew that it was theirs and they knew that Christ was a White European. The truth of it can be seen wherever you care to look and however you care to think. Our ancestors would never have accepted a foreign religion, as some who would like to denigrate their memory are saying today. The Bible identifies the Israelites as us, the oldest historians do, archaeology does and logic does.
Anyone who ignores all this is just lying to themselves.
Yelling or raising your voice
One of the most common behaviors that can upset your dog is yelling or raising your voice. Dogs are sensitive to tone and can become fearful or anxious when shouted at.
Ignoring their needs
Another common behavior that can upset dogs is ignoring their needs. If you don't give your dog enough attention, exercise, or food, they may become anxious, frustrated, or upset.
Punishing them excessively
Physical punishment, such as hitting or slapping your dog, can cause fear, aggression, and anxiety.
Invading their personal space
Dogs need their own space and can become uncomfortable or aggressive if you invade it.
Not socializing them properly
Dogs that are not properly socialized with people and other animals can become fearful or reactive.
Neglecting their training
Another common behavior that can upset dogs is neglecting their training. Dogs need consistent and positive training to understand what is expected of them.
Leaving them alone for too long
Dogs are social animals and need company. Leaving them alone for extended periods can lead to anxiety and destructive behavior.
Using intimidating body language
Dogs can perceive certain postures, gestures, and facial expressions as threatening or aggressive.
Ignoring their health issues
Ignoring your dog's health issues, such as signs of pain or illness, can cause lasting distress.
Beyond Words by Cecilia Spary
As an interpreter of three languages, I use words and their meanings as the main tool of this line of work. In my twenty-plus years of working with words, I have learned that they are much more than simply that: words.
What are words other than signs? Well, I can only speak as a person who can communicate in different languages, but most importantly, from my own life experiences: words are powerful, they are much more than a meaning or a group of phonetic sounds; words can build someone up or completely tear them down. Words can make people laugh, cry, encourage or frighten them, make our stomachs “flutter”…the possibilities are endless because words and emotions are deeply connected, no matter the language being spoken.
In my many years working with these seemingly innocent symbols and people, I’ve come to the realization that we as human beings, the only species able to use the spoken word, hold a huge responsibility, because we are carrying a powerful weapon.
I have also learned important lessons which I hope you find useful.
Lesson #1: I can’t repeat this enough; words are powerful. They can encourage someone or completely destroy them…and everything in between.
Lesson #2: Words can be as piercing as daggers. You might think that any sharp object is dangerous: they can cut, scratch, open deep wounds and even kill you; words are so mighty they are capable of doing that and even more. Yes, much more.
Physical wounds can heal and scar, and if we get lucky, that scar can even disappear over time. However, words are more powerful than any sharp weapon: they can shatter your soul, your innermost self, and open wounds that may never heal. Some heal but never scar, so the wound is so vulnerable that it can open up at any given moment, when the right person tells you over and over again how stupid, dumb, ugly, fat, worthless, useless, etc. you are. Are you hearing those words now? Who’s saying such horrendous things to you? See? You felt that piercing insult right in your chest, and the image of a parent, relative, friend or lover came to your head. They cut you so deeply that they still hurt, right? Yes, words are powerful, piercing daggers. There’s no doctor, medication or counselor that can cure a broken heart or crushed soul. We live in this “Band-aid” society where we think a few pills a day will help… I’m not against Western medicine; I think we all need to see our doctor regularly; but if we don’t heal from within, those pills will only make us numb to our reality, our true source of suffering and constant pain in our hearts.
Do whatever you have to do to heal from the inside out…but do it now.
Lesson #3: Our survival, self-preserving instinct, carried in our genetic pool for over a million years, teaches us how to tolerate the wounds and the chronic pain developed thanks to someone who hurt us. Even worse, we tend to ignore those signs of emotional pain. How many times have you heard broken-hearted people act like they are “over” a painful issue?
“I have finally healed and moved on” “I couldn’t care less what he/she does…I don’t hold any grudges” and my favorite as a woman: “I’m so OVER him!” Usually followed by a loud, fake laugh. And we notice the pain pretty quickly behind that show they just put up for us.
It’s amazing how obliviously we go through life with a distorted perception of what most people perceive from us. Those invisible scars are very obvious to most people. But please don’t feel discouraged when someone receptive (an empath) picks up on those weaknesses very quickly; simply own it and get professional and spiritual help. There’s nothing wrong with you… you’ve probably been noticing that attitude towards others yourself, and that struggle to convince yourself that you are “over” the issue can eventually cause more damage: you’ve been hurt enough. It’s time to put your pride aside, take the mask off and ask for help. Please.
Lesson #4: I’ve noticed that people who live with open wounds are more common than we think. Don’t feel threatened or embarrassed if someone receptive enough can see those invisible wounds. Actually, consider yourself lucky, yes, lucky. Your scars are only invisible to those selfish individuals who walk around this world oblivious to other people’s suffering. We live in an overwhelmingly individualistic society, where “ME, MYSELF AND I” are the most used personal pronouns of the English language.
However, when we are broken, we are obviously more vulnerable and easily influenced by “all the right words”, but I’ll get to that later. Because of my personal experiences with suffering (believe me, I’ve been knocked down in this ring called life numerous times), I’m grateful for those who have seen through my forced smile and have been compassionate, willing to lend a helping hand and shown true concern for my well-being. These people have a good heart, have been through similar situations (or not) and have an innate calling to help others.
Believe me, you have been blessed if you come across these rare individuals, because they are becoming extinct. Hopefully you develop healthy relationships with them, whether friendly or romantic, and lift each other up, because it is almost guaranteed that they have been through a great deal of pain as well. I’m a true believer that two hurt souls can help each other heal through mutual understanding, patience and, mainly, love.
Well, now it’s time to warn you about the other kind of people who have the gift to see through your soul, but their intentions are not the best. Actually, it’s quite the opposite: they take full advantage of your vulnerability and are very likely to use your pain towards their own benefit. As strange as it may sound or difficult to understand for most of us, yes, there are truly evil people in this world. How evil? Does it really matter? Evil is evil… they will tell you “all the right words” you want to hear, convince you that you’re so wonderful, so perfect, so kind and beautiful that they feel humbled by the honor of your company.
I always joke that these people should literally carry a huge red flag on them at all times! That smooth talking is just a tactic to get you “hooked”, because they are unbelievably good with words, but that’s it.
They will use you to gain financial benefit from you: they usually “forget their wallet” at home when the bill arrives or call you last minute “my checking account has been blocked for some strange reason! Can I borrow $…… from you to pay my car insurance that’s due today? I’ll pay you back this Friday”...of course Friday comes and they suddenly forget they borrowed money from you. But darn! They are good with words! You will feel so sorry they are having such a struggle…right?
Another common tactic is to use their hypnotizing smooth talking to seduce the target and well…you know what they get…some fun, and more fun…and they are good at it!
Maybe it’s a promotion, important connections, status, an attractive partner to show off, favors of all kinds or simply a roof over their heads and food in the fridge… the motives are endless. They are master manipulators who have a gift with words.
“Beware of Vultures”.
Lesson #5: We grow up. Hopefully. However, no matter how many times we have repeated “Never again” to ourselves like a mantra, we are human… we can fall into the trap again; the only difference is that this time the setup is more sophisticated. Some of us fall for “all the right words” again, and others, who have been much more alert, have the pleasure of saying “I forgot my wallet too! Let me go get it from my car”, and then RUNNING faster than Forrest. Kudos to these people! You’ve learned the “magician’s tricks” and outdid them. Standing applause, please!
They deserve it…
Now, on a more serious note: we have hopefully learned from those past “beautifully said words and no actions”, developed an “opportunist at sight” radar, and healed from within, which takes time and lots of self-care, but it can be achieved. This is the perfect time to develop all those healthy relationships we all long for, because when our soul has healed, our heart has mended and our self-esteem has been elevated to a healthy, happy medium, we attract good people who might not necessarily say “all the right words”, because they don’t need to pretend anything; those people have “all the honest words”, the most exquisite ones we can hear.
Lesson #6: We also make mistakes and use hurtful words sometimes. On a conscious or unconscious level, we have said very destructive things, we have lied (yes, little white lies are still lies), we have manipulated others… maybe not at the “vultures” level (reaching that level is almost impossible if you have feelings), but we have hurt others one way or another. Maybe we said something mean to someone out of spite, jealousy or envy… Maybe we have made promises at a very happy moment in our lives, only to realize we cannot fulfill them, and end up disappointing a fellow human being.
Who hasn’t said something extremely stupid at a moment of rage or frustration? “I quit!” turn around and slam the door behind you.
“I wish I had never met you”, “You’re a nobody”, “I don’t want to see you ever again”, “Go get a life”, “I hate you”…Have you said at least ONE of these? If your answer is yes, congratulations! You’re a flawed human being!
But…watch out! If you have said most of them, just take it easy and think twice before saying something you’ll regret later.
The saying goes “Never make a promise when you are too happy or too sad”, but I think it applies to everything we say.
One more thing: please apologize when you hurt others, it’s not a sign of weakness, it actually shows strength of character.
Lesson #7: Words left unspoken. Yes, not hearing anything from the person we are expecting to hear from is probably the worst, most painful kind: feeling ignored by the one we thought cared about us.
This is honestly the most difficult “lesson” for me to write about. I always say “silence is deafening”…
Whether you are constantly checking your phone, waiting for at least a couple of words from that person who promised you eternal love, only to turn into a total stranger; checking your mail desperately to see if you were given that position you interviewed a whole month for; sitting for hours in the waiting room of a hospital, praying to see the doctor show up and say “It’s going to be fine, the surgery was a success”; asking for a much needed and well-deserved raise, only to get the silent treatment from your boss… or that “I’m sorry” you know you deserve, but that never comes; and the universal, powerful “I still love you”… from that one person who swore to never give up on you because you were the love of their life…
Words… just symbols with sounds. The concept seems simple, yet they are the most powerful weapon of them all. “The tongue is mightier than the sword,” some wise person said.
Finally, I’d like to finish this article by telling you something I read on a church sign as I was driving to work a few days ago. It read:
“Make sure your words are sweet, for one day you might have to eat them”.
Love one another,
About the author
Cecilia Spary is a freelance writer whose creations include a vast range of styles: poetry, short tales, theater monologues, essays, articles; the list goes on. She is a medical interpreter who speaks three languages and a TESOL-certified ESL teacher. Her articles and essays have been published on well-known organizations’ websites such as the National Council on Interpreting in Health Care, the Hispanics in Ohio blog and Latinos Magazine; she has also published a number of literary creations in Uruguay, her native country. Cecilia is entering the world of blogging with her own “Beautiful Chaos: Diaries Of A Mad Woman” in March 2017.
This brand new blog will give an entertaining, satirical, raw and hilarious view of the world of relationships in general, based on her own life experiences. Although the tone of the blog is very sarcastic, the purpose is also to empower people, whether in a relationship or not, to face their love lives and relationships with humor (and a few “serious notes”, of course). Cecilia is a divorced mother of one who resides in Cincinnati, OH.
|
Nobility names belonged to the first adopted hereditary names in Sweden. They were usually created from the symbols of the family coat of arms. For example Gyllenhammar (= 'golden hammer'). The elements used for creating these names were usually Swedish, sometimes German and sometimes a mixture of Swedish and German.
Many nobility name elements and their meanings are documented elsewhere on this site.
|
My grandmother stirred her lard and lye soap with a big spoon. It took hours of gentle heating and patient stirring until the soap finally reached "trace" and could be poured into the mold. Today, I use a stick blender (also called an immersion blender, wand blender, or stab mixer), to bring most batches of soap to trace in just a few minutes.
What is trace?
Fat and lye are immiscible, meaning they do not want to mix. If soap batter is not stirred, the fat will float on top of the lye, and a thin layer of soap will form where the lye and fat layers touch. When enough soap forms to become a barrier between the fat and lye, the saponification reaction will stop. To keep the saponification reaction going, the soap maker must stir the soap batter to break up the soap particles and mix the fat and lye layers.
Historically, soap makers used a paddle (called a crutch) or a large spoon to mix the soap batter. The batter had to be stirred continuously for hours -- a tedious, boring job. When enough particles of soap had formed, these particles emulsified the batter. This chemical emulsification helped the lye and fat stay mixed without the need for more mechanical stirring.
This point in the soap making process is called "trace" -- the stage at which soap batter is chemically emulsified and obviously thicker. An emulsified batter will stay mixed without having to actually stir it. There are many good articles and videos that describe trace; here is one --
"All About Trace in Cold Process Soap" by Soap Queen TV by Bramble Berry
Why does a stick blender work so well for bringing soap batter to trace?
This process of reaching trace became faster and easier when immersion blenders became popular for home cooking (and home soap making) in the 1980s. (1)
Stick blenders are high intensity mixers that blast the fat and lye into extremely small droplets. These tiny droplets of lye and fat are able to saponify much easier and quicker than the much larger droplets created by mixing with a paddle, spoon, or whisk.
The high intensity mixing also breaks the newly forming soap into tiny particles. Very small flecks of soap are more effective as a chemical emulsifier, so the soap batter usually reaches trace much quicker when a stick blender is used.
How to best use a stick blender?
Some people, especially beginners, use their stick blender with a heavy hand. This can create more problems than it solves. My motto -- Stick blend less and hand stir more!
When making a typical batch of soap, I stick blend the batter in 3 or 4 bursts that are 2-3 seconds per burst. Between bursts, I hand stir the batter with a spatula and watch how the batter changes texture, color, and thickness. This whole process of watching, hand stirring, and brief bouts of stick blending might last a total of 3 to 5 minutes.
At the end of this time, the soap batter is usually emulsified, but it does not show any obvious signs of trace. Sometimes soap makers say the soap batter is "at emulsion" when it reaches this stage. These two videos show the subtle things to look for when deciding if the batter is "at emulsion" or not --
"Stickblending to Emulsion" by SMF Soap Videos
"Mini Drop Swirls" by Kapia Mera Soap Company (emulsion demo starts at 1:00 and ends at 1:25)
When the batter is at emulsion, I will split the soap into smaller portions for coloring. Each portion gets another 1 to 2 seconds of stick blending to mix the colorants into the batter. I might also stick blend the main batter in my soap pot a little more even if it does not get any colorant, just so all of the batter gets treated the same way.
By this time, the soap is getting obviously thicker and I usually do not do any more stick blending -- I want to move on to pouring the soap into the mold and finishing up.
Before pouring soap batter into its mold, I scrape the sides of the soap pot with a spatula and stir these scrapings into the main body of the batter. This ensures all of the batter has the same consistency before I pour. If this is not done, the scrapings may be slightly different than the main batter and they may show in the finished soap as unwanted streaks or odd textures.
What stick blenders are good for making soap?
Keep it inexpensive. I have Cuisinart and Hamilton Beach stick blenders for making lotion and soap; both cost under $30 US. If I were limited to one stick blender for soap and lotion, I could be quite happy with either one. Since I have both, I tend to use the slightly more powerful Cuisinart for soap. I use the Hamilton Beach more for lotions, because it seems a bit less prone to splashing when mixing smaller batches.
Choose stainless steel. I prefer stick blenders that have a stainless steel mixing end, rather than the ones with a plastic bell. Plastic can melt when doing some types of hot process soaps and is more prone to cracking from age and accident.
There are some stick blenders that have zinc or aluminum bells and have internal seals that are not rated for exposure to strong alkalis like NaOH. These blenders are not acceptable for use with soap. One example is my expensive Bamix stick blender which fails on both counts. I use it strictly for food.
Removable mixing end is a plus. I also prefer stick blenders that have a mixing head that detaches from the motor. With a removable mixing head, I don't have to worry about the motor getting wet when I wash the mixing head. I can also put the mixing head in the dishwasher if I want.
Troubleshooting: My stick blender is overheating and my soap has not reached trace. What should I do?
Sometimes soap batter takes a long time to come to trace, even if a stick blender is used. I see this especially when I make liquid soap with potassium hydroxide (KOH).
If your soap is slow to trace, do not furiously stick blend until the machine overheats or your patience wears thin. My rule of thumb -- If I stick blend for the better part of a minute, and the soap batter does not respond the way I think it should, I need to STOP and assess the situation. Some types of soap require a watchful waiting game, not a stick blender endurance contest.
Next time your soap batter is acting unusually slow, stick blend for a few seconds and then hand stir occasionally for a few minutes. If the batter is obviously separating after a few minutes, stick blend again for a few seconds and then hand stir occasionally for a few more minutes. Repeat as needed until the soap reaches a definite, stable trace.
You may find your soap will reach trace almost as fast by playing a waiting game rather than a stick blending contest.
Troubleshooting: I have been doing your "waiting game" for hours now and my batter is still not at trace! What now?
Check your measurements. Have you added too little alkali (KOH or NaOH) for the amount of fats you are using? Have you added too much water -- in other words, is your lye concentration too low?
Check if you used the correct alkali. Did you accidentally use KOH when you should have used NaOH? Or did you use sodium carbonate (washing soda) rather than NaOH? Either of these mistakes will prevent your soap batter from coming to trace.
Check the batter temperature. Is it fairly cool (under 100 F / 40 C)? If so, consider warming the batter to 120-160 F / 50-70 C to jump start a lazy saponification reaction.
Try a dose of benign neglect. If you cannot figure out what the problem is, cover the soap pot and put it in a safe place for the day or overnight. See what happens if the soap is left alone to do its thing.
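The "check your measurements" step above can be sketched as a quick calculation. This is a hypothetical helper, not part of the article: the saponification (SAP) values below are common approximations for grams of NaOH needed per gram of fat, and any real recipe should still be run through a trusted lye calculator.

```python
# Hypothetical recipe sanity check -- SAP values are approximate
# (g NaOH per g of fat); use a trusted lye calculator for real batches.
APPROX_SAP_NAOH = {"olive oil": 0.134, "coconut oil": 0.183, "lard": 0.138}

def naoh_needed(fats_g, superfat=0.05):
    """Grams of NaOH to saponify the given fats, less a superfat discount."""
    base = sum(APPROX_SAP_NAOH[fat] * g for fat, g in fats_g.items())
    return base * (1 - superfat)

def lye_concentration(naoh_g, water_g):
    """Percent NaOH in the lye solution: NaOH / (NaOH + water) * 100."""
    return 100 * naoh_g / (naoh_g + water_g)

recipe = {"olive oil": 700, "coconut oil": 300}   # 1000 g of oils
naoh = naoh_needed(recipe)                        # about 141 g at 5% superfat
conc = lye_concentration(naoh, water_g=250)       # about 36%
```

A concentration that comes out far below the usual range would match the "too much water / lye concentration too low" failure mode described above.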
Are there other ways to mix soap batter besides a stick blender?
Hand stirring with a whisk or spatula is still an acceptable, effective way to mix soap batter. Recipes that come to trace very quickly, such as pine tar soap, work best if they are only stirred by hand.
A regular kitchen blender works well for smaller batches. Many people are skeptical, but I know one full-time soap maker who uses inexpensive stand blenders for all of her soap making. More about using a stand blender to make soap...
A hand-held or stand mixer (the gadget used to make cookies or cake) is not as intense as a stick blender, but is another option to consider. A stand mixer has the advantage of being a hands-free option for soap makers with physical limitations. A drill fitted with a paint mixer is another lower-intensity mixing option that may fit your needs.
A final method suitable only for tiny batches (1 or 2 bars) is to shake the soap batter in a closed jar. I could not find anything online about the pros and cons of doing this, but this method is discussed in Kevin Dunn's book Scientific Soapmaking. (2)
Got any tips if I want to stay old-school and only stir my soap batter by hand?
Use a recipe higher in myristic and lauric acids (coconut oil or palm kernel oil).
Add grated or shredded bar soap to the soap batter. I suggest adding 1/2 to 1 ounce (15 to 30 grams) of soap per pound (500 grams) of oils to help emulsify the soap batter.
Add heat to keep the soap batter temperature between 120-160 F / 50-70 C.
Add a small amount of clove essential oil. The eugenol in clove essential oil is an accelerant.
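The grated-soap tip above can likewise be turned into a small calculation. This is a hypothetical helper that simply scales the 15-30 grams per 500 grams of oils rule of thumb from the text:

```python
def grated_soap_range_g(oils_g, low_per_500=15, high_per_500=30):
    """Suggested grams of grated bar soap to help emulsify the batter,
    scaled from the 15-30 g per 500 g of oils rule of thumb."""
    return oils_g * low_per_500 / 500, oils_g * high_per_500 / 500

low, high = grated_soap_range_g(1000)  # (30.0, 60.0) grams for a 1 kg batch
```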
(2) Kevin M. Dunn. Scientific Soapmaking. Clavicula Press. 2010.
Copyright © 2002-2022 - All rights reserved by Classic Bells Ltd.
|
Jainism dates from a movement of the sixth century B.C.E.; about 3 million Jains now live in their original homeland of northern India. Salvation or liberation can be attained only by monks living an ascetic style of life similar to that of Brahmanic ascetics and Buddhist monks. A central doctrine is ahimsa, nonviolence to living things; this doctrine was crucial to Mohandas Gandhi, who was not a Jain but who grew up among Jains.
See also Buddhism, Gandhi, Hinduism
Joseph B. Tamney
M. Langley, "Respect for All Life," in Handbook to the World's Religions , rev. ed. (Grand Rapids, Mich.: Eerdmans, 1994): 207-216.
|
Harm Reduction Practices for Alcohol Consumption
Harm reduction is the practice of reducing the risks of participating in hazardous activities. Ideally, someone addicted to alcohol would stop drinking entirely, but that may not always be feasible without professional help. To help individuals beat alcohol addiction, let's explore some essential harm reduction strategies.
Harm reduction decreases the harmful consequences of substance use, such as alcohol and drugs, by implementing practical and realistic strategies. Here’s what you need to know about harm reduction for alcohol use:
- Alcohol abuse is so common in the United States that many people assume they are using safe practices when they are not.
- Always focus on safety and responsible alcohol consumption.
- Treatment is always available when you are ready to quit drinking completely.
Alcohol Abuse in America
Alcohol abuse remains pervasive in the United States, with far-reaching effects on individuals, families, and society. While alcohol is a socially accepted and legal substance, its misuse has led to a significant public health concern.
An estimated 14.5 million adults in the United States suffer from alcohol use disorder (AUD). This condition jeopardizes physical health and contributes to various social problems, including accidents, domestic violence, and job loss. The economic toll is staggering, with alcohol-related costs exceeding $249 billion annually.
Efforts to combat alcohol abuse include:
- Educational campaigns.
- Stricter enforcement of drinking laws.
- Support for individuals seeking treatment.
Reducing alcohol abuse requires concerted efforts from individuals, communities, and policymakers. It involves promoting responsible drinking, providing accessible treatment options, and addressing the underlying factors that drive alcohol misuse. By raising awareness and nurturing a culture of moderation, we can work towards a safer America.
Alcohol Abuse vs. Dependence: Understanding the Difference
Alcohol abuse and alcohol dependence are two distinct but closely related terms that describe problematic patterns of alcohol consumption. It’s crucial to differentiate between these concepts to address and treat these issues effectively.
Alcohol abuse is excessive drinking that results in harmful consequences, such as neglecting responsibilities, legal troubles, or continued use despite interpersonal problems. Individuals who abuse alcohol may engage in risky behaviors, like binge drinking, but don’t necessarily exhibit physical alcohol dependence.
Alcohol dependence, on the other hand, is a more severe condition. It involves both psychological and physical dependence on alcohol. People with alcohol dependence may experience withdrawal symptoms when they try to quit drinking and often find it challenging to control their drinking.
In short, alcohol abuse describes the harmful misuse of alcohol, whereas alcohol dependence is the physical and psychological reliance on alcohol that develops from sustained abuse. Understanding these distinctions helps both patients and practitioners assess the severity of a drinking problem.
Impact of Excessive Alcohol Consumption
Excessive alcohol consumption affects physical and mental health. It increases the risk of liver disease, heart conditions, and certain cancers. Alcohol impairs cognitive function, leading to accidents and poor decision-making.
Excessive alcohol use can strain relationships, cause job loss, and lead to legal issues, including DUIs. Moreover, it exacerbates mental health disorders like depression and anxiety.
The economic burden is substantial, with healthcare costs and lost productivity totaling billions. Addressing this issue requires public awareness, access to treatment, and responsible drinking practices to mitigate the far-reaching consequences of excessive alcohol use.
Physical Health Impact
Excessive alcohol consumption wreaks havoc on health, yielding a cascade of adverse consequences. It substantially increases the risk of liver diseases, including cirrhosis and fatty liver.
Heart ailments, such as hypertension and cardiomyopathy, become more prevalent. Moreover, excessive drinking raises the odds of developing various cancers, particularly in the liver, mouth, and throat.
The neurological impact is also profound, with alcohol impairing cognitive functions, leading to accidents. Chronic alcohol abuse taxes the immune system, rendering the body vulnerable to infections.
Overall, the health ramifications of excessive alcohol use are extensive. They underline the importance of moderation and responsible drinking practices.
Excessive alcohol consumption exerts a significant toll on mental health. It often leads to mood swings, depression, and heightened anxiety. As a depressant, alcohol disrupts the brain’s chemistry, worsening these conditions. Cognitive impairments, including memory lapses and poor decision-making, are common outcomes of excessive drinking.
Furthermore, alcohol abuse can fuel the development of severe mental health disorders, such as alcohol use disorder and alcohol-induced psychotic disorders. Long-term excessive alcohol use can alter brain structure and function, making it harder to quit drinking.
The psychological impact extends beyond the individual, straining relationships and contributing to social isolation. Realizing these effects underscores the importance of responsible alcohol consumption and seeking help when needed.
Social and Economic Impact
Excessive alcohol consumption reverberates through society, leaving a trail of social and economic challenges. On a social level, it contributes to family breakdowns, domestic violence, and strained relationships. It fuels accidents, including drunk driving incidents, endangering lives.
Economically, the consequences are staggering. Healthcare costs surge due to alcohol-related illnesses and injuries. Lost productivity at work, absenteeism, and job loss create a substantial economic burden. Law enforcement expenses associated with alcohol-related crimes add to the tally.
Moreover, the impact on the criminal justice system and rehabilitation programs strains public resources. Addressing excessive alcohol consumption is vital for individuals’ well-being and communities’ financial stability.
Harm Reduction: Explained
Harm reduction is a pragmatic and compassionate approach to addressing various health and social issues, primarily substance use, including alcohol and drugs. Rather than pursuing abstinence as the only goal, harm reduction seeks to reduce the adverse effects of substance use. Here’s how it works:
Safety First: The core principle of harm reduction is prioritizing safety. It acknowledges that people will engage in risky behaviors, and the goal is to make them less harmful rather than eliminating them.
Education and Outreach: Harm reduction programs provide education and outreach to individuals using substances. They offer information about safer consumption practices, such as avoiding mixing alcohol with energy drinks.
Access to Resources: Harm reduction initiatives strive to increase access to essential resources like free taxi services to avoid drunk driving.
Non-Judgmental Approach: It adopts a non-judgmental and empathetic stance toward individuals struggling with alcohol use disorder, recognizing that stigma and punitive measures often deter people from seeking help.
Measuring Success: Success in harm reduction is measured by reductions in the adverse effects of alcohol use, such as drunk driving deaths or alcohol poisoning, rather than by sobriety alone.
Community Involvement: Harm reduction often involves the community, healthcare professionals, and social workers working collaboratively to create safer environments for everyone.
Harm reduction has proven effective in saving lives and improving public health outcomes by acknowledging the complexities of alcohol addiction. It offers a bridge to treatment and support for individuals while minimizing harm to themselves and their communities.
Moderation vs. Sobriety: Finding Balance in Substance Use
The debate between moderation and sobriety revolves around how individuals approach their relationship with alcohol.
Moderation involves consuming alcohol in controlled and responsible amounts. It acknowledges that some people can enjoy alcohol without harm, emphasizing balance and self-control.
Sobriety, on the other hand, advocates complete avoidance of alcohol. It’s often recommended for individuals who struggle with addiction or have a substance abuse history.
The choice between moderation and sobriety depends on individual circumstances: some people benefit from the flexibility of moderation, while others find total sobriety the safer path. Ultimately, both approaches aim for a healthier and more fulfilling life.
Removing the Stigma of Getting Help
Breaking the stigma surrounding seeking help for alcohol addiction is vital for fostering a supportive society. Stigma often discourages individuals from reaching out, perpetuating suffering in silence.
We must educate and raise awareness to remove this barrier, emphasizing that seeking help is a sign of strength, not weakness. Sharing personal stories and normalizing the need for support can empower others to seek assistance without fear of judgment.
By creating an environment of empathy, we can ensure that everyone has access to the help and resources they deserve. That is how we promote overall well-being and resilience.
Reduce Harm on a Night Out
Harm reduction is a compassionate approach to minimizing the negative consequences of substance use. Here are practical harm-reduction tips for individuals:
- Know Your Limits: Understand your tolerance and consumption thresholds to avoid overindulgence.
- Hydrate and Eat: Drink water and have a meal before or during alcohol use to prevent dehydration and stabilize your blood sugar levels.
- Use Safer Methods: Opt for lower-alcohol drinks, such as a glass of wine instead of a strong cocktail, or alternate alcoholic drinks with water.
- Avoid Mixing Substances: Combining liquors with other substances can lead to unpredictable effects and heightened risk.
- Plan Transportation: Arrange for a sober driver or alternative transportation to avoid driving under the influence.
- Reach Out for Support: Don’t hesitate to seek help for alcohol abuse. Support groups, counselors, and treatment services can provide valuable assistance.
- Practice Safer Sex: If engaging in sexual activity while drinking, use protection to prevent sexually transmitted infections (STIs).
Remember, harm reduction is about minimizing risks, not promoting drinking. These tips prioritize your well-being and safety while recognizing that alcohol is a socially and legally accepted drug in the United States.
Pathways to Treatment
Accessing treatment for substance use disorders involves multiple pathways:
- Self-Initiated: Individuals voluntarily seek treatment, driven by their awareness of the problem.
- Family Intervention: Concerned family members or friends stage an intervention, encouraging the individual to seek help.
- Medical Referral: Healthcare professionals identify substance use issues during routine check-ups or hospital visits and refer patients to treatment.
- Legal Involvement: Legal consequences, such as court-mandated treatment, can push individuals into seeking help.
- Community Support: Local support groups, community centers, or social services may connect individuals with treatment resources.
- Crisis Situations: Alcohol poisoning or other emergencies may lead to immediate treatment through emergency services.
Understanding these pathways is essential for tailoring interventions and support, helping individuals with substance use disorders embark on recovery.
FROM ISSUE NUMBER 6 ~ WINTER 2011
Too often, the ends that policymakers pursue are poorly served by the means through which they pursue them. Examples of this problem abound today — but the clearest must surely be federal policymakers' efforts to combat poverty through the redistribution of income.
Like the governments of all other modern democracies, the United States government redistributes the incomes of its citizens on a massive scale. And in America, as elsewhere, the public generally supports such redistribution in principle, on the understanding that it is intended to help the poor. The lives of the needy, the argument goes, would be far worse without this aid, and presumably such redistribution is designed to avoid undue harm to everyone else.
Whether one agrees with it or not, this popular understanding of redistribution's purpose yields some useful criteria for assessing the degree to which our redistribution programs are actually succeeding. The aims of helping the poor and minimizing harm to everyone else, in other words, offer some specific ends against which our means of redistribution can be tested.
By and large, those means take three forms. First, there are direct anti-poverty programs, like Temporary Assistance to Needy Families (what we commonly think of as welfare), food stamps, Medicaid, and the Earned Income Tax Credit. Second, there is progressive taxation, which transfers wealth from richer to poorer Americans across the income distribution. And third, there are policies that tilt economic outcomes in specific markets to benefit people with lower incomes (minimum-wage laws are a classic example).
These programs are well entrenched in our public life, and each has its own assertive army of supporters. But when these programs are measured against the goals for which they are employed, they reveal an approach to redistribution that is both misguided and excessive. Almost all of our means of redistribution today lack convincing philosophical or empirical justification. They are poorly targeted, expensive, economically inefficient, and in many cases do more harm than good.
Our approach to redistribution thus requires a profound re-orientation. We need to think through the relationship between what such programs should aim to do and what the programs we now have actually do. We need to get back to basics, and ask what a straightforward and economically rational approach to alleviating poverty would look like. Above all, the federal government must focus its anti-poverty efforts on the people most deserving of help, while minimizing the cost to everyone else. Though this sounds like common sense, it is far from today's reality.
MEANS AND ENDS
The scope of income redistribution in America is truly immense. In 2007, the last year not significantly affected by the Great Recession, the federal government spent about $1.45 trillion (roughly half of its total spending) on programs aimed at redistributing income from wealthier Americans to the less wealthy. These ranged from means-tested entitlement programs like Medicaid, housing assistance, unemployment compensation, and food stamps to broader entitlements like Medicare and Social Security (which are not means-tested, but nonetheless transfer income on a mass scale and are generally justified on the grounds that they reduce poverty among the elderly).
Beyond these direct anti-poverty initiatives, the progressive federal income tax redistributes income even further. Progressive taxation means that the tax rate increases along with a taxpayer's income; a person earning $50,000 per year, for example, might pay $10,000 in taxes while a person earning $100,000 per year might pay $30,000. The higher-income person not only pays more taxes in total: He also pays a larger portion of his income (30% versus 20% in the preceding example). The wealthier give up a greater share of their earnings so that the less wealthy can forfeit a smaller portion of theirs; in this way, our progressive tax code is redistributive. The table below displays the effective tax rate (defined as total federal taxes paid — including payroll taxes to fund Social Security and Medicare — divided by total income earned) for different segments of the income distribution. It also shows the share of the nation's total federal tax burden shouldered by each income group.
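The arithmetic behind these effective rates can be sketched in a few lines of Python. The two taxpayers are the hypothetical examples given above, not real tax data:

```python
def effective_rate(total_tax: float, total_income: float) -> float:
    """Effective tax rate: total federal taxes paid divided by total income earned."""
    return total_tax / total_income

# The two hypothetical taxpayers from the text:
lower = effective_rate(10_000, 50_000)    # 20% of a $50,000 income
higher = effective_rate(30_000, 100_000)  # 30% of a $100,000 income

# The higher earner pays more tax in total AND a larger share of his income,
# which is what makes the schedule progressive.
assert higher > lower
```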
The table depicts an immense project of redistribution. In 2007, the poorest 20% of taxpayers — about 24.6 million households — paid federal taxes at an effective rate of 4% and shouldered 0.8% of the total tax burden. Meanwhile, the richest 1% of taxpayers — roughly 1.2 million households — paid federal taxes at an effective rate of 29.5%, and shouldered roughly 28.1% of the total federal tax burden. (State income taxes also tend to be progressive, though to a lesser degree than federal taxation and to varying degrees in different states.)
Whether the progressivity of the federal tax code is excessive depends on what one considers an appropriate degree of burden-sharing — a question that is of course not purely economic. But no one can deny that the burden of taxation today is highly uneven across income groups, and that our tax system is therefore highly redistributive (especially since roughly half of federal revenues are used to pay for anti-poverty programs that provide additional support to low earners).
Beyond anti-poverty spending and progressive taxation, the federal government also intervenes in specific markets with the implicit or explicit goal of shifting income toward poorer segments of the population. These interventions include (among others) special protections for unions, limits on prescription-drug prices in Medicaid, agricultural subsidies, trade protections, and mortgage guarantees and subsidies. In each case, the policy in question undoubtedly harms economic productivity rather than improving it: Union protections, for instance, artificially raise the cost of skilled labor; limits on drug prices discourage research and development on new procedures and medicines; farm subsidies reduce agricultural production and increase food prices; trade protections distort the decisions of domestic producers and consumers; and mortgage subsidies lead to over-investment in housing and, as we have learned all too painfully in recent years, can result in dangerous bubbles. The interventions are nevertheless justified with claims that they promote "fairer" prices or other positive outcomes for low-income participants in the relevant markets.
These three pillars of our system of redistribution are not equally problematic. But to understand which are more justifiable and which less so, we must first ask why our government redistributes income at all — and why the public, on the whole, believes that it should.
The first common argument for anti-poverty spending is that the alleviation of poverty is what economists call a "public good" — something everyone would like to see provided, but which few people provide voluntarily because they hope others will do it for them. According to this view, private charity alone would reduce poverty less than everyone would like; only government can address this under-provision, by levying taxes on the non-poor and making transfers to the poor. The poor benefit from such government policy because they receive a higher standard of living, and wealthier people benefit because they value helping the poor more than they resent their higher taxes (for reasons of sheer compassion, or because they believe that helping the poor is a way to achieve other desired ends, like reducing crime). As far as the "public good" advocates are concerned, government redistribution leaves everyone better off.
A second argument for anti-poverty spending holds that private markets do not allow people to insure themselves against the misfortune of being born into poverty. None of us can choose the circumstances into which we are born, but if we were to develop a society that took account of this uncertainty (and so was constructed behind a "veil of ignorance," to borrow the language of the late Harvard political philosopher John Rawls), most people would design it in a way that allowed them to purchase some kind of insurance against poverty. They would, in other words, accept the obligation of paying higher taxes if they were to end up rich in exchange for protection against being left thoroughly destitute were they to end up poor.
Private markets do not provide this "income insurance" because the people who might benefit from it are generally not able (or not motivated) to demand it in advance. Those not yet born, of course, are not around to demand it; their parents could buy the insurance, but the parents whose children would need it most are too poor to afford it. And markets might not even supply this insurance — because insurers would fear that their product would be purchased disproportionately by parents who know, for some reason, that their children will end up poor (much as health insurers either deny coverage, or charge significantly higher rates, to people with costly pre-existing medical conditions). Young workers who might someday benefit from such protection, meanwhile, are often not aware of the risk until it is too late. The conditions simply do not exist for a real market in private anti-poverty insurance to emerge.
Government, however, can address this problem by compelling everyone to participate in social insurance. The "premium" payments consist of the requisite taxes; the payouts consist of welfare benefits distributed to those "enrollees" who end up with low incomes. The imposition of social insurance would not necessarily raise everyone's welfare; people with minimal prospects of ending up poor might regard such insurance as all cost and no benefit. By and large, though, many who are not born poor — as well as those who are — would benefit from this policy.
The third argument for redistribution (which combines some elements of the first two) is a moral one. It asserts that helping the poor is simply the right thing to do. This view assumes that differences in income are driven mostly by luck rather than by effort. And it presumes that the poor benefit more from receiving wealth transfers than the non-poor suffer from paying for the transfers — such that redistribution is a net good for society as a whole.
Each of these arguments for anti-poverty spending may be reasonable as far as it goes, but that alone is not enough to justify such spending. While these perspectives suggest that anti-poverty programs can generate benefits, any responsible evaluation must balance these benefits against the programs' costs. A thorough assessment of redistribution, then, requires a serious examination of the negative consequences of our anti-poverty efforts. And a look at the way redistribution generally plays out in America today indicates that those downsides can be significant indeed.
COSTS AND BENEFITS
In looking at the drawbacks of anti-poverty programs, it is important to consider both the direct costs and the less tangible, but still potentially serious, indirect costs. One of the chief direct costs is the way anti-poverty spending alters incentives: Such programs reduce the reasons for potential recipients of income transfers to work and save, because the availability of aid — and particularly of aid that is available only as long as one remains below a certain level of income — can discourage people from striving to rise above that income level. On the other side of the ledger, the taxation required to pay for anti-poverty programs discourages effort and savings by those who pay for the transfers. People are less inclined to work hard when they know that a large portion of the income they generate will go not to them or to their families, but rather to total strangers.
Whether these effects on economic productivity are large depends on the generosity of the anti-poverty spending, as well as on the magnitude of the taxes required to pay for it. A promise of subsistence income will induce only a few people to live off the dole, while a more substantial guarantee will cause many to reduce their efforts to support themselves. Likewise, the distortions caused by the taxation required to pay for anti-poverty programs increase with the amount of anti-poverty spending, such that a small program will have only small effects on taxpayers' work and saving.
As we have seen above, however, American anti-poverty programs are rather generous. If the $1.45 trillion in direct anti-poverty spending in 2007 had been simply divided up among the poorest 20% of the population, it would have provided an annual guaranteed income that year of more than $62,000 per household — a decidedly middle-class living. The actual distribution of the money is of course less direct; overhead, waste, and other inefficiencies are intrinsic to the operation of these government programs. Moreover, much of the redistribution goes to middle-class families, so the poor are not really provided with a middle-class income. Even so, the support they do receive is substantial. It is hard to believe, therefore, that this generosity — and the tax burden imposed on the rest of the population to pay for it — does not reduce effort among the poor and everyone else, as abundant evidence confirms.
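The division behind that guaranteed-income figure is a simple back-of-the-envelope calculation. Note that the household count below is an assumption (roughly one fifth of the approximately 116 million U.S. households in 2007); the text does not state the exact divisor it used:

```python
# Back-of-the-envelope check: 2007 direct anti-poverty spending divided evenly
# across the poorest fifth of households.
total_spending = 1.45e12             # $1.45 trillion (figure from the text)
bottom_quintile_households = 23.2e6  # ASSUMED: ~1/5 of ~116M U.S. households

per_household = total_spending / bottom_quintile_households
print(f"${per_household:,.0f} per household")  # on the order of $62,000 a year
```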
Anti-poverty spending has more subtle downsides, too. Income support for the poor generates envy and demand for transfers from the near-poor. This effect can be seen in programs that have expanded eligibility requirements to well above the poverty line, such as Medicaid in many states, along with the new federal health-care law. The result is further disincentive for effort at the margins of poverty, and additional burdensome taxation beyond. Anti-poverty programs also promote the view that low income is someone else's fault — a notion that, once enshrined in public policy, can reduce many people's work ethic, initiative, and self-reliance. Rather than devoting themselves to increasing innovation and productivity, people throw their energies into chasing government transfers.
Moreover, anti-poverty programs lend credence to the claim that most people will not share their resources unless government compels them to. The evidence of daily life in America, however, shows that assumption to be false. Private efforts to alleviate poverty are enormous: Religious institutions operate soup kitchens; the Boy Scouts organize food drives; the Salvation Army raises money for the poor; Habitat for Humanity builds homes; and doctors' associations provide free health care. In 2009, Americans gave more than $300 billion to charity, a figure made all the more striking by the deep recession. More than 60 million people volunteered, donating some 8 billion hours of work — much of it in efforts aimed at helping the poor. Government charity, moreover, may crowd out private charity: Work by Massachusetts Institute of Technology economist Jonathan Gruber, for example, shows that New Deal welfare expenditures reduced charitable spending by churches.
Finally, and perhaps most important, anti-poverty programs have not obviously reduced poverty in recent decades. The following graph shows the poverty rate in the United States (i.e., the percentage of the population living at or below the poverty level) between 1962 and 2009, along with per capita federal spending on the direct anti-poverty programs discussed above (i.e., those that added up to $1.45 trillion in 2007), adjusted for both population growth and inflation.
The poverty rate declined significantly between 1959 and 1964 as the overall economy grew, but this was before the launch of President Lyndon Johnson's War on Poverty (and therefore before the enactment of Medicaid and Medicare, two of the largest anti-poverty programs in existence today). Between 1966 and 2009, the poverty rate fluctuated with the business cycle but showed little downward trend, even as anti-poverty spending grew by a factor of six in real terms. In other words, our anti-poverty programs have been implemented at enormous and ever-increasing cost — and it is not clear that they have done much to reduce the rate of poverty.
The arguments for redistribution are even weaker when it comes to progressive taxation. This is because such taxation redistributes income not only from the rich to the poor, but also among the non-poor. And unlike the social, cultural, and economic divisions that keep the seriously poor locked in poverty for generations, differences in income among the non-poor tend to reflect a variety of personal decisions — such as education, effort, risk-taking, and leisure. Likewise, households in the middle and upper parts of the income distribution have reasonable opportunities to insure against job loss or adverse health events through savings, help from family, working spouses, and other means; generally, they can make it just fine without government's compelling other citizens to pay for their welfare. As a result, the reasons used to justify redistribution from the rich to the poor — i.e., altruism toward people who are needy through little fault of their own, or the need to insure against low income — do not apply when it comes to redistribution among the non-poor.
Redistribution across the income spectrum — instead of simply from rich to poor — also exacts much higher costs. Beyond the fact that providing benefits to middle-class Americans is not a sensible goal of anti-poverty policy, the greater amount of redistribution that it involves requires vastly increased taxation. Progressive tax schemes, moreover, bring about significant economic distortions and inefficiencies because they impose the highest rates on upper-income households, and these households have the most flexibility to shift their energies away from work that generates income (and thus tax revenue), or to channel their savings to tax-preferred investments. They are also, incidentally, the households most likely to generate employment, hiring others to work as nannies, housekeepers, gardeners, and the like. By taking away such a vast amount of their income in order to give more money to the non-poor, such redistributive tax schemes arguably prevent hiring that would allow many of the legitimately poor to improve their condition through paid work.
The redistribution among the non-poor that progressive taxation aims to achieve thus lacks any convincing justification. But it does have substantial costs. Unfortunately, no perfect dividing line exists between redistribution to alleviate genuine poverty and this broader type of redistribution. But policymakers can nevertheless strive to enforce a distinction in general, and to limit attempts at redistribution to those efforts that explicitly help the poor (perhaps by, say, means-testing entitlement programs).
In the case of interventions aimed at tilting outcomes in specific markets toward people with lower incomes, the problems that arise are even more immense than those that result from direct anti-poverty spending or progressive taxation. These interventions generally produce ambiguous (at best) effects on the distribution of income, but they most decidedly distort economic incentives. Minimum-wage laws, for instance, mean higher incomes for some workers but zero income for others, since the minimum wage reduces employment. (A 2008 survey by economists David Neumark and William Wascher confirms this effect especially for the least skilled workers.) Mandated minimum wages that are well above market wages also drive businesses overseas or underground, leading to large reductions in employment opportunities for less-skilled laborers. The minimum wage also distorts firms' decisions about cost efficiency: Seeking more value for their dollars, business managers and owners decide instead to spend their money on capital investments or higher-skilled workers. Minimum wages are thus a good example of how redistributive market interventions are often poorly targeted, and of how they reduce economic productivity.
Other examples abound. Price controls on pharmaceuticals at first appear to be compassionate; in the short term, they can help low-income households afford medicine. In the long run, however, price controls reduce the incentive for drug makers to innovate (and deprive them of capital to invest in research and development). The result is fewer new medications, yielding fewer health-care options for everyone, rich and poor.
Rent controls lower housing costs for those lucky enough to obtain covered apartments, but such controls reduce the supply of rental housing for everyone else. The beneficiaries often include people with good connections or the time to search for controlled apartments, or those whom landlords think will be good tenants (as in the case of Harlem congressman Charles Rangel, who secured no fewer than four rent-controlled apartments in the same luxury building). Many of these people have higher, rather than lower, incomes, so the redistribution involved is partially perverse. (Even in cases where building managers are required to reserve a certain portion of their units for people below a certain income level, these apartments will often be rented out to the temporarily poor — recent graduates of elite universities, say — rather than to those who genuinely need subsidized housing over the long term.)
Another particularly telling example is government support of mortgage lending. Implicit government guarantees of mortgages — through purchases by Fannie Mae and Freddie Mac, as well as other policies — encourage lenders to extend loans to riskier and riskier borrowers. Some of these people are poor, but many are not; indeed, the poorest households are not in the housing market at all, as they are almost always renters. And, as noted above, such policies can lead to dangerous market instability — as well as to bubbles like the one that set off the current recession.
Regardless of what one thinks of direct anti-poverty spending, then, one should certainly be critical of progressive taxation and market interventions as ways to help the poor. These approaches have fewer convincing arguments in their favor — indeed, no good arguments at all — and they impose substantially greater costs. Government attempts at redistribution to help the poor should therefore consist, at most, of direct transfer programs. And these programs should obviously be far better configured than the ones we have today.
AN ALTERNATIVE APPROACH
So how should they be designed? Ideally, to maximize the amount of help available to the poor while minimizing the costs to the larger economy. Anti-poverty spending should therefore involve one simple and explicitly redistributionist program: We should repeal all of our existing anti-poverty programs — TANF, food stamps, housing allowances, energy subsidies, mortgage guarantees, Social Security, Medicare, and so on — and replace them with a so-called "negative income tax."
A negative income tax — an idea advocated by Nobel prize-winning economist Milton Friedman — would have two key components: a minimum, guaranteed level of income, and a flat tax rate that is applied to the total amount of income (if any) that a person earns. The net tax owed by any taxpayer would equal his gross tax liability — that is, his earned income multiplied by the tax rate — minus the guaranteed minimum income. If the gross liability were to exceed the guaranteed minimum, the taxpayer would owe the difference. If the gross liability were to fall short of the guaranteed minimum, the government would pay the difference to the taxpayer.
To illustrate, consider a negative income tax under which the guaranteed minimum is $5,000 and the tax rate is 10%. In this situation, a person earning no income would receive a transfer from the government of $5,000 and have a total income of $5,000. A person earning $100,000 would have a gross tax liability of $10,000 and a net tax liability of $5,000, for a total after-tax income of $95,000. A person earning $10,000 would have a gross liability of $1,000 and a net liability of negative $4,000 (that is, this person would receive a check from the government for $4,000), for a total after-tax income of $14,000.
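The schedule just described reduces to a one-line formula, which this sketch encodes along with the three worked cases (the function names are illustrative, not Friedman's notation):

```python
def net_tax(income: float, rate: float = 0.10, guaranteed_min: float = 5_000) -> float:
    """Net tax owed under a negative income tax: gross liability (income * rate)
    minus the guaranteed minimum. A negative result is a payment TO the taxpayer."""
    return income * rate - guaranteed_min

def after_tax_income(income: float) -> float:
    """Earned income minus net tax (or plus the transfer, when net tax is negative)."""
    return income - net_tax(income)

# The three worked cases from the text ($5,000 minimum, 10% flat rate):
print(net_tax(0))                 # a $5,000 transfer to the taxpayer
print(after_tax_income(100_000))  # $10,000 gross liability, $5,000 net owed
print(net_tax(10_000))            # a $4,000 check from the government
print(after_tax_income(10_000))   # $14,000 total after-tax income
```

Because the formula is linear, every earner faces the same 10% marginal rate, avoiding the benefit "cliffs" that today's means-tested programs create.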
This proposal — replacing all anti-poverty programs with a negative income tax — has some crucial advantages over the existing hodgepodge of anti-poverty programs. First, the administrative costs of running one simple program would be far lower than the costs of running many complicated programs. Separate taxes to fund Social Security and Medicare would not exist; employers would not have to withhold these taxes and send them to the government. More broadly, the rules and incentives faced by potential welfare recipients would be clear and understandable, compared to the dizzying array of rules that must be navigated by people attempting to collect anti-poverty benefits today.
Second, the amount of redistribution would be totally transparent. Anyone could figure out exactly how much any individual would pay for or receive in benefits. This would be a major change from today's circumstances, under which one must aggregate over multiple programs and convert in-kind transfers to cash equivalents in order to gauge the magnitude of redistribution. Under a negative income tax, it would be easier to tell precisely how much redistribution the government engages in.
Because of this transparency, the negative income tax would likely transfer substantially less income than the panoply of existing programs — which would certainly drive opposition to the negative income tax. Advocates of anti-poverty spending would fear, with good reason, that if the amount of anti-poverty spending were obvious, society would vote for less of it. Likewise, they would worry that if redistribution were undertaken only for the benefit of the poor, middle- and upper-income voters would endorse less of it. That may well be true — but no one who believes in democracy can reasonably argue that our anti-poverty policies should be based on a grand deception of the voting public. (Anti-poverty advocates might also consider that much of what people object to in terms of today's anti-poverty spending is fraud, waste, mismanagement, and other inefficiency; the transparency provided by a negative income tax could alleviate a great deal of this opposition.)
One legitimate concern about the negative income tax is its effect on incentives to work and save. A substantial amount of anti-poverty spending today takes the form of Social Security and Medicare — benefits that recipients get once they turn 65, almost regardless of how hard they have worked or how much they have saved. Earning more and saving more therefore does not result in lower payments. The negative income tax, by contrast, would punish earnings and savings to some degree; converting Social Security and Medicare into a negative income tax, then, would create some disincentives to work and save, especially at the margins of poverty.
Even so, the negative income tax might still introduce less distortion and inefficiency than our existing array of anti-poverty programs, because these programs often incorporate much higher (implicit) tax rates on income. In other words, the various transfer programs we operate today sometimes take away so much in benefits when a recipient's income increases that the recipient is only slightly better off — or, in a few cases, is actually worse off — for getting a job or working more hours. For example, increasing income beyond a certain level can mean complete loss of Medicaid benefits, which can amount to thousands of dollars. Under the negative income tax, though, there is no point at which government benefits disappear all at once; transfer payments phase out gradually with each dollar of income earned, so that the marginal work disincentive would be modest.
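The contrast between a benefit cliff and the negative income tax's gradual phase-out can be made concrete. A stylized sketch — the $3,000 benefit and $20,000 cutoff are invented round numbers for illustration, not actual Medicaid parameters:

```python
from fractions import Fraction

def cliff_income(earned, benefit=3_000, cutoff=20_000):
    """Stylized cliff program: a fixed benefit that disappears
    entirely once earned income exceeds the cutoff."""
    return earned + (benefit if earned <= cutoff else 0)

def nit_income(earned, guaranteed_min=5_000, rate=Fraction(1, 10)):
    """Total income under the negative income tax of the earlier
    illustration: the transfer shrinks by only 10 cents per dollar
    earned, so it never drops off all at once."""
    return earned - (earned * rate - guaranteed_min)

# Crossing the cliff: one extra dollar of earnings costs $2,999.
assert cliff_income(20_000) == 23_000
assert cliff_income(20_001) == 20_001

# Under the NIT, earning more always leaves you better off.
assert nit_income(20_001) > nit_income(20_000)
```

The cliff program's implicit marginal tax rate at the cutoff exceeds 100%, which is exactly the work disincentive the text describes; the NIT's marginal rate is a constant 10% everywhere.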
It is difficult, therefore, to determine the exact net impact on incentives of a negative income tax versus our current system, holding the amount of anti-poverty spending constant. Overall, however, a negative income tax is likely to produce far less total spending, and thus lesser and fewer economic distortions. Compared to the system we have today, that would be a major change for the better.
The notion of replacing our existing anti-poverty spending with a negative income tax has one additional implication that merits discussion: Under this approach, all anti-poverty benefits would take the form of cash. This would differ from today's benefits, which come in the form of specific goods (as occurs with Medicaid, school lunches, and government housing projects) or of spending power constrained to specific goods (as occurs with food stamps, Section 8 housing vouchers, and loans for education).
The benefit of transferring cash is that it gives recipients maximum flexibility to decide how they will spend the money they receive. This flexibility is valuable because the right mix of spending will be different for every recipient and every poor family. Some people will decide that their highest priority is a good school for their children; others will decide they need to spend money on transportation to a job; still others will spend their money on medical care, and so on. Assuming reasonable spending decisions, cash benefits maximize the improvement in recipients' well-being for any given amount of income transferred. In-kind transfers, by contrast, force recipients to "purchase" not only a particular kind of good but also a specific "brand" of good — for instance, a residence in a particular housing project, or health care at a specific hospital that serves Medicaid patients. The result is severe limitations on poor people's options and independence.
One possible objection to cash transfers is that some recipients might make bad spending decisions. A common concern is that poor people will spend their cash transfers on alcohol, drugs, or gambling. Forcing recipients to use their benefits to procure basic necessities, the argument goes, might thus increase poor people's welfare, especially that of poor children.
Such fundamentally paternalistic concerns are surely overstated. Relatively few parents, for example, deprive their children of food, shelter, and clothing, no matter how dire their own circumstances. But paternalistic concerns are not entirely unfounded, and in any case they carry great weight with the general public.
The best way to address such concerns in a system of cash transfers is through vouchers. Although these can be redeemed only for broad categories of goods — such as food, education, or housing — recipients still retain a great deal of flexibility regarding the exact purchases they make and the vendors from whom they choose to make those purchases. Housing vouchers, for example, force recipients to spend their transfer funds on housing; the money the vouchers provide, however, can be used in any neighborhood, not just at a government housing project in an inner-city slum. Vouchers are also preferable to the direct provision of goods by government because they leave production to the private sector, which is typically more efficient. At the same time, vouchers pose little risk of serious misuse: One can spend an education voucher on a mediocre school, but one cannot easily gamble it away at the race track.
A second possible objection to the cash-only approach is that the markets for some goods might work badly on their own, such that direct provision of these goods by the government is the only way to supply them at low cost to poor people. The standard example invoked in this argument is health insurance. A considerable body of economic analysis claims that private health-insurance markets do not operate well because of adverse selection — the tendency of people with worse health to be more likely to buy insurance. But these concerns, too, miss the point. Private insurers can readily identify who is healthy (or not), and, equipped with this knowledge, they would happily insure everyone they could, as long as they could charge appropriate premiums. The problem is that such premiums — the especially high ones for sick customers — would price some people out of the market. Thus the main justification for subsidizing health care is not fixing a market inefficiency, but redistribution. And indeed, the direct provision of health insurance by the government is one of the major sources of inefficiency — and therefore high costs — in our health-care system today, as the expenses of Medicare and Medicaid spiral out of control.
But while concerns about a dysfunctional health-insurance marketplace may be excessive, they are not entirely misplaced. When it comes to health care — more than any other good the government might provide to the poor — there may be an argument for some direct provision of insurance, rather than simply the money to purchase it. Means-tested health-insurance vouchers can guarantee the poor a minimal amount of health insurance with relatively few adverse effects on the health-care marketplace.
The ideal system of redistribution to benefit the poor in America, then, would involve mainly cash transfers. Possible exceptions might include vouchers for education and health insurance.
Where possible, these exceptions should be managed at the state level; to the extent that the federal government is involved, it should supply funding to the states through block grants — which provide a set amount of money to the states, generally based on each state's population, and allow the states to make their own decisions about how to allocate the funds. This, in turn, allows more decisions about benefit allocation to be made by people closer to the ground — people who have a better understanding of the lives and needs of benefit recipients. It also gives the states more flexibility to experiment with different approaches.
Indeed, the negative income-tax system as a whole might well work best at the state level — that is, if it replaced both federal and state redistribution efforts with 50 state-level negative income-tax systems. Advocates of redistribution fear that state-level negative income taxes would start a "race to the bottom," in which states slash benefits to avoid attracting poor populations. But in fact, numerous examples — including minimum-wage laws, TANF and Medicaid provision, and pre-1935 state-level Social Security programs — show that states can exhibit substantial altruism, so that state-level anti-poverty efforts would not be miserly. Concern over in-migration might nevertheless counter political pressures in favor of excessive expansion, yielding a better balance than a federal system could attain. And state experimentation would no doubt produce some innovations and improvements that a federal system would fail to discover. Of course, a transition from our current system to a negative income tax might be easier if it first involved a transition to one federal negative income tax. In an ideal world, however, such a system would eventually be turned over to the states.
A MORE RATIONAL SAFETY NET
Moving from the complex and cumbersome redistributive anti-poverty scheme we have today to the one proposed above would be a truly radical reform. A system in which government would eschew intervention in specific markets, abandon progressive taxation, combine nearly all existing anti-poverty programs into a negative income tax, and assign as much of the remaining work of redistribution as possible to the states could come about only through a dramatic shift in our thinking about anti-poverty policy. And such shifts never come easily.
Still, the goals in question — helping people who are truly in need, while also not weakening the broader economy — are so important, and so poorly served by today's anti-poverty programs, that they simply demand such a change. Of course, an anti-poverty policy that met these two criteria might still be far from perfect: Even a minimalist negative income tax would have some distorting effects. Keeping such a policy minimal, moreover, would be a constant struggle. Still, this approach would certainly be an improvement on our existing array of anti-poverty tools. It would be better designed to advance the goals shared by much of the public; it would also avoid the immense costs and dangerous pitfalls that, too often, have accompanied America's anti-poverty efforts over the past half-century.
Most of all, the discussion surrounding these proposals would force Americans to carefully consider some important issues we have ignored for far too long: the justice of redistribution, the wisdom and effectiveness of the methods through which we pursue it, and whether those methods have met any reasonable empirical standards of success. Given how much of our citizens' resources, and our nation's politics, are consumed by these questions, the least we can do is devote some serious attention to them — and, in the process, perhaps finally embrace an anti-poverty strategy that makes sense.
Jeffrey A. Miron is a senior lecturer and director of undergraduate studies in the Department of Economics at Harvard University, and a senior fellow at the Cato Institute. He is the author, most recently, of Libertarianism, from A to Z.
Embedded in US history is a rich tradition of combat bravery, heroism, and patriotism exhibited by African Americans. In fact, the heroic contributions of AAs predate the USA itself. Many AAs fought for the colonies in pre-Revolutionary War skirmishes against the British, and an AA named Crispus Attucks had the dubious distinction of becoming the first fatality of the War when he was killed in the “Boston Massacre” in 1770. Below is a brief outline of the combat history of AAs and Japanese Americans in each war.
Revolutionary War
AAs fought on both sides, both as freedmen and slaves. According to renowned Revolutionary War historian Gary Nash, approximately 9,000 AAs fought for the Patriots in the Continental Army and Navy. For example, despite official prohibition by the Continental Congress, both George Washington and Francis Marion, aka “The Swamp Fox,” relied on them heavily, even in combat. It is estimated that, at times, as much as half of Marion’s forces consisted of freed AAs. Noted historian and author Ray Raphael adds that many slaves also fought for the British. Many, if not most, of them were no doubt motivated by Emancipation Proclamations issued by the military leaders of British forces in some of the colonies, notably Virginia and New York. Although some served in combat, most, particularly on the British side, served as orderlies, mechanics, laborers and in other support capacities.
War of 1812
There were no legal restrictions with respect to the enlistment of AAs in the Navy, for the very practical reason of manpower shortage. For example, in the pivotal Battle of Lake Erie approximately 25% of the American personnel were AAs. In addition, Andrew Jackson’s forces in the famous Battle of New Orleans, which was actually won after the War was officially over, were supplemented by the Louisiana Battalion of Free Men of Color and a unit of AA soldiers from Santo Domingo.
Mexican-American War
Again, the Louisiana Battalion of Free Men of Color was an active participant. Additionally, many AAs served as servants of or slaves to Army and Naval officers.
Civil War
Nearly 200,000 AAs participated. Approximately 163 units, consisting of both freedmen and runaway slaves, fought for the Union. The Confederacy utilized smaller numbers of both freedmen and slaves for labor and other support tasks.
Indian Wars
These conflicts occurred primarily between 1863 and the early 20th Century. AAs were very active participants. In 1866, the Army formed two regiments of all-AA cavalry and four of infantry. These were the first peacetime all-AA regiments. For the most part they were under the command of white officers, although occasionally there was an AA in command, such as Henry Flipper. Their primary roles were support-related, such as building and maintaining roads and guarding the US mail. These units were nicknamed “Buffalo Soldiers” by the Native American tribes in the area, and the moniker “stuck.”
Spanish American War
The most notable battle in which the Buffalo Soldiers fought in this war was the Battle of San Juan Hill in July 1898, made famous by the participation of Theodore Roosevelt and his “Rough Riders.” TR’s “Rough Riders” got most of the publicity and notoriety for the famous victory, but, in reality, it was the “Buffalo Soldiers” who had done most of the heavy fighting.
World War I
Approximately 350,000 AAs served. Again, most of them were utilized in support roles. Perhaps the most notable unit was known as the “Harlem Hellfighters,” which served with great distinction on the Western Front for several months, one of the longest-tenured units to serve on the Front. The “Hellfighters” were a new unit, organized in 1916 in NY. The unit consisted entirely of AA enlisted men with both AA and white officers. The unit earned over 100 medals, both French and American. Two members became the first Americans to earn the very distinguished French Croix de Guerre (Cross of War).
World War II
AAs enlisted in copious numbers but, regardless, were still not treated well or fairly. Segregation was very much alive and well in the Armed Forces. Approximately 125,000 AAs served overseas, primarily as truck drivers, stevedores, mess cooks, and in other support capacities.
One of the most notable units was the Tuskegee Airmen under the command of Benjamin Davis, Jr. The derivation of the name was based on the fact that all of the approximately 1,000 pilots in the unit had been educated at Tuskegee University in Tuskegee, Alabama and also trained in the area. These pilots were widely considered to be among the finest pilots in the Armed Services. During the War, they served with great distinction primarily in North Africa and Italy, earning a considerable number of medals. Afterwards, following integration, many of them became officers and instructors. Three of them became generals. Mr. Davis became the first AA general in the US Air Force. He was following in the footsteps of his father, Benjamin Davis, Sr., who had become the first AA general in the US Army in 1940. Quite the family. The unit has been the subject of countless books, movies and tv programs.
Another notable AA was Mr. Doris Miller. Mr. Miller was a mess attendant in the Navy stationed at Pearl Harbor on December 7, 1941. During the Japanese surprise attack he voluntarily manned an anti-aircraft gun against the Japanese with great distinction despite having had no prior training on the weapon. As a result of his extreme bravery he became the first AA to earn the Navy Cross.
In 1944 thirteen AAs, aka “The Golden Thirteen,” became the Navy’s first AAs to be commissioned as officers. One of them, Mr. Samuel Gravely, Jr., went on to become the first AA Admiral. Also, in 1944, the Allies were suffering heavy losses of combat soldiers during the pivotal Battle of the Bulge, and there was a severe shortage of replacements. As a result, General Eisenhower made the executive decision to integrate AAs into some white combat units. As always, AAs fought with great distinction. No doubt, the success of this de facto integration influenced President Harry Truman’s decision to order the integration of the Armed Forces, which he did in July 1948 by Executive Order. In turn, many believe that the integration of the Armed Forces influenced American Society as a whole toward overall integration of AAs.
This account would not be complete without mentioning the contributions of Americans of Japanese descent during WWII. Most of you are cognizant of the fact that because of the fear and hysteria following the attack on Pearl Harbor (not to mention racism and prejudice toward Asians in many quarters), Japanese living on the Pacific Coast of the US were placed in Internment Camps, where they suffered many deprivations and indignities. (I have blogged about this previously.)
Despite this, many of them, being patriots, volunteered to fight for the US. These JAs, aka “Nisei,” were second generation American citizens. They were assigned to segregated units under the command of white officers. They were active mostly in Italy, Southern France and Germany. One of these units, the 442nd Regiment, became the most highly decorated unit in the entire war. They were very brave and aggressive, and suffered very heavy casualties, earning the moniker “the Purple Heart Battalion.” It is estimated that many of the original 4,000 men had to be replaced over three times. The unit’s motto was “Go for broke!”
One of the unit’s most famous achievements was the rescue of a Texas-based unit that was hopelessly pinned down by superior German forces. This became a legend known as the “rescue of the lost battalion.” Thus, a unit comprised of “disgraced” persons, who had been considered “disloyal” and “untrustworthy,” if not out and out spies, ended up becoming heroes.
AAs and other persecuted minorities have a rich history of heroism on behalf of America in times of conflict. They have earned more than their fair share of medals and awards. Due to time and space, I have only recounted a fraction of their contributions. How ironic that these persecuted minorities, time and again, have, nevertheless, risen above these circumstances to achieve greatness. Only in America!
Breaking Bad may seem like a lot of fun and games, but what is often overlooked is that someone had to clean up the mess Walter and Jesse made. The discovery of a meth lab, or the waste from one, leaves law enforcement with a significant amount of hazardous waste to contain, manage, transport, and dispose of. Unfortunately, the generators of the waste are not often the type to be concerned about the environment or to take heed of the USEPA regulations regarding the “Cradle-to-Grave” management of hazardous waste. That leaves state and local law enforcement holding the bag, responsible for the disposal of the waste. Sometimes a small police force with a limited budget may find itself subject to the RCRA regulations as a large quantity generator of hazardous waste due to its discovery or closure of meth labs.
In an attempt to assist law enforcement the Methamphetamine Remediation Research Act of 2007 required USEPA to develop guidelines for the cleanup of meth labs and the disposal of waste associated with them. These are voluntary guidelines and are based on the best currently available knowledge in the field of meth lab remediation. A partial list of what the guidelines address includes:
- Hiring a contractor
- Worker safety and health
- Removal of contaminated materials
- Waste characterization and disposal procedures
- Detergent-water solution washing
- Outdoor remediation
- Recommended best practices for the remediation of specific materials (e.g., walls, ceilings, floors)
- Methods of collecting samples from a structure to identify the presence of methamphetamine
- Additional information and resources are included in the Appendices
From the USEPA Emergency Response website (here):
Guidelines Questions and Answers:
Why is EPA publishing these voluntary guidelines?
The Methamphetamine Remediation Research Act of 2007 required EPA to develop guidelines for remediating former methamphetamine labs. This document provides those guidelines for States and local agencies to improve “our national understanding of identifying the point at which former methamphetamine laboratories become clean enough to inhabit again.” The legislation also required that EPA periodically update the guidelines, as appropriate, to reflect the best available knowledge and research.
Who should use these guidelines?
The guidelines are geared towards state and local government personnel charged with remediating or otherwise addressing former methamphetamine (meth) labs. This document helps disseminate the best available knowledge and research on meth lab remediation and will also prove useful to cleanup contractors and could be a resource for homeowners.
Does this document create new regulations for meth lab cleanup?
EPA prepared this document based on best current practices to provide voluntary cleanup guidelines to state and local governments, cleanup contractors, industrial hygienists, policy makers and others involved in meth lab remediation. It does not set requirements, but rather suggests a way of approaching meth lab remediation. Those using this document should also consult their appropriate municipal, county or state guidance documents, regulations and statutes. This document is not meant to supersede municipal, county or state guidance documents, regulations or statutes (however this document may be useful as they develop and/or review and revise their own guidelines).
The generation of a hazardous waste, even in a legal environment such as a factory, has its own risks and dangers. Be sure you are in compliance with the regulations of the USEPA and those of your state if you are a generator of hazardous waste (Large Quantity Generator, Small Quantity Generator, or Conditionally Exempt Small Quantity Generator). I provide the training in any format you require (Onsite, Seminar, Webinar, or Learning Management System) to ensure compliance and a happy ending.
Daniels Training Services
As my husband and son sat challenging each other to see who could eat the most hot sauce, I had to laugh at the variety of pepper sauces and salsas on the table. Who knew there were so many kinds of chile peppers? You can find a wide assortment of colors and shapes at your local market right now, from sweet bells to spicy habaneros, and you may be inspired to increase your intake once you learn more about their myriad health benefits.
Health benefits of peppers are a hot topic. Although they have been used therapeutically for centuries, researchers are currently exploring their potential to assist with weight loss, pain reduction, indigestion, and disease prevention.
- Capsaicin, the substance that gives peppers their heat, has been found to speed metabolism and increase fat burning, but only when extremely large-dose capsules were taken. When eating tolerable amounts, effects are not significant.
- Pain reduction occurs when capsaicin-containing creams are applied to the skin, rather than ingested. One exception is stomach pain; while peppers can cause abdominal pain in some people, they have not been found to increase symptoms of heartburn. In fact, regular intake has been linked to decreased indigestion. Eating spicy peppers may also reduce the stomach discomfort and damage caused by anti-inflammatory drugs.
- Capsaicin also has the ability to clear congestion and relieve sinus pain. If you have ever suffered tearing eyes and a runny nose after a bite of a hot serrano, you understand.
- A variety of peppers have been associated with a reduction of: blood clotting, inflammation, free radicals, blood pressure, heart rate and insulin levels. While they alone cannot prevent diabetes, heart disease or cancer, peppers can play an important role in your healthy lifestyle.
Peppers are perhaps best known for their generous levels of nutrients, including fiber, vitamins, minerals, phytochemicals and antioxidants, which provide countless benefits.
- Antioxidants fight free radicals, which are responsible for the aging process and play a role in heart disease and cancer.
- Phytochemicals defend against inflammation, boost immunity and protect against cancer and heart disease.
- Fiber helps with digestion and can help with the prevention of diverticulosis, diabetes, weight gain, and heart disease.
- Vitamin A plays an important role in vision, the immune system and bone health.
- Vitamin C is necessary for growth and repair of tissues in all parts of your body and may reduce cancer risk.
- Potassium is critical for muscle movement, brain function, and maintaining blood pressure.
- Folate helps with new cell formation and growth.
- Vitamin B6 promotes brain and immune function.
- Lutein and zeaxanthin protect against ultraviolet light in the eyes and age-related macular degeneration.
Which peppers supply which specific nutrients? Most of them provide significant amounts of those listed here. For the most benefit, choose a colorful assortment, including bell, chile, jalapeno, cayenne, serrano, habanero and banana peppers. Whether you enjoy them fresh, roasted or dried, make peppers a regular part of your diet for both their flavor and health benefits.
Melissa Wdowik is an assistant professor at Colorado State University in the Department of Food Science and Human Nutrition, and director of the Kendall Anderson Nutrition Center.
© 2013 M2 COMMUNICATIONS
Books about “how to create comics” often talk about the two ways to write a comics story: “full script” and “plot first/Marvel-style.” These discussions leave out a third method.
Throughout the past century thousands of comics scripts have been generated by sketching the pictures on the page and handwriting or typing the dialogue on it. You can see a dramatization of Harvey Pekar using this method in the movie American Splendor. According to Mark Evanier, this script method was common practice in funny animal comics and at Western (a.k.a. Dell, Gold Key, Whitman). At times, Archie Comics has required writers to provide thumbnail sketches of each page.
Michael Barrier has posted a terrific look at a 1948 Porky Pig script by Chase Craig on his blog. It’s a fascinating look at the creation of a classic era comic book story. Check it out.
Description of the project: Influenced by Confucian ideology, patriarchy remains dominant in Vietnam. It not only limits women’s development and social equality but also keeps them from taking part in disaster response and climate change adaptation, even though they belong to the most vulnerable group. Women are not considered eligible and are often described passively as support recipients during disasters, even though their experience and contributions are as valuable as men’s. Therefore, from 2014–2015, Sustainable Rural Development (SRD) worked with 5,700 women in two lagoon communes in central Vietnam to promote their voices and to include them in village rapid response teams as active contributors.
Climate Impact: Women's vulnerability has increased in the context of climate change. They have found it extremely difficult to adapt and to maintain life for themselves, their families, and their children. Through the project, 60 local women were added to the 12 village rapid response teams, which previously involved only men. Furthermore, their participation is now recognized by local authorities as officially equal to men's. This is encouraging to local women, who now have the same opportunity to attend trainings for capacity building in preparedness planning, first aid, communication, environmental protection, and adaptive livelihoods. These trainings also enable them to reduce their vulnerability, to help the community respond to disasters, and to strengthen community resilience.
Gender Impact: With women's participation in the rapid response teams, the perception of women by men and by the community has changed positively. Women themselves feel more confident speaking out in public and participating not only in disaster response and climate change adaptation but also in many other community activities. Women are now highly appreciated by their families and by society. They are described as skillful and patient campaigners in disaster response and as creative and hardworking in climate change adaptation. Thanks to this, local men are now more willing to share housework with women than before. Women's voice and position, as well as their decision-making power in the family and community, are thus strengthened.
|
<urn:uuid:e1eebeb5-8fc7-409b-8893-7ba5013577d0>
|
{
"dump": "CC-MAIN-2021-43",
"url": "https://womengenderclimate.org/gjc_solutions/confronting-disaster-response-in-lagoon-regions-through-women-empowerment-at-community-level/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587794.19/warc/CC-MAIN-20211026011138-20211026041138-00624.warc.gz",
"language": "en",
"language_score": 0.9836092591285706,
"token_count": 421,
"score": 2.609375,
"int_score": 3
}
|
People who are curious succeed in business and in social situations more than people who hold back their curiosity. All of us start out as curious infants, exploring our world as busily as we can. Curiosity is built-in and propels development of our senses and our abilities.
Often, curiosity is halted because of societal restraints, family restraints, and environmental barriers. People who experience too much repression of their natural curiosity add to the repression by quashing their urges to learn and explore. Sometimes, the desire to experience is so strong that curiosity leads the way and the repression can be overcome.
People who grow up in a nourishing environment that allows natural curiosity to flourish are able to develop more freely. This type of environment does not guarantee achievement, but it does offer support.
To enhance curiosity
- Approach the known with questioning. Do you always do something a certain way? Why is that? Notice your habits and question the ones that don’t make sense.
- Approach the known with innovation. Notice the choices you make repeatedly. Do you eat the same foods over and over again without evaluating their appeal? Do you tire at the same time every day? Why is that? What can be done about these things?
- Approach the known with wonder. When the rain starts, don’t rush to take cover. Feel the drops and be connected to them. Look at the trees and other vegetation that you see every day and really notice them.
- Approach the known with certainty. The things that are familiar are comforting. Let them bring comfort, but then move beyond them. Explore something less familiar while keeping the familiar within reach.
Curiosity is with us from the moment we can experience awareness until the moment that we cannot. The more we let ourselves develop, the more fully we live!
|
<urn:uuid:7664f76a-7520-4c35-b881-a5c63f3226a5>
|
{
"dump": "CC-MAIN-2018-51",
"url": "https://energy-guidance-complete.com/tag/development-2/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828697.80/warc/CC-MAIN-20181217161704-20181217183704-00551.warc.gz",
"language": "en",
"language_score": 0.9611887335777283,
"token_count": 367,
"score": 3.1875,
"int_score": 3
}
|
Earthworms creep over the ground by using a mechanism called peristalsis to squeeze and stretch muscles along their bodies, inching forward with each wave of contractions. Now researchers at MIT, Harvard University, and Seoul National University have engineered an autonomous robot that moves the same way. The robot, made almost entirely of soft materials, is remarkably resilient: when stepped upon or bludgeoned with a hammer, it can inch away unscathed. Researchers say such a soft robot may be useful for navigating rough terrain or squeezing through tight spaces.
To build the robot known as Meshworm, researchers created what they call “artificial muscle” from wire made of nickel and titanium—a shape-memory alloy that stretches and contracts with heat. They wound the wire around the mesh body to create segments along its length, much like the segments of an earthworm. Applying a small current to the wire heats it to squeeze the mesh tube and propel the robot forward.
The researchers, led by assistant professor of mechanical engineering Sangbae Kim, published details of the design in the journal IEEE/ASME Transactions on Mechatronics. They noted that earthworms have two main muscle groups: circular muscle fibers that wrap around the tubelike body and longitudinal muscle fibers that run along its length. The two muscle groups work together to inch the worm along.
To design a similar soft, peristalsis-driven system, the researchers first made a long, tubular body by rolling up and heat-sealing a sheet of polymer mesh. The mesh, made from interlacing polymer fibers, allows the tube to stretch and contract like a spring.
They fabricated a nickel-titanium wire and wound it around the mesh tube, mimicking the earthworm’s circular muscle fibers. Then they fitted a small battery and circuit board inside the tube and generated a current to heat the wire. Kim and his colleagues developed algorithms to control the wire’s heating and cooling, directing the worm to move in various directions.
The group subjected the robot to multiple blows with a hammer, even stepping on it to check its durability. Despite the violent impacts, the robot crawled away intact. “You can throw it and it won’t collapse,” Kim says. “Most mechanical parts are rigid and fragile at small scale, but the parts in Meshworms are all fibrous and flexible.”
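The sequential heat-and-cool actuation described above can be sketched as a simple control loop: contract one segment at a time, front to back, and the body inches forward. This is my illustration, not the MIT team's actual control code; the segment count and the step ordering are assumptions.

```python
# A minimal sketch of peristaltic control, assuming a robot whose body is
# divided into segments that contract when their shape-memory-alloy (SMA)
# wire is heated and relax when it cools. Values here are illustrative only.

NUM_SEGMENTS = 5

def peristaltic_wave(cycles=1):
    """Return the (segment, action) steps for front-to-back contraction waves."""
    steps = []
    for _ in range(cycles):
        for seg in range(NUM_SEGMENTS):
            steps.append((seg, "heat"))  # current on: wire contracts, squeezing the mesh
            steps.append((seg, "cool"))  # current off: wire relaxes, segment re-extends
    return steps

wave = peristaltic_wave()
print(len(wave))          # 10 steps: heat + cool for each of 5 segments
print(wave[0], wave[-1])  # (0, 'heat') (4, 'cool')
```

Steering, as the article notes, comes from controlling which wires heat and when; in a sketch like this, turning could be modeled by contracting segments on only one side of the body.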
|
<urn:uuid:3151038e-c805-4e18-ab05-c7763098bff4>
|
{
"dump": "CC-MAIN-2020-24",
"url": "https://www.technologyreview.com/2012/10/24/183093/a-robotic-creepy-crawler/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347387219.0/warc/CC-MAIN-20200525032636-20200525062636-00550.warc.gz",
"language": "en",
"language_score": 0.9316633939743042,
"token_count": 492,
"score": 4,
"int_score": 4
}
|
Suitable for students with three or more years of modern Chinese language instruction, Anything Goes uses advanced materials to reinforce language skills and increase understanding of contemporary China in one semester. This fully revised edition provides learners with a deeper fluency in high-level Chinese vocabulary and grammar, and includes newspaper articles and critiques as well as other primary source documents, such as political speeches and legal documents. The textbook covers topics that are essential to understanding contemporary Chinese society, including changing attitudes toward women and marriage, the one-child policy, economic development, China's ethnic minorities, and debates surrounding Taiwan and Hong Kong. The lessons intentionally investigate thought-provoking and sometimes controversial issues in order to spark lively classroom discussions.
This new edition incorporates suggestions and improvements from years of student and teacher feedback. With an improved, more user-friendly format, Anything Goes juxtaposes text and vocabulary on adjacent pages. Grammar explanations and exercises have also been thoroughly updated.
- Advanced-level Chinese language textbook
- Includes newspaper articles and primary source documents
- Thought-provoking topics on contemporary Chinese society
- Updated grammar explanations and exercises
- New user-friendly format
|
<urn:uuid:c739b774-0e84-41d6-9b10-4b9660a5e7aa>
|
{
"dump": "CC-MAIN-2015-40",
"url": "http://press.princeton.edu/titles/9589.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738004493.88/warc/CC-MAIN-20151001222004-00101-ip-10-137-6-227.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8974690437316895,
"token_count": 261,
"score": 2.703125,
"int_score": 3
}
|
Originally posted April 30, 2009
I just returned from a lecture by Dr. Sapolsky on the campus of UC Davis. This guy does some awesome research! And has some awesome hair! Here is a synopsis of his talk:
When an individual (human or otherwise) encounters a particularly stressful situation, their body releases glucocorticoid hormones in response (e.g., cortisol release in people). These hormones divert energy away from numerous bodily functions and send that energy to important muscles that your body may need to help you escape. For example, your body will put off ovulating for a while so that the energy can instead go to your thighs, which need to quickly carry you away from the predator at your heels.
When stressors are acute and stress responses infrequent, this system works wonderfully. Unfortunately, organisms in highly social societies (e.g., primates like ourselves) often experience frequent, chronic stress. This means that our stress hormone levels are frequently high and are continually diverting energy away from particular bodily functions in order to prepare it for use elsewhere. One area where this is especially a problem is the part of the brain known as the hippocampus (important in long-term memory and spatial navigation).
The hippocampus has lots and lots of glucocorticoid receptors. When an individual is frequently stressed, energy is frequently being diverted AWAY from hungry neurons in this region. Neurons in chronically stressed individuals therefore experience a state of near-constant low energy. Most of the time, this low-energy state does not result in neuron death.
Unfortunately, extremely high stress situations (such as strokes, grand mal seizures, etc.) can push the neurons past their tipping point, resulting in cell death and a loss of brain mass in the hippocampus. After an event such as a stroke, the body maladaptively (but understandably!) responds by releasing more stress hormones. This causes further energy deprivation to the neurons, knocking many of them out.
So what can we do to preserve neurons after events such as strokes? You can inhibit the body from producing glucocorticoids, you can bind up glucocorticoid receptors in the hippocampus so they can’t respond to the glucocorticoids, or you can supply the hippocampus with additional nutrients to make up for the energy loss. The problem with these three solutions is that they’re mainly effective if you implement them very soon after the event and they can only dampen negative effects.
The Sapolsky lab has recently been working on an awesome new solution to this problem. While cortisol is associated with neuron death, estrogen seems to have a regenerative effect. So how do you get the hippocampus to release estrogen in response to increasing cortisol levels? The answer: surprisingly, gene therapy through the use of VIRUSES.
Viruses inject themselves into cells and direct the cells to produce particular proteins that the virus requires in order to replicate. Scientists now use viruses to deliver genes that code for particular proteins they would like the cells to produce.
Here is the genius of the Sapolsky Lab. The lab has manufactured a virus that gets cells to produce a protein that is BOTH a glucocorticoid receptor and a molecule that binds to estrogen receptors. In anthropomorphic terms, this protein knows when stress levels are up and goes to estrogen receptors to tell cells to start producing estrogen. The result is neuron regeneration INSTEAD of neuron death after super stressful events.
Although this solution is brilliant, it comes with some serious drawbacks. First, it requires purposefully introducing a virus into a patient. Even worse, many of the viruses best suited for this technique are related to pretty nasty viruses. There are fears that a virus may sometimes recover its ability to become infective once it is in the patient, and the possibility also exists that the patient’s body will recognize the virus as foreign and mount an immune attack against it. Unfortunately, the problems don’t end there.
The viral vector also needs to be introduced directly into the hippocampus. It would be a bad thing for the entire human body to begin upregulating estrogen in response to stress, so it’s important that the virus injection be targeted. One of the only methods for getting the viral vector into the hippocampus is to use a needle to inject it directly. Clearly, you will not be able to drill a hole and inject a needle into the head of a patient experiencing a grand mal seizure. You don’t really want to be removing the skull cap of stroke victims either. At the moment, there is no clear solution. Dr. Sapolsky even postulates that, if this problem isn’t solved in the next decade or so, you can expect funding for this area of research to dry up real fast.
Kind of makes you wish research on nanobots (robots that work at super microscopic scales) were moving along faster, huh? If we could manufacture nanobots to deposit these viral vectors in the hippocampus, the problem would be solved. Unfortunately, these tiny biological workers are more theoretical than anything at the moment.
While no biological nanobots have been created at this time, work on nanomachines has been progressing. The Tour Lab at Rice University was able to create a “nanocar”, the movement of which could be controlled through the use of a scanning tunneling microscope. It seems promising to me that we’ve at least figured out how to manufacture molecular robots and have figured out how to direct their movements!
Hopefully science comes up with a way to safely utilize the Sapolsky Lab’s viral invention. In the meantime, you should check out some of Sapolsky’s books!
Monkeyluv: and Other Essays on Our Lives as Animals
Edit: Travels with Darwin posted a summary of the last part of Sapolsky’s talk focusing on his primate work.
|
<urn:uuid:4b977d31-43ba-4f9c-85e6-d59a220e6f60>
|
{
"dump": "CC-MAIN-2023-23",
"url": "http://www.weinersmith.com/stress-and-your-brain/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657735.85/warc/CC-MAIN-20230610164417-20230610194417-00709.warc.gz",
"language": "en",
"language_score": 0.9392465949058533,
"token_count": 1223,
"score": 2.6875,
"int_score": 3
}
|
This documentary film is a story about democracy, human rights, and what it means to stand up for your values in America today. On January 21, 2017, hundreds of thousands of women marched on Washington, DC. That same day, hundreds of sister marches took place across the country and around the world. It grew into the largest one-day protest in American history.
Shot on location in five U.S. cities, the film shares firsthand experiences of the day from individual participants.
From marches in Boston, San Francisco, Oakland, Santa Rosa, and Washington D.C., the film explores several individual women’s stories and their motivations to march. For some people, it was their first time marching. For others, it was the continuation of a decades-long fight for human rights, dignity, and justice. For all, it was an opportunity to make their voices heard.
The film also features public figures like Elizabeth Warren, Kamala Harris, Gloria Steinem, and Malkia Cyril.
How did the film come about?
During the election, our team talked about working on a film related to politics, social justice, racial justice and equality. In our daily work, our team of creative filmmakers at TrimTab Media helps progressive brands and nonprofits tell their stories with video and other online media. We also produce independent documentaries about social and environmental issues that inspire audiences to take action.
After the election and as the Women’s March began organizing, we knew we needed to get started right away. We contacted the San Francisco Bay Area Women’s March organizers and began developing a concept to tell this story. That concept quickly grew to include stories in Boston, Washington, DC, and beyond. Our team is honored and excited to help share the story of the Women’s March.
|
<urn:uuid:9e7f2c55-374b-4bd6-8464-5a1d3092596b>
|
{
"dump": "CC-MAIN-2020-10",
"url": "http://womensmarchfilm.com/story/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143784.14/warc/CC-MAIN-20200218150621-20200218180621-00101.warc.gz",
"language": "en",
"language_score": 0.9631417393684387,
"token_count": 370,
"score": 2.71875,
"int_score": 3
}
|
However, these coal-fired plants produce high levels of carbon-dioxide (CO2) emissions, and Poland is attempting to meet stringent European Union (EU) requirements regarding CO2 emissions by reintroducing nuclear power plants.
Poland currently generates more electricity than it uses and exports mainly to the Czech Republic and Slovakia. However, domestic consumption in Poland is expected to grow by as much as 90% by 2025. With nearly half of the country's natural gas supplies coming from Russia, new, cleaner and more reliable sources of power generating capacity are required.
In the 1980s, four Russian VVER-440 units, each with a power generation capacity of 440 megawatts (MW), were being developed at Zarnowiec in northern Poland. However, after the Chernobyl nuclear power station accident in April 1986, these projects were canceled and the resources were sold.
In 2005, Poland's cabinet decided to diversify the country's electricity portfolio and opted to investigate nuclear as an alternative. A feasibility study in 2006 proposed that the generation of 11 gigawatts (GW) of power from nuclear sources would be optimal but too expensive. The target was lowered to 4.5 GW from nuclear sources by 2030. In 2008, the Minister of Economy announced plans for the first nuclear plant to be built by 2023 in Zarnowiec.
In January 2009, state-owned Polska Grupa Energetyczna SA (PGE) announced plans for two 3,000-MW nuclear power plants to be built in the northern and eastern regions of the country. The first of these is scheduled to be commissioned in 2020, followed by the second plant two or three years afterward.
In mid-May, it was announced that preparations for the construction of the first nuclear plant, with an investment of about $4.97 billion, had already started in the Pomorskie region, while the second plant will be built in the Pomorskie, Podlaskie or Wielkopolskie regions.
In 2006, Poland agreed to cooperate with Lithuania, Estonia and Latvia to build a large nuclear reactor in Visaginas, Lithuania. The plant is intended to replace the previous obsolete nuclear plant in Ignalina that Lithuania was forced to close as a condition of its entry into the EU in May 2004.
The Lithuanian Energy Organization was formed in May 2008 by Lithuania's government in conjunction with PGE and energy companies from Estonia and Latvia to oversee the construction of the new plant.
Under the agreement, Poland expects to obtain 1,000 MW of electricity from the project, named the "Baltic States power plant," which will have a generating capacity of about 3,300 MW. To gain access to this power, a 400-kilovolt, 1,000-MW cable connection between Poland and Lithuania will be built, with construction planned for completion by 2015.
|
<urn:uuid:956e6bad-b9ba-42f1-bace-53aef353e6f8>
|
{
"dump": "CC-MAIN-2019-30",
"url": "https://www.electricityforum.com/news-archive/jun09/Polandturnstonucleartoreducecoaldependence",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525627.38/warc/CC-MAIN-20190718104512-20190718130512-00323.warc.gz",
"language": "en",
"language_score": 0.9728294610977173,
"token_count": 584,
"score": 3.171875,
"int_score": 3
}
|
Essential Question: How have the treatment, rights and acceptance of the disabled changed over time?
Esherick, Joan. "Chapter 4: The LAW TODAY: ACCESS to JOBS and PUBLIC PLACES AND SERVICES." Guaranteed Rights: The Legislation That Protects Youth with Special Needs, Jan. 2004, p. 62. EBSCOhost.
Byzek, Josie. "ADA: a People's History." New Mobility, vol. 26, no. 262, July 2015, p. 31. EBSCOhost.
"44 Years Later, the Truth About the 'Science Club'." The New York Times, 30 Dec. 1993. Web. 24 May 2017.
"Working with a Mental Health Condition." Women's Health, US Department of Health and Human Services, 28 Mar. 2018, www.womenshealth.gov/mental-health/your-rights/americans-disability-act.html.
McNeese, Tim. Rise in Disabilities Institutions. North Mankato, MN: ABDO, 2014.
"Disability and Health Information for People with Disabilities." Disability and Health, Center for Disease Control, 9 Aug. 2018, https://www.cdc.gov/ncbddd/disabilityandhealth/people.html.
Lewis, Thomas Tandy. “Americans with Disabilities Act.” Salem Press Encyclopedia, 2019. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&db=ers&AN=95329099&site=eds-live.
|
<urn:uuid:2b5e88c6-1e27-4ea1-9793-33d23dc8a07b>
|
{
"dump": "CC-MAIN-2020-29",
"url": "http://guides.rilinkschools.org/parkview/mccusker_disabilities_flowers",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890092.28/warc/CC-MAIN-20200706011013-20200706041013-00334.warc.gz",
"language": "en",
"language_score": 0.7751166820526123,
"token_count": 336,
"score": 2.59375,
"int_score": 3
}
|
Why does Saint John switch from one style of Greek to another in his final chapter, chapter 21?
“Critical analysis makes it difficult to accept the idea that
the gospel as it now stands was written by one person. Jn
21 seems to have been added after the gospel was completed; it
exhibits a Greek style somewhat different from that of the rest
of the work.”
Introduction to John in the NAB Bible.
Because the author of this note cannot think of a reason why one author would switch from one style to another, that supposedly proves that no such reason could exist, and therefore that there are at least two authors. “If I do not know it, then it is not worth knowing.” The conclusion seems to be based on an arrogant assumption.
But, John did have a reason to switch styles.
The reason why Saint John would switch from one style of Greek to another in the final chapter is not obvious to everyone. Some serious study of this issue is required. After an in-depth study we can see that even if Saint John, a single author, had not switched his style of Greek in the final chapter, he would have had good reason to do so. It fits perfectly in line with achieving what he states is his primary goal.
I have already written over 16 pages of study on this chapter 21, and it is too much to copy and paste here. So, I will just list some key points.
For the sake of conversation, let us suppose this Gospel had one author. Then we need to ask ourselves: why would John switch styles? Who was he trying to affect or influence? Most readers who knew Greek only loosely, and not as a first language, would not even recognize the difference. Only someone steeped in Greek culture and language would have recognized it. So we need to ask how John was specifically trying to influence them. This leads us to investigate who these Greeks were. We need to understand what they valued.
Irenaeus tells us that John the Apostle wrote this Gospel in the Greek
city of Ephesus.
( Irenaeus Against Heresies III.1.1)
Ephesus was one of the major Greek cities of Asia Minor. So John’s was a Gospel written in Greek for the Greek community to which John was ministering and pastoring. It is natural to think he would have appealed to their known strengths while also wanting to address their weaknesses, especially the main stumbling block to their faith.
As for their strength, the Greeks were known, and still are today, for their wisdom on the natural level. It was almost as if they worshipped wisdom; they excelled so well in it.
… One of the most important characteristics of the Pythagorean order was
that it maintained that the pursuit of philosophical and mathematical
studies was a moral basis for the conduct of life. Indeed, the words
philosophy (love of wisdom) and mathematics (that which is learned) are
said to have been coined by Pythagoras.
Plato, Archimedes, Pythagoras, and Euclid ( Euclidean Geometry)
were all great ancient Greek mathematicians.
As for their weaknesses, the New Testament tells us:
1 Corinthians 1:22-24
“For Jews demand signs and Greeks look for wisdom, but we
proclaim Christ crucified, a stumbling block to Jews and
foolishness to Gentiles, but … Christ the power of God and the
wisdom of God.” NAB
So their strength on the natural level, wisdom, became an area of weakness on the supernatural level. Without the gift of faith, the idea of a perfect God dying for a sinful creation does sound like foolishness. As their pastor, this is the very issue John the Apostle needs to address.
Just before the beginning of Chapter 21 – there were no chapter divisions in the first centuries of the Church – John tells us his purpose for his Gospel:
“Now Jesus did many other signs … that are not written in this book. 31 But these are written that you may (come to) believe that Jesus is the Messiah, the Son of God…”
So, John is appealing to Greeks that are not yet Christians so that
they “may (come to) believe.”
We need to look from the perspective of the non-believing Greeks to
see what John was contending with. These non-believers would not be
approaching John’s Gospel from a humble Christian perspective of trying
to get closer to God. They esteemed wisdom and would have considered
themselves as the masters of wisdom. They probably would not have read
the Gospel for the purpose of becoming John’s disciples. Rather, as the
masters of wisdom they probably esteemed themselves as the experts who knew, or at least had studied, the wisdom of all the cultures, with Judaism and now this Christian “offshoot” as just one of the many lesser attempts to express wisdom throughout the world.
Presumably John would have written his Gospel in an attempt to attract their interest. The following notes are interesting.
“… and the sayings of Jesus have been woven into long discourses of a quasi-poetic form resembling the speeches of personified Wisdom …”
Introduction to John’s Gospel in the NAB Bible
So, it is quite plausible that the Greeks would have considered reading John’s Gospel not as students wanting to learn, but rather as masters just monitoring a lesser’s work. As they came to the end of John’s final chapter, they were probably gaining an egotistical satisfaction at having consumed this “lesser” piece of wisdom while remaining the masters of all worldly wisdom. And, as reading sometimes causes, they might have been quite relaxed, as if in preparation for a nice nap. But then John changes the Greek style. This is a difference they would have noticed. John is saying, “Wake up. This is important!”
John would have had good reasons for wanting to get his point in chapter
21 across by way of metaphor.
But, he needs to wake up his audience so they look for the hidden
meaning behind the metaphor. By switching his style of Greek in this
final chapter, John by analogy could be compared to a chauffeur who all
of the sudden drops the transmission into a lower gear and then
accelerates quickly pressing his passengers back into their seats. John
is alerting them so that they would pay closer attention so they would
uncover the meaning of his metaphor in Chapter 21.
All the early church fathers agreed that John was using “153 Fish” as a special metaphor. But they all disagreed as to what it actually meant.
See the hidden meaning of “153 Fish” in John 21.
I cannot copy and paste all 17 pages of text here, so just let me
highlight a few key points.
If I were to state the number 3.14, I could be sure that many of my readers would automatically be thinking about Pi. Similarly, John would have known that his use of 153 would automatically have brought to the minds of his Greek audience of AD 90 the idea of wisdom. This is because these Greeks, who prided themselves on wisdom, would have highly esteemed one of their greatest mathematicians of all time, Archimedes. And his greatest work, in terms of influence on others, was his work on a new method for calculating Pi. It is a very short explanation of about two pages. It consists of only 10 equations and extremely brief commentary. And the first nine of his equations all end with the number 153.
Archimedes Work on Pi
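The link between 153 and Archimedes' calculation of Pi can be checked numerically. The specific bounds below (265/153 and 1351/780 for the square root of 3, which Archimedes used while working with inscribed and circumscribed 96-gons) are my addition for illustration, not a claim made by the text above:

```python
import math

# Archimedes' rational bounds for sqrt(3), with 153 in the lower bound's
# denominator:  265/153 < sqrt(3) < 1351/780
lower = 265 / 153
upper = 1351 / 780

assert lower < math.sqrt(3) < upper
print(round(lower, 6), round(math.sqrt(3), 6))  # 1.732026 1.732051
```

The lower bound agrees with the true value to within about 0.000025, which gives a sense of why a Greek reader steeped in this mathematics might have found the number 153 evocative.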
So, John switches his style of Greek to get the Greeks to pay special attention to this final chapter. By appealing to their strength, natural wisdom, he ministers to their stumbling block of thinking their wisdom proves the Gospel of Jesus to be “foolishness” (1 Corinthians 1:22-24). By using a gentle metaphor, John gets around their pride and gets them to consider how Jesus is the source of the “153 Fish”: He is the source of all wisdom. The net that contains all 153 fish, all wisdom, does not tear, because there is no inconsistency, no contradiction, between the Greeks’ natural wisdom and the wisdom of Jesus’ revelation.
|
<urn:uuid:f78a35f9-0946-4fac-a9bd-ef6a21aeac67>
|
{
"dump": "CC-MAIN-2018-13",
"url": "http://defendingthebride.com/ss/fish/style.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645069.15/warc/CC-MAIN-20180317120247-20180317140247-00776.warc.gz",
"language": "en",
"language_score": 0.9714112281799316,
"token_count": 1820,
"score": 2.890625,
"int_score": 3
}
|
Women who engage in unhealthy eating behaviours may be more likely to experience mood changes, experts have claimed.
Scientists at Penn State in the US asked 131 college-aged women to answer questions about their mood and eating behaviours several times a day using handheld computers.
All of the women had high levels of unhealthy eating habits and had expressed concerns about their body shape and weight.
They found that the women tended to experience mood changes following bouts of disordered eating.
Research associate Kristin Heron revealed: 'There was little in the way of mood changes right before the unhealthy eating behaviours.
'However, negative mood was significantly higher after these behaviours.'
The findings were presented at the American Psychosomatic Society Conference and shed light on the links between emotions and eating.
Previous studies have shown that people who are prone to depressive symptoms may benefit from eating a healthy, balanced diet including plenty of fruit, vegetables, whole grains, proteins and oily fish.
|
<urn:uuid:6d6e685b-d3c7-400c-afa3-0cc22179818d>
|
{
"dump": "CC-MAIN-2017-17",
"url": "http://www.netdoctor.co.uk/healthy-living/news/a23645/unhealthy-eating-habits-bad-for-mood/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124371.40/warc/CC-MAIN-20170423031204-00089-ip-10-145-167-34.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9702914953231812,
"token_count": 192,
"score": 2.921875,
"int_score": 3
}
|
Europe and Russia celebrated placing a robot explorer into Mars orbit today, but ground controllers faced an anxious night searching for the tiny lander it had despatched to the Red Planet's surface.
The "Schiaparelli" lander, a trial-run for a Mars rover to follow, was meant to touch down at 1448 GMT, after separating from its mothership, the Trace Gas Orbiter (TGO), on Sunday.
But contact with the paddling pool-sized lander was lost during its six-minute descent through the Red Planet's thin, carbon dioxide-rich atmosphere.
"It's clear these are not good signs," ESA operations head Paolo Ferri said at ground control in Darmstadt, Germany. "But we need more information" before declaring the operation failed.
Schiaparelli was Europe's first attempt at a Mars landing since the British-built Beagle 2 was lost without trace 13 years ago.
An update should be ready by 0800 GMT on Thursday, Ferri said.
On the upside, flight operations manager Michel Denis announced that the TGO itself, which is to sniff Mars' atmosphere for gases possibly excreted by molecular life forms, had correctly entered the Red Planet's orbit.
"It's a good spacecraft in the right place, and we have a mission around Mars," he said.
The TGO and Schiaparelli comprise phase one of the ExoMars mission through which Europe and Russia seek to join the United States in probing the hostile Martian surface.
Schiaparelli's experiences will inform technology for a rover set for launch in 2020 - the second phase and high point of ExoMars.
© Nine Digital Pty Ltd 2018
|
<urn:uuid:4e1ef72a-b15e-4e42-8295-edeff5bfbd15>
|
{
"dump": "CC-MAIN-2018-43",
"url": "https://www.9news.com.au/technology/2016/10/20/06/08/euro-russian-craft-enters-mars-orbit-but-lander-s-fate-unknown",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583517628.91/warc/CC-MAIN-20181024001232-20181024022732-00329.warc.gz",
"language": "en",
"language_score": 0.946137547492981,
"token_count": 355,
"score": 2.640625,
"int_score": 3
}
|
Galaxy clusters can contain hundreds of galaxies, and the ones at their centers are the largest galaxies known to astronomers. These huge objects are the product of successive mergers between smaller galaxies, a process that exhausts the raw materials for star formation. That is true in every case but one: a new set of observations has revealed one of the largest and hottest galaxy clusters yet seen, whose central galaxy has an unexpectedly high rate of star formation. This finding may provide new insights into the history of galaxy clusters and the formation of structure in the Universe.
Galaxy clusters are the largest objects in the Universe bound by their own gravity; rich clusters contain hundreds of galaxies. While most of a cluster's mass is in the form of dark matter, each also has a significant amount of hot plasma filling the space between galaxies. This intracluster medium (ICM) is very bright in X-ray light, due to internal temperatures greater than 10 million Kelvins (10⁷ K).
In most cases, the ICM is diffuse, but there are some hot cores where the density is high enough that the plasma can cool by emitting radiation. Once cooled, the matter will fall to the center of the cluster due to gravity, creating what's called a cooling flow.
According to theoretical models, the incoming mass of cooling flows is usually offset by strong jets from the supermassive black holes (SMBHs) in the large galaxies lying near the centers of many clusters. The balance between cooling flows and SMBH feedback drives the evolution of galaxy clusters.
Researchers used observations (both new and archival) from 10 different observatories to characterize the galaxy cluster SPT-CLJ2344-4243, known colloquially as the "Phoenix Cluster" for its location in the constellation Phoenix. The "phoenix" designation is also apt because of the extraordinarily high rate of star formation in a class of galaxy where such activity usually ceased long before.
The researchers combined data from observatories including the South Pole Telescope (SPT), Chandra X-ray telescope, the Blanco Telescope at Cerro Tololo (incidentally where dark energy was first discovered), the Magellan telescope, and the Herschel Space Observatory. The reason for this was to obtain data across the electromagnetic spectrum, from radio to X-ray. This allowed the astronomers to characterize the cluster's shape, the star formation rate, and the behavior of the SMBH at the heart of the Phoenix.
The detailed look at this cluster has provided the strongest observation of a cooling flow to date. What they found was striking: the Phoenix Cluster emitted 8.2×10³⁸ watts in X-rays alone, twice as luminous as the next-brightest known galaxy cluster. Most of that radiation came from the ICM, and it resulted in a cooling flow as the plasma in the ICM gave up energy and fell inward.
Based on infrared and ultraviolet data, the researchers estimated that, as it reached the central galaxy, the infalling matter drove a star formation rate equivalent to 740 new Suns per year (though most of the new stars will be less massive than the Sun).
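For a sense of scale, the quoted X-ray output can be converted to solar luminosities; a rough sketch (the solar luminosity constant is an assumed reference value, not from the article):

```python
# Express the Phoenix Cluster's quoted X-ray output in units of the Sun's output.
L_SUN = 3.828e26         # watts, nominal solar luminosity (assumed constant)
L_PHOENIX_XRAY = 8.2e38  # watts, the X-ray luminosity quoted above

ratio = L_PHOENIX_XRAY / L_SUN
print(f"~{ratio:.1e} Suns' worth of output in X-rays alone")
```

The division gives roughly two trillion solar luminosities, which gives some feel for why this cooling flow stands out among known clusters.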
Additionally, they studied the behavior of the SMBH in the central galaxy of the Phoenix Cluster. The evidence pointed to a very active black hole—expected from the largest galaxies in other clusters—but one obscured by dust, making it appear more like the black holes in dust-filled star-producing galaxies. In other words, the central galaxy in the Phoenix Cluster exhibited characteristics of both normal central galaxies and star-forming galaxies found in other environments. The dust helps shield the environment from the black hole, which keeps it from forming jets that balance out the incoming matter.
Finally, the shape of the cluster was found to be remarkably spheroidal, meaning it likely hadn't undergone a merger in the recent past. Mergers can also set off a wave of star formation, so ruling this out provides another indication that the cooling flow was driving star formation.
The whole picture from all the different data sources appeared consistent: the rapid star formation in the central galaxy of the Phoenix Cluster was the result of a cooling flow of gas, uninhibited by strong feedback from the SMBH. Since the cluster's shape appeared regular, the star formation was unlikely to have been caused by two clusters merging. Together, this data suggests a new way galaxy clusters can give birth to new stars—a significant addition to our understanding of the history and evolution of the Universe.
(Thanks to Peter Edmonds of the Chandra X-ray Center at the Harvard-Smithsonian Center for Astrophysics for providing additional information and the image.)
|
<urn:uuid:c5724fc6-39c1-4535-b07c-5c2ae4ffb6d6>
|
{
"dump": "CC-MAIN-2015-48",
"url": "http://arstechnica.com/science/2012/08/unexpected-burst-of-star-formation-re-ignites-the-phoenix-cluster/?comments=1&post=23181226",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446248.56/warc/CC-MAIN-20151124205406-00040-ip-10-71-132-137.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9403454065322876,
"token_count": 947,
"score": 4.1875,
"int_score": 4
}
|
Radio is a way to send electromagnetic signals over a long distance, to deliver information from one place to another. A machine that sends radio signals is called a transmitter, while a machine that "picks up" the signals is called a receiver. A machine that does both jobs is a "transceiver". When radio signals are sent out to many receivers at the same time, it is called a broadcast.
Television also uses radio signals to send pictures and sound. Radio signals can start engines moving so that gates open on their own from a distance. (See: Radio control.). Radio signals can be used to lock and unlock the doors in a car from a distance.
History of radio
Many people worked to make radio possible. After James Clerk Maxwell predicted them, Heinrich Rudolf Hertz in Germany first showed that radio waves exist. Guglielmo Marconi in Italy made radio into a practical tool of telegraphy, used mainly by ships at sea. He is sometimes said to have invented radio. Later inventors learned to transmit voices, which led to broadcasting of news, music and entertainment.
Uses of radio
Radio was first created as a way to send telegraph messages between two people without wires, but soon two-way radio brought voice communication, including walkie-talkies and eventually mobile phones.
Now an important use is to broadcast music, news and entertainment, including "talk radio". Radio shows were used before there were TV programs. In the 1930s the US President started sending a message about the country every week to the American people. Companies that make and send radio programming are called radio stations. These are sometimes run by governments, and sometimes by private companies, who make money by sending advertisements. Other radio stations are supported by local communities. These are called community radio stations. In the early days manufacturing companies would pay to broadcast complete stories on the radio. These were often plays or dramas. Because companies who made soap often paid for them, these were called "soap operas".
Radio waves are still used to send messages between people. Talking to someone over a radio is different from "talk radio". Citizens band radio and amateur radio use specific radios to talk back and forth. Policemen, firemen and other people who help in emergencies use a radio emergency communication system to communicate (talk to each other). It is like a mobile phone (which also uses radio signals), but the distance they reach is shorter and both people must use the same kind of radio.
Microwaves have even higher frequency; shorter wavelength. They also are used to transmit television and radio programs, and for other purposes. Communications satellites relay microwaves around the world.
A radio receiver does not need to be directly in view of the transmitter to receive programme signals. Low frequency radio waves can bend around hills by diffraction, although repeater stations are often used to improve the quality of the signals.
Shortwave radio frequencies are also reflected from an electrically charged layer of the upper atmosphere, called the Ionosphere. The waves can bounce between the ionosphere and the earth to reach receivers that are not in the line of sight because of the curvature of the Earth's surface. They can reach very far, sometimes around the world.
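The link between the bands described above is a single division: wavelength equals the speed of light divided by the frequency. A small sketch, with example frequencies chosen only for illustration:

```python
# wavelength (metres) = speed of light (m/s) / frequency (Hz)
C = 299_792_458  # speed of light in a vacuum, m/s

def wavelength_m(frequency_hz):
    return C / frequency_hz

print(wavelength_m(200e3))   # low frequency: ~1.5 km waves, bend around hills
print(wavelength_m(10e6))    # shortwave: ~30 m, reflected by the ionosphere
print(wavelength_m(2.45e9))  # microwave: ~12 cm
```

Lower frequency means a longer wave, which is why low-frequency signals diffract around hills while microwaves travel in nearly straight lines.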
|
<urn:uuid:2cc7da4e-134a-4032-a1c7-996ba78a7784>
|
{
"dump": "CC-MAIN-2020-10",
"url": "https://simple.m.wikipedia.org/wiki/Airplay",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144027.33/warc/CC-MAIN-20200219030731-20200219060731-00414.warc.gz",
"language": "en",
"language_score": 0.9700207710266113,
"token_count": 703,
"score": 3.90625,
"int_score": 4
}
|
This Thursday, May 28, is the fifth day of the fifth moon according to the lunar calendar. On that day, a good billion-plus people around the world will be celebrating Duānwǔ Jié (端午節), a Chinese holiday that’s also known as the Double Fifth Festival in China and the Dragon Boat Festival in the West. What’s the deal with dragon boats? The story is that the holiday commemorates the death by drowning of Qu Yuan, a poet, scholar, and minister to the King of Chu, in 278 BCE. A man willing to sacrifice his own life for the sake of his moral convictions, Qu was banished for treason when the king allied with a rival warlord. When the rival warlord overtook the Chu state, Qu threw himself into the Miluo River in Hunan province.
The local people admired Qu so much that they attempted to preserve his body by throwing food into the river to distract the fish from eating his corpse. This is how the practice of making zòngzi, a.k.a. Chinese tamales, began. You paddle dragon boats out into the river and then throw the bamboo-leaf-wrapped dumplings into the water. Pyramid-like zòngzi, made primarily of sticky rice and either a sweet or savory filling, become very heavy and hearty after hours of boiling, so they’re bound to keep the fish busy for a while.
|
<urn:uuid:cc44449b-eb1a-4b6e-b81a-077e9d2b5d75>
|
{
"dump": "CC-MAIN-2015-40",
"url": "http://www.vietworldkitchen.com/asian_dumplings/2009/05/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736682947.6/warc/CC-MAIN-20151001215802-00213-ip-10-137-6-227.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9679262042045593,
"token_count": 298,
"score": 2.609375,
"int_score": 3
}
|
Training Free Ranging Ducks
Training Free Ranging Ducks – Ducks are amazing animals to keep, with a character all their own. They don’t need pampering, find a good portion of their own food, eat insects and weed seeds from the garden, and can be very beautiful wandering around the backyard.
Although ducks are amazing animals, there are a few factors to consider before committing to keep them. They need proper care, a proper diet, and a large enclosure to feel safe. Ducks are similar to chickens in that they will seek a safe place to roost just before dark. However, when new to the property, they may have trouble knowing where to go at first.
It may be surprising, but ducks are quite trainable. With the right motivation and a little patience, you can teach your pet ducks to free-range and then come back to their pens on their own.
They are affectionate and become comfortable being petted and held, responding to their names once a bond develops. The key is to introduce specific conditioning techniques little by little until the ducks begin adapting to them.
Many backyard homesteaders love to raise free-range ducks. Ducks are smart. They can be trained to:
- Come to their name
- Jump to a reasonable height
- Return to their roost at night
- Lay Eggs in Nest Boxes
Many people ask whether ducks can be potty trained, but they cannot: ducks have no sphincter muscles, so droppings simply fall wherever the duck happens to be. It is no fault of their own; they are just made that way.
Many duck owners have had success using duck diapers instead.
Encouraging Ducks to go to Free-Range (Free Range Ducks)
Keep New Ducks Restricted to Their Pens for One Week
Initially, place the new ducklings in their pens and give them time there to get familiar with the enclosure and their new surroundings. Before long, they will come to see their pens as a safe shelter to return to.
However, the setup must be large enough to comfortably house all the ducks you plan to keep. A rule of thumb is to provide at least four square feet of space per duck.
Make the Ducks' Pens Comfortable
The floor of the pens should have a thick layer of grass or straw to give the ducks a cozy place to huddle. Set out feeders with quality duck feed and fresh, clean water, and keep them well-stocked. The ducks should have all their basic needs met the entire time they are restricted to their pens.
A space heater or heat lamps should be positioned nearby during the winter for newborn and growing ducklings. Be sure to regularly remove any messes your ducks make so they aren't cooped up with their filth.
Start Leaving the Doors Of The Pens Open
After the initial acclimation period, allow your ducks to come and go as they please. They may be reluctant to leave at first. However, with time they will get over their fear and take more of an interest in the world outside.
It is not advisable to force them out of their pens; they will make their way out on their own when they are ready. Young ducklings are more willing to venture out of their pens when adult ducks are already outside. If you don't own more ducks, a handful of mealworms will usually do the trick.
Shepherd / Lead Ducks Back into their pens at Night
As dusk arrives, the ducks will instinctively return to their shelter. However, if they seem confused or inattentive, they may need some guidance. Use a long pole or stick to calmly direct the ducks back toward the opening of the pens. Once they are inside the enclosures, leave the door open so they can get used to coming and going as they please.
Please avoid using any herding tool to physically push the ducks; doing so can hurt them or send them into a frightened frenzy. Ducks tend to group up and follow one another, so they don't require much coaxing.
Repeat This Routine as Long as Required (Ducklings)
It is better to let the ducks wander freely during the daytime and corral them back to their pens at night. They will get used to this routine after a couple of weeks; after that, it may never be necessary to shut them in again.
Even if you are training the ducks to free-range, it is always a good idea to keep them within a large enclosure to prevent them from getting lost or snatched by local predators.
Allowing ducks to roam is good for them. Roaming provides much-needed exercise, keeps them well-fed, and helps them control the population of common pests like slugs and beetles.
World Duck Breeder Associations
|NSW Waterfowl Breeders Association||Australia||NSW|
|Rare Breeds Poultry||Australia||Rare Breeds Poultry|
|International Waterfowl Association||Minnesota||IWBA|
|National Call Breeders of America||Ohio||NCBA|
|
<urn:uuid:e0b4836a-0eff-4976-a799-50cacfa6dfaf>
|
{
"dump": "CC-MAIN-2023-40",
"url": "https://www.farmanimalreport.com/2020/09/24/training-ducks-to-free-range/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510462.75/warc/CC-MAIN-20230928230810-20230929020810-00756.warc.gz",
"language": "en",
"language_score": 0.9579112529754639,
"token_count": 1133,
"score": 2.609375,
"int_score": 3
}
|
652 is an even number, as it is divisible by 2: 652/2 = 326.
The factors of 652 are all the numbers between -652 and 652 that divide 652 without leaving a remainder.
Since 652 divided by -652 is a whole number, -652 is a factor of 652
Since 652 divided by -326 is a whole number, -326 is a factor of 652
Since 652 divided by -163 is a whole number, -163 is a factor of 652
Since 652 divided by -4 is a whole number, -4 is a factor of 652
Since 652 divided by -2 is a whole number, -2 is a factor of 652
Since 652 divided by -1 is a whole number, -1 is a factor of 652
Multiples of 652 are all integers divisible by 652, i.e. the remainder of the division by 652 is zero. There are infinitely many multiples of 652. The smallest positive multiples of 652 are: 652, 1304, 1956, 2608 and 3260.
It is possible to determine using mathematical techniques whether an integer is prime or not.
for 652, the answer is: No, 652 is not a prime number.
To determine the primality of an integer, we can use several algorithms. The most naive is to try every divisor below the number we want to test (in our case 652). We can immediately eliminate even numbers greater than 2 (4, 6, 8, ...), and we can stop at the square root of the number in question (here 25.534). Historically, the sieve of Eratosthenes (which dates back to antiquity) uses this technique relatively effectively.
More modern techniques include the sieve of Atkin, probabilistic tests, and the cyclotomic test.
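The naive trial-division test described above takes only a few lines; a sketch:

```python
import math

def is_prime(n):
    """Trial division: try odd divisors up to sqrt(n); 2 is the only even prime."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True

def positive_factors(n):
    """All positive divisors of n (each also has a negative counterpart)."""
    return sorted(d for d in range(1, n + 1) if n % d == 0)

print(is_prime(652))          # False: 652 = 2 × 2 × 163
print(positive_factors(652))  # [1, 2, 4, 163, 326, 652]
print(is_prime(647), is_prime(653))  # True True: the neighbouring primes
```

Stopping at the square root works because any divisor larger than √n must be paired with one smaller than √n.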
Previous prime number: 647
Next prime number: 653
|
<urn:uuid:54482639-de54-4a79-b748-aab12fafda5f>
|
{
"dump": "CC-MAIN-2021-21",
"url": "https://calculomates.com/en/divisors/of/652/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991207.44/warc/CC-MAIN-20210514183414-20210514213414-00287.warc.gz",
"language": "en",
"language_score": 0.9002577066421509,
"token_count": 434,
"score": 3.015625,
"int_score": 3
}
|
Asmus gives a good history.
ANSI does not refer to any specific character set, codepage, or standard.
Today, 'ANSI' in Windows documentation just means the current system
codepage or SBCS/DBCS/MBCS codepages in general. You can think of it as the
name for "not Unicode".
--- Paul Chase Dempsey
Microsoft Visual Studio Text Editor Development
> -----Original Message-----
> From: Asmus Freytag [mailto:email@example.com]
> Sent: Tuesday, June 08, 1999 5:21 AM
> To: Unicode List
> Subject: Re: Microsoft's ANSI Character Set
> Before the ISO standard, what was to become 8859-1 was an ANSI draft.
> One of the early Windows developers must have been plugged in enough
> to get ahold of this document and based the code page on it.
> That was in the mid eighties sometime.
> 'ANSI' stood in contrast to 'OEM' (read DOS) character set,
> and this is
> reflected all over function names and manuals from the earliest days.
> This was handy, since with windows 3.1 there were many international
> code pages, all paired off with a local DOS code page. ANSI on the
> screen, OEM in filenames (and more often than not in those days, in
> plain text files).
> With NT, 'ANSI' could be paired off again, this time against Unicode.
> All function entry points now exist in two versions an A (from ANSI)
> and a W (for wchar_t or wide character) version.
> Important to note is that the original ANSI set (in Windows) followed
> 8859-1 strictly and then with CP 1252 became a superset. The other
> parts of 8859 did not fare that well: they each have a corresponding
> code page that can cover the repertoire, but with a little bit
> scrambled layout to allow a set of common characters in common positions.
> Customers wanted to be able to run US or Latin-1 based software and
> did not care to see the paragraph marks, curly quotes, and other
> characters used in the UI, change to random local characters. One
> could say the market voted for a larger *common* subset than 96
> ASCII characters already then.
> With Unicode the times of careful tweaking of 8-bit sets for minor
> advantage has hopefully come to an end.
> At 04:41 AM 6/8/99 -0700, Markus Kuhn wrote:
> >A question, just to satisfy my historic interest:
> >Why is the MS-Windows documentation full of references to the "ANSI
> >character set" and what ANSI standard does this refer to?
> >Is this just ANSI/ISO 8859-1 or has CP1252 at some point become
> >an ANSI standard?
> >Some branches of the (low end) computing literature have
> become full of
> >references to the "ANSI character set", but nobody seems
> to know what
> >that means, except that everyone copies it from Microsoft documents.
> >Indeed it seems that references to the "ANSI character set"
> have become
> >a good indicator for the general level of competence of the
> author of a
> >computing publication.
> >Markus G. Kuhn, Computer Laboratory, University of Cambridge, UK
> >Email: mkuhn at acm.org, WWW: <http://www.cl.cam.ac.uk/~mgk25/>
This archive was generated by hypermail 2.1.2 : Tue Jul 10 2001 - 17:20:46 EDT
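The CP 1252 / 8859-1 relationship described in the thread is easy to demonstrate: bytes 0x80–0x9F are unassigned C1 controls in ISO 8859-1 but printable punctuation in CP 1252. A quick sketch (the byte string is an arbitrary example, not from the thread):

```python
# CP 1252 assigns curly quotes and an ellipsis to the 0x80-0x9F range,
# where ISO 8859-1 (latin-1) has only C1 control characters.
data = b"\x93ANSI\x94 \x85"

print(data.decode("cp1252"))         # prints: “ANSI” …
print(repr(data.decode("latin-1")))  # same bytes land on invisible C1 controls
```

This is why text written on a Windows "ANSI" system can show odd characters when read as strict 8859-1.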
|
<urn:uuid:8078657c-350a-415d-9ef1-fb4edb58c456>
|
{
"dump": "CC-MAIN-2013-48",
"url": "http://www.unicode.org/mail-arch/unicode-ml/Archives-Old/UML016/0144.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164561235/warc/CC-MAIN-20131204134241-00069-ip-10-33-133-15.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9096541404724121,
"token_count": 828,
"score": 3.15625,
"int_score": 3
}
|
A popular sarcastic saying about the internet is: ‘If it is online, it must be true.’ A few decades ago, the internet was a far more curated place, where anything that appeared was likely to come from an authoritative site. The trust people built in the internet over that time carries on to this day.
The Freedom of the Press has become a debate
In a parliamentary system, the press is called the fourth estate of democracy, after the legislature, executive and judiciary. It is a title well deserved, because while all the other mechanisms can fail, the people, their awareness and concerned public opinion cannot.
The public has every right to remain well informed, and thus almost every nation across the globe (few exceptions apart) have mandated the freedom of the Press. The media and press have no less power than any other estate of the democracy. They can trigger protests, change governments and manipulate the law in the long run.
However, with huge power comes huge responsibility. If media resorts to corrupt practices or becomes biased towards a particular person or ideology, it could shatter the foundation of democracy, for people might form wrong opinions.
How has media changed over time?
When the importance of media was an essential debate back in the 1900s, it was limited to newspapers, or the written press in general. With time it shifted to televised media, and now it is all about online media, social media and phone app-based media.
While recognized authority organizations still handle the other forms, online media and social media remain widely unregulated.
Social media has a huge impact on the younger generation because they spend most of their time on social networking sites. Thus, they read most of their news in their feeds when logged on to Facebook or Twitter.
Whatever is seen there is likely to be perceived as true. However, social media is, to quite an extent, an expression of individual opinion, which is bound to be biased. Even more, it is a hub for propaganda disguised as news. It might not be so easy to be fooled by fake news found through search engines, but on social media it can be hard to recognize.
Fake News vs Satire
One thing is satire, where the intent of writing false news is humor: a disclaimer states that the story is false, or at times it is evident from the heading. Fake news is different. By fake news we mean propaganda, hoaxes, disinformation and the like, disguised as real news and spread with the intent to fool people.
Fake news websites pretend to be authorities, and at times name themselves similarly to well-known news brands, e.g. Bloomberg.ma to imitate Bloomberg.com. In many cases, the fake website copies everything from the logo to the site's structure.
Thanks to the social media revolution, fake news websites now have huge reach. They intend to shake the foundation of democracy; e.g., many fake news websites mushroomed during the 2016 US presidential election to side with political parties.
One thing noticed during the 2016 US presidential election was that the media became polarized, with different outlets writing news biased towards various political parties. The standard for good news coverage is to report news of importance, set clear priority criteria, and report both sides of whatever story is covered.
Exaggerated news means omitting part of the story and overstating the rest to favor a person or ideology. As they say, a half-truth is worse than a complete lie; exaggerated news is harmful to a true democracy.
It might be worse than news published on fake news websites because the reader might believe it blindly considering that it has been published on an authority site.
Types of fake news websites
- Using names similar to those of known news brands: This has been done with almost every known news website. The website owner makes a duplicate site with a similar logo and a slightly different name; at times the owner's personal information is hidden from the online registration database. The intent is to make readers believe that the information published on these fake sites comes from the authority site they imitate.
- Creating biased political websites: A major purpose of many fake news websites is to bias political opinion in favor of a particular party, since political parties represent opinions that may favor or oppose particular communities and their interests. In many countries politics is a huge craze, so people create news websites that grossly favor certain political opinions.
- Clickbait websites: Besides spreading propaganda, financial return is a reason so many fake news websites flourish. They write exactly what they believe people want to read; when people open their sites and view the ads, the owners earn revenue.
Fake news sites list
The list is endless, and probably as many new fake news websites are launched as are shut down. What you need to observe to identify a fake news site is its domain: it usually copies the name of a major real news site. Some examples are cnn-trending.com, bloomberg.ma, drudgeReport.com.co, usatoday.com.co and washingtonpost.com.co. It is therefore important that you learn to spot fake news sites.
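The domain check described above can be sketched in a few lines; the brand list and function here are hypothetical, for illustration only, not a real verification service:

```python
# Flag domains that contain a well-known news brand's name but are not
# that brand's actual registered domain (e.g. an extra ".co" appended).
KNOWN_BRANDS = {"cnn.com", "bloomberg.com", "usatoday.com",
                "washingtonpost.com", "drudgereport.com"}

def looks_like_imitation(domain):
    d = domain.lower()
    if d in KNOWN_BRANDS:
        return False  # the real site
    brand_names = {brand.split(".")[0] for brand in KNOWN_BRANDS}
    return any(name in d for name in brand_names)

print(looks_like_imitation("usatoday.com.co"))  # True: imitation domain
print(looks_like_imitation("usatoday.com"))     # False: the real domain
print(looks_like_imitation("example.org"))      # False: unrelated site
```

A real checker would need far more than substring matching, but even this crude rule catches the ".com.co" trick the examples above rely on.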
Measures to control fake news websites
Any news website has two ways of reaching the masses. The first is to be approved for Google News or Bing News and ranked in their search results; it is the same with other search engines. This route is tough because of strict quality criteria and monitoring. The second is to reach people through social media like Facebook and Twitter. Recently, Facebook introduced an option to report fake news, and Twitter may follow suit soon.
Despite the laws protecting freedom of the press, fake news is illegal in most countries. Thus, to prevent the problem from spreading further, many countries are teaming up to cross-check false news.
The European Commission is contributing more resources towards checking fake news. Germany is setting up a center to monitor disinformation. Seventeen newsrooms in France are also joining hands with Facebook and Google to overcome the problem.
The human tendency is to immediately share something if the ‘news’ is something the person likes or wants to be true. But before sharing indiscriminately, we as users must fact-check news found online by comparing it with what has been published on authority websites.
|
<urn:uuid:59c977f7-296f-402c-a9eb-2ca8cab74ef2>
|
{
"dump": "CC-MAIN-2018-22",
"url": "http://www.thewindowsclub.com/fake-news-websites",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866938.68/warc/CC-MAIN-20180525024404-20180525044404-00486.warc.gz",
"language": "en",
"language_score": 0.9602448344230652,
"token_count": 1348,
"score": 2.78125,
"int_score": 3
}
|
Learning how to get rid of cold sores is something each of us needs from time to time. Many companies tout their magic ointments, but the key is to find much simpler, more elementary first-aid remedies, which turn out to be more effective than all those commercial promises. One should really understand what cold sores are, along with their causes and prevention. Once you do, you will always be armed with proven methods that work and, thus, get the best results overnight.
All You Need to Know
1. The main cause of cold sore events and fever blister outbreaks is either type 1 or type 2 Herpes Simplex Virus, but an outbreak is always activated by a trigger such as stress, weakened immunity, a cold or excessive sun exposure.
2. Nearly 90% of the global population is affected by it. As you can see, this is a common condition. The virus lives in our nerve fibers, near the original infection area, for a lifetime.
3. Normally, the virus is latent, causing no problems. On the one hand, 34% of infected individuals never experience a single cold sore; on the other hand, 66% usually get more than one a year.
4. Cold sores are not only contagious, but rather durable, lasting for 7-14 days.
5. Any attempts to camouflage your cold sore only aggravate the problem.
6. Healing speed varies from person to person. It depends on your physical condition, activity levels and necessary nutrients from the diet.
How to Fight with Cold Sores in No Time:
3 Easy Steps
- When a nasty cold sore is just a few hours away, you can feel the virus ‘moving’ to the surface, that is tingling or itching. Your first task is to treat the affected area as quickly as possible, in order to prevent oral herpes from further developing. Do not delay and apply ice cubes. This way, the virus won’t be able to reproduce. It’s amazing, but sometimes the virus just gives up and hides back.
- Your body knows best how to heal itself. One of its defences is creating thick fluid that washes away new oral herpes viruses. You can speed up this process by washing your sore more often: use tissues or cotton balls soaked in alcohol or peroxide. This helps remove millions of virus particles and prevents secondary bacterial infection.
- Further healing is helped by drawing blood to the area, so use warm, moist heat on broken, open sores. Try applying tea bags, as tea contains antioxidants and chemicals that repair damaged skin; they are cheap and convenient to use. The longer the application, the better; in practice, 10-20-minute sessions work well.
Follow the above suggestions. Perhaps that 3-step procedure is enough for you; if not, experiment with more varied ways to heal your cold sore outbreaks.
How to Get Rid of Cold Sores:
4 Golden Rules
- Avoid triggers – Reduce stress and stay out of the sun. Take vitamins and minerals. Look after your immune system.
- Do not spread the infection – Avoid touching or scratching the infected areas. Never share eating utensils or personal items with others. Forget about kissing.
- Apply an L-lysine ointment 3 times daily.
- Ice, alcohol and heat – remember the three steps above?
Now you know everything about how to get rid of cold sores and can get relief faster than you ever believed possible! By the way, there is a joke: “What do a cold sore and a brother-in-law visiting from out of town for a week have in common? You cannot easily dispose of either of them, so you usually have to wait till they go away.” But that no longer applies to your cold sores.
Submarines have used sonar for decades. Bats and dolphins have used it for millions of years. And thanks to a little math, humans could soon be echolocating with their mobile phones.
At the École Polytechnique Fédérale de Lausanne (EPFL), in Switzerland, experts in signal processing discovered a mathematical technique that allows ordinary microphones to "see" the shape of a room by picking up ultrasonic pulses as they bounce off the walls. The work was published in this week's edition of the journal Proceedings of the National Academy of Sciences (PNAS).
Microphone echolocation is harder than it sounds. Ambient noise in any room interferes with the sounds used to locate the walls, and the echoes sometimes bounce more than once. There is also the added challenge of figuring out which echoes are bouncing off which wall.
Bats have had millions of years to evolve specialized neural circuits to fine-tune their echolocation abilities, said Ivan Dokmanic, a doctoral researcher and lead author of the PNAS paper. He added that humans can echolocate too, though not as precisely. (Some blind people have demonstrated this ability.)
One reason echolocation is easier for bats and humans than it is for computers is that bats and humans have skulls that filter the sound. Tracking where a sound originates is easier for humans because people's two ears hear slightly different things. This allows humans to pinpoint the origin of a sound.
To enable echolocation in mobile devices, Dokmanic investigated the math behind echolocation. What he found was that it's possible to treat the echoes of sounds emitted by a speaker as sources, rather than as waves bouncing off of something.
It's kind of like what happens when you look into a mirror: your eyes see a reflection, but there's the illusion that another person who looks just like you is standing at precisely the same distance from the mirror.
That's what Dokmanic did with sound. He assumed that each echo was a source, and created a kind of grid, called a matrix, of distances. Using some advanced math, he was then able to create an algorithm that could group the echoes in the correct way to deduce the shape of a room.
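The image-source idea can be made concrete with a little arithmetic. The sketch below is illustrative only, not the EPFL algorithm; the function name and the numbers are invented. It treats each first-order echo as a mirrored copy of the loudspeaker: with the speaker and microphone co-located, an echo that arrives t seconds after the pulse has made a round trip to one wall, so the virtual source behind that wall sits at distance c·t and the wall itself at c·t/2.

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at ~20 °C

def image_source_distances(echo_delays_s):
    """Map first-order echo delays to image-source distances.

    With the speaker and microphone co-located, an echo delayed by
    t seconds has travelled to a wall and back, so the virtual
    "image source" behind that wall lies at t * c, and the wall
    itself at half that distance.
    """
    return [round(t * SPEED_OF_SOUND, 2) for t in echo_delays_s]

# Hypothetical echo delays of 10 and 20 milliseconds
print(image_source_distances([0.010, 0.020]))  # [3.43, 6.86]
```

Real recordings are messier: ambient noise and second-order bounces mean the algorithm must also decide which labelling of echoes is geometrically consistent, which is what the distance matrix and grouping step are for.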
First, the team experimented with an ordinary room at the EPFL, using a set of microphones and a laptop computer to test whether the algorithm worked. It did, and their next step was to test their program in the real world. So they went to a cathedral and tested it there.
"It was really the opposite environment," Dokmanic said, adding that unlike a controlled lab setting, a cathedral has a lot of ambient noise and the space isn't perfectly square.
The algorithm worked there too, showing that the echolocation scheme could detect the cathedral's walls.
"The innovation is in the way that they process the signal to calculate the shape of the room," said Tommaso Melodia, an associate professor of electrical engineering at the University at Buffalo who was not involved in the study.
Martin Vetterli, professor of communications systems at EPFL and a co-author of the paper, said that mobile phones could be used to locate people more precisely. One problem with getting anyone's precise location on the phone is that only certain frequencies penetrate building walls, so GPS signals are sometimes useless.
Moreover, GPS is not always precise: if there's a lot of interference, it's not uncommon for a phone to say it can't locate you more precisely than within a half mile. Wi-Fi could work, but it depends on the existence of a local network.
Echolocation partly solves that problem, because it can measure the distance from where a user is standing to the walls of an individual room, and send that more precise information to tell the network exactly where that person is located. Instead of knowing where someone is within a city block, you'd be able to see that he or she is inside a room of a certain size or is surrounded by walls that give an intersection a certain shape.
One other issue is the distance between the two microphones on a mobile phone. Many mobile phones have two mics: the directional mic used when the phone is pressed to your head during a call, and a second mic used for cancelling out ambient noise.
The two microphones on a phone calculate distance by triangulating: measuring the small gap between when an echo reaches each microphone. The distance between the microphones is the base of a triangle, and the difference between the echoes' times of arrival tells you the lengths of the other two sides.
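That triangulation step can be sketched numerically. Under a far-field assumption (the source much farther away than the mic spacing), the inter-mic delay gives the arrival angle via sin θ = c·Δt / d. This is a hedged sketch, not the paper's method; the function name and the example values are invented.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second in air at ~20 °C

def arrival_angle_deg(delta_t_s, mic_spacing_m):
    """Estimate the direction of arrival from a two-microphone delay.

    For a distant source the wavefront travels an extra c * dt metres
    to reach the second mic; with the mics d metres apart,
    sin(theta) = c * dt / d, where theta is measured from broadside.
    """
    s = SPEED_OF_SOUND * delta_t_s / mic_spacing_m
    if abs(s) > 1.0:
        raise ValueError("delay too large for this microphone spacing")
    return math.degrees(math.asin(s))

# A 0.1 ms inter-mic delay with mics 15 cm apart (hypothetical values)
print(round(arrival_angle_deg(1e-4, 0.15), 1))  # 13.2
```

The sketch also shows why closely spaced phone mics struggle: shrink d by a factor of ten and the same angle produces a delay ten times smaller, which quickly disappears into the sampling resolution.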
But these two microphones usually aren't very far apart on phones, so calculating the distance to a source that's far away is harder to do.
One solution, Vetterli said, might be to use people's tendency to walk with their phones in order to help echolocate walls more accurately.
Since you can't make phones much bigger, it is simpler to have the phone take measurements from more than one spot as the user walks with it, so the base of the triangle is longer, he said.
Copyright 2013 LiveScience, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
Brentwood shown within Essex
- Population: 73,600 (2011 Census)
- Distance from London: 20 miles (30 km)
- Sovereign state: United Kingdom
- Postcode districts: CM13, CM14, CM15
- Ambulance service: East of England
- EU Parliament constituency: East of England
- UK Parliament constituency: Brentwood and Ongar
Brentwood is a town and the principal settlement of the Borough of Brentwood, in the county of Essex in the east of England. It is located in the London commuter belt, 20 miles (30 km) east-north-east of Charing Cross, and near the M25 motorway.
Brentwood is an affluent suburban town with a small but expanding shopping area and high street. Beyond this lies extensive residential development, entirely surrounded by open countryside and woodland, some of which penetrates to within a few hundred yards of the town centre.
The name was assumed by antiquaries in the 1700s to derive from a corruption of the words 'burnt' and 'wood', with the name Burntwood still visible on some 18th century maps. However, "brent" was the middle English for "burnt". The name describes the presumed reason for settlement in the part of the Forest of Essex (later Epping Forest) that would have covered the area, where the main occupation was charcoal burning. An alternative meaning of "brent" is "holy one", which could refer to the chapel dedicated to Thomas Becket, for the use of pilgrims to Canterbury.
Although a Bronze Age axe has been found in Brentwood, and there are clear signs of an entrenched encampment in Weald Country Park, it is considered unlikely that there was any significant early settlement of the area, which was originally covered by the great forest that spanned most of Essex at that time. Rather, it is believed that, despite the Roman road between London and Colchester passing through, the Saxons were the earliest settlers of the area.
Robert Graves, in his book I, Claudius, refers to Brentwood as the site of the battle where Claudius defeated the Ancient Britons in 44 AD. However, Graves also states that names and places in the book are sometimes fictitious.
The borough began as a small clearing in the middle of a dense forest, created by fire, giving it the name of Burntwood, or 'the place where the wood was burned'. People began to settle there and, because it was on the crossroads of the old Roman road from Colchester to London and the route the pilgrims took over the River Thames to Canterbury, it grew into a small town. A chapel was built in or around 1221, and in 1227 a market charter was granted. The new township, occupying the highest ground in the parish, lay at the junction of the main London-Colchester road with the Ongar-Tilbury road. Its growth may have been stimulated by the cult of St. Thomas the Martyr, to whom the chapel was dedicated: the 12th century ruin of Thomas Becket Chapel was a popular stopping point for pilgrims on their way to Canterbury. The ruin stands in the centre of the high street, next to the tourist information office, and the nearby parish church of Brentwood retains the dedication to St. Thomas of Canterbury. Pilgrims Hatch, or 'Pilgrims' gate', was probably named from pilgrims who crossed through on their way to the chapel. It is likely, however, that Brentwood's development was due chiefly to its main road position, its market, and its convenient location as an administrative centre. Early industries were connected mainly with textile and garment making, brewing, and brickmaking.
During the Peasants' Revolt of 1381, Brentwood was the meeting place for some of the instigators, such as John Ball and Jack Straw, who apparently met regularly in local pubs and inns. The first event of the Peasants' Revolt occurred in Brentwood, when men from Fobbing, Corringham and Stanford were summoned to Brentwood by the commissioner Thomas Bampton to answer for who had avoided paying the poll tax. Bampton insisted that the peasants pay what was demanded of them. The peasants refused, and a riot ensued as Bampton attempted to arrest them. The peasants moved to kill Bampton, but he managed to escape to London. The rioters, fearing the repercussions of what they had done, then fled into the forest. Afterwards, the peasants sent word to the rest of the country and so initiated the Peasants' Revolt. The Essex assizes were sometimes held here, as well as at Chelmsford. One such pub was The White Hart (now a nightclub called Sugar Hut Village, showing little of its original historic interest), which is one of the oldest buildings in Brentwood; it is believed to have been built in 1480, although apocryphal evidence suggests a hostelry might have stood on the site as much as a hundred years earlier and been visited in 1392 by Richard II, whose coat of arms included a white hart. The ground floor was originally stabling, and in the mid-1700s the owners ran their own coach service to London. On 13 September 2009, the building and roof suffered significant damage during a fire.
Marygreen Manor, a handsome 16th century building on London Road, is mentioned in Samuel Pepys' diaries and is said to have been often visited by the Tudor monarch Henry VIII when Henry Roper, Gentleman Pursuant to Queen Catherine of Aragon, lived there in 1514. It is now a hotel and restaurant. In 1686, Brentwood's inns were estimated to provide 110 beds and stabling for 183 horses. There were 11 inns in the town in 1788.
Protestant martyr William Hunter was burnt at the stake in Brentwood in 1555; a monument to him was erected by subscription in 1861 at Wilson's Corner. Brentwood School was founded in 1557 by Sir Anthony Browne and established in 1558 on Ingrave Road, behind the greens on Shenfield Road; the site of Hunter's execution is commemorated by a plaque in the school. Thomas Munn, 'gentleman brickmaker' of Brentwood, met a less noble end when he was hanged for robbing the Yarmouth mail, and his body was exhibited in chains at Gallows Corner, a road junction a few miles from Brentwood, in Romford. A ducking stool was mentioned in 1584.
As the Roman road grew busier, Brentwood became a major coaching stop for stagecoaches, with plenty of inns for overnight accommodation while the horses were rested. A 'stage' was approximately ten miles, and being about 20 miles (32 km) from London, Brentwood would have been the second stop for travellers to East Anglia. This has not changed; there is an above-average number of pubs in the area, possibly also due to the army being stationed at Warley Barracks until the 1960s. Some of the pubs date back to the 15th and 16th centuries. Brentwood was also significant as a hub for the London postal service, with a major post office since the 18th century. The most recent main post office on the high street was closed in the 2008 budget cuts; Brentwood residents must now rely on sub-post offices.
Daniel Defoe wrote about Brentwood as being "...full of good inns, and chiefly maintained by the excessive multitude of carriers and passengers, which are constantly passing this way to London, with droves of cattle, provisions and manufactures."
The 'Brentwood Ring', the earliest Christian ring ever to have been discovered in Britain, was found in Brentwood in the late 1940s. It now resides at the British Museum in London. The only other ring of its type in existence is held at the Vatican Museum in Rome.
Brentwood originated as an ancient parish of 460 acres (1.86 km²). In 1891 the population was 4,949. Under the Local Government Act 1894, the Brentwood parish formed part of the Billericay Rural District of Essex. In 1899 the parish was removed from the rural district and formed the Brentwood Urban District. In 1934 the parish and district were enlarged by gaining Hutton, Ingrave and South Weald. The district was abolished in 1974 by the Local Government Act 1972, and Brentwood urban district was joined with the parishes of Ingatestone and Fryerning, Mountnessing, Doddinghurst, Blackmore, Navestock, Kelvedon Hatch, and Stondon Massey to form the Brentwood district with a total area of 36,378 acres. In 1976 the new district was divided into 18 wards, with 39 councillors. In 1993, Brentwood gained borough status.
In 1917, the parish church was awarded cathedral status, then between 1989 and 1991 the building was modified to appear in an Italianate Classical style. Brentwood Cathedral is currently the seat of the Roman Catholic Bishop of Brentwood.
Incidentally, Ingatestone Hall, noted for its Roman Catholic connections through the Petres, is a 16th-century manor house built by Sir William Petre at Yenge-atte-Stone. The staunch Petres played a significant role in the preservation of the Catholic faith in England. Sir William was assistant to Thomas Cromwell when Henry VIII sought to dissolve the monasteries and ascended to the confidential post of Secretary of State, throughout the revolutionary changes of four Tudor monarchs: Henry VIII, Edward VI, Mary I, and Elizabeth I. Queen Mary, in 1553, on her way to claim her crown in London, stopped at Ingatestone Hall; later, Queen Elizabeth I spent several nights at the hall on her royal progress of 1561.
Today, Ingatestone Hall, like all other large Tudor houses, is an expression of wealth and status and retains many of the features of a 16th-century knightly residence, despite alterations by descendants who still live in the house. Ingatestone Hall represented the exterior of Bleak House in the 2005 television adaptation of Charles Dickens' novel, and also appeared in an episode of the television series Lovejoy. It is open to the public for tours, concerts, and performances; the hall and grounds can be rented for weddings and other occasions.
Brentwood was the location of Warley Hospital, a psychiatric hospital, from 1853 to 2001. A British East India Company elephant training school was based in Brentwood, and the site remained an active army base as a depot for the Essex Regiment until 1959, when much of it was redeveloped as the European headquarters of the Ford Motor Company. A few buildings remain from the barracks: the regimental chapel, the gymnasium (now home to Brentwood Trampoline Club) and the officers' mess (now Marillac Hospital).
Brentwood's military history
The military has associations with the Warley section of Brentwood going back over 200 years. The area also had strategic importance at the time of the Spanish Armada, when it was used as a meeting place for contingents from eight eastern and midland counties (900 horsemen assembled here) before they travelled on to Tilbury.
The local common was used as a military camp in 1742, with thousands of troops camped there during the summer months. It was an ideal base, as it was less than a day's march to Tilbury, where the troops would leave for foreign service. At the 1778 encampment, George III came to inspect the troops, and Dr Samuel Johnson stayed for five days. The camps were made permanent in 1804, with space for 2,000 cavalry. 116 acres (0.47 km²) of land were bought and used for two troops of horse artillery (222 horses, with 306 soldiers of varying ranks and ten officers), a hospital, and half a battalion of the Rifle Brigade.
In 1842 the East India Company's barracks at Chatham became inadequate, and they purchased the land to move their troops in. Accommodation was created for 785 recruits and 20 sergeants with new buildings for the officers. Married family housing was also provided, and a chapel. In 1856 further building work was carried out, and a total of 1,120 men were housed there every year. After training they were deployed to India.
The area and men were absorbed into the British Army after the Indian Mutiny in 1857, and in 1861 the barracks was bought by the War Office. By 1881 the many different regiments had evolved into the Essex Regiment, which saw active service in the Boer War and both World Wars. The barracks served as a training centre and depot for the Essex Regiment for a number of years after the war, with many National Servicemen serving their first weeks here, but with the ending of conscription in 1960 the barracks closed.
During World War II, over 1,000 bombs were dropped on Brentwood, including 19 flying bombs (doodlebugs), 32 long-range rockets (V2s) and many incendiary bombs and parachute mines. 5,038 houses were destroyed, 389 people were injured and 43 died. The 15th- and 16th-century pubs, however, survived. Brentwood had been considered a safe enough haven for London children to be evacuated to; 6,000 children arrived in September 1939 alone.
The town is increasingly suburban, but it does have a very rural feel, with trees, fields and open spaces all around the town; Shenfield Common is also less than one mile from town centre shops.
Brentwood's high street has also been subject to major redevelopment works costing between £3 million and £7 million. This included the demolition of the Sir Charles Napier pub to build an additional lane to improve traffic flow at the west end of the high street, and re-laying the pavements and road surface in the high street itself.
Primary education is provided by a mixture of state schools, parochial schools (Church of England and Catholic) and independent prep schools.
The Ford Motor Company's United Kingdom headquarters are located in the suburb of Warley, as are the property developer Countryside Properties. Hinge manufacturers NV Tools are based in the commuter suburb of Hutton.
From the financial services sector, Equity Insurance Group, comprising Equity Red Star (Lloyd's syndicate 218), affinity provider Equity Direct Broking Limited and motorcycle insurance broker Bike Team, is headquartered in the town centre. General insurance broker Brents Insurance established in the town in 1963. The Bank of New York Mellon also have a substantial presence in the town.
LV= also has a major office in the town, employing 350 people at present.
The previous and current headquarters of electronics company Amstrad are located in Brentwood. The television show The Apprentice used overhead views of the Canary Wharf business district in London as an accompaniment to interior shots of the previous Amstrad offices, Amstrad House, which has since been converted into a Premier Inn hotel. Amstrad's current headquarters are located directly opposite the old Amstrad House.
Well-known businesses that used to operate in the town include vacuum flask manufacturer Thermos, and Nissen whose UK factory and headquarters were established in the town by Ted Blake in the mid-1960s but closed in the 1980s.
The unemployment rate in Brentwood is 1.9%.
Local government and politics
Brentwood forms part of the larger Borough of Brentwood which also encompasses the surrounding smaller suburbs and villages. For elections to Westminster, Brentwood forms part of the Brentwood and Ongar constituency. In the 2010 General Election, Conservative Eric Pickles retained his seat in parliament, a position he has held since taking office in 1992.
Arts and media
The Brentwood Theatre and The Hermitage are the main cultural buildings in Brentwood; located on the same site in the town centre, the yellow and blue theatre and the historic brick buildings are difficult to miss. Owned and maintained by an independent charity, Brentwood Theatre receives no regular arts funding or subsidy. However, through careful management and with the support of a team of volunteers it is able to keep costs low so that hire rates are good value for a 100- to 176-seat professional venue. The Hermitage is used as the centre for Brentwood Youth Service.
Brentwood Theatre is a fully fitted community theatre that serves more than 40 non-professional performing arts groups. With high-quality lighting and sound, set design and production, and flexible staging, it is also an ideal venue for touring professional troupes, like Eastern Angles, who staged Return to Akenfield there to critical acclaim. Audrey Longman, the retiring chair of Brentwood Theatre Trust, and patron Stephen Moyer led a fundraising campaign to add much-needed backstage facilities. The "Reaching Out, Building On" campaign, well publicised by local radio station Phoenix FM, enabled the theatre to build an entire back wing, enhancing the theatre with dressing rooms, a kitchen, office space, lifts, and the Audrey Longman Studio, a 40-seat multi-purpose room outfitted with dance mirrors, staging, seating, lighting, and sound for intimate performances, rehearsals, workshops, classes, and community meetings and ceremonies.
Local involvement provided support for Brentwood Theatre's renovation, but the campaign received a significant bump when a fan-based fundraiser became known to American fans of actor Stephen Moyer, the first patron of the theatre.
The theatre has also become known for its month-long Annual Holiday Children's Production every December. In 2008, local families enjoyed Roald Dahl's Fantastic Mr. Fox with Stephen Gunshon, Deborah Leury, and Katie-Elizabeth Allgood; the theatre presented Dahl's The Twits in the 2009 season.
The Hermitage youth service operates its own cafe, youth club and a live music venue called The Hermit, which has hosted bands such as Motörhead and InMe. InMe were heavily supported in their early years by the venue, whose purpose is to promote and encourage young bands. It also plays host to private events, and a weekly jazz club was run there by the saxophonist Spike Robinson until his death. Both venues co-host the Brentwood Blues Festival, a music event that has played host to the Blockheads and Bill Wyman.
A community radio station, Phoenix FM, serves the Brentwood area. The station was formed in August 1996 and made ten trial broadcasts under a restricted service licence, each lasting 28 days, the first starting on 29 December 1996 and the last ending on 25 February 2006. On 23 March 2007, the station started to broadcast permanently on 98.0 FM, featuring popular music, local musicians and acts, local events, and interviews with key local figures.
The Brentwood Art Trail has become a popular annual summer event, developed so that art created by local people can be recognised and appreciated.
Brentwood is also home to the Royal British Legion Youth Band, which performs at many events throughout the year, including the military tattoo at Haileybury and the Swanage Carnival. It is a successful band that attracts youngsters from the age of eight from Brentwood and the surrounding areas, and it was the first British band ever to take part in the Tournament of Roses Parade in Pasadena, California. It meets twice a week in Warley.
Among the many theatre companies in the region, Brentwood Operatic Society and Shenfield Operatic Society are two of the many groups providing excellent theatrical productions for the community. Brentwood Operatic Society also trains young actors with its BOSSY Youth acting programme, headed by Gaynor Wilson, who formerly directed actor Stephen Moyer. David Pickthall serves as musical director when not heading the music department at Brentwood School, scoring films and television shows for the BBC, directing British orchestras, and composing. The award-winning composer wrote two operas and three musicals, published worldwide by Samuel French Ltd. He is also the musical voice of the villainous penguin in the Oscar-winning Wallace & Gromit: The Wrong Trousers.
Brentwood's Orchestras for Young People was founded in 1990 and grew to include five ensembles for orchestral instrumentalists of school age, who perform regularly in and around the town. Regular rehearsals and workshops introduce the musicians to a wide variety of music, from well-known classical pieces to modern music.
The Brentwood Performing Arts Festival has now been accepted into membership by the British and International Federation of Festivals, of which Queen Elizabeth II is patron. With this, the festival has achieved recognition as the Festival of Performing Arts for Brentwood.
The town is the venue of the Brentwood International Chess Congress which was set up in 2006 and first ran 17–18 February 2007. The congress attracted 235 competitors who included three Grandmasters and five International Masters. The prize fund is relatively generous in comparison to many other similar congresses, being around £4,000. In 2007 it was the largest chess competition to be held in Essex and was organised by Brentwood Chess Club.
Sport, parks and open spaces
Although close to the extremities of Greater London, Brentwood is surrounded by open countryside and woodland. This has been cited as showing the success of the Metropolitan Green Belt in halting the outward spread of London's built-up area.
Brentwood has a number of public open spaces, including King George V Playing Field, Shenfield Common, and two country parks at South Weald and Thorndon. Weald Country Park was initially chosen to host the 2012 Olympic mountain biking events but was declared "too easy" a course. Brentwood does, however, host a number of criterium cycle races that attract many of Britain's greatest cyclists.
The town has two large sports centres providing access to a range of sports including badminton, squash, swimming, and football. There are a number of golf courses, including a par-70 municipal course very close to the town centre at Hartswood, as well as others in the surrounding countryside. A number of cricket clubs exist in and around the town, although the County Ground, closest to the town centre, no longer hosts Essex matches. Brentwood is also home to non-League football club Brentwood Town F.C. and basketball team London Leopards, who both play at the Brentwood Centre Arena. The town is also home to London junior league club Brentwood Elvers RLFC, the only rugby league club in west Essex. Brentwood Hockey Club is also based in the town at the Old County Ground and fielded seven men's and five ladies' league teams for the 2009-10 season.
Although trampolines are no longer manufactured here, Brentwood became the centre of trampolining in the United Kingdom between 1965 and 1981 after George Nissen brought the new sport to the town in 1949; trampolines were eventually manufactured in the town, and production continued for many years after it ceased in the USA for fear of litigation. Ted Blake, a long-term Brentwood resident, was managing director of Nissen UK from its inception until shortly before it closed, and became a leading figure worldwide in the development of modern trampolining. Brentwood still has a thriving trampolining community but no longer a local factory.
- Jonty Hearnden - television presenter
- Noel Moore - Civil Servant
- Sarah Kane - playwright
- Paul Wickens - musician
- Pixie Lott - singer
- Steve Davis - snooker player
- Jodie Marsh - glamour model
- Stephen Moyer - actor
- Logan Sama - grime DJ
- Danny Young - actor
- Frank Bruno - boxer
- Carlton Leach - former criminal
- Billy Murray - actor
- Ray Parlour - footballer
- Dave McPherson - musician
- Neil Ruddock - former footballer
- Trevor Brooking - former footballer
- John Jervis - admiral of the fleet and patron of Nelson
- Fatima Whitbread - Olympic medallist
- Ross Kemp - actor
- Barry Hearn - sports investor
- Johnny Herbert - three times Formula One race winner and winner of 1991 Le Mans 24 hours
- Perry McCarthy - Formula One driver (former Top Gear Stig)
- Louise Redknapp - model and singer
- Leddra Chapman - singer
- Frank Lampard - footballer (attended Brentwood School)
- Ian Pont - cricketer (attended Brentwood School)
- Noel Edmonds - television presenter (attended Brentwood School)
- Eric Pickles - Conservative MP
- Jeff Randall - journalist
- Amy Childs - reality television star
- Douglas Adams - writer (attended Brentwood School)
- Griff Rhys Jones - actor (attended Brentwood School)
- Russell Tovey - actor (Being Human, UK)
Brentwood is served by a number of bus services, many being operated by First Essex. The other main public transport providers include Arriva Shires & Essex, Imperial Buses, Regal Busways and First London. London Buses route 498 links Romford with Brentwood and operates daily.
Brentwood railway station is located to the south of the town centre and is served by stopping services between London Liverpool Street and Shenfield. Also within the borough of Brentwood are Ingatestone and Shenfield stations (with fast services to Liverpool Street) and West Horndon (with services between London Fenchurch Street and Shoeburyness).
- http://www.activbrentwood.com - Comprehensive Guide to Brentwood
- Brentwood Borough Council - Welcome to Brentwood (PDF)
- Churches in Brentwood - The Website of Churches Together in Brentwood
- BBC's H2G2 Entry on Brentwood
- Brentwood Theatre - Website for Brentwood Theatre, Essex, UK
- Inside Brentwood - Local Business Directory and Guide for the Brentwood area
- Brentwood Town Football Club - Brentwood Town Football Club's Official Website
|
<urn:uuid:e271613d-0371-4744-9d89-48cacb525df0>
|
{
"dump": "CC-MAIN-2014-10",
"url": "http://en.wikipedia.org/wiki/Brentwood,_Essex",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010746376/warc/CC-MAIN-20140305091226-00034-ip-10-183-142-35.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.958490788936615,
"token_count": 5962,
"score": 2.9375,
"int_score": 3
}
|
Curating success stories to illuminate the path ahead
Collaboration and interdisciplinarity are key to achieving the green recovery we desire. At Wolfson College, University of Cambridge, our researchers have brought about positive change by working with industry, and through the innovative use of interdisciplinary methods.
Dr Chris Town, a computer scientist and research fellow at Wolfson, was inspired by the work of marine biologists studying manta rays. The marine biologists were painstakingly going through thousands of images to manually identify individual mantas, so Dr Town developed a pattern matching algorithm to automate the task. With the help of the MantaMatcher algorithm, we now have a more comprehensive understanding of mantas’ behaviour and ecology. The scientific data collected using the algorithm also led to increased protection of the species under CITES. A citizen science platform, mantamatcher.org, was also created to crowdsource images of mantas, which will assist scientists in tracking mantas’ movements and documenting their numbers. Dr Town continues to assist in the collection and analysis of data on other animals, including whales and turtles.
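As an illustration of how pattern-based identification might work, here is a minimal sketch in Python. It is not the MantaMatcher algorithm itself, just a toy nearest-neighbour comparison of spot constellations; all names and coordinates are invented for the example.

```python
import math

def match_score(pattern_a, pattern_b):
    """Mean distance from each spot in pattern_a to its nearest spot in
    pattern_b. Lower scores mean the two spot constellations are more alike."""
    total = 0.0
    for (xa, ya) in pattern_a:
        total += min(math.hypot(xa - xb, ya - yb) for (xb, yb) in pattern_b)
    return total / len(pattern_a)

def identify(query, catalogue):
    """Return the catalogue ID whose stored pattern best matches the query."""
    return min(catalogue, key=lambda name: match_score(query, catalogue[name]))

# Toy catalogue of known individuals (spot coordinates in arbitrary units).
catalogue = {
    "manta-001": [(0.1, 0.2), (0.4, 0.5), (0.8, 0.3)],
    "manta-002": [(0.6, 0.9), (0.2, 0.7), (0.5, 0.1)],
}

sighting = [(0.12, 0.21), (0.41, 0.48), (0.79, 0.33)]  # close to manta-001
print(identify(sighting, catalogue))  # → manta-001
```

A real system would first extract spot locations from photographs and normalise for viewing angle, but the core matching step is a comparison of this kind.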
Professor Steve Evans, a Wolfson fellow and Director of Research in Industrial Sustainability, decided to tackle the resource-intensive process of jeans manufacturing. Working with an industrial manufacturer, he found that a pair of jeans requires about 800 litres of water to produce. His work resulted in a LEED-certified facility with 98% efficiency, which reduced water use to just 0.4 litres per item. Over a year, the new process would save over 430 million litres of water, equal to the annual water consumption of 432,000 people. Research applied to industry can revolutionise inefficient processes, reducing our ecological footprint and assisting with our green recovery.
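The figures above can be sanity-checked with a little arithmetic. The pairs-per-year and per-person numbers below are back-calculations from the quoted totals, not figures from the article.

```python
water_before = 800.0      # litres per pair of jeans (article figure)
water_after = 0.4         # litres per pair after the redesign (article figure)
annual_saving = 430e6     # litres saved per year (article figure)
people_equiv = 432_000    # people whose annual use equals the saving (article)

saving_per_pair = water_before - water_after          # 799.6 litres per pair
pairs_per_year = annual_saving / saving_per_pair      # implied production volume
litres_per_person = annual_saving / people_equiv      # implied per-capita use

print(f"{pairs_per_year:,.0f} pairs/year")       # ≈ 537,769
print(f"{litres_per_person:.0f} L/person/year")  # ≈ 995
```

The implied per-capita figure of roughly 995 litres per year is low by rich-country standards, so the "432,000 people" comparison presumably refers to drinking-water consumption only.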
By collecting and publicising these successful research interventions, Wolfson’s Sustainability and Conservation Hub seeks to inspire the next generation of thinkers and practitioners. These stories and past experiences help us imagine new ways of harnessing research and technology. They also show how collaborating across disciplines and with industry can lead to big successes. Only by working together and thinking outside the box can we have a chance at achieving the green recovery we seek.
The Sustainability and Conservation Hub at Wolfson College, University of Cambridge, convenes interested individuals and organisations to inform, educate, and explore disruptive solutions to combat the destruction of the natural world. By empowering Wolfson’s diverse and international network with thought leadership, action and ambition, we seek to facilitate the deep changes needed to wider global systems. We take interdisciplinary focus, multi-generational collaboration and systems thinking to be key elements of the change required. We host events, projects, mentoring and much more, with strengths in inclusion and diversity of people and thought. We connect returning professionals, new students from 85+ nations, world leading researchers and engaged external individuals and organisations towards action. For more information, visit our website: www.wolfson.cam.ac.uk/sc-hub
Did you miss Climate Exp0? You can watch the highlight videos from all five days of the conference here. All conference content and sessions will be freely and permanently available on Cambridge Open Engage from mid-June. For those who attended Climate Exp0, you can view all sessions from Climate Exp0 now via the Climate Exp0 homepage, using your conference login.
|
<urn:uuid:154d11c1-c811-4fa3-9201-191e6e04f06b>
|
{
"dump": "CC-MAIN-2022-33",
"url": "https://climateexp0.medium.com/curating-success-stories-for-a-better-tomorrow-cd9a8bc025ca?source=user_profile---------6----------------------------",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571472.69/warc/CC-MAIN-20220811133823-20220811163823-00446.warc.gz",
"language": "en",
"language_score": 0.9256798624992371,
"token_count": 680,
"score": 2.671875,
"int_score": 3
}
|
Role in WW2
Main article: History of Liechtenstein § Liechtenstein during the World Wars
Shortly following the end of World War I, Liechtenstein concluded a customs and monetary agreement with neighboring Switzerland. In 1919, the close ties between the two nations were strengthened when Liechtenstein entrusted Switzerland with its external relations. At the outbreak of war, Prince Franz Josef II, who had ascended the throne only months before, promised to keep the principality out of the war and relied upon its close ties to Switzerland for its protection.
Attempts to sway the government did occur. After an attempted coup in March 1939, the National Socialist "German National Movement in Liechtenstein" remained active but small. The organization, along with any Nazi sympathies, virtually disappeared following the outbreak of war.
|
<urn:uuid:d98df784-d42e-4588-b1c3-e8fdf29e54de>
|
{
"dump": "CC-MAIN-2021-43",
"url": "https://www.nevingtonwarmuseum.com/liechenstein.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585265.67/warc/CC-MAIN-20211019105138-20211019135138-00651.warc.gz",
"language": "en",
"language_score": 0.9590149521827698,
"token_count": 168,
"score": 4.09375,
"int_score": 4
}
|
The Sacrament of Baptism incorporates us into the Church, the Body of Christ, and is our introduction to the life of the Holy Trinity. Water is a natural symbol of cleansing and newness of life. Through the three-fold immersion in the waters of Baptism in the Name of the Holy Trinity, one dies to the old ways of sin and is born to a new life in Christ. Baptism is one's public identification with Christ's Death and victorious Resurrection. Following the custom of the early Church, Orthodoxy encourages the baptism of infants. The Church believes that the Sacrament bears witness to the action of God, who chooses a child to be an important member of His people. From the day of their baptism, children are expected to mature in the life of the Spirit, through their family and the Church. The Baptism of adults is practiced when there has been no previous baptism in the name of the Holy Trinity.
|
<urn:uuid:681f62ff-8eff-4d9a-9604-794e7fb6ac73>
|
{
"dump": "CC-MAIN-2017-17",
"url": "http://www.greekfoodfest.org/seven_sacraments/baptism.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118552.28/warc/CC-MAIN-20170423031158-00144-ip-10-145-167-34.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9671870470046997,
"token_count": 183,
"score": 2.671875,
"int_score": 3
}
|
The area of financial mathematics has expanded remarkably over the last few years. It brings together ideas from statistics, calculus, numerical analysis, operational research and other areas of maths to solve financial problems, usually with a computer program.
What are derivatives?
The evaluation of financial derivatives very often requires complex mathematics. Essentially, a derivative is a type of financial contract whose payoff is based on an underlying asset. The underlying asset could be an equity, a bond or virtually anything: take, for example, weather derivatives, whose payoffs can be based on the difference between an agreed temperature and the actual temperature several months in the future. Derivatives can be bought either to insure against a future event or by speculators, to profit from changes in their value.
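These payoff rules are simple enough to express directly in code. The sketch below shows the payoff at expiry of a European call option, together with a stylised temperature derivative of the kind mentioned above; the tick size and temperatures are invented for illustration.

```python
def call_payoff(spot, strike):
    """Payoff of a European call option at expiry: the right to buy the
    underlying at the strike, worth something only if spot > strike."""
    return max(spot - strike, 0.0)

def weather_payoff(actual_temp, agreed_temp, tick=1000.0):
    """Stylised temperature derivative: pays a fixed amount ('tick') per
    degree that the actual temperature falls short of the agreed level."""
    return tick * max(agreed_temp - actual_temp, 0.0)

print(call_payoff(105.0, 100.0))   # 5.0  (in the money)
print(call_payoff(95.0, 100.0))    # 0.0  (expires worthless)
print(weather_payoff(12.0, 15.0))  # 3000.0 (3 degrees below the agreed level)
```

Pricing such a contract before expiry is where the complex mathematics comes in, since the future value of the underlying is uncertain.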
|
<urn:uuid:3a66d894-a0cd-483c-9840-375b6e7cc485>
|
{
"dump": "CC-MAIN-2017-51",
"url": "http://saeedamen.com/index.aspx?primary=financial%20maths",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948522205.7/warc/CC-MAIN-20171213065419-20171213085419-00335.warc.gz",
"language": "en",
"language_score": 0.9470116496086121,
"token_count": 165,
"score": 3.296875,
"int_score": 3
}
|
Recovery after strenuous exercise involves processes that are dependent on fluid and food intake. Current sports nutrition guidelines provide recommendations for the quantity and timing of consumption of nutrients to optimise recovery issues such as refuelling, rehydration and protein synthesis for repair and adaptation. Recovery of immune and antioxidant systems is important but less well documented. In some cases, there is little effective recovery until nutrients are supplied, while in others, the stimulus for recovery is strongest in the period immediately after exercise. Lack of appropriate nutritional support will reduce adaption to exercise and impair preparation for future bouts. Ramadan represents a special case of intermittent fasting undertaken by many athletes during periods of training as well as important competitive events. The avoidance of fluid and food intake from sunrise to sundown involves prolonged periods without intake of nutrients, inflexibility with the timing of eating and drinking over the day and around an exercise session, and changes to usual dietary choices due to the special foods involved with various rituals. These outcomes will all challenge the athlete's ability to recover optimally between exercise sessions undertaken during the fast or from day to day.
Competing interests None.
Provenance and peer review Not commissioned; externally peer reviewed.
|
<urn:uuid:34a26ba4-ab1b-4cbc-aa0a-94c6d710193c>
|
{
"dump": "CC-MAIN-2017-39",
"url": "http://bjsm.bmj.com/content/early/2010/05/10/bjsm.2007.071472",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686705.10/warc/CC-MAIN-20170920071017-20170920091017-00384.warc.gz",
"language": "en",
"language_score": 0.9402046203613281,
"token_count": 304,
"score": 3.109375,
"int_score": 3
}
|
As a caregiver, it is important to understand the communication challenges of those afflicted with Alzheimer’s. Communication skills and short-term memory are the first to be affected by the disease. Your loved one may begin using the same words or phrases repeatedly even when they do not apply to the conversation. They may also invent new words to represent objects they recognize but can no longer recall the correct term for. If the person is multilingual, he or she may revert to their native language. Forgetting names, losing one's train of thought, difficulty expressing ideas in a logical order, and a decline in speaking are all common changes.
Despite these various obstacles, there are many ways you can continue to communicate with someone who has Alzheimer’s and prevent further deterioration of communication skills. Being patient and supportive shows your loved one that you care about their needs. Offering comfort and reassurance encourages them to continue speaking even when they are having trouble expressing themselves. You can also offer a guess if they start to get frustrated by the communication barrier. Avoid negativity such as criticism, corrections, and arguments. Instead, focus on listening to decipher meaning. Understand that actions and emotions speak louder than words. The way they are expressing themselves will give you more insight on how they are feeling than the actual words they are saying.
Love and attention are the best medicines for someone with Alzheimer’s. Regardless of their level of communication, take the time to engage in conversation with them regularly. Speak slowly, repeat information when needed, and maintain eye contact. They will be more likely to remember words if they hear them often.
Edison Home Health Care is happy to advise and assist you or any loved one who seek appropriate care of Alzheimer’s disease. Give us a call at 888-311-1142, or fill out a contact form and we will respond shortly.
|
<urn:uuid:3e56c234-f5d2-4d1e-842b-ea8b49646f88>
|
{
"dump": "CC-MAIN-2017-22",
"url": "https://edisonhhc.com/best-practices-to-help-alzheimers-patients-communicate/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607860.7/warc/CC-MAIN-20170524192431-20170524212431-00460.warc.gz",
"language": "en",
"language_score": 0.9572676420211792,
"token_count": 380,
"score": 2.96875,
"int_score": 3
}
|
NEW MEMBERS TO THE SOCIETY.
I am not talking here about shifting to a new location, or about my new neighbours who have recently moved in from Nagpur. I am referring to a new amphibian family found in Northeast India.
These new amphibians belong to the caecilian group and have been named Chikilidae, after the name used in the local Garo tribal language. Much less is known about these legless amphibians than about their more famous cousins, the frogs.
Often mistaken for dangerous, poisonous snakes, these vulnerable creatures end up being chopped up by farmers because of their mistaken identity. In fact the rather harmless Chikilidae may even be the farmer’s best friend, feasting on worms and insects that might harm the crops and churning the soil as they move underground, where they are safer and more secure.
This new family was discovered by S.D. Biju, a professor at the University of Delhi, who led the project for the past five years with team members from Britain and Belgium.
It was challenging physical work, digging with spades in search of worm-like creatures that are about 20 centimetres (eight inches) long and often found 25 centimetres deep in the earth.
An adult Chikilidae remains with its eggs until they hatch, forgoing food for nearly 50 days. When the eggs hatch, the young ones emerge as tiny adults.
They grow to about 10 centimetres and have hard skulls, which they use to burrow through some of the region's toughest soils, escaping quickly at the slightest vibration, just like a rocket. If you miss one you'll never catch it again.
Biju, however, feels that 30-40 percent of the country’s amphibians are yet to be found.
Habitat destruction is a big problem worldwide, and discoveries like this prove that we must protect the environment to save parts of the natural world we know very little about.
So let's do our bit to save nature. We have no right to destroy anything not created by us.
Roll No. 06
|
<urn:uuid:8ca87d72-5932-41cb-887d-cc587ed63357>
|
{
"dump": "CC-MAIN-2019-04",
"url": "http://donboscobmm.blogspot.com/2012/02/new-members-to-society.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662690.13/warc/CC-MAIN-20190119054606-20190119080606-00217.warc.gz",
"language": "en",
"language_score": 0.9382951855659485,
"token_count": 446,
"score": 2.578125,
"int_score": 3
}
|
- Critics have long argued that the inability of satellites to track deforestation with precision created a loophole that could allow tropical countries to cheat regarding their annual deforestation rates.
- Past satellite imaging systems could not resolve objects smaller than 500 meters (1,640 feet) across. A new system developed by Alessandro Baccini and his Woods Hole, Massachusetts, research team can see objects just 30 meters (98 feet) across.
- Satellite imaging, combined with imaging from airplanes and with ground-truthing, will help make observation of tropical deforestation rates and carbon offsets far more precise in real time, preventing cheating and under-reporting.
PARIS: Critics of REDD+, the carbon offset policy designed to preserve tropical forests, argue that the notion that tropical countries will accept huge sums from developed countries to not clear their rain forests — if offered more lucrative deforestation opportunities — is not just a fallacy, it’s unverifiable.
But that’s changing. Rapidly. Dramatically. With greater precision than ever.
In the past, deforestation verification was inexact: relying on static, blurry satellite images with poor resolution — imaging systems couldn’t see anything smaller than 500 meters across. Over the course of a year, scientists would compare fuzzy images of a specific tropical sector and estimate — guess, really — how much forest had gone missing. Then they would estimate again how much carbon storage capacity was removed. Another guess.
Now, Alessandro Baccini, a remote sensing scientist and his team at Woods Hole Research Center in Falmouth, Massachusetts, are reducing the guesswork. They have gained the ability to track a forest sector in real time, at 30 meters (98 feet), not 500 meters (1,640 feet). The dramatic difference in precision is best illustrated in a photo provided by Woods Hole — the equivalent of a myopic person getting a new pair of glasses.
“For each 30-meter cell in the entire tropics, if that cell is deforested we know the carbon density of that vegetation before, and can make an accurate comparison,” Baccini said in an interview with Mongabay at COP21. “A 30-meter cell is a pixel size — a minimum unit of a satellite image.
“And it’s a huge difference. The problem before: when it was 500 meters, you couldn’t detect deforestation at smaller scales, like trees cut down here and there. So much critical information was missing. There was this mismatch between the carbon density and the deforestation. Now we’ve solved that mismatch.”
The methodology Woods Hole uses is multifaceted. It employs:
- The latest Landsat technology, which processes satellite images and can detect a single missing tree, or an entire clear-cut hectare. It also reads the forest canopy.
- LIDAR imaging, of the kind perfected by Stanford University’s Greg Asner and his Carnegie Airborne Observatory, to gauge the height of trees and vertical structure of carbon-storing vegetation.
- It figures in data from scientists on the ground who literally measure a tree’s mass to calculate more precisely the carbon stored within. When that tree falls and is removed, observers know how much carbon-sink capacity is lost, and how much carbon is released if the tree is burned.
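The per-pixel bookkeeping described above (for each deforested 30-meter cell, the previously known carbon density gives the emission estimate) can be sketched as follows. The rasters here are invented toy values, while the conversion factors are standard: a 30 m pixel covers 0.09 ha, and CO2 mass is 44/12 times carbon mass.

```python
PIXEL_AREA_HA = 30 * 30 / 10_000   # one 30 m pixel = 900 m^2 = 0.09 ha
C_TO_CO2 = 44.0 / 12.0             # convert carbon mass to CO2 mass

# Illustrative rasters: carbon density (Mg C per ha) before clearing,
# and a mask of pixels flagged as deforested during the year.
carbon_density = [
    [120.0, 115.0, 130.0],
    [ 90.0, 100.0, 110.0],
]
deforested = [
    [False, True,  False],
    [True,  False, False],
]

# Sum the prior carbon stock of every cleared pixel.
emitted_c = sum(
    carbon_density[i][j] * PIXEL_AREA_HA
    for i in range(len(deforested))
    for j in range(len(deforested[0]))
    if deforested[i][j]
)
print(f"{emitted_c:.2f} Mg C emitted")           # (115 + 90) * 0.09 = 18.45
print(f"{emitted_c * C_TO_CO2:.2f} Mg CO2")      # ≈ 67.65
```

Real monitoring systems run this comparison over billions of pixels from satellite-derived maps, but the arithmetic per cell is exactly this simple, which is what makes the approach transparent.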
“The types of forest degradation that cause carbon emissions, such as selective logging and fire, can be quantified and monitored [now] using current satellite technology,” said Asner. “Aircraft-based technology is much more accurate, however, and can be use to train and improve satellite-based approaches. Together, the two scales of activity readily resolve the effects of forest degradation on carbon emissions.”
“This combined data really makes [deforestation results] more transparent and easier to understand, and should provide more confidence [to REDD+ participants and critics, demonstrating] that we know what emissions come from deforestation,” Baccini said. “You can see it on the screen. It’s not blurry. It’s easy to measure. Every GIS technician can do it.”
Baccini added: “If you want REDD to be successful and stabilize the concentration of CO2 in the atmosphere, to have a big impact on [stabilizing] the climate, REDD needs to be able to measure emissions on a global scale. It needs to be transparent and reliable.
“And you need data that is consistent across countries, consistent across political boundaries and that everybody can access and use. Any estimate that comes out would be easy to assess, understand and explain.” The new satellite imaging refinements now available offer all these capabilities, and those sophisticated systems are about to go to work monitoring global tropical forests in a big way.
On November 30th, the opening day of the 21st UN climate summit, Norway, Germany and the United Kingdom pledged up to $5 billion US between now and 2020 to support REDD+ projects around the world. Colombia, for example, which has been approved as a REDD+ participant, will receive $100 million from the three countries to reduce deforestation and forest degradation in the Amazon. Colombia is part of what’s called the REDD Early Movers Program.
Nearly all REDD+ agreements are designed to be results based. Tropical countries are not paid for preserving forests unless they can demonstrate, year after year, that deforestation is not taking place, and that an equivalent carbon offset in standing forest exists for the industrial carbon-emitting nation offering payment.
Chris Meyer, a senior manager on tropical forest policy for the Environment Defense Fund in Washington, D.C., said at COP21 that Colombia can benefit from the new improved Woods Hole methodology.
“There is no one answer to every situation, but this will start slowly turning the ship around,” Meyer said. “Verification is complex. It’s multiple things — including hiring more inspectors [on the ground] to enforce laws against deforestation. But the Woods Hole work definitely has a role to play.”
Steve Panfil, a technical adviser for REDD+ with Conservation International in Washington, D.C., agrees:
“Our confidence in the ability to track outcomes is quite good and getting better and better over time. We now have [Woods Hole improved imaging] global data sets to actually monitor what’s happening on the ground — the coverage of the forest torn down, and how much carbon was there to begin with. On both sides of that, we are getting better and better on how we track those things.”
Tropical forests store 25% of global carbon and harbor 96% of the world’s tree species. Now, thanks to the new satellite imaging systems, it will be possible to track forest carbon storage tree-by-tree, hectare by hectare, making REDD verification of deforestation far more accurate.
Justin Catanoso is director of journalism at Wake Forest University in North Carolina. His reporting is sponsored by the Pulitzer Center on Crisis Reporting and the Center for Energy, Environment and Sustainability at Wake Forest.
|
<urn:uuid:1aae0c41-2c8a-4aa3-9bb3-f15847a7042f>
|
{
"dump": "CC-MAIN-2018-09",
"url": "https://news.mongabay.com/2015/12/cop21-new-satellite-imaging-tracks-redd-deforestation-tree-by-tree/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813187.88/warc/CC-MAIN-20180221004620-20180221024620-00067.warc.gz",
"language": "en",
"language_score": 0.9426192045211792,
"token_count": 1479,
"score": 3.640625,
"int_score": 4
}
|
This study investigated phthalate emissions from vinyl flooring in a large-scale chamber. Vinyl flooring materials were examined for their phthalate content; one with high contents of diisononyl phthalate (DINP) and di(2-ethylhexyl) phthalate (DEHP) was selected for emissions testing in a small chamber at two different temperatures. Using the same type of vinyl flooring, large-scale chamber experiments were then conducted in three testing phases. In the first phase, the gas-phase concentrations of DINP and DEHP in the large chamber at 36°C were about three times lower than those in the small chamber under the same temperature, which is consistent with its lower area/volume ratio. In the second phase, when a large air mixing fan inside the chamber was replaced with a small fan, the gas-phase concentrations of DINP and DEHP in the large chamber were reduced slightly, due to the decrease of the mass transfer coefficient and emission rate. During the last phase, when the temperature of the chamber was reduced to 25°C, phthalate concentrations dropped instantly and steeply due to the significantly reduced emissions. However, they did not decrease as quickly thereafter because of desorption of phthalates from the internal surfaces of the large chamber. A fundamental mechanistic model was developed to interpret the experimental results in the large chamber based on the emission characteristics obtained in the small chamber measurements. Reasonable agreement was obtained between the model calculation and experimental data. Further model simulations show that temperature and air mixing above the source material have important effects on the fate of phthalates, while the impact of air change rate (ACH) is not significant.
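The abstract does not give the authors' actual model, but the qualitative behaviour it describes can be illustrated with a generic well-mixed chamber mass balance, V*dC/dt = A*hm*(y0 - C) - Q*C, where emissions are driven by the gap between the surface-equilibrium concentration y0 and the chamber air concentration C. All parameter values below are invented for illustration and are not the study's measurements.

```python
def chamber_concentration(y0, hm, area, volume, ach, t_end_h, dt_s=10.0):
    """Gas-phase concentration in a well-mixed chamber with one emitting
    surface: V*dC/dt = A*hm*(y0 - C) - Q*C, integrated by forward Euler.
    y0    : gas-phase concentration in equilibrium with the material surface
    hm    : convective mass-transfer coefficient (m/h)
    ach   : air changes per hour, so ventilation flow Q = ach * volume
    t_end_h: integration time in hours
    """
    q = ach * volume
    c, t = 0.0, 0.0
    dt_h = dt_s / 3600.0  # time step in hours
    while t < t_end_h:
        dcdt = (area * hm * (y0 - c) - q * c) / volume
        c += dcdt * dt_h
        t += dt_h
    return c

# Illustrative numbers only.
c_final = chamber_concentration(y0=2.0, hm=1.0, area=2.0, volume=30.0,
                                ach=1.0, t_end_h=200.0)
steady = (2.0 * 1.0 * 2.0) / (2.0 * 1.0 + 1.0 * 30.0)  # A*hm*y0 / (A*hm + Q)
print(round(c_final, 4), round(steady, 4))
```

In this form, lowering the temperature lowers y0 (so the concentration drops steeply, as in the third phase), and a weaker fan lowers hm (a slight reduction, as in the second phase), while the A*hm term dominating Q explains why the air change rate matters little.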
|
<urn:uuid:068118bd-8edd-40c0-9765-9b1256817916>
|
{
"dump": "CC-MAIN-2021-39",
"url": "https://experts.syr.edu/en/publications/large-scale-chamber-investigation-and-simulation-of-phthalate-emi",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056900.32/warc/CC-MAIN-20210919190128-20210919220128-00600.warc.gz",
"language": "en",
"language_score": 0.9593995213508606,
"token_count": 383,
"score": 2.734375,
"int_score": 3
}
|
There was a need to cross the Humber from very early in the history of Barton, and a ferry crossing existed at least as early as Roman times, probably somewhere near Poor Farm, as this was the first point along the Humber that had a solid chalky shore rather than the muddy banks further down. This ferry service was mentioned in the Domesday book (although possibly not in the same place) and continued for many years (click here for a history of crossing the Humber by ferry). By the mid-1800s, however, a permanent crossing was desired. The initial suggestion was based around a railway line linking Hull with Goole and Doncaster, and at this time a tunnel under the Humber was even suggested. The next proposal was a line linking Hull with Lincoln, crossing the Humber at Hessle and passing a mile or so west of Barton. Even at this early stage it was known as the "Humber Bridge Scheme". The scheme was knocked back in 1865 when it was not granted the capital needed to complete it. The proposal was raised again in 1882, when it was reported in the London Gazette (28 November 1882), under "In Parliament - session 1883":
"Hull and Lincoln Railway
(Incorporation of Company; power to make Railways with a bridge over the Humber; ......"
There was to be 15 parts to this railway, Railway number 8 and 9 relating to the parish of Barton and was reported as:
"Railway No. 8, wholly situate in the intermixed parishes of Saint Mary and Saint Peter, Barton-upon-Humber, in the parts of Lindsey, in the county of Lincoln, commencing by a junction with the intended Railway No. 1, at a point on the fence forming the eastern boundary of a field belonging or reputed to belong to James Grassby, and in occupation, such field being situate on the northern side of the Ings-road, opposite and to the east of the junction with the Four Ings-road with the said Ings-road, such point being 88 yards or thereabouts, measuring along that fence in a northerly direction from the southern end of that fence at the Ings-road, and terminating by a junction with the Barton-upon-Humber branch of the Manchester, Sheffield and Lincolnshire Railway, at the termination thereof."
"Railway No. 9, wholly situated in the intermixed parishes of Saint Mary and Saint Peter, Barton-upon-Humber, in the parts of Lindsey, in the county of Lincoln, commencing by a junction with the intended Railway No. 1, at the public road known as Dam-road, at a point 370 yards or thereabouts, measuring in a westerly direction along that road with the Four Ings-road, and terminating by a junction with the intended Railway No. 8 at the said Dam-road, at a point 33 yards or thereabouts, measuring in an easterly direction along that road from the said junction of that road with Four Ings-road."
Railway No. 7 was proposed to be in Hull following the Selby line, to around Hessle, and Railway No. 10 was proposed to go through Appleby. The report in the London Gazette would suggest the proposed bridge would have been built somewhere near the existing bridge, but probably a little further west, and the line would travel along field boundaries in the Ings to link up with the current station, and then leave Barton by travelling along Dam Road.
This scheme was rejected again as the promoters could not satisfy the Commons Committee that the bridge would not interfere with the shipping on the Humber. The main concerns were the number of support pillars in the Humber supporting the bridge. At this point a suspension bridge had not been considered. A few more attempts were made to propose a railway crossing over the Humber, but none were successful.
The railway crossing idea was abandoned, and in the 1930s a multi-span road bridge was designed for the river Humber. This bridge would have cost around £1.75m, with the Ministry of Transport covering 75% of the cost in the form of a grant. The Bill was passed but subsequently withdrawn due to the financial crisis which then hit the country. The following years were difficult, and the war put paid to any real chance of building a bridge at that time. However, the Bill was revived and updated in 1955. Again the Bill was agreed, and an Act of Parliament was passed in 1959 (the Humber Bridge Act 1959) to build a single-span suspension bridge from Barton upon Humber to Hessle, joining Lincolnshire and Yorkshire. The decision to build the Humber bridge was finally made on 30 April 1969.
The Humber Bridge During Construction
[Photos: the two towers during construction; the caissons on the south bank.]
Work on the bridge finally started in July 1972, when the embankment on the Barton side was begun; in March the following year work began on the anchorages, tower foundations and towers. The anchorages are where the cables are attached: the south anchorage has foundations to 35 metres and the north anchorage to 21 metres.

The north tower was completed in 1974, but the south tower was still being prepared. The north tower was built on a hard bed of chalk, whereas a suitable site on the south side proved difficult to find; the south tower was finally founded on heavily fissured Kimmeridge clay lying about 30 metres down in the river. The depths of the tower foundations illustrate this: the north tower's reach 8 metres, the south tower's 36 metres. As a result, the south tower was not completed until 1976. To build it, a 500-metre-long jetty with a cofferdam was constructed to create an artificial island. (A note of interest: there is a time capsule at the bottom of one of the caissons.)

The spinning of the cables started in 1977 and finished in 1979, when the box sections began to be put into place. There are 124 hollow deck sections, each weighing 130 tonnes and 18 metres long. In July 1980 the last piece of deck was hung, and welding was completed by December. The tarmac was laid ready for the opening by Her Majesty Queen Elizabeth II on 17th July 1981. (The bridge had been open to traffic since June 1981.)
The Humber Bridge During Construction
[Photos: a section of the decking coming down the river; a section of decking being lifted into position; the ferry passing the north tower; a section of decking collapsed on the north side.]
The bridge has a main span of 1410m, a north span of 280m and a south span of 530m, making a total length between anchorages of 2220m. The deck is 28.5m wide, the tower height above the deck is 155.5m, and the clearance over high water is 30m. The structure contains 27,500 tonnes of steel and 480,000 tonnes of concrete.

The cables are 0.62m in diameter, each made up of 14,948 parallel galvanised drawn steel wires. Each wire is about 5mm in diameter; the total length of wire is 71,000km, and the load in each cable is 27,500 tonnes. The towers consist of two tapered vertical reinforced concrete legs braced together with four reinforced concrete horizontal beams. The legs are 155m high and vary from 6m × 6m at the base to 4.5m × 4.75m at the top.

The roadway is a dual 2-lane carriageway, with a combined footway/cycleway to the side and slightly lower. The carriageways are surfaced with a 38mm thickness of mastic asphalt and the footways with a double dressing of rubber bitumen and 3mm chippings. Along the sides of the carriageways are crash barriers, each consisting of four tensioned wire ropes. The anchorages are massive concrete structures where the main cables splay out into separate strands and attach to steel crosshead slabs at the face of the anchor blocks. The north anchorage weighs 190,000 tonnes and the south anchorage 300,000 tonnes.
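As a rough consistency check, the quoted 71,000km of wire can be approximated from the other cable figures (a minimal sketch; real wires run longer than the anchorage-to-anchorage distance because of cable sag and the splay into the anchorages):

```python
# Lower-bound estimate of total cable wire in the Humber Bridge,
# using only the figures quoted above.
wires_per_cable = 14_948            # parallel galvanised wires per main cable
number_of_cables = 2                # one main cable each side of the deck
span_between_anchorages_km = 2.220  # 2220 m between anchorages

minimum_wire_km = wires_per_cable * number_of_cables * span_between_anchorages_km
print(f"Lower bound on total wire: {minimum_wire_km:,.0f} km")
# about 66,369 km, consistent with the quoted 71,000 km once sag is included
```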
On 24th September 1993 the 50,000,000th vehicle passed over the bridge and on 8th February 2002 the 100,000,000th vehicle passed over the bridge.
The Humber Bridge Today
[Photos: the length of the south section; the curve of the bridge; the underside of the bridge looking north; an aerial view of the bridge.]
Other Interesting Facts:
To travel from Hull to Scunthorpe is almost 26 miles shorter by crossing the Humber Bridge than by going round the motorway.
To travel from Hull to Grimsby is almost 48 miles shorter by crossing the Humber Bridge than by going round the motorway.
As of 2002, the Humber Bridge is the third longest single-span suspension bridge in the world, bettered only by the Akashi Kaikyo bridge in Japan (1990m) and the Great Belt bridge in Denmark (1624m).
Between 1988 and 2000, 75,157,672 cars and light vans crossed the bridge. Over that period the toll for cars and light vans rose by £1.20, from £1.20 to £2.40.
The current debt (approximately £349m) is expected to be paid off within the next 35 years; the original cost of building the bridge (around £55m) was paid back long ago.
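As an order-of-magnitude check on those figures, bracketing the car/van crossings between the opening toll and the closing toll gives a rough revenue range (a sketch only; other vehicle classes, the exact toll history and interest on the debt are ignored):

```python
# Rough revenue bracket for cars and light vans, 1988-2000.
crossings = 75_157_672
toll_low_gbp, toll_high_gbp = 1.20, 2.40  # toll at start and end of the period

revenue_low = crossings * toll_low_gbp
revenue_high = crossings * toll_high_gbp
print(f"Car/van toll revenue 1988-2000: "
      f"£{revenue_low / 1e6:.0f}m to £{revenue_high / 1e6:.0f}m")
# roughly £90m to £180m, against a quoted outstanding debt of about £349m
```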
Related Links (inbarton is not responsible for the content of external Internet websites)
The Humber Bridge - The official Humber Bridge
© copyright 2011 Dazxtm
30 June 1971: The Soyuz 11 crew, Cosmonauts Georgiy Timofeyevich Dobrovolsky, Vladislav Nikolayevich Volkov and Viktor Ivanovich Patsayev, ended their 22 days aboard the Salyut 1 space station in Earth orbit and began their return to Earth. At 2128 hours on the 29th, they undocked and completed three more orbits while they prepared for reentry.
At 0135 hours, the Soyuz spacecraft's retrorockets fired to decelerate the ship so it would drop back into the atmosphere. 12 minutes, 15 seconds later, at an altitude of 104 miles (168 kilometers), a series of explosive bolts connecting the descent module to the service module detonated. They were intended to fire individually to limit the force on the capsule; instead, they all fired simultaneously. The impact caused a seal in a pressure-equalization valve to fail, and the capsule depressurized. Within 3 minutes, 32 seconds, the capsule’s atmospheric pressure had dropped to zero.
The cosmonauts were not wearing pressure suits. They died in less than one minute.
Georgiy Dobrovolsky, Vladislav Volkov and Viktor Patsayev are the only people from Earth to have died in space since manned space flight began, 12 April 1961, with the flight of Yuri Gagarin.
© 2016, Bryan R. Swopes
Forty Hours Devotion to the Blessed Sacrament - 2013
The Forty Hours Devotion, also called Quarant' Ore or, written as one word, Quarantore, is a devotion in which continuous prayer is made for forty hours before the Blessed Sacrament exposed. It is commonly regarded as of the essence of the devotion that it should be kept up in a succession of churches, terminating in one at about the same hour at which it commences in the next, but this question will be discussed in the historical summary.
A solemn high Mass, "Mass of Exposition", is sung at the beginning, and another, the "Mass of Deposition", at the end of the period of forty hours; and both these Masses are accompanied by a procession of the Blessed Sacrament and by the chanting of the litanies of the saints. The exact period of forty hours' exposition is not in practice very strictly adhered to; for the Mass of Deposition is generally sung, at about the same hour of the morning, two days after the Mass of Exposition.
We’ve recently seen strong trends among some of America’s biggest global tech corporations toward making their data centers greener. This has in turn created further demand for environmental energy professionals to manage internal resources toward achieving business and green goals.
To facilitate these goals, the world’s data centers have steadily converged on the Green Grid’s PUE (Power Usage Effectiveness) metric to establish how energy efficient they are. One of the primary aims is to allow information and ideas to be shared globally by using a more standardized set of compatible data that can be compared more easily.
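The PUE calculation itself is simple: the ratio of everything the facility draws (IT load plus cooling, power distribution, lighting) to the energy that actually reaches the IT equipment. The sketch below uses made-up figures purely for illustration:

```python
# Power Usage Effectiveness (PUE), as defined by The Green Grid:
#   PUE = total facility energy / IT equipment energy
# A PUE of 1.0 would mean every watt drawn goes to IT equipment;
# typical facilities sit well above that.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio for one measurement period."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical data center: 1.8 GWh total draw, of which 1.2 GWh
# is consumed by the servers themselves (illustrative figures).
print(f"PUE = {pue(1_800_000, 1_200_000):.2f}")  # PUE = 1.50
```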
Are we certain tech companies are taking this seriously? The answer seems to be yes. The activist organization Greenpeace ran a "soft" campaign against Facebook to make the global social networking company less reliant on coal for powering its data centers, after revelations that the company was estimated to be more than half dependent on coal and other fossil fuels.
In response, Facebook agreed to invest in greener technologies to power the digital infrastructure supporting the world’s love affair with 24-hour social connectivity. It has since announced it will be building a new dedicated European data center in Sweden, to be powered exclusively by hydro-electricity.
Whilst examples utilizing wind and hydro power such as the above are not feasible in many locations, companies still have a responsibility to ensure their data centers are at least environmentally aware, using technology solutions wisely and putting efficient cooling systems in place to deal with the vast quantities of surplus server heat they output.
Other cooling technologies include evaporative air cooling, which is far more energy efficient than traditional heat-exchange cooling systems. However, it is not simply about investing in new tech, but also about implementing smart solutions that recycle energy for other beneficial purposes, such as using server heat to warm habitable spaces. This is where newer metrics such as ERE, CUE and WUE are starting to emerge, collectively focusing on and emphasizing reuse and recycling.
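These emerging metrics follow the same ratio-to-IT-energy pattern as PUE. The sketch below follows the Green Grid's published definitions with illustrative numbers (the facility figures are assumptions, not real measurements):

```python
# ERE (Energy Reuse Effectiveness): like PUE, but energy reused elsewhere
#   (e.g. server heat warming offices) is credited back.
# CUE (Carbon Usage Effectiveness): kg of CO2 emitted per kWh of IT energy.
# WUE (Water Usage Effectiveness): litres of water used per kWh of IT energy.

def ere(total_kwh: float, reused_kwh: float, it_kwh: float) -> float:
    return (total_kwh - reused_kwh) / it_kwh

def cue(co2_kg: float, it_kwh: float) -> float:
    return co2_kg / it_kwh

def wue(water_litres: float, it_kwh: float) -> float:
    return water_litres / it_kwh

# Hypothetical facility: 1.5 GWh total draw, 1.0 GWh IT load, and
# 0.2 GWh of server heat redirected to warm habitable spaces.
print(f"ERE = {ere(1_500_000, 200_000, 1_000_000):.2f}")  # ERE = 1.30
```

Reuse directly rewards designs that recycle server heat: the same facility with no heat recovery would report 1.50 here.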
As accurate data and metrics become more available, and as data center costs and upkeep continue to soar while tech giants scale to meet increased world demand, we expect for the foreseeable future to see higher demand for, and investment in, the people responsible for undertaking this forward thinking and evaluating its effectiveness.
With a sustainable business agenda in place, data-center-dependent tech companies can look forward to reducing their overall operational costs and to being seen to actively meet the world’s expectations, using their influence to drive both profit and the green agenda forward harmoniously.
Telehouse has over 20 years of experience in building and designing data centers, and the wherewithal to know when things had better start to go green. That’s why they ran their "Build Your Own Data Center Anywhere" campaign (UPDATE: link now gone), which can really cut the environmental costs of a traditional data center. Says Telehouse, "the Data Center Anywhere solution is guaranteed to be fully rated, self contained, air and water tight, energy efficient, reusable and green."
This post was written by David Beastall on behalf of Acre Resources who provide executive recruitment and job placement for the world’s health, safety and environmental energy professionals.
The AP US History exam requires students to be familiar with the early colonization of the Americas by the Spanish, French, Dutch, and English. The video lectures on New Spain, New France, New Netherland, and the early English colonies will all be helpful in summarizing, comparing, and contrasting the motives and actions of the colonial powers and the Indians they encountered.
The introductory lecture will be helpful in understanding the key characteristics of the Thirteen Colonies and how to compare and contrast the New England, Middle, and Southern Colonies. The additional lectures on the Virginia Colony and religious freedom in Colonial New England will be helpful in reviewing the details about political, economic, and social life in the Virginia and New England Colonies.
I am a history teacher who creates YouTube videos and instructional materials. I use this blog to publish lecture notes, book reviews, and personal reflections inspired by history, politics, and literature.
|
SAIMA IBRAHIM (email@example.com)
Feb 8 - 14, 2010
The medicinal benefits of herbs are cited in every sacred book, from the Bible and the Koran to the Vedas and other ancient scripts.
Herbs have a variety of uses: culinary, medicinal, or in some cases even spiritual. Culinary and medicinal herbs differ in how they are used. In medicinal or spiritual use, any part of the plant might be considered an herb, including leaves, roots, flowers, seeds, resin, root bark, inner bark (cambium), berries and sometimes the pericarp or other portions of the plant.
Herbalism is a traditional medicinal or folk medicine practice based on the use of plants and plant extracts. Herbalism is also known as botanical medicine, medicinal botany, medical herbalism, herbal medicine, herbology, and phytotherapy.
The world over, research focuses on various herbs and traditional medicines to find refined cures for illnesses and diseases.
SOME IMPORTANT TRADITIONAL HERBS
Beetroot is used traditionally as a blood building food. It has liver, spleen, gall bladder, and kidney cleansing properties. Beetroot is particularly rich in vitamin C, calcium, phosphorus, and iron. The iron contained in beetroot is organic and non-irritating and does not cause constipation. Beetroot is useful in acidosis due to it being rich in alkaline elements.
Carrot contains large quantities of vitamin A, in the form of beta-carotene. Carrot juice has anti-carcinogen properties. It helps prevent cancer. It is also good for the skin. Carrot juice is like a tonic. It will improve the overall health and increase immunity. In fact, two glasses of carrot juice a day can increase your immunity by as much as 70%.
Apple is a highly nutritive food. It contains minerals and vitamins in abundance. Iron contained in the apple helps in formation of blood. Apple is useful in kidney stone. Raw apple is good for relief from constipation. Cooked or baked apple is good for diarrhea. It is of special value for heart patients. It is rich in potassium and phosphorus but low in sodium. It is also useful for patients of high blood pressure.
Banana is a complete balanced diet when combined with milk. It is known for promoting healthy digestion. The banana is used as a dietary food against intestinal disorders because of its soft texture and blandness. It is the only raw fruit, which can be eaten without distress in chronic ulcer cases. It neutralizes the over-acidity of the gastric juices and reduces the irritation of the ulcer by coating the lining of the stomach.
Cucumber, a beautician's secret, is excellent for facial skin. It also promotes the growth of healthy hair and nails. Of course, it performs wonders on baggy eyes and dark circles. It contains vitamins like B and C, and minerals like calcium, magnesium, potassium, and phosphorus.
White tea's popularity is still in its early stage, although it has been produced for over one thousand years (first in China). Early research showed promising antiviral and antibacterial effects, as well as a protective effect against skin cell damage and colon cancer.
Cinnamon spice: The aromatic scent of cinnamon is powerful, making many people feel warm and fuzzy. The health benefits of cinnamon have taken a backseat to its culinary uses, yet many health experts claim that a dash of cinnamon can add flavor to many dishes and at the same time improve one's health in many ways.
Emu oil has been shown in scientific studies to ease the pain and inflammation caused by arthritis and many other ailments.
Red yeast rice has been an ancient Asian dietary staple. It is made by fermenting white rice with the red yeast (Monascus Purpureus). The low cholesterol levels found in the Chinese population led to discovering the benefits of red yeast rice consumption.
Ginseng is a slow-growing herb consisting of a light-colored root, a single stalk and long oval green leaves. Ginseng contains complex carbohydrates called saponins, or ginsenosides, and possesses anti-inflammatory, antioxidant, and anti-cancer elements.
Capsicum: Alternative medicine practitioners are realizing that healing herbs should be part of their arsenal against diseases. The newest touted natural herb is Capsicum, found in cayenne pepper. It has many benefits whether taken internally or externally.
Lavender oil (Lavandula angustifolia) has been recognised for its healing properties since ancient times and has traditionally been used to treat many disorders.
|
Using WWII Fiction Books To Teach Your Class About The War
World War II was a complex and major historical event that still influences world events to this day. There are a variety of historical studies that can be used by teachers, but these books can be boring to their students. That's why historical fiction is such a great teaching choice.

Historical Fiction Is A Major Classroom Benefit

Historical fiction about World War II uses real events and difficult situations to teach readers about the horrors and lessons of the war.
The "Bs" of Romance Fiction: 2016's Most Popular Themes
Romance sub-genres are constantly changing and evolving. If you're an avid reader and have been for some time, you've probably seen a few shifts here and there over the years. But if you're new to romance and trying to figure out what "MC" stands for or what this sudden fascination with bikers and billionaires is all about, you're probably not alone. Here are the "Bs" of romance fiction and 2016's most popular themes.
Hyperlexia, Autism, And Books: Why Focused Subject Matter Is Perfect
Hyperlexia is the ability to read and comprehend books far above one's own developmental age. Autism is a developmental and neurological disorder that causes some children to have hyper-focused interests, especially when it comes to subjects that interest them. If you have a child with both of these unusual disorders, then trying to get him or her to read is conflicting at best and a nightmare at worst. Books with focused subject matter are actually perfect for your child, and here is why.
Writing Great Books Or Stories About The "Separation Of Church And State"
The idea of "separation of church and state" has been highly controversial at times. Many people argue that the two should remain completely separated, while others believe the government should promote specific religions. The disagreement here gives you many unique opportunities to write interesting stories that delve deeply into the concept.

This Idea Is NOT Spelled Out In The Constitution

Although it is commonly believed that this idea is in our Constitution, careful studying of that document reveals it is not.
The Benefits Of Riddles For Your Child
When you want to be sure that your child is growing and developing mentally, one of the best things you can do is challenge them and put quality material into their hands. In addition to summer reading lists and books that they enjoy, you should also expose your children to riddles. Riddles are a great way to build your child's mind, as referenced by the excellent benefits below.

Riddles Teach Your Child Critical Thinking
|
Composer Hector Berlioz and tenor Enrico Caruso are to have craters on the planet Mercury named after them by the International Astronomical Union, the body in charge of naming planetary features.
They feature in a list of ten newly titled craters on the nearest planet to the sun named after ‘deceased artists, musicians, painters, and authors who have made outstanding or fundamental contributions to their field and have been recognized as art-historically significant figures for more than 50 years’.
Among the other names being remembered are John Lennon (who already has a crater on the Moon named after him), the authors Truman Capote and Erich Maria Remarque and the sculptor Alexander Calder.
More than 120 craters on Mercury have been named since the first flyby of the planet in 2008 by the Messenger spacecraft, 46 of which now bear the names of composers and musicians, including Beethoven, Bartók, Purcell and Wagner. The medieval music theorist Guido of Arezzo has also been commemorated.
|
The ultrasound story begins in July 1955 when an obstetrician in Scotland, Ian Donald, borrowed an industrial ultrasound machine used to detect flaws in metal and tried it out on some tumours, which he had removed previously, using a beefsteak as the control. He discovered that different tumours produced different echoes. Soon Donald was using ultrasound not only for abdominal tumours in women but also on pregnant women. Articles surfaced in the medical journals, and its use quickly spread throughout the world.
The dissemination of ultrasound into clinical obstetrics is reflected in inappropriate statements made in the obstetrical literature regarding its appropriate use: “One of the lessons of history is, of course, that it repeats itself. The development of obstetric ultrasound thus mirrors the application to human pregnancy of diagnostic X-rays. Both, within a few years of discovery, were being used to diagnose pregnancy and to measure the growth and normality of the fetus. In 1935 it was said that “antenatal work without the routine use of X-rays is no more justifiable than would be the treatment of fractures“ (Reece, 1935: 489). In 1978: “It can be stated without qualification that modern obstetrics and gynecology cannot be practiced without the use of diagnostic ultrasound“ (Hassani, 1978). Two years later, it was said that “ultrasound is now no longer a diagnostic test applied to a few pregnancies regarded on clinical grounds as being at risk. It can now be used to screen all pregnancies and should be regarded as an integral part of antenatal care“ (Campbell & Little, 1980). On neither of these dates did evidence qualify the speakers to make these assertions.
It is not only doctors who have tried to promote ultrasound with statements that go beyond the scientific data. Commercial interests have also been actively promoting ultrasound, and not only to doctors and hospitals. As an example, an advertisement in a widely read Sunday newspaper (The Times, London) claimed: "Toshiba decided to design a diagnostic piece of equipment that would be absolutely safe … . The name: Ultrasound." A consumer organization in Britain complained to the Advertising Standards Authority that Toshiba was making an untrue claim, and the complaint was upheld. In many countries, the commercial application of ultrasound scanning during pregnancy is widespread, offering "baby look" and "fun ultrasound" so that parents can "meet your baby" with photographs and home videos.
The extent to which medical practitioners nevertheless followed such scientifically unjustified advice, and the degree to which this technology proliferated, can be illustrated by recent data from three countries. In France, in one year, three million ultrasound examinations were done on 700,000 pregnant women: an average of more than four scans per pregnancy.
These examinations cost French taxpayers more than all other therapeutic and diagnostic procedures done on these pregnant women. In Australia, where the health service pays for four routine scans, in one recent year billing for obstetrical ultrasound was $60 million in Australian dollars. A 1993 editorial in U.S.A. Today makes the following statement: “Baby’s first picture-a $200 sonogram shot in the womb-is a nice addition to any family album. But are sonograms medically worth $1 billion of the nation’s scarce health-care dollars? That’s the question raised by a United States study released this week. It found the sonograms that doctors routinely perform on healthy pregnant women don’t make any difference to the health of their babies.“
After a technology has spread widely in clinical practice, the next step is for health policymakers to accept it as standard care financed by the official health sector.
Several European countries now have official policy for one or more routine ultrasound scans during pregnancy. For example, in 1980 the Maternity Care Guidelines in West Germany stated the right of each pregnant woman to be offered at least two ultrasound scans during pregnancy. Austria quickly followed suit, approving two routine scans. Do the scientific data justify such widespread use and great cost of ultrasound scanning?
When Is Ultrasound Helpful?
In assessing the effectiveness of ultrasound in pregnancy, it is essential to make the distinction between its selective use for specific indications and its routine use as a screening procedure.
Essentially, ultrasound has proven valuable in a handful of specific situations in which the diagnosis “remains uncertain after clinical history has been ascertained and a physical examination has been performed.“ Yet, considering whether the benefits outweigh the costs of using ultrasound routinely, systematic medical research has not supported routine use.
One of the most common justifications given today for routine ultrasound scanning is to detect intrauterine growth retardation (IUGR). Many clinicians insist that ultrasound is the best method for the identification of this condition. In 1986, a professional review of 83 scientific articles on ultrasound showed that “for intrauterine growth retardation detection, ultrasound should be performed only in a high-risk population.“ In other words, the hands of an experienced midwife or doctor feeling a pregnant woman’s abdomen are as accurate as the ultrasound machine for detecting IUGR. The same conclusion was reached by a study in Sweden comparing repeated measurement of the size of the uterus by a midwife with repeated ultrasonic measurements of the head size of the fetus in 581 pregnancies. The report concludes: “Measurements of uterus size are more effective than ultrasonic measurements for the antenatal diagnosis of intrauterine growth retardation.“
If doctors continue to try to detect IUGR with ultrasound, the result will be high false-positive rates. Studies show that even under ideal conditions, such as do not exist in most settings, more than half of positive IUGR screening results are likely to be false, the pregnancy in fact being normal. The implications of this are great for producing anxiety in the woman and the likelihood of further unnecessary interventions.
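The arithmetic behind that false-positive claim is a standard positive-predictive-value calculation. The sensitivity, specificity and prevalence below are illustrative assumptions, not figures from the studies cited:

```python
# When a condition is relatively rare, even a reasonably accurate
# screening test produces mostly false positives.

def positive_predictive_value(prevalence: float, sensitivity: float,
                              specificity: float) -> float:
    """Probability that a positive screen reflects a true case (Bayes' rule)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Assume IUGR affects 5% of pregnancies and the screen is 80% sensitive
# and 90% specific (generous figures, for illustration only).
ppv = positive_predictive_value(0.05, 0.80, 0.90)
print(f"PPV = {ppv:.0%}")  # about 30%: roughly 7 in 10 positives are false
```

Only at much higher prevalence, i.e. in a pre-selected high-risk population, does the positive predictive value climb to useful levels, which matches the professional review's recommendation quoted above.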
There is another problem in screening for IUGR. One of the basic principles of screening is to screen only for conditions for which you can do something. At present, there is no treatment for IUGR, no way to slow up or stop the process of too-slow growth of the fetus and return it to normal. So it is hard to see how screening for IUGR could be expected to improve pregnancy outcome.
We are left with the conclusion that, with IUGR, we can only prevent a small amount of it using social interventions (nutrition and substance abuse programs), are very inaccurate at diagnosing it, and have no treatment for it. If this is the present state of the art, there is no justification for clinicians using routine ultrasound during pregnancy for the management of IUGR. Its use should be limited to research on IUGR.
Once again it is interesting to look at what happened with the issue of safety of X-rays during pregnancy. X-rays were used on pregnant women for almost fifty years and assumed to be safe. In 1937, a standard textbook on antenatal care stated: “It has been frequently asked whether there is any danger to the life of the child by the passage of X- rays through it; it can be said at once there is none if the examination is carried out by a competent radiologist or radiographer.“ A later edition of the same textbook stated: “It is now known that the unrestricted use of X-rays through the fetus caused childhood cancer.“ This story illustrates the danger of assuming safety. In this regard, a statement from a 1978 textbook is relevant: “One of the great virtues of diagnostic ultrasound has been its apparent safety. At present energy levels, diagnostic ultrasound appears to be without injurious effect … all the available evidence suggests that it is a very safe modality.“
That ultrasound during pregnancy cannot be simply assumed to be harmless is suggested by good scientific work in Norway. By following up on children at age eight or nine born of mothers who had taken part in two controlled trials of routine ultrasound in pregnancy, they were able to show that routine ultrasonography was associated with a symptom of possible neurological problems.
With regard to the active scientific pursuit of safety, an editorial in Lancet, a British medical journal, says: “There have been no randomized controlled trials of adequate size to assess whether there are adverse effects on growth and development of children exposed in utero to ultrasound. Indeed, the necessary studies to ascertain safety may never be done, because of lack of interest in such research.“
The safety issue is made more complicated by the problem of exposure conditions. Clearly, any bio-effects that might occur as a result of ultrasound would depend on the dose of ultrasound received by the fetus or woman. But there are no national or international standards for the output characteristics of ultrasound equipment. The result is the shocking situation described in a commentary in the British Journal of Obstetrics and Gynaecology, in which ultrasound machines in use on pregnant women range in output power from extremely high to extremely low, all with equal effect. The commentary reads, “If the machines with the lowest powers have been shown to be diagnostically adequate, how can one possibly justify exposing the patient to a dose 5,000 times greater?“ It goes on to urge government guidelines on the output of ultrasound equipment and for legislation making it mandatory for equipment manufacturers to state the output characteristics. As far as is known, this has not yet been done in any country.
Safety is also clearly related to the skill of the ultrasound operator. At present, no country is known to require training or certification for medical users of ultrasound apparatus. In other words, the birth machine has no license test for its drivers.
Looking Ahead: Ultrasound and the Future
Although ultrasound is expensive, routine scanning is of doubtful usefulness, and the procedure has not yet been proved safe, this technology is widely used, and its use is increasing rapidly without control. Meanwhile, health policy has been slow to develop. No country is known to have developed policies with regard to standards for the machines, or for the training and certification of the operators. A few industrialized countries have begun to respond to the data showing the lack of effectiveness of routine scanning of all pregnant women. In the United States, for example, a consensus conference on diagnostic ultrasound imaging in pregnancy concluded that “the data on clinical effectiveness and safety do not allow recommendation for routine screening at this time; there is a need for multidisciplinary randomized controlled clinical trials for an adequate assessment.”
Denmark, Sweden, and the United Kingdom have made similar statements against routine screening. The World Health Organisation (WHO), in an attempt to stimulate governments to develop policy on this issue, published the following statement:
“The World Health Organisation stresses that health technologies should be thoroughly evaluated prior to their widespread use. Ultrasound screening during pregnancy is now in widespread use without sufficient evaluation. Research has demonstrated its effectiveness for certain complications of pregnancy, but the published material does not justify the routine use of ultrasound in pregnant women. There is also insufficient information with regard to the safety of ultrasound use during pregnancy. There is as yet no comprehensive, multidisciplinary assessment of ultrasound use during pregnancy, including: clinical effectiveness, psychosocial effects, ethical considerations, legal implications, cost benefit, and safety.
“WHO strongly endorses the principle of informed choice with regard to technology use. The health-care providers have the moral responsibility: fully to inform the public about what is known and not known about ultrasound scanning during pregnancy; and fully to inform each woman prior to an ultrasound examination as to the clinical indication for ultrasound, its hoped-for benefit, its potential risk, and alternatives available, if any.”
This statement, sadly, is as relevant today as when it was first published in 1984. During the 1980s and early 1990s, a number of us were raising questions about both the effectiveness and the safety of fetal scanning. Our voice of caution, however, was like a cry in the wilderness as the technology proliferated.
Then, during the course of one month in late 1993, two landmark scientific papers were published. The first, a large randomized trial of the effectiveness of routine prenatal ultrasound screening, studied the outcomes of more than 15,000 pregnant women who either received two routine scans, at 15 to 22 weeks and 31 to 35 weeks, or were scanned only for medical indications.
Results showed that the mean number of sonograms in the ultrasound group was 2.2, and in the control group (scanned for indication only) it was 0.6. The rate of adverse outcome (fetal death, neonatal death, neonatal morbidity), as well as the rate of preterm delivery and the distribution of birth weights, was the same in both groups. In addition, in the authors’ words: “The ultrasonic detection of congenital abnormalities has no effect on perinatal outcome.” At last we have a randomized clinical trial of sufficient size to conclude that there is no value in routine scanning during pregnancy.
The second landmark paper, also a randomized controlled trial, looked at the safety of repeated prenatal ultrasound imaging. Although the trial was undertaken in the hope of demonstrating the safety of repeated scanning, the results showed the opposite. Of 2,834 pregnant women, 1,415 received ultrasound imaging at 18, 24, 28, 34 and 38 weeks’ gestation (the intensive group), while the other 1,419 received a single ultrasound examination at 18 weeks (the regular group). The only difference between the two groups was a significantly higher rate (one-third more) of intrauterine growth retardation (IUGR) in the intensive group. This important and serious finding prompted the authors to state: “It would seem prudent to limit ultrasound examinations of the fetus to those cases in which the information is likely to be of clinical importance.” Ironically, it now appears likely that ultrasound may lead to the very condition, IUGR, that it has for so long claimed to be effective in detecting.
Although we now have sufficient scientific data to be able to say that routine prenatal ultrasound scanning has no effectiveness and may very well carry risks, it would be naive to think that routine use will not continue.
Unfortunately, medical doctors are inadequately educated in the basics of scientific method. It will be a struggle to close the gap between these new scientific data and clinical practice.
- Beech, B. and Robinson, J. (1993). Ultrasound? Unsound. Association for the Improvement in Maternity Services Journal 5.
- Campbell, S. and Little, D. (1980). Clinical potential of real-time ultrasound. In M. Bennett & S. Campbell (Eds), Real-time Ultrasound in Obstetrics. Oxford: Blackwell Scientific Publications.
- Chassar Moir, J. (1960). The uses and values of radiology in obstetrics. In F. Browne & McClure-Brown (Eds), Antenatal and Postnatal Care (9th ed.). London: J. & A. Churchill.
- Cnattingius, J. (1984). Screening for Intrauterine Growth Retardation. Doctoral dissertation, Uppsala University, Sweden.
- Ewigman, B. G. et al. and RADIUS study group. (1993). Effect of prenatal ultrasound screening on perinatal outcome. New England Journal of Medicine 329(12).
- Hassani, S. (1978). Ultrasound in Gynecology and Obstetrics. New York: Springer Verlag.
- National Institutes of Health. (1984). Diagnostic ultrasound imaging in pregnancy. Consensus Development Conference Consensus Statement 5, No. 1. Washington, D.C.
- Neilson, J. and Grant, A. (1991). Ultrasound in pregnancy. In I. Chalmers et al. (Eds), Effective Care in Pregnancy and Childbirth. Oxford, England: Oxford University Press.
- Newnham, J. et al. (1993). Effects of frequent ultrasound during pregnancy: A randomised controlled trial. Lancet.
- Newnham, J. (1992). Personal correspondence.
- Oakley, A. (1984). The Captured Womb. Oxford, England: Blackwell Publishing.
- Reece, L. (1935). The estimation of fetal maturity by a new method of X-ray cephalometry: its bearing on clinical midwifery. Proceedings of the Royal Society of Medicine 18.
- Salmond, R. (1937). The uses and values of radiology in obstetrics. In F. Browne (Ed), Antenatal and Postnatal Care (2nd ed.). London: J. & A. Churchill.
- Salveson, K. et al. (1993). Routine ultrasonography in utero and subsequent handedness and neurological development. British Medical Journal 307.
- World Health Organisation. (1984). Diagnostic ultrasound in pregnancy: WHO view on routine screening. Lancet 2.
Excerpted and adapted from Pursuing the Birth Machine: The Search for Appropriate Birth Technology, copyright 1994 by Marsden Wagner, published by ACE Graphics. Available in the United States and Canada from the ICEA Bookcenter, (800) 624-4934; Fax (612) 854-8772.
Ganesh is one of the Hindu gods. He has an elephant head, so he is easy to spot in Indian art. Usually Ganesh also has a big belly and four arms. Sometimes Ganesh is dancing, or eating candy, or riding on a mouse. People in India first began to pray to Ganesh about 500 AD, during the Guptan period. (This was also around the time that Indians invented candy.) Ganesh quickly became very popular and came to be one of the main Hindu gods, though not as powerful or scary as Shiva or Vishnu.
Ganesh’s name comes from two Sanskrit words, “isha”, meaning “Lord”, and “gana”, meaning “group”, so you might think of Ganesh as the god of sorting things out, or the god of many things. Nobody is quite sure what his name originally meant.
Another story about Ganesh
Most stories about Ganesh say that he was the son of Shiva and his wife Parvati. Ganesh was born looking like an ordinary boy, but when he came between his father Shiva and his mother Parvati, Shiva cut off his head. To make up for this, Shiva gave Ganesh the head of an elephant.
Ganesh was the god of obstacles – things that get in your way. When Ganesh was a good god, he helped people by moving obstacles out of their way. For instance, he helped students get a scholarship so they could go to school. But when Ganesh was in a bad mood, or people were undeserving, he could also put obstacles in their way. People thought of Ganesh as especially the god of students, literature, and learning.