Dataset Viewer (First 5GB)
text (string, lengths 189 to 621k) | id (string, length 47) | metadata (dict)
---|---|---
By 2030, as many as 107 million workers, 12 million more than before the pandemic, may need to switch their occupational categories, as COVID-19 has accelerated the shift toward digitization and automation, disrupting the world of work. To put that shift into context, it will have about the same impact on the workforce as the industrial revolution did. To prepare for this massive shift, organizations will have to put considerable effort and resources into reskilling their employees.
What is Reskilling?
Reskilling refers to the process of learning new skills so someone can do a different job, or training people to do a different job. Upskilling, reskilling, retraining, and even new-skilling are used interchangeably for the process. Training these workers requires teaching technical skills along with creativity, interpersonal skills, adaptability and the capacity to continue learning.
Organizations leading the way
- Amazon: Fulfillment-center employees can go through a 16-week certification program in classrooms located inside Amazon warehouses and, if the retailer hires them as data technicians, their wages will rise from an average $15 an hour to $30.
- AT&T: 180,000 employees so far have participated in its Future Ready program. Workers can assess their skills, then pursue short-term badges, nanodegrees taking up to a year to complete, or master’s degrees in fields like computer science and data science offered in partnership with institutions such as the Georgia Institute of Technology and the University of Notre Dame.
These companies are only the tip of the iceberg. 87% of executives are noticing significant gaps in the skills of their workers due to COVID-19 and the advancement of automation and digitization. The Harvard Business Review even posits that this shift will require adding a new role to the C-suite: a chief skills and learning officer (CSLO), in the same way that the role of chief technology officer became commonplace over the past two decades.
Want to learn more about how you can reskill your workforce? View our resources, or connect with one of our experts for a personalized, guided demo that will show you how easy it is to incorporate all of OpenSesame’s innovative tools and features into your training experience. With courses offered in multiple languages, and available on multiple devices, OpenSesame helps companies like yours develop the world’s most developed and admired global workforces.
|
<urn:uuid:70fb03f1-2d68-4520-81cc-4a5d8939dab8>
|
{
"dump": "CC-MAIN-2023-14",
"url": "https://www.opensesame.com/site/blog/reskilling-your-workforce/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943845.78/warc/CC-MAIN-20230322145537-20230322175537-00766.warc.gz",
"language": "en",
"language_score": 0.9525315165519714,
"token_count": 491,
"score": 2.5625,
"int_score": 3
}
|
Objectives: A nutrient dilution effect of diets high in added sugar has been reported in some older populations but the evidence is inconsistent. The aim of this study was to investigate the association between added sugar intakes (according to recommended guidelines) and nutrient intake, food consumption and Body Mass Index (BMI).
Method: A cross-sectional analysis of data collected in 2007-09 from participants of the Blue Mountains Eye Study 4 was performed (n = 879). Dietary intake was assessed using a semi-quantitative food frequency questionnaire. Added sugar content of foods was determined by applying a systematic step-wise method. BMI was calculated from measured weight and height. Food and nutrient intakes and BMI were assessed according to categories of percentage energy from added sugar (EAS% <5%, EAS% = 5-10%, EAS% >10%) using ANCOVA for multivariate analysis.
Results: Micronutrient intake including retinol equivalents, vitamins B6, B12, C, E and D, and minerals including calcium, iron and magnesium showed a significant inverse association with EAS% intakes (Ptrend < 0.05). In those people with the lowest intake of added sugars (<5% energy), intake of alcohol, fruits and vegetables was higher and intake of sugar-sweetened beverages was lower compared to other participants (all Ptrend < 0.001). BMI was similar across the three EAS% categories.
Conclusion: Energy intake from added sugar above the recommended level of 10% is associated with lower micronutrient intakes, indicating micronutrient dilution. Conversely, added sugar intakes below 5% of energy intake are associated with higher micronutrient intakes. This information may inform dietary messages targeted at optimising diet quality in older adults.
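As a rough, hypothetical illustration of how the exposure variable above can be derived (the study's exact energy-conversion factors and food-composition data are not given here; the example values are invented), percentage energy from added sugar and its category might be computed like this:

```python
def eas_percent(added_sugar_g, total_energy_kcal):
    """Percentage of total energy supplied by added sugar.

    Assumes the standard factor of ~4 kcal per gram of sugar.
    """
    return added_sugar_g * 4.0 / total_energy_kcal * 100.0

def eas_category(eas_pct):
    # Categories used in the abstract: <5%, 5-10%, >10% of energy.
    if eas_pct < 5:
        return "EAS% <5%"
    elif eas_pct <= 10:
        return "EAS% = 5-10%"
    return "EAS% >10%"

# Example: 45 g added sugar on a 2,000 kcal/day diet -> 9% of energy.
pct = eas_percent(45, 2000)
print(round(pct, 1), eas_category(pct))
```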
|
<urn:uuid:1025e294-413f-49a9-bbf3-08a15971a5d2>
|
{
"dump": "CC-MAIN-2018-17",
"url": "http://ro.uow.edu.au/smhpapers/4076/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945637.51/warc/CC-MAIN-20180422174026-20180422194026-00168.warc.gz",
"language": "en",
"language_score": 0.9450018405914307,
"token_count": 354,
"score": 2.53125,
"int_score": 3
}
|
In order to clarify to what extent cultural interactions in African-Chinese relations create a “soft” influence on new development concepts, and how China presents itself to African audiences by means of public diplomacy, the project investigates Confucius Institutes in South Africa. Contrary to what negative media reports suggest, Confucius Institutes act less politically than is often assumed. Moreover, it seems inaccurate to describe them as an instrument of China’s policy of expansion. Confucius Institutes adapt themselves to local circumstances in Africa and communicate a rather selective picture of China, one which normally focuses on traditional notions of culture (calligraphy, tea ceremony) and tends to leave out current political and societal issues.
For Africans, Confucius Institutes are a major option for adding value to their university degrees and thereby increasing their chances on the job market. In this regard, China is a major destination for students of African Confucius Institutes to study and work. Precisely this option is of interest because it is in China that it becomes apparent whether and how Confucius Institutes prepare their students for such a stay abroad.
|
<urn:uuid:b74a659d-06a2-4f5f-a56f-0fe0347abb54>
|
{
"dump": "CC-MAIN-2023-14",
"url": "https://www.afraso.org/en/content/s3-chinese-cultural-policies-and-confucius-institutes-africa",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948976.45/warc/CC-MAIN-20230329120545-20230329150545-00694.warc.gz",
"language": "en",
"language_score": 0.933531641960144,
"token_count": 217,
"score": 2.53125,
"int_score": 3
}
|
In Oregon, a battle has erupted between farmers growing genetically engineered sugar beets and organic farmers who worry about cross-pollination. The question is whether the farms can be good neighbors.
The Cold War phrase "peaceful coexistence" has been revived in a new context: as a potential solution in the clash between organic agriculture and genetic engineering.
This agricultural battle is global in scope, but one place where the tension is most tangible — and where its consequences are most concrete — is the valley along Oregon's Willamette River.
This valley is a wonderful place to grow things; the soil is fertile and the climate is mild. Settlers who arrived here via the Oregon Trail once called it "Eden."
Farmers here can grow almost any crop, and the valley has become a global center of seed production: Seeds for cabbage, spinach, Swiss chard, beets, grass and many other crops are harvested here and shipped all over the world.
Since seeds are genetic packages, it is perhaps unsurprising that a battle erupted when some of these farmers started growing genetically engineered sugar beets a few years ago. The beets have a new gene, created in the laboratory, which allows them to tolerate the weedkiller Roundup.
On one side of the battle is organic farmer Frank Morton, a relative newcomer to the Willamette Valley's farming community. He grew up in West Virginia, but moved to Oregon in the 1970s to go to college. "This valley is not big enough to have genetically engineered crops and normal crops growing together without cross contamination happening," he says.
On the other side is Tim Winn, who has lived and worked on the same farm his whole life, on the banks of the Willamette River just northeast of Corvallis. Winn says government scientists have concluded that there is nothing dangerous in the new gene, and thus no novel risk for Morton or his customers to worry about.
"We can invent a perceived risk in our mind; a lot of us do," Winn says. "And if the science doesn't support it, then it's not a risk. And I guess if [Morton] wants to stay in business with those customers, it would be in his interest to educate them."
The standoff between these two farmers raises a question: Can genetically engineered crops and organic farms be good neighbors, no matter where they are grown?
Concerns Of Cross-Pollination
To understand why the tension exists, I visited the farm where Morton grows his organic seeds: Gathering Together Farm, outside the town of Philomath. Morton takes me on a tour of the fields, showing off enclosures for growing vegetables in winter, piles of compost, and fields of cabbage, arugula, turnips and kale. We stop and get out at a field of chard.
This chard, Morton explains, is actually the same species as beets. They're all Beta vulgaris, the way black Labradors and golden retrievers are all dogs. So anyone growing these plants for seed has a special concern: windblown pollen.
Those different plants will cross-pollinate, so if you want to produce high-quality chard seed, you do not want beet pollen blowing into your field, either from a neighbor's field or from stray plants along a nearby road. And pollen can blow for miles.
As it happens, there's a sugar beet seed grower straight across the fields a couple of miles away, Morton says. This has not, until now, become a problem.
"Apparently they aren't finding any of my red chard or golden chard seed in their sugar beets, and I'm not finding any of their genetics in mine. That I know of," Morton says. "There's always some question, and that's the problem — there's always some question."
The Willamette Valley Specialty Seed Association has a system for avoiding cross-pollination, and the approach is charmingly low-tech: just a map of the valley with a lot of pins stuck in it to show where each seed crop is planted. For farmers, it's first come, first served — if you "pin" a sugar beet field, nobody else is supposed to grow seed for Swiss chard within three miles.
George Burt, the former manager of the West Coast Beet Seed Co. in Salem, Ore., helped set up this system before he retired.
"You're really trying to minimize the risk," he says. "And you can get it down to the point where you're relatively sure that you're not hurting anybody else and nobody's hurting you."
But seed growers agree: It's almost impossible to guarantee that absolutely no cross-fertilization will ever happen.
Finding Common Ground
Organic grower Morton didn't worry about this until farmers in the valley started growing genetically engineered sugar beets. For him, those man-made genes are different and require different rules. Morton wants a guarantee that pollen from those genetically engineered beets will not fertilize his chard or red beets. If it did, he says, it would violate his organic principles — and it would destroy his business because his customers wouldn't buy his seeds anymore.
In fact, he says, just the possibility of contamination is starting to hurt.
"We think that buyers from overseas — organic seed companies — we think they have already started to avoid buying from us," he says.
So Morton, together with some environmental groups, went to court and won.
A federal judge banned the planting of "Roundup Ready" sugar beets until the USDA does an environmental impact study that examines the economic consequences of cross-pollination, especially for organic farmers. In a similar case, another judge demanded the same thing for genetically engineered alfalfa.
Listening to Morton and Winn, there doesn't seem to be an easy solution.
Morton says his business cannot survive the presence of genetically engineered crops, often called GMOs.
"It will be a valley fit for growing GMOs, but won't be a valley where people from Europe and Japan and Korea come to have seed grown," he says.
And on the other side, Winn says Morton's demands could unnecessarily cripple a valuable industry.
"Quite honestly, if you regulate this valley to the point where you don't have sugar beet seed production, or production of some other major commodities — that's a huge deal!" Winn says.
There is one voice calling for compromise: Secretary of Agriculture Tom Vilsack released an open letter last month calling for a "new paradigm of coexistence and cooperation" between the two sides. Giving in a little, Vilsack said, would be better than litigation that puts one side or the other out of business.
Copyright 2011 National Public Radio. To see more, visit http://www.npr.org/.
|
<urn:uuid:52630ee7-2de8-44df-89d7-aecfb493ffcb>
|
{
"dump": "CC-MAIN-2014-15",
"url": "http://www.scpr.org/news/2011/01/24/23352/a-tale-of-two-seed-farmers-organic-vs-engineered/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00645-ip-10-147-4-33.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.968562662601471,
"token_count": 1409,
"score": 2.765625,
"int_score": 3
}
|
An all too common and easy assumption made about studying is that the longer time you study for, the more you will learn and the more your overall results will subsequently improve. However, it's often the case that students will feel cheated when they study for 5 hours for an exam and a peer who only studied for 2 hours achieves a higher mark.
The question must then be asked: Why do we measure our level of work according to an amount of time when time clearly has no correlation to the effectiveness of our studying?
The fact is that 2 hours of distraction-free study, with the right focus and outcomes in mind, can lead to a better understanding than 10 hours of study with an unfocused mind. How then, can the time spent studying, be made most effective?
1. Healthy Body = Healthy Mind
Especially as students reach the last few years of their education, it can become extremely difficult to ensure a balance between the many elements of their life; however, it is more crucial than ever in these years to maintain that balance. Sleeping for at least 8 hours each night is crucial, though the ideal amount varies with age. Students should find time for exercise every week and maintain a healthy diet, especially minimising sugar intake before and during study periods.
It is also important to retain a work-life balance between studying and other pastimes. Social lives do not have to be entirely sacrificed for the sake of study. In fact, it is beneficial to both academic performance and mental wellbeing to maintain this balance, as friends are often instrumental in providing support in times of stress and contribute to a healthy, happy and positive state of mind.
By students taking care of their bodies, and making sure that they are healthy and happy, the mind is able to maintain greater focus during study periods. As students begin to see their results improving through implementing this healthy work-life balance, studying becomes a more enjoyable experience and instills personal motivation and incentive for students to work harder to achieve their best.
2. Set S.M.A.R.T Goals:
Students should make a list of the things they aim to do in a day of study, in order of priority. Consider breaking large tasks down into smaller, manageable parts. Aims for a day should adhere to the principle of S.M.A.R.T. goals: Specific, Measurable, Achievable, Relevant and Time-bound.
This means that students can hold themselves accountable for the work that is completed, ensuring they are being effective and efficient. An example of a S.M.A.R.T goal is "complete questions 1-6 in chapter 3". A goal without the S.M.A.R.T criteria, such as "do 2 hours of maths", or "revise algebra", can make it harder for students to achieve what they have set out to do. Goals of this kind are vague, causing students to spend excess time deciding what is to be done, rather than doing it.
3. Remove distractions:
Students should ensure their workspace is clean and free of distractions (putting phones in another room). Computers should only be used if needed, and if necessary, should only be used to access what is needed. If students are struggling to stay off Facebook and/or other distractions, they should consider installing an app such as Cold Turkey (https://getcoldturkey.com/ ) that will help to keep away distractions during study periods.
4. Take Regular Breaks & Reward Yourself:
The brain fatigues quickly. For the average student who has not practiced good study habits, approximately 30 minutes of uninterrupted study can lead to the brain being half as effective as it was when the student began studying. Over an hour or more of studying without breaks can lead to the brain being anywhere between 10% and 0% effective.
By taking regular breaks the brain is rewarded for the study. It associates the study with the reward and therefore is more likely to focus on the work at hand, enhancing retention and understanding.
To put this into practice, students should follow these steps:
- Set a timer for 25 minutes and work.
- When the timer goes off, take a 5-minute break. Return to your desk and repeat the process.
- Once you've completed all the work you have to do, or every 3-4 hours, take a longer, more rewarding break.
- As you practice this, you will become more able to focus for longer amounts of time, and you can build up the time between breaks. After a couple of weeks at 25 minutes, try 30 minutes. Within a couple of months, you should be able to achieve productive study for about an hour without breaks.
|
<urn:uuid:27bd9149-540d-4abf-b6db-1bbca897c1b5>
|
{
"dump": "CC-MAIN-2019-09",
"url": "http://evolveeducationgroup.org/blog/work_smarter_not_harder",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249414450.79/warc/CC-MAIN-20190223001001-20190223023001-00514.warc.gz",
"language": "en",
"language_score": 0.9680556654930115,
"token_count": 942,
"score": 3.296875,
"int_score": 3
}
|
A credit analyst is an individual who reviews the creditworthiness of those applying for credit. The analyst may review the worthiness of either individuals or businesses. This individual will usually be trained on the job, but some credit analyst positions may require a college degree in finance, accounting, or a similar field. In many cases, the decision of the analyst will be the final decision on whether or not a loan is approved.
In addition to approving the loan, a credit analyst will also have a great deal of influence in the terms of the loan. The individual may set an interest rate based on certain risk factors. That rate will likely be lower for borrowers who pose a lower risk to the lender, and higher for those who do. The analyst may also set a minimum or maximum term of repayment. This may also be based on financial information received.
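As a purely hypothetical sketch of the kind of risk-based pricing described above (the tiers, score cut-offs and rates are invented for illustration and are not taken from any lender's actual policy):

```python
def quoted_annual_rate(fico_score, base_rate=0.06):
    """Return a quoted interest rate: a base rate plus a risk spread.

    Thresholds and spreads below are illustrative assumptions only.
    """
    if fico_score >= 760:
        spread = 0.005   # lowest-risk tier
    elif fico_score >= 700:
        spread = 0.015
    elif fico_score >= 640:
        spread = 0.030
    else:
        spread = 0.060   # highest-risk tier
    return base_rate + spread

# A borrower with a 710 score would be quoted 6% + 1.5% = 7.5% here.
print(quoted_annual_rate(710))
```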
Typically, a credit analyst's day is filled with doing research about individuals who are applying for a loan product. This may involve talking to employers to verify income. It will also likely involve pulling reports from credit reporting bureaus and looking at the borrower's FICO score to determine the risk the potential borrower may pose. The overall goal is to find a solution that will sufficiently protect the lender, yet provide the borrower with the capital that is needed. This verification process may be done online, over the Internet.
After a decision is made, the credit analyst may then be responsible for transmitting that decision to the client. In many cases, this is done via a letter. Further, the analyst may be off site. If that is the case, the analyst will usually send the information on the decision to the personal banker. That banker will relay the decision to the applicant. Once the decision is made, the applicant may appeal in some situations. The burden of proof will be on the applicant to come up with legitimate reasons why the decision rendered was inappropriate. In most cases, those denied will simply seek loans from another company.
Credit analyst salaries are usually paid per hour. Thus, the earnings potential is based on whether the job is part time or full time. In many cases, an analyst will work full time hours. Entry-level positions may start out lower, while someone with many years of experience may make slightly more.
@pleonasm - I don't mean to be cynical, but I wonder if they are asked to give loans specifically to people who look like they will take a long time to pay them back.
I mean, the person would still have to be trustworthy, but it is a legal contract, and now even declaring bankruptcy won't help in some cases.
A risk analyst might want the company to have a bunch of people who have to pay more over the long run because they aren't able to finish their payments in the short term.
It seems like they were giving out home loans left, right and center a few years ago, and ended up with payments for a while and eventually a house when people defaulted. So, it makes me wonder.
I guess credit risk analysts get given a list of things to look for when they start the job. Like a bad credit history, or defaults on payments, or a low paying job and then they have to match that against the payments.
I always wonder how much leeway they get to make the decision, although I imagine it depends on the company.
Do they look at, for example, the Facebook page of the applicant? If the Facebook page is filled with drunken pictures, would that make a difference?
It might make the person look less trustworthy, that's for sure.
|
<urn:uuid:853f8801-5681-4765-b109-6a1926a7799d>
|
{
"dump": "CC-MAIN-2015-06",
"url": "http://www.wisegeek.com/what-does-a-credit-analyst-do.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115856041.43/warc/CC-MAIN-20150124161056-00195-ip-10-180-212-252.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.96251380443573,
"token_count": 788,
"score": 2.65625,
"int_score": 3
}
|
International Day of Indigenous Peoples
August 9 is declared by the UN to be the International Day for Indigenous Peoples. The online periodical ContraPunto provided a photogallery of members of El Salvador's indigenous peoples commemorating the date.
During the 20th century, El Salvador's indigenous communities were almost completely wiped out through massacres and repression. In 2010, Salvadoran president Mauricio Funes made an act of public apology for the country's treatment of indigenous peoples.
|
<urn:uuid:c9fc1cc3-6fc1-4efd-b07b-450db25b5b37>
|
{
"dump": "CC-MAIN-2023-06",
"url": "https://www.elsalvadorperspectives.com/2016/08/international-day-of-indigenous-peoples.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00553.warc.gz",
"language": "en",
"language_score": 0.9173774123191833,
"token_count": 95,
"score": 3.328125,
"int_score": 3
}
|
For the Architecture contest of the US Department of Energy Solar Decathlon, teams are required to design and build attractive, high-performance houses that integrate solar and energy-efficiency technologies seamlessly into the design. A jury of professional architects focuses on:
- Architectural elements: the scale and proportion of room and facade features, indoor/outdoor connections, composition, and linking of various house elements.
- Holistic design: an architectural design that will be comfortable for occupants and compatible with the surrounding environment.
- Lighting: the jury assesses the integration and energy efficiency of electrical and natural light.
- Inspiration: reflected in a design that inspires and delights Solar Decathlon visitors.
- Documentation: drawings, a project manual, and an audiovisual architecture presentation that accurately reflect the constructed project on the competition site.
|
<urn:uuid:b11bb41a-1785-446d-9074-b1ad9e2f1187>
|
{
"dump": "CC-MAIN-2019-13",
"url": "http://video.territorioscuola.com/video/20352/team-new-jerseys-solar-decathlon-2011-architecture-audiovisual-presentation/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202572.7/warc/CC-MAIN-20190321213516-20190321235516-00160.warc.gz",
"language": "en",
"language_score": 0.8981589078903198,
"token_count": 173,
"score": 2.90625,
"int_score": 3
}
|
40 years ago, on 27 June 1980, Itavia Flight 870 (IH 870, AJ 421), a McDonnell Douglas DC-9 (I-TIGI) passenger jet en route from Bologna to Palermo, Italy, crashed into the Tyrrhenian Sea between the islands of Ponza and Ustica, killing all 81 people on board.
On 27 June 1980 at 20:08 CEST, the plane departed from Bologna for a scheduled service to Palermo, Sicily. With 77 passengers aboard, Captain Domenico Gatti and First Officer Enzo Fontana were at the controls, with two flight attendants. The flight was designated IH 870 by air traffic control, while the military radar system used AJ 421.
Contact was lost shortly after the last message from the aircraft was received at 20:37, giving its position over the Tyrrhenian Sea near the island of Ustica, about 120 kilometres (70 mi) southwest of Naples.
Floating wreckage and bodies were later found in the area. There were no survivors among the 81 people on board.
The disaster led to numerous investigations, legal actions and accusations, and continues to be a source of controversy, including claims of conspiracy by the Italian government and others.
The cause of the tragedy remains one of Italy’s most enduring mysteries, and there was a painful reminder recently that the case has still to be resolved when the stricken plane made its final journey back home to Bologna.
When the passenger jet crashed, the immediate theory was that it was a tragic accident caused by some kind of mechanical or structural failure.
Then there was the suggestion that terrorists could have planted a bomb, although that theory was rejected, and in 1999 an exhaustive investigation by Judge Rosario Priore, one of Italy’s most respected legal figures and an expert on terrorism cases, gave the definitive version of what happened. He concluded that the plane had probably been caught in a dogfight between NATO jet fighters (US, French and Belgian jets were in the area) and Libyan MiGs.
On 18 July 1980, 21 days after the Aerolinee Itavia Flight 870 incident, the wreckage of a Libyan MiG-23, along with its dead pilot, was found in the Sila Mountains in Castelsilano, Calabria, southern Italy.
On 23 January 2013, Italy’s top criminal court ruled that there was “abundantly” clear evidence that the flight was brought down by a missile during a dogfight between Libyan and NATO fighter jets, but the perpetrators have never been identified.
Even today, 40 years after the accident, there is no official account of how the events unfolded.
For more info, check the site (in Italian): www.stragi80.it/
Full Wikipedia article (in English): www.en.wikipedia.org/wiki/Itavia_Flight_870
|
<urn:uuid:f27dc290-cb0f-4ac3-83e3-268b8e04ec84>
|
{
"dump": "CC-MAIN-2020-29",
"url": "https://1buv.com/looking-for-the-truth-since-1980-itamilradar/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655888561.21/warc/CC-MAIN-20200705184325-20200705214325-00151.warc.gz",
"language": "en",
"language_score": 0.9706996083259583,
"token_count": 606,
"score": 2.734375,
"int_score": 3
}
|
radiation pressure
Fr.: pression de radiation
The → momentum carried by → photons to a surface exposed to → electromagnetic radiation. Stellar radiation pressure on big and massive objects is insignificant, but it has considerable effects on → gas and → dust particles. Radiation pressure is particularly important for → massive stars. See, for example, → Eddington limit, → radiation-driven wind, and → radiation-driven implosion. The → solar radiation pressure is also at the origin of various physical phenomena, e.g. → gas tails in → comets and → Poynting-Robertson effect.
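For orientation (this worked relation is an addition, not part of the dictionary entry itself), the pressure exerted by radiation of flux $F$ on a surface is

$$P_{\mathrm{abs}} = \frac{F}{c} \quad \text{(complete absorption)}, \qquad P_{\mathrm{refl}} = \frac{2F}{c} \quad \text{(perfect reflection)}.$$

At 1 AU the solar flux is about 1361 W m⁻², so the solar radiation pressure on a fully absorbing surface is roughly $1361 / (3 \times 10^{8}) \approx 4.5 \times 10^{-6}$ Pa.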
solar radiation pressure
fešâr-e tâbeš xoršid (#)
Fr.: pression du rayonnement solaire
|
<urn:uuid:831a006b-c1b2-4335-915a-07d1a6a9042a>
|
{
"dump": "CC-MAIN-2020-10",
"url": "http://dictionary.obspm.fr/?showAll=1&formSearchTextfield=radiation+pressure",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143505.60/warc/CC-MAIN-20200218025323-20200218055323-00349.warc.gz",
"language": "en",
"language_score": 0.8429524898529053,
"token_count": 153,
"score": 3.28125,
"int_score": 3
}
|
Rumored Buzz on Hearing Aid Machine
Behind the ear, also known as BTE, hearing aids are by far the most commonly used style of hearing aid. These hearing aids are also what most people picture when hearing aids are mentioned. The electronics which make a BTE hearing aid work are housed in a plastic case which fits behind the ear and has a tube that connects it to an ear mold which fits in the ear canal.
They are designed to suit the entire range of hearing loss, from the mild to the severe. Although they are more visible than hearing aids that fit entirely in the ear canal, they have a number of advantages that appeal to a wide range of hearing-impaired people. In addition, BTE hearing aids come in a number of sizes, colors and shapes. Some behind-the-ear models are much less conspicuous than others.
Because behind-the-ear hearing aids are larger than their completely-in-the-canal, or CIC, counterparts, they can more easily house a bigger amplifier and a more powerful battery and therefore may be particularly useful to people with a more severe hearing loss. BTE hearing aids are also quite versatile in that they come in the most traditional analog style as well as in the recently popularized digitally powered style of hearing aid.
When financial constraints are a concern, behind-the-ear devices clearly win out over hearing aids which fit completely in the ear canal. Because of their larger size, other groups of people to whom BTE hearing aids have more appeal than CIC models include the elderly, arthritis sufferers and others with fine motor control disabilities and related issues.
Because CIC models require wearing a heavier device in the canal rather than just the lightweight ear mold attached to BTE hearing aids, there tends to be less ear canal irritation with BTE aids.
In the late 1800s the first commercially manufactured hearing aids were patented and became available to the public. The first behind-the-ear hearing aids appeared over fifty years ago.
Prior to this, hearing aids were basically amplifiers worn somewhere on the body; these were costly and heavy, due in part to rapid battery consumption. With the arrival of the smaller junction transistor in 1952, widespread BTE hearing aid use became more of a reality.
Due to improvements in the technology of circuitry, 1964 saw another surge in the use of BTE devices, and the use of body-worn hearing aids fell to less than twenty percent. By 1972 prototypes for hearing aids which could be programmed to a range of listening situations were being produced. The following twenty years showed ongoing improvements and advances in hearing aid technology.
Volume controls were added to most behind-the-ear devices in the 1990s, and digital hearing aids started appearing in the mid-nineties. There have been continued new arrivals in the hearing aid world since then, such as remanufactured hearing aids, disposable hearing aids and over-the-counter hearing aids. Who knows what the future of behind-the-ear hearing aid technology holds; the possibilities are endless.
|
<urn:uuid:049d66d8-7ac2-4442-81d3-41a200ee7260>
|
{
"dump": "CC-MAIN-2018-47",
"url": "http://riveraypgs.alltdesign.com/rumored-buzz-on-hearing-aid-machine-8883955",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744320.70/warc/CC-MAIN-20181118073231-20181118095231-00147.warc.gz",
"language": "en",
"language_score": 0.9738895893096924,
"token_count": 837,
"score": 2.53125,
"int_score": 3
}
|
It is bad design….
How can we design for something we cannot measure? A wireless design needs to account for signal attenuation; one of the big design requirements is to account for the loss caused by walls, windows, etc. We do this by measuring the signal loss and accounting for it in our designs.
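As a rough sketch of how measured (or assumed) attenuation figures feed into a design, the fragment below estimates received signal strength from free-space path loss plus per-obstruction losses. The per-wall and per-duct loss values are assumptions for illustration, not measured figures:

```python
import math

def fspl_db(distance_m, freq_mhz):
    # Free-space path loss in dB, for distance in metres and frequency in MHz.
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

def predicted_rssi(tx_power_dbm, antenna_gain_dbi, distance_m, freq_mhz,
                   obstruction_losses_db):
    # Subtract free-space loss plus an assumed loss for each wall, window or
    # duct in the signal path.
    total_loss = fspl_db(distance_m, freq_mhz) + sum(obstruction_losses_db)
    return tx_power_dbm + antenna_gain_dbi - total_loss

# Example: 2.4 GHz AP at 14 dBm with a 4 dBi antenna, client 15 m away,
# one drywall (~3 dB, assumed) and one HVAC duct (~10 dB, assumed) in the path.
print(predicted_rssi(14, 4, 15, 2400, [3, 10]))  # roughly -59 dBm
```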
How would one measure the objects that are located in a false ceiling? It would be extremely difficult, not to mention costly, to safely and accurately measure the HVAC ducts and other items located in false ceilings, especially when everything is already in place. And how do you measure the impact of reflections?
Impact on coverage and efficiency of the WLAN….
In a typical multi-level corporate office building, the false ceilings can contain HVAC ducting, water pipes, metal cable trays and so on, increasing reflections and significantly reducing signal propagation.
Reduced coverage will result in extra APs having to be used to account for the coverage holes, which results in an increase to overall cost.
Increased multipath caused by reflections can decrease overall throughput for less capable devices. It can also cause AP radios to reduce power to account for the reflected signal being detected by the radio elements, which is not good when the design is based on RRM (Radio Resource Management).
APs can produce a great deal of heat (e.g. the Cisco 4802), and false ceilings are usually hot and contain dust and other material, which combined can become a fire hazard.
As mentioned above, APs and false ceilings can be rather hot. When APs are installed in a false ceiling it is usually on some metal frame or structure, and the increased heat can cause APs to overheat or fail.
|
<urn:uuid:13829308-4ae9-4ea8-9e15-76003c4a331f>
|
{
"dump": "CC-MAIN-2022-49",
"url": "https://wifiburns.com/2019/12/17/reason-why-we-do-not-recommend-installing-aps-in-false-ceilings/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710916.70/warc/CC-MAIN-20221202215443-20221203005443-00097.warc.gz",
"language": "en",
"language_score": 0.9548325538635254,
"token_count": 348,
"score": 2.78125,
"int_score": 3
}
|
Protesters in Wisconsin torched the inventory of a used car lot, destroyed a municipal truck and smashed the doors and windows of a public library, to protest the death of a man who resisted the police and tried to open the door to his car while being pursued. Protesters in Portland have continued to throw rocks at police, vandalize property, burn down buildings and block public access to streets for months. On 16 August, they attacked a vehicle whose driver was lawfully operating on the street, chased it and kicked the driver in the face after he crashed. Similar violence against property and people has taken place in Seattle, Chicago, Lafayette and across the country. There has been rioting for months.
Many people do not want to acknowledge the extent of the looting and rioting. Many media outlets support the George Floyd protests and do not want to risk discrediting the cause of defunding or reforming the police. Others fear being labeled racist or right-wing. But the refusal to condemn rioting is also born of the sincere belief that destroying property is a justifiable—or, at least, excusable—response to police brutality.
Many have cited Martin Luther King’s remark that “rioting is the language of the unheard.” Some argue that, since people are dying, we shouldn’t be concerned about property damage. “These protests are more than catharsis; they are an imperfect expression of grief,” Minneapolis teacher Christopher Mah comments in the Minnpost.
But these justifications fall apart upon closer analysis. The Choi family, who operate a jewelry store in Miami, had nothing whatsoever to do with the killing of George Floyd or anyone else, and yet their store was burgled and trashed. One tragedy does not diminish another tragedy. Hurley Taylor, operator of Tar Heel Sneakers, has every right to be angry that his merchandise has been stolen, along with his collection of jerseys. The fact that George Floyd died doesn’t make the damage to so many people’s livelihoods caused by rioters any less of a problem for those people and their cities. Nor does attacking an uninvolved person help bring about justice.
Apologists for rioting often dismiss it as mere violence against property—as if people did not need to work and earn money in order to afford the food and shelter that sustains them, all of which is made more difficult when one’s place of business has been destroyed. Most of the small businesses that were targeted either lack insurance or have insufficient insurance to cover the damage.
And the violence is not limited to property damage. As Portland Mayor Ted Wheeler commented, “When you commit arson with an accelerant in an attempt to burn down a building that is occupied by people who you have intentionally trapped inside, you are not demonstrating, you are attempting to commit murder.” The violence, moreover, is increasing in severity. Over the last week of August, street fights involving guns between protesters and counterprotesters resulted in three homicides.
Supporters of the rioters have justified vandalism as a way of venting frustrations and dismissed looting as unimportant. They will now have to justify pulling people out of cars and kicking them in the face. They seem to think that people on the street are incapable of controlling themselves or acting in accordance with social and moral standards.
These apologetics for social deviancy are happening across American society and across the political spectrum. The social justice activists who justify resisting arrest and stealing from small businesses are on the left, but the right is justifying obstructing justice, lying to law enforcement officers and looting the Treasury for the personal benefit of the politicians they happen to support.
Depravity is a bipartisan affair. The Democrats have their Bill Clintons and their campaigns to broaden sexual norms. The Republicans have their Donald Trumps, who have affairs with porn stars. And Jeffrey Epstein liked to hang out with both.
Many Americans have too much tolerance for immoral behavior. Some excuse college-age rapists because “boys will be boys.” Some justify assault and battery, even the taking of a taser by a drunk-driving suspect on the grounds that no one is perfect. This is symptomatic of social decay.
America’s moral decay has many causes. Two of the most important of these are the ideas that no one should be judged and that everyone’s opinion—even on matters of fact—is equally valid. It is fashionable to stigmatize successful people with expertise as elites—particularly if they rightly believe that they are more qualified to offer opinions on certain issues than ordinary people.
Much of this anti-intellectualism has developed, ironically, in academia. As Kurt Andersen writes in Fantasyland: How America Went Haywire, cultural relativists “convinced [themselves] that all knowledge and especially science are merely self-serving opinions or myths,” while “among the gatekeepers in academia and media and government and politics charged with determining what’s factually true and what’s iffy and false, there has been much more capitulation, voluntary surrenders to the barbarians at their gates.”
The tearing down of scientific and moral standards impacts America’s body politic, both our political parties and public health. Much of the focus has been on trite culture wars issues that impact very few people. (Less than 1 percent of Americans identify as trans, despite the controversies on daytime talk shows and internet forums, but 10 percent of Americans think vaccines cause autism, a belief that causes real sickness and death.)
During the 2016 presidential campaign, supporters of Donald Trump chanted mindless, authoritarian slogans like lock her up at rallies and wore T-shirts with slogans like Trump that bitch! Yet those who pointed out that some Trump supporters (perhaps a “basket” of them) were hateful and uncivil were derided by the mainstream media—the supposed gatekeeper that has capitulated to mass populism—and political activists as haughty and evil for judging people. At the time, conservatives relished attacking liberals for upholding their values in opposition to Trump supporters who assaulted people at rallies, but now that very same reluctance to uphold moral values is preventing Americans from condemning criminals and rioters.
Noncompliance with police orders does not warrant a death sentence, we are told, and nor does property violence. But should defending one’s property—refusing to walk away and let gangsters steal everything—warrant a death sentence for the owner of a business? An elderly man was attacked in Wisconsin just for trying to put out a fire at one of the stores the protesters had looted.
Apologists often point out that suspects are unarmed, though police do not always know that at the time. But some apologists are also calling for armed suspects to be able to run free. In Lafayette, a criminal who was carrying a knife and causing a public disturbance in a parking lot was shot as he was entering a gas station, knife in hand.
Time magazine, which claims his death sparked outrage, describes the situation:
Lafayette Police Department officers responded to a disturbance and found a man, 31-year-old Lafayette resident Trayford Pellerin, allegedly armed with a knife in a convenience store’s parking lot, the state police said. The officers tried to approach Pellerin, but he ran away, and the officers followed him on foot as he walked from the convenience store, which was at a Circle K gas station, to a Shell gas station, according to the Lafayette Daily Advertiser. The officers shot him with tasers, but didn’t manage to stop him, the State Police said …
A woman who filmed the shooting, Rickasha Montgomery, 18, confirmed to the Lafayette Daily Advertiser that she watched the officers tase the man, and that he had been carrying a knife. The officers had ordered the man to get to the ground, and shot him when he tried to go inside the convenience store, Montgomery said.
If you were working in that gas station, would you want a violent criminal with a knife running around?
When we evaluate what we want from our government, law enforcement, society and the country at large, we must consider what the average upstanding, moral citizen would want. We must try to create a better society with higher values, not dumb down our norms and values to meet the standards of deadbeats, criminals and tyrants.
Blameless people want to have the right to drive on their roads and fill their cars with gas, not be blocked by economic terrorists surrounding a privately owned gas station in a self-styled form of protest. They want to be able to walk the streets without fear of either being beaten, murdered or infected with coronavirus by science-denying lunatics who refuse to wear a mask because they believe masks are effete or linked to Bill Gates. Enlightened people want a president who makes handwritten notes on his carefully crafted speeches, not a president who rants wildly about ramps, showerheads and injecting oneself with disinfectant and whose contributions to the written record include angry all-caps diatribes full of misspellings.
These rational people should not be shamed for their reasonable desires and expectations.
Some might disagree with me for condemning people who engage in violence and refuse to follow just laws. But in tearing down all standards of moral and honorable behavior, we deny ourselves the standards by which to condemn racism, corruption and other immoral acts. This is one of the reasons Donald Trump was elected in the first place.
This hurts all of us.
|
<urn:uuid:571e6922-b33b-4931-bd2f-d7c8fcead219>
|
{
"dump": "CC-MAIN-2021-17",
"url": "https://areomagazine.com/2020/09/03/the-bipartisan-decay-of-american-culture/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039610090.97/warc/CC-MAIN-20210422130245-20210422160245-00623.warc.gz",
"language": "en",
"language_score": 0.9710913300514221,
"token_count": 1963,
"score": 2.546875,
"int_score": 3
}
|
The SAFEWATER research programme seeks to tackle a global challenge by looking at clean water solutions and the development of smart devices to quickly tell if water is safe to drink.
£4.7 million of the funding will be provided by the Global Challenges Research Fund (GCRF) Research Councils UK Collective Fund, and the project will see Ulster University join forces with other partners across the globe to conduct the research. This includes academics in South America and NGOs already working in Colombia and Mexico.
Lead researcher, Professor Tony Byrne from Ulster University, said:
“This is a very significant project which will play a critical role in helping to address one of the greatest global issues the developing world is facing today.”
“In the developed world, we take it for granted that our drinking water is safe yet nearly 25 per cent of the global population drink water that is not safe because of contamination that can cause deadly disease. Clean water saves lives and while we know how to make water safe to drink the cost of doing so may be too high as nearly half the world’s population live on less than £2 per day.”
“Ulster University will lead on this cutting-edge research which will form part of the SAFEWATER project. It will involve academics from the University of Sao Paulo Brazil and the University of Medellin Colombia, along with the NGOs Fundacion Cantaro Azul Mexico and CTA Colombia who are already working with, and trusted by, the local people.
“Through the NGOs, local people will be involved in the development of clean water solutions from the beginning of the project so the technologies will meet their needs. The project will make a real impact on the ground by bringing direct benefits to the lives of people living in developing countries.”
“Ulster University is a world leader in research which delivers across a number of priority disciplines. This work will further develop our links with international partners and reinforce our ability to deliver research that makes a tangible impact to society, both locally and internationally.”
|
<urn:uuid:0fd70b4a-e3ac-44c9-babb-576c4410aaf7>
|
{
"dump": "CC-MAIN-2023-23",
"url": "https://www.ulster.ac.uk/news/2017/july/6-million-research-project-will-pioneer-safe-water-solutions-for-the-developing-world",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656788.77/warc/CC-MAIN-20230609164851-20230609194851-00666.warc.gz",
"language": "en",
"language_score": 0.9409650564193726,
"token_count": 419,
"score": 2.703125,
"int_score": 3
}
|
We have a poorly nurse in the building today. Our very own Steph is complaining (sorry, I mean airing her views) about erosion on her teeth. Erosion is usually seen on the inside surfaces of our front teeth. There are several different factors that contribute towards tooth erosion, and I definitely know what is causing Steph’s… Fizzy pop!! Erosion is almost always associated with high acidity. Watching what we eat and drink can help reduce tooth erosion dramatically. Often, frequent sipping of carbonated drinks or regular consumption of citrus fruits/drinks can cause tooth loss. Another factor is acid reflux. Regurgitation of stomach acids can cause the tooth to wear, sometimes without the patient being very aware of it. The features of tooth erosion can be a smooth, polished appearance to the tooth, or the shape of the tooth is lost or shallow depressions appear.
We are all guilty of “grazing” on our can of fizzy drink at our desks, and children and teenagers are especially prone to sipping on the fizzy stuff, so please be aware of what you, friends and family are doing simply from drinking. If you can’t break the habit or change routines, good old H2O is great for rinsing out after consuming acidic or fizzy food/drinks. The water helps neutralise the mouth. Sugar-free gum is also great for this.
If you want to discuss tooth erosion with your dentist, hygienist or one of our nurses, then just ask! We are here to help 😉
|
<urn:uuid:6cfa586c-3f79-4935-9f34-0c0bbf1c3c2d>
|
{
"dump": "CC-MAIN-2019-39",
"url": "https://www.woodboroughhouse.com/blog/acid-attack/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514577478.95/warc/CC-MAIN-20190923172009-20190923194009-00048.warc.gz",
"language": "en",
"language_score": 0.9532368183135986,
"token_count": 322,
"score": 3.21875,
"int_score": 3
}
|
This is something you are likely to hear from your children a number of times over the course of the summer holidays. I think children today more than any other generation before are so used to structured after school activities or sitting indoors in front of a screen that they literally don’t know how to keep themselves busy.
Parents mistakenly think it is their job and responsibility to keep their kids happy and occupied but children have to learn how to entertain themselves. Unstructured or down time is actually very important for children’s mental health.
Your main role as a parent is to set your child up for life as an adult and prepare them to be part of society. Children need to learn to do things for themselves and when they become adults will need to be skilled at managing their time (including leisure time) and pursuing interests that contribute to their happiness. The learning starts now in childhood.
Why is unstructured time so important?
- Kids learn how to be alone and actually have time to just be, think and contemplate life so they are comfortable in their own company.
- Creativity and imagination are more likely to develop if children are not in front of a screen and need to occupy themselves. For my younger 2 daughters, despite having lots of toys and games at home, their favourite activities are role play (mums and dads, doctors, restaurants) and drawing.
- It’s a great advantage for children to have at least one passion, something they have chosen that they really enjoy and work hard at. Without unstructured time they have little opportunity to develop this. For my eldest child it’s music, my second art and the third gymnastics. My youngest has yet to find hers.
- Being bored encourages self-reliance
How can parents respond when their child is ‘bored’?
- Firstly establish whether they are saying this because they are craving your attention. Make sure you are scheduling bursts of ‘special time’ with each child and opportunities to connect with them.
- Encourage your child to brainstorm activities they can do when they feel bored and perhaps contribute your own ideas. If they are old enough encourage your child to write the ideas down so they can refer to them next time they are bored.
- Limit screen time so they have to get into the habit of finding something else to do. Screens and particularly games on screens are purposely designed to be really engaging and addictive, providing a boost of dopamine so children want to use them and every other activity seems less exciting.
- For younger children I love Sean Covey’s book the 7 Habits of Happy Kids with the story about Sammy squirrel and it being his responsibility to make fun for himself.
Just to be clear it is mainly your child’s responsibility to keep themselves busy and not yours. You need to give them the experience of being bored and the skills and ideas to handle that feeling rather than just trying to avoid it.
|
<urn:uuid:a9287a12-e49b-4121-a84c-f3fa27fa183f>
|
{
"dump": "CC-MAIN-2018-22",
"url": "http://www.educatingmatters.co.uk/blog/handling-boredom/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864725.4/warc/CC-MAIN-20180522112148-20180522132148-00245.warc.gz",
"language": "en",
"language_score": 0.9731622934341431,
"token_count": 603,
"score": 3.25,
"int_score": 3
}
|
As actuaries, we are concerned with assessing different risks and quantifying the possible effects. We often hear about the risks involved in carrying out a certain action, or the risks associated with a new business strategy. But what about the wider-scale risks that have the potential to affect everyone? The World Economic Forum (WEF) published a report at the start of the year detailing the major global risks of 2014.
These risks spanned from environmental and climate change issues to global economic collapses. We have picked out a few that are relevant to the insurance industry and brought in some external research that we have carried out on the matters.
Technological – Cybergeddon
The growing dependence on the internet has given rise to a new global threat with the potential for mass systemic failure, the so-called 'cybergeddon'. The threat of a cyber breach appears to be spiralling upward on a daily basis. The technology that provided the perpetual increase in productivity could also be the very poison to destroy it. News reports are showing that everyone from online shoppers to global corporations is being targeted.
Research from Zurich’s Beyond Data Breaches report states that 2013 was the worst year thus far in terms of cyber attacks, with 740 million data files potentially stolen or viewed globally. One of the most serious cyber attacks of this year was the Heartbleed bug that was discovered in April 2014. Heartbleed is found in software called OpenSSL, and allows hackers to steal data from computers. Another major cyber incident was the CryptoLocker virus, first seen in 2013. This virus is a sophisticated piece of ransomware that targets computers running Microsoft Windows via infected email attachments or websites. The virus encrypts the files on the computer, destroying all data. Newly launched antivirus programmes now detect the virus, but users should nonetheless remain wary.
A cyber threat must not be narrowly viewed as the theft of customer data. The threat stretches far wider, to a global domino effect caused by the interconnectedness of the internet. Almost everything from infrastructure to supply chains is dependent on the internet, and therefore a global shock on the scale of the 2008 collapse of the financial system is a possibility. If a major cloud data storage provider were to fail, in a manner analogous to Northern Rock, the knock-on effects could be immense.
The insurance industry is very much aware of this issue and is developing new products to meet the cyber protection needs of the market. The cyber insurance market is still an immature market however, and with limited historical data it is difficult to assess how protected society truly is from a cybergeddon.
Economic - Fiscal Crises
The risk that is most likely to have a global scale impact in the next ten years, according to the WEF report, is the growing income gap between the richest and poorest citizens. Fiscal crises and unemployment are also among the highest rated risks in the report, all of which tend to signal an economic recession. This can cause a whole range of issues for the insurance industry.
Recession sparks an increase in fraudulent insurance claims. The ABI detected a record total of £1.3 billion of insurance fraud in 2013, which is a rise of 18% from 2012. These fraudulent claims range from entirely spurious incidents to exaggerated sustained injuries and are seen within any sector, from pet insurance to property or motor insurance. Organised gangs are often behind these events, which cost each UK household an extra £50 in premiums per year.
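As a rough sanity check on the scale of these figures, the short sketch below spreads the detected-fraud total across an assumed number of UK households. The household count is an illustrative assumption rather than an official statistic, and the £50 figure quoted above reflects the wider cost of fraud, so the two numbers are only loosely comparable.

```python
# Back-of-the-envelope sketch: per-household share of detected insurance fraud.
# The household count is an assumed round number, not an official statistic.
TOTAL_DETECTED_FRAUD_GBP = 1.3e9   # ABI figure for 2013 quoted above
UK_HOUSEHOLDS = 26_000_000         # illustrative assumption

extra_premium = TOTAL_DETECTED_FRAUD_GBP / UK_HOUSEHOLDS
print(f"Detected fraud alone works out at roughly £{extra_premium:.0f} per household per year")
```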
An increase in demand for catastrophe bonds is also seen during times of poor market conditions, as investors look to improve returns. Near zero interest rates, volatile markets and the global economic slowdown all force investors to seek alternative forms of investment. Catastrophe bonds are a form of insurance-linked security that transfer the risk related to certain natural disasters from an issuer to the investors. This form of investment is relatively 'recession proof' in the sense that the occurrence of a natural catastrophe is uncorrelated to the performance of the economy. 2013 saw a 22% increase in the global issuance of catastrophe bonds, to $7.1 billion, according to Fitch. The return obtained on catastrophe bonds has, however, fallen recently as a result of the increase in demand.
Environmental - Natural catastrophes
Natural catastrophes are a phenomenon that has occurred throughout history. From cyclones to droughts, these events have affected mankind on a major scale. The daunting prospect with regard to natural disasters is that they have the potential to wipe out intelligent life permanently, and there is little that humanity can do to prevent them. The WEF report cites the 2011 tsunami at the Fukushima nuclear power plant in Japan as an example. An earthquake triggered the tsunami, which resulted in a meltdown of three of the plant's nuclear reactors and a large release of radioactive materials. The clean-up process is expected to take decades. The disaster was a combination of natural and man-made forces, and the argument within the report is that a combination on this scale could conceivably trigger existential risks for life on Earth.
The magnitude of the losses incurred from natural disasters is difficult to predict, and therefore the insurance industry has a vital role to play in aiding the recovery of communities. The insurance industry can help to advise regions on how to prepare for, and mitigate the effects of, an anticipated event. Global insured losses from catastrophes in 2013 were $45 billion, according to research by Swiss Re. Large contributors were the severe storms in Europe and super-typhoon Haiyan in the Philippines.
The risks faced by society today may require greater attention because of the interconnectedness of the world we live in. Unlikely incidents, such as 1-in-200 year events, are no longer that remote or unimaginable. Individuals may need to be more creative when envisioning the effects of combinations of events occurring simultaneously. Insurance companies will certainly play a role in managing or preventing risks for their clients, and their products will have to evolve to meet the market's needs for ever more complex risks.
|
<urn:uuid:9d26c87c-4e6e-4ac7-b5e1-12dd5e67753e>
|
{
"dump": "CC-MAIN-2021-49",
"url": "https://www.barnett-waddingham.co.uk/comment-insight/blog/cybergeddon-fiscal-crises-and-natural-catastrophes/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00143.warc.gz",
"language": "en",
"language_score": 0.9523943662643433,
"token_count": 1220,
"score": 2.53125,
"int_score": 3
}
|
Any self-respecting pediatric nutritionist will tell you that breast milk is a key ingredient in raising a healthy baby. While some may want to ignore this advice, most just do not understand the "why" behind it.
With the introduction of baby formula and other toddler foods, most mothers assume these are a perfect replacement for breast milk. Dead wrong. The truth is that there is nothing quite like breast milk. From how it is made to how it affects the baby's development, breast milk is quite a fascinating food. All mothers need to do is understand breast milk, both for their own well-being and that of their kids.
1. How breast milk is made in the breasts
To produce breast milk, mothers melt their own body fat. Are you with me? We literally dissolve parts of ourselves, starting with gluteal-femoral fat, aka our butts, and turn it into liquid to feed our babies.
But breast-feeding is more than being a good mom. And breast milk is much more than food: it's potent medicine and, simultaneously, a powerful medium of communication between mothers and their babies. It's astonishing. And it should be—the recipe for mother's milk is one that female bodies have been developing for 300 million years.
2. Some kids need it more than others
Many new mothers have wondered exactly how long they should breastfeed, and the simple answer to that is:
Breast-feeding leads to better overall health outcomes for children, which is why the World Health Organization and the American Academy of Paediatrics recommend that babies be exclusively breast-fed for a minimum of six months.
Those outcomes, though, are relative: A premature infant in the neonatal intensive-care unit or a baby growing up in a rural African village with a high rate of infectious disease and no access to clean water will benefit significantly more from breast milk over artificial milk, called formula, than a healthy, full-term baby born in a modern Seattle hospital.
It is therefore quite clear that breast milk is a necessity for your toddler rather than an option. The world would be a healthier place if all mothers would breastfeed their babies for the recommended duration after birth.
3. Breast milk bolsters your kid’s IQ
Does it really lead to better health and higher intelligence quotient for your kids?
We’re also told that breast-feeding leads to babies with higher IQs and lower rates of childhood obesity than their formula-fed counterparts. I understand why people find this appealing, but I don’t plan to raise my daughter to understand intelligence in terms of a single test score, or to measure health and beauty by body mass index.
Surprisingly, more and more mothers are breastfeeding just because of these two facts. While no one would blame them for it, mothers should be more objective about breastfeeding than they currently are.
More compelling to me are the straightforward facts about breast milk: It contains all the vitamins and nutrients a baby needs in the first six months of life (breast-fed babies don’t even need to drink water, milk provides all the necessary hydration), and it has many germ- and disease-fighting substances that help protect a baby from illness. Oh, also: The nutritional and immunological components of breast milk change every day, according to the specific, individual needs of a baby. Yes, that’s right, and I will explain how it works in a minute. Not nearly enough information is provided by doctors, lactation counselors, or the internet about this mind-blowing characteristic of milk.
This lack of information is partially why a lot of mothers today are so clueless about breast milk. More needs to be done to get all this information out there for all mothers to consume and, hopefully, put to good use.
What vital nutrients are contained in breast milk?
So what exactly goes into breast milk that makes it so nutritious?
Very little about the composition of breast milk is understood by the producer of the milk. Sure, scientists and professionals have all the facts, but those facts do not really help the mother or the baby if the mother is not aware of them. You can't use information you don't have. This is probably why some women think of breast milk and baby formula as equals. That could not be further from the truth. While manufacturers (bless their entrepreneurial souls) try to mimic breast milk, they are just not there yet. Whether they will ever get there is another question altogether.
The nutritional value of breast milk has been perfected over millions of years (if you believe in evolution) to meet a baby's every need. To expect scientists to replicate this in a few years is plain insanity.
Nutritionally, breast milk is a complete and perfect food, an ideal combination of proteins, fat, carbohydrates, and nutrients. Colostrum, the thick golden liquid that first comes out of a woman’s breasts after giving birth (or sometimes weeks before, as many freaked-out moms-to-be will tell you) is engineered to be low in fat but high in carbohydrates and protein, making it quickly and easily digestible to newborns in urgent need of its contents. (It also has a laxative effect that helps a baby pass its momentous first poop, a terrifying black tar-like substance called meconium.)
Mature breast milk, which typically comes in a few days after a woman has given birth, is 3 to 5 percent fat and holds an impressive list of minerals and vitamins: sodium, potassium, calcium, magnesium, phosphorous, and vitamins A, C, and E. Long chain fatty acids like DHA (an omega-3) and AA (an omega-6)—both critical to brain and nervous-system development—also abound in mother’s milk.
Breast milk is very much alive
This may sound far-fetched, but it does have some tiny and interesting microbes in it.
And speaking of microbes, there’s a ton of them in breast milk. Human milk isn’t sterile—it’s very much alive, filled with good bacteria, much like yogurt and naturally fermented pickles and kefir, that keep our digestive systems functioning properly. So mother’s milk contains not only the bacteria necessary to help a baby break down food, but the food for the bacteria themselves to thrive. A breast-feeding mother isn’t keeping one organism alive—but actually hundreds of thousands of them.
What it says about the mother
You may be surprised to find out that breast milk actually depends on the mother, and not all milk will be the same. This goes right down to the flavor and texture. Don't believe me? What do the experts say?
Like a glass of red wine, breast milk has a straightforward color and appearance, but it possesses subtleties in flavor that reflect its terroir—the mother’s body. And it turns out that like any great dish of food, mother’s milk holds a variety of aromas, flavors, and textures.
The flavors of breast milk are as dynamic as a mother’s diet. In the 1970s, researchers at the University of Manitoba obtained samples of breast milk from lactating women and had them evaluated by a trained panel for taste, quality of sweetness, and texture. There were variations across all samples in all categories, most notably that the milk of a woman who had recently eaten spicy food was described by tasters as being “hot” and “peppery.”
It is medicinal
When you tell most new moms that their breast milk can be of medicinal value to their kids, they will often give you a blank stare. Some may even tell you straight to your face, "Look, I get that you are trying to prove this is for the good of the kid, but I think you are getting carried away here." But is that really getting carried away, or can breast milk really heal? And if so, how exactly does this happen?
According to Hinde, when a baby suckles at its mother’s breast, a vacuum is created. Within that vacuum, the infant’s saliva is sucked back into the mother’s nipple, where receptors in her mammary gland read its signals. This “baby spit backwash,” as she delightfully describes it, contains information about the baby’s immune status. Everything scientists know about physiology indicates that baby spit backwash is one of the ways that breast milk adjusts its immunological composition. If the mammary gland receptors detect the presence of pathogens, they compel the mother’s body to produce antibodies to fight it, and those antibodies travel through breast milk back into the baby’s body, where they target the infection.
Women should save all that money they spend on baby formula and go for the good stuff that they get for free. While you may be worried about boobs sagging, don’t put that on the baby. Age and gravity will get you eventually. It will feel better knowing that you did all you could for your baby in the end.
Did you enjoy this article? Read our other articles on breastfeeding:
|
<urn:uuid:8099319e-68ef-4235-8159-a7e7922f106e>
|
{
"dump": "CC-MAIN-2017-22",
"url": "http://www.babycaremag.com/health-and-nutrition/7-amazing-facts-about-breast-milk-every-mom-needs-to-know/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607849.21/warc/CC-MAIN-20170524173007-20170524193007-00223.warc.gz",
"language": "en",
"language_score": 0.9603585600852966,
"token_count": 1924,
"score": 2.75,
"int_score": 3
}
|
SHAIK LALBEE, 29 August, 2020
Capital punishment, also called the death penalty, is the execution of a convicted person who has been sentenced to death by a court of law. Capital punishment should be distinguished from extrajudicial killings, which are carried out without due legal process.
The term death sentence is often used interchangeably with capital punishment, although imposition of the sentence is not always followed by execution (even when it is upheld on appeal), because of the possibility of commutation to life imprisonment. The term "capital punishment" denotes the severest type of punishment. While the meaning and nature of capital offences vary from country to country, state to state, and age to age, the consequence has always been death. In common usage in jurisprudence, criminology and penology, capital punishment means a sentence of death.
The history of human civilisation reveals that capital punishment has never been discarded as a form of punishment in any period. In the 7th century BCE, capital punishment for assassination, insurrection, robbery, and adultery was commonly prescribed in ancient Greek legislation, though Plato maintained that it should be reserved exclusively for the incorrigible. The Romans also employed it for a wide variety of offences, though citizens were exempted for a brief period during the empire. Capital punishment is an ancient practice; there is scarcely a country that has never practised it.
CAPITAL PUNISHMENT UNDER INDIAN LEGAL SYSTEM:
The "capital penalty" or "death penalty" is the severest punishment available for upholding law and order in any community or democracy. Yet killing a human being in the name of justice is no better than murder; we should concentrate on doing away with the crime, not the criminal. In India it is awarded for the most serious and most grievous of crimes. Article 21 states that no person shall be deprived of the right to life, which is guaranteed to everyone in India, except according to procedure established by law. Article 72 of the Constitution of India authorizes the President to grant pardons, reprieves, respites or remissions of punishment, or to suspend, remit or commute the sentence of any person convicted of an offence; only the President is empowered to grant clemency in death penalty cases. If a prisoner is sentenced to death by a Sessions Court, the High Court must confirm the sentence. If the convict's appeal to the Supreme Court fails, he can then lodge a 'mercy petition' with the President of India.
DEATH PENALTY CRIMES:
Crimes punishable by death include the following:
- Murder, which is punishable by death as provided in Section 302 of the Indian Penal Code, 1860.
- Rape: a person who, in the course of a sexual assault, inflicts injuries that cause the victim's death or leave her in a "persistent vegetative state" can be punished with death under the Criminal Law (Amendment) Act, 2013.
- Kidnapping or abduction: Section 364A of the Indian Penal Code, 1860 makes kidnapping for ransom punishable by death even where it does not result in the victim's death.
- Treason: any person who wages or attempts to wage war against the government, or who incites officers, soldiers or members of the Navy, Army or Air Force to mutiny, may be given the death penalty.
- Drug trafficking, dacoity with murder, espionage and certain military offences: a person who commits murder during an armed robbery is liable to the death penalty, and kidnapping for ransom in which the victim is killed is likewise punishable with death. Involvement in organized crime, where it leads to murder, is punishable by execution. Committing sati, or helping another person to commit it, is also punishable by the death penalty.
Execution may be carried out by one of two methods: hanging or shooting. A death sentence passed by a sessions (trial) court must be confirmed by a High Court before it can be carried out. Once it is confirmed by the High Court, the convicted person has the right to appeal to the Supreme Court. If this is not practicable, or if the Supreme Court dismisses the appeal or declines to hear the petition, the convicted person may send a 'Petition for Mercy' to the President of India and the Governor of the State. Under India's constitutional system, a mercy petition to the President is the last constitutional resort available to a prisoner convicted by a court of law.
CLASS OF OFFENDERS EXCLUDED FROM CAPITAL PUNISHMENT:
Under Indian law, a person who was under 18 years of age at the time of committing the offence cannot be executed; pregnant women and intellectually disabled persons are also excluded from capital punishment.
IS CAPITAL PUNISHMENT CONSTITUTIONAL?
India's constitution grants every person a fundamental right to life, which may be taken away only by procedure established by law. Abolitionists have therefore claimed that the death penalty in its present form breaches the individual's right to live, and several moral luminaries contend that retaining the death penalty under Indian criminal law is detrimental to that right.
In the case of Jagmohan Singh v. State of U.P., the constitutionality of capital punishment was questioned for the very first time. The petitioner opposed the death penalty on the ground that it violated Articles 19 and 21. The Supreme Court held that the death sentence is imposed in accordance with the procedure established by law: on the basis of the facts and circumstances and the nature of the crime recorded during the trial, the judge makes the choice between a capital sentence and imprisonment for life.
In Rajendra Prasad v. State of U.P., the Court ruled that capital punishment should not be permissible unless it was established that the convict was a danger to society. The learned judge pleaded for the removal of the death penalty and said that it should not be maintained for "white collar offences." It was also established that the death sentence for the offence of murder provided under Section 302 of the I.P.C. did not infringe the fundamental provisions of the Constitution.
In Bachan Singh v. State of Punjab, a constitutional bench of the Supreme Court clarified that Article 21 permits the State to deprive a person of life or personal liberty provided this is done through a just, fair and reasonable procedure laid down by valid law. It further held that the death sentence for the offence of murder provided under Section 302 of the I.P.C. does not violate the basic structure of the Constitution.
The Supreme Court's decision in Mithu vs. State of Punjab 1988 (3) SCC 607 notes that a mandatory death sentence is unconstitutional.
The Supreme Court, in the case of Machhi Singh v. State of Punjab, 1983 SCR (3) 413, ruled that "the death penalty has a deterrence impact and serves a social function" and held that the death penalty is legally valid.
It is obvious from a review of the above cases that the death penalty is deemed lawful in India. Given the failure of many legislative efforts to abolish it, the death penalty remains in force to this day, as is evident from the case of Ajmal Amir Kasab, who was executed in 2012.
When an offender receives a death sentence, it is more than mere retribution: in the name of justice and the rule of law we kill a human being. Killing an individual is unethical and demonstrates a lack of regard for human life, yet opposing the death penalty does not mean that anyone loves the murderer. When a death sentence is carried out, it removes the possibility of reform that could have improved a person's life. Others argue that capital punishment is sometimes morally right because it serves the state and protects the public, that it is conducted in a civilized manner, and that it is the only penalty commensurate with the gravest offences.
Indian penal code 1860, section 302.
1973 SCR (2) 541
1979 SCR (3) 78
(1980) 2 SCC 684
|
<urn:uuid:a189aeac-ffb3-4942-9da4-ef25cc86b379>
|
{
"dump": "CC-MAIN-2022-33",
"url": "https://sociallawstoday.com/capital-punishment-in-india/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571719.48/warc/CC-MAIN-20220812140019-20220812170019-00756.warc.gz",
"language": "en",
"language_score": 0.9511258602142334,
"token_count": 1708,
"score": 3.796875,
"int_score": 4
}
|
The Portland Observatory
Learn about the history and use of this Portland landmark. Lessons explore history, communications, science, and museum visits, among other topics. Come inside!
Longfellow & the Forging of American Identity
Thirty teachers from Maine and Massachusetts have undertaken an intensive two year study of Henry Wadsworth Longfellow's life and poetry through a program created by the Maine Humanities Council with funding from the National Endowment for the Humanities.
The program–Longfellow and the Forging of American Identity–is designed to bring the life and work of Maine's Henry Wadsworth Longfellow back into the curriculum–in English, Social Studies, American Studies, Art, Music, and other subjects. See their work!
Using Source Documents in the Classroom
This lesson plan introduces teachers how to use a source document and the Maine Memory Network in classrooms. It can be used in any grade and will require one or more source documents, which can be found by searching the Maine Memory Network for the topic of your choice.
Download the lesson plan (PDF, 16K)
Mapping Portland, 1690 - 1900
Historical maps, like all historical documents, can be interpreted in many ways. This lesson plan uses five maps to trace the development of Portland from its earliest settlements.
Interested in sharing a lesson plan that you've created?
|
<urn:uuid:bc8b13bb-4f6e-4141-9d4a-11991dbe37e0>
|
{
"dump": "CC-MAIN-2016-44",
"url": "http://www.mainememory.net/schools/schools_lessonplans.shtml",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718296.19/warc/CC-MAIN-20161020183838-00441-ip-10-171-6-4.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.8861379027366638,
"token_count": 277,
"score": 3.34375,
"int_score": 3
}
|
THE HOME OF MARY PHILIPSE, IN WHOM GEORGE WASHINGTON WAS INTERESTED
At first glance one would not think that the name Yonkers was derived very directly from the name of the first settlers of the region, de Jonkheer Adriaen Van der Donck. When, in 1646, he secured a large tract of land bounded by the Hudson, the Bronx, and Spuyten Duyvil Creek, this was called ” Colen Donck ” (Donck’s Colony) or ” De Jonkheer’s” (the Young Lord’s). As the Dutch “j” is pronounced “y,” the transition from Jonkheers to Yonkers was easy.
On September 29, 1672, after the death of the original owner, 7,708 acres of the princely estate were sold to three men, of whom Frederick Philipse (originally Ffreric Vlypse) was one. A few years later Philipse bought out the heirs of the other two purchasers, and added to his holdings by further purchases from his countrymen and from the Indians. On June 12, 1693, he was permitted to call himself lord of the Manor of Philipsburgh. From that day the carpenter from Fries-land, who had grown so rich that he was called ” the Dutch millionaire,” lived in state in the house he had begun in 1682.
This lord of the manor became still more important in consequence of the acceptance of his offer to build a bridge over Spuyt-den-duyvil, or ” Spiting Devil ” Creek, when the city declined to do so for lack of funds. The deed given to him stated that he had ” power and authority to erect a bridge over the water or river commonly called Spiten devil ferry or Paparimeno, and to receive toll from all passengers and drovers of cattle that shall pass thereon, according to rates hereinafter mentioned.” This bridge, which was called Kings-bridge, was a great source of revenue until 1713, when it was removed to the present site. Then tolls were charged until 1759, or, nominally, until 1779.
Part of the Manor House was used as a trading post. Everything Philipse handled seemed to turn into gold. All his ventures prospered. It was whispered that some of these ventures were more than a little shady, that he had dealings with pirates and shared in their ill-gotten gains, and that he even went into partnership with Captain Kidd when that once honest man became the prince of the very pirates whom the Government had commissioned him to apprehend. And Philipse, as a member of the Governor’s Council, had recommended this Kidd as the best man for the job! It is not strange that the lord of the manor felt constrained to resign his seat in the council because of the popular belief in the statement made by the Governor, that ” Kidd’s missing treasures could be readily found if the coffers of Frederick Philipse were searched.”
Colonel Frederick Philipse, the great-grandson of Captain Kidd’s partner, enlarged the Manor House to its present proportions and appearance. He also was prominent in the affairs of the Colony. He was a member of the Provincial Assembly, and was chairman of a meeting called on August 20, 1774, to select delegates to the county convention which was to select a representative to the First Continental Congress. Thus, ostensibly, he was taking his place with those who were crying out for the redress of grievances suffered at the hands of Great Britain. Yet it was not long until it was evident that he was openly arrayed with those who declined to turn from their allegiance to the king.
The most famous event that took place in the Philipse Manor was the marriage, on January 28, 1758, of the celebrated beauty, Mary Philipse, to Colonel Roger Morris. A letter from Joseph Chew to George Washington, dated July 13, 1757, shows that, in the opinion of the writer at least, the young Virginian soldier was especially interested in Mary Philipse. In this letter, which he wrote after his return from a visit to Mrs. Beverly Robinson in New York, the sister of Mary Philipse, he said:
” I often had the Pleasure of Breakfasting with the Charming Polly, Roger Morris was there (Don’t be startled) but not always, you know him he is a Lady’s man, always something to say, the Town talk’t of it as a sure & settled Affair. I can’t say I think so and that I much doubt it, but assure you had Little Acquaintance with Mr. Morris and only slightly hinted it to Miss Polly, but how can you be Excused to Continue so long in Phila. I think I should have made a kind of Flying March of it if it had been only to have seen whether the Works were sufficient to withstand a Vigorous Attack, you a soldier and a Lover, mind I have been arguing for my own Interest now for had you taken this method then I should have had the Pleasure of seeing youmy Paper is almost full and I am Convinced you will be heartily tyred in Reading ithowever will just add that I intend to set out tomorrow for New York where I will not be wanting to let Miss Polly know the Sincere Regard a Friend of mine has for herand I am sure if she had my Eyes to see thro would Prefer him to all others.”
While it is true that George Washington went to New York to see the charming Polly, there is no evidence that he was especially interested in her.
Colonel Morris later built for his bride the Morris-Jumel Mansion, which is still standing near 160th Street. Mrs. Morris frequently visited at the home of her girlhood. The last visit was paid there during Christmas week of 1776. Her father, who had been taken to Middletown, Connecticut, because of his activities on the side of the king, was allowed to go to his home on parole.
In 1779 the Manor House and lands were declared forfeited because the owner refused to take the oath of allegiance to the Colonies, and Frederick Philipse, III, went to England.
The property was sold in 1785. Until 1868 it was in the hands of various purchasers. To-day the Manor House is preserved as a relic of the days when Washington visited the house, when loyalists were driven from the doors, and when it was the centre of some of the important movements against the British troops.
|
<urn:uuid:88e47482-0705-4e90-abae-d0f5d6835d98>
|
{
"dump": "CC-MAIN-2015-48",
"url": "http://travel.yodelout.com/the-philipse-manor-house-yonkers-new-york/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398452385.31/warc/CC-MAIN-20151124205412-00220-ip-10-71-132-137.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9891726970672607,
"token_count": 1396,
"score": 2.96875,
"int_score": 3
}
|
Oliver Joseph Lodge
|Sir Oliver Joseph Lodge|
Vanity Fair cartoon
|Born||June 12, 1851|
|Died||August 22, 1940|
Lake, Wiltshire
|Occupation||Physicist and inventor|
Sir Oliver Joseph Lodge (June 12, 1851 – August 22, 1940) was a pioneer in the science and technology that led to the development of radio. Besides his work in physics and radio technology, he was known for his interest in the paranormal, and he wrote a book about communicating with his son who perished in World War I. In later life, he lectured widely on the existence of the spiritual world.
Born in Penkhull, Staffordshire, Lodge was the eldest of eight children of Oliver Lodge, a vendor of supplies to the local pottery industry, and Grace Heath. Among his brothers and sisters were the historian, Sir Richard Lodge; Eleanor Constance Lodge, historian and principal of Westfield College, London; and Alfred Lodge, a mathematician.
Lodge attended Adams' Grammar School, but his interest was sparked when, during a visit to London, he was encouraged to attend lectures on scientific subjects. Some of these were given at the Royal Society of London by John Tyndall, the renowned physicist. When Lodge was 16, he enrolled in educational courses in South Kensington, and succeeded in obtaining the highest grade in his class. When it became apparent that he excelled in scientific subjects, his father gave up the idea of having his son work for him, and Lodge was allowed to pursue a career in science. He obtained a scholarship to the Royal College of Science in London, where he studied from 1872 to 1873.
In 1873, he entered the University College London, where the curriculum included advanced mathematics. Lodge was inspired by the electrical theories of James Clerk Maxwell, who demonstrated theoretically that light is a form of electromagnetic radiation. During this period Lodge had attended lectures by Maxwell, and would later conduct a brief correspondence with the famous scientist. Lodge graduated in 1875, and was awarded his doctorate in 1877. Lodge then married Mary Marshall, who over the years would bear him six boys and six girls. At this time, he supported himself and his wife by serving as a research assistant at University College, and by giving lectures in physics at a nearby college.
In 1881, he was appointed professor of physics and mathematics at University College, Liverpool. Lodge then traveled to Europe to procure equipment for a new laboratory, and there he met Heinrich Hertz, who was at that time an assistant to the famous physicist Hermann von Helmholtz. Hertz would become the first scientist to publish successful results on the production and detection of electromagnetic waves. It was during this period that Lodge developed an interest in paranormal phenomena and spiritualism, which he was to pursue throughout his lifetime.
Proving Maxwell's theories
After completing his doctorate, Lodge worked with the Irish physicist George Francis Fitzgerald to clarify the implications of Maxwell's theory of electromagnetism, and to explore the way in which electromagnetic waves could be generated from circuitry. At that time, however, Fitzgerald did not believe such waves could be produced, and Lodge, in deference to Fitzgerald's judgment, temporarily gave up his attempt to produce them. In 1883, Fitzgerald reversed his own position and calculated the energy of the waves that could be generated by electromagnetic oscillations.
In the late 1880s, Lodge became interested in lightning, and believed that lightning rods would fail to work because of a phenomenon called inductance, which opposes the unfettered conduction of electricity in even good conductors such as copper. As a result, he insisted that a lightning bolt will not always take the path of least electrical resistance offered by a lightning rod. He experimented with the leyden jar, a simple device that holds a static electric charge, and compared its discharge in the form of a spark with lightning. While some of his ideas in this regard proved to be mistaken, they led to his discovery of electromagnetic waves.
Discovery of radio waves
During a series of lectures on lightning he gave in 1888, Lodge realized that he could create what are called standing electromagnetic waves along a wire in much the same way as a single note and its overtones are produced in a musical instrument. These were radio waves, which were like light waves but of a much lower frequency.
In July of 1888 Lodge submitted his results for publication in the form of a paper titled "On the Theory of Lightning Conductors," in which he clearly discusses the velocity, frequency, and wavelength of electromagnetic waves produced and detected in a circuit. Before the paper went to print, however, he discovered that Hertz had already published a memoir in which he described his efforts to generate and detect waves transmitted across space. Lodge credited Hertz in a postscript to his own paper, which was published later that year.
In a well-publicized lecture in 1894 on the work of Hertz, who had passed away earlier that year, Lodge demonstrated the possibility of using electromagnetic waves as a medium of communication. He then formed a partnership with Alexander Muirhead, an electrical engineer, to develop commercial applications for his discoveries.
Lodge the businessman
Lodge, alone and in conjunction with Muirhead, patented several inventions relating to radio communication in Great Britain and in the United States. The two men formed the Muirhead Syndicate in 1901 to manufacture radio equipment, but in 1911, their patents were bought out by radio pioneer Guglielmo Marconi and the partnership was dissolved. In 1943, the United States Supreme Court relieved Marconi of some of his U.S. patents in favor of Lodge and other early inventors of radio technology.
In 1900 Lodge moved from Liverpool back to the Midlands and became the first principal of the new Birmingham University, remaining there until his retirement in 1919. Lodge was awarded the Rumford Medal of the Royal Society in 1898 and was knighted by King Edward VII in 1902.
In 1917 and 1918, Lodge engaged in a debate with Arthur Eddington over the validity of Albert Einstein's theory of relativity. Lodge proposed his own theory, called the "electrical theory of matter," by which he hoped to explain relativistic phenomena such as the increase of mass with velocity.
Lodge continued to write and lecture in the remaining years of his life, particularly on life after death. He died on August 22, 1940, and is buried at St. Michael’s Church, Wilsford (Lake), Wiltshire.
To create a detector of radio waves that was more sensitive than a spark gap, Lodge improved a device invented by Edouard Branly. It is called a coherer because it relies on the fact that iron filings enclosed in a glass tube will clump together in the presence of radio waves and conduct electricity. Lodge devised a "trembler," which dislodged clumped filings and reset the device. The coherer served as an on-and-off switch triggered by radio waves, making it possible to transmit alphabetic characters in code.
On August 14, 1894, Lodge made what is often considered to be the first demonstration of broadcasting radio signals at the annual meeting of the British Association for the Advancement of Science, at Oxford University. This was two years before Marconi's first broadcast of 1896. Lodge patented the moving-coil loudspeaker and the variable tuner and other devices he had invented in pursuit of perfecting radio technology in the latter part of the decade.
The spark plug
Lodge also made a major contribution to automotive engineering when he invented the electric spark plug for the internal combustion engine, called the "Lodge Igniter." Later, two of his sons developed his ideas and in 1903 founded Lodge Bros., which eventually became known as Lodge Plugs Ltd.
Electric theory of matter
Lodge generally opposed Einstein's special and general theories of relativity, and proposed his own, which he called "The electrical theory of matter." Through this theory, Lodge attempted to explain the deviations of Mercury's orbit around the Sun from what is predicted by Newton's theory. Lodge attributed the discrepancy to a kind of inertial drag generated by motion relative to the "ether," the hypothetical medium in which electromagnetic waves are propagated.
Lodge is also remembered for his studies of life after death. He first began to study psychical phenomena (chiefly telepathy) in the 1880s through the Society for Psychical Research. In the 1890s, Lodge participated in seances. He wrote several books based on his experiences with the paranormal, including one in 1890 in which he analyzed 22 sittings with a Mrs. Piper, an American psychic and spiritual medium. After his son, Raymond, was killed in World War I in 1915, Lodge visited several psychics and wrote about the experience in a number of books, including the best-selling Raymond, or Life and Death (1916). Altogether, he wrote more than 40 books on topics including the afterlife, aether, relativity, and electromagnetic theory.
Lodge was a member of the Society for Psychical Research and served as its president from 1901 to 1904. He was also a member of the British Association for the Advancement of Science.
In 1889, Lodge was appointed President of the Liverpool Physical Society, a position he held until 1893. The society still runs to this day, though under a student body.
Lodge was an active member of the Fabian Society and published two Fabian Tracts: Socialism & Individualism (1905) and Public Service versus Private Expenditure which he co-authored with Sidney Webb, George Bernard Shaw, and Sidney Ball.
In 1898 Lodge was awarded the Rumford Medal of the Royal Society of London. King Edward VII of Great Britain knighted Lodge in 1902.
Sir Oliver Lodge's letters and papers were divided after his death. Some were deposited at the University of Birmingham and the University of Liverpool and others at the Society for Psychical Research and the University College London. Lodge, who lived a long life, was a prolific letter writer and other letters of his survive in the personal papers of other individuals and in several other universities and other institutions.
Publications by Lodge
- Electric Theory of Matter (Oneill's Electronic Museum). Retrieved June 20, 2007.
- The Work of Hertz and Some of His Successors, 1894
- Relativity: A Very Elementary Exposition, 1925
- Ether, Encyclopedia Britannica, thirteenth edition, 1926.
- Ether and Reality
- Phantom Walls
- Past Years: An Autobiography
References
- Burns, Russell W. 2003. Communications: An International History of the Formative Years. London: Institution of Electrical Engineers. ISBN 0863413277
- Coe, Lewis. 1996. Wireless Radio: A Brief History. Jefferson, N.C.: McFarland, p. 5. ISBN 0786402598
- Davis, E. A., ed. 1997. Science in the Making. London: Taylor and Francis. ISBN 0748406425
- Eisenstaedt, Jean, and Kox, Anne J. 1992. Studies in the History of General Relativity. New York: Birkhäuser, pp. 62–65. ISBN 0817634797
- Garratt, G. R. M. 1994. The Early History of Radio: From Faraday to Marconi. London: Institution Electrical Engineers. ISBN 0852968450
- Hunt, Bruce J. 2005. The Maxwellians. Ithaca: Cornell University Press. ISBN 0801482348
- Oppenheim, Janet. 1985. The Other World: Spiritualism and Psychical Research in England, 1850–1914. New York: Cambridge University Press. ISBN 0521265053
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
|
<urn:uuid:9d98e533-6c3a-4767-9506-86f1209a7bd2>
|
{
"dump": "CC-MAIN-2023-50",
"url": "https://www.newworldencyclopedia.org/entry/Oliver_Joseph_Lodge",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100739.50/warc/CC-MAIN-20231208081124-20231208111124-00338.warc.gz",
"language": "en",
"language_score": 0.9685529470443726,
"token_count": 2580,
"score": 2.765625,
"int_score": 3
}
|
When choosing a grow light, it is extremely important to remember that light is as important as water to your plants. It helps them produce chlorophyll and supports photosynthesis. Growing outside in natural sunlight, plants receive a "full spectrum" of light. Reds and blues are the colors at the ends of the spectrum that are key to plant health. Blue light directly affects chlorophyll production, creating strong leaves and stems. Red light is responsible for fruit and flower production and is especially important in the beginning stages of plant growth - seed and root development. Indoor growers seek to create as varied and optimal a spectrum of lighting as possible to support their plants at different stages of growth. It's all about choosing the right lighting for your grow.
There are many grow light choices available and several things to consider when choosing one. Which is best for you? Start by considering a few key things, such as your overall operating budget. How much money are you prepared to spend, both on the initial investment and on day-to-day operation? How much area are you lighting? How long will your lights be used? Are you planning a year-round garden? Will you have a perpetual (clone, veg, and flower) grow? What kind of plants are you growing?
As you can see, you need to put some thought into choosing a grow light.
This is a breakdown of some of the most common lighting options available and by no means a complete list of everything out there. We will discuss the different lighting types, their uses, and their pros and cons to help you narrow the field and determine which one is best for you.
T5 Fluorescent Tubes
This type of lighting is best for seedlings, clones, and small plants, and it runs cool.
T5s typically come in two different lengths, 2 and 4 foot, and have 1, 2, 4, 6, 8 or 12 bulb fixture configurations. Obviously, if you’re growing a few small plants, you’ll be able to use a smaller set up (shorter length, 1 or 2 bulb fixture).
Color temperature is also a factor you’ll want to consider. As a general guideline when plants are in the vegetative stage use a 6500K bulb, which emits a bright light like on a summer day. In the flowering stage use a 3000K bulb, which is more like the light at the end of the day - sunset.
- Less expensive than most other types of grow lights.
- Covers a large area of lighting.
- Longer life in comparison to other types of light (+20,000 hours with very little decrease in efficiency).
- Runs cool, eliminating the need for ducting and ventilation.
- Not as effective in the vegetative and flowering stage as the other types of lights.
High-Intensity Discharge (HID)
This is a category of lighting that includes: Metal Halide (MH), High-Pressure Sodium (HPS), Double-Ended (DE), and Ceramic Metal Halide (CMH) lamps. These lights require a reflector/fixture, a ballast, and a lamp (often referred to as a bulb). These components are sold separately or purchased as a kit. Buying as a kit ensures compatibility. These lights emit the brightest light and produce high amounts of heat.
One of our favorite ballasts is the SolisTek 600/400W SE/DE Digital Ballast 120/240V.
- Lamps are dimmable in many instances so you can regulate and customize your lighting.
- A digital ballast will operate many types of HID lamps.
- Life span of these bulbs can range between 10,000-24,000 hours depending on the type.
- Produces much more usable light than a fluorescent.
- Lamps produce high heat - you’ll want ducting and ventilation.
- Higher energy costs.
- Requires additional equipment. Ballast and reflector. You may need different ballasts with different types of lamps (MH and HPS).
- Lamp effectiveness and efficiency lessen over time.
Metal Halide (MH)
- Emit a high output/high quality of light on the blue end of the spectrum (like hot summer sun).
- Especially effective in the vegetative stage in plant development.
- Lamps themselves are relatively inexpensive.
- Shortest life span of all HIDs at 10,000-20,000 hours.
- Do not emit the red portion of the light spectrum that plants need to flower. Plants may still flower but the yield will not be as high.
- Burn hot.
- Quality and quantity of the light lessens with use.
High-Pressure Sodium (HPS)
- Emits light on the red end of the spectrum (like in the fall), triggering the flowering stage of plant growth.
- Life span somewhat longer than MH at 12,000 - 24,000 hours.
- No emission of the blue light that plants need in the vegetative stage of development
- Burns hot
- Quality and quantity of light lessens with use
If you are growing plants that flower and fruit you may want to use MH and HPS together to get the benefits of the full-range lighting.
DE lighting is a type of HPS that has come onto the market in the last few years. It connects to a ballast at each end (kind of like a fluorescent tube), unlike the single-ended (SE) HPS mentioned above. As with all HIDs, you need to ensure that the reflector/fixture, ballast, and lamp are compatible.
- DE lighting has a longer life span and remains efficient longer than the single-ended HPS. Although the SE loses much of its efficiency around 10,000 hours, DE can retain as much as 90% of its efficiency at that same age.
- Because DE lamps are thinner, they cast more light over the grow space than SE lamps at the same height. This makes their use more efficient in terms of lighting and energy use.
- Emits more ultraviolet and infrared light than SE which, for some plants, increases potency and oil production.
- DE lighting produces more heat than SE lighting, leading to possible heat damage to plants.
- DE lighting does not tolerate direct contact with air blowing from circulation fans. It will cause a decrease in efficiency.
Ceramic Metal Halide (CMH) or Light Emitting Ceramics (LEC) [used interchangeably]
We offer complete CMH fixtures such as the SolisTek C2+ 630W Dual CMH Complete Fixture 120/240/277V, a fantastic complete fixture that is controller ready, guarantees high output and consistent spread, has incredible PAR output that can replace 1000W HID systems, and comes with 2 x SolisTek CMH lamps.
Additional CMH grow lights can be found here.
- Life-span is twice as long as MH or HPS.
- Energy efficiency. Costs less to run. Some areas offer incentives to offset purchase costs because of the energy efficiency rating.
- Burns hotter than MH and emits light closer to natural sunlight, but has an insulating quality that decreases the heat output, making it less likely to burn plants than MH and HPS.
- Light quality supports both vegetative and flower stage of plant growth. Both UV and infrared rays creating light close to that of natural sunlight.
- Steadier beam. Plants receive more light.
- Less electromagnetic interference (EMI) than with other digital ballasts because of the special ballast required to run them.
- Only magnetic ballasts can be used.
- Positional. They cannot be placed at an angle; they can be mounted only vertically or horizontally.
- Protection is needed from the UV light. Wear long sleeves and protective sunglasses, or work during a dark cycle with the use of a green LED headlamp.
Light Emitting Diodes (LED)
Light Emitting Diode (LED) grow lights are an energy-efficient option for indoor growers. Unlike other types of grow lights, LEDs do not heat a filament or strike an arc; instead, light is generated within semiconductors, which is what creates their spectrum. LED grow lights can be used as the sole means of lighting your grow, as a supplement to natural light, or paired with other types of grow lights.
Unlike other types of grow lights, light emitted by LEDs can be focused, so no light is dispersed or lost between the bulb and the canopy of the plants.
We offer several LED grow lights and are adding new ones all the time. Click here to check out our LED grow lights.
- Long life-span. They can last over 50,000 hours.
- Energy-efficient. Cheap to run.
- Generally, produce more light per watt than fluorescents and HIDs.
- Run cool. Produce little to no heat.
- Good for small growing spaces.
- No ballasts required. Just plug into standard outlets.
- Spectrum of light is wide and customizable depending on your growing needs.
- Wide array of lights with many features and benefits available on the market.
- Considered the easiest light to grow with.
- Cost. The purchase price is much higher compared to other lighting choices.
- May not emit as much light as expected. Always make sure the emitted light is at least 2.0 micromoles per watt of energy (see the sketch after this list for a quick way to check this).
- May not be effective for the flowering stage (depending on the manufacturer). Do your research based on what you are growing.
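To make the efficacy check and the running-cost comparison concrete, here is a small sketch. The fixture names, wattages, photon flux (PPF) values, and the electricity rate are placeholder assumptions for illustration, not manufacturer specifications; substitute the figures from the spec sheets of the lights you are actually comparing.

```python
# Illustrative comparison of photon efficacy and daily running cost.
# All numbers below are placeholder assumptions, not vendor data.
FIXTURES = {
    # name: (wall watts, photon flux in micromoles per second)
    "4-bulb T5": (216, 350),
    "600W HPS":  (600, 1100),
    "630W CMH":  (630, 1150),
    "LED panel": (480, 1100),
}
RATE_PER_KWH = 0.15   # assumed electricity price in $/kWh
HOURS_PER_DAY = 18    # typical vegetative-stage photoperiod

for name, (watts, ppf) in FIXTURES.items():
    efficacy = ppf / watts                              # micromoles per joule
    cost = watts / 1000 * HOURS_PER_DAY * RATE_PER_KWH  # $ per day
    print(f"{name:10s}  {efficacy:.2f} umol/J  ${cost:.2f} per day")
```

Efficacy in micromoles per joule is the same quantity as the "micromoles per watt" guideline mentioned above, so the 2.0 threshold applies directly to that column.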
Bottom Line: When choosing a grow light consider what you are growing, your growing needs, the space you are growing in, and the budget you are operating within. There’s quite an array to choose from today.
|
<urn:uuid:60bf0b20-fa43-4ebf-afff-de6d12582744>
|
{
"dump": "CC-MAIN-2020-24",
"url": "https://yourgrowdepot.com/blogs/gardening-tips/choosing-a-grow-light",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413551.52/warc/CC-MAIN-20200531151414-20200531181414-00094.warc.gz",
"language": "en",
"language_score": 0.9312686324119568,
"token_count": 2046,
"score": 2.65625,
"int_score": 3
}
|
School Food Waste
A few years ago WRAP estimated that 72 grams of compostable waste was generated per pupil per day in primary schools and 42 grams per pupil per day in secondary schools. If you have contacts with a school that wishes to reduce its total waste and compost the organic portion (including cooked food), a waste audit will identify the size of the problem and enable the selection of a composter, or composters, of a size appropriate to the waste produced.
It is possible to get a general idea of the waste produced from an audit conducted on a single day, and the Eco-Schools Waste Audit for Schools, to be conducted by pupils, is based on a one-day audit. However, if the audit is to provide the estimate on which the purchase of an appropriately sized composter will be based, an audit over the five days of a typical term-time week is recommended.
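Ahead of the audit, the WRAP per-pupil figures can be used for a rough first sizing of the problem. The sketch below does this; the pupil numbers are placeholders to be replaced with the school's own roll, and the real audit figures should supersede the estimate.

```python
# Rough first estimate of weekly compostable waste, based on the WRAP
# per-pupil figures quoted above. Pupil numbers are placeholders.
GRAMS_PER_PUPIL_PER_DAY = {"primary": 72, "secondary": 42}

def weekly_waste_kg(phase, pupils, school_days=5):
    """Estimated compostable waste in kilograms over a term-time week."""
    return GRAMS_PER_PUPIL_PER_DAY[phase] * pupils * school_days / 1000

print(f"Primary school, 300 pupils:   {weekly_waste_kg('primary', 300):.0f} kg/week")
print(f"Secondary school, 900 pupils: {weekly_waste_kg('secondary', 900):.0f} kg/week")
```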
This audit also offers the opportunity to provide a baseline against which a food waste reduction programme can be measured. To enable the success of the food waste reduction programme and the composting project to be identified, it is recommended that during the waste audit the food collected from the catering areas is separated into:
- Uncooked kitchen scraps, e.g. peelings
Cooked food & dairy
- Plate scrapings
- Leftover unsold food
- Sandwiches and bread
- Soups, yogurt and dairy products
There is scope for improvement and an opportunity to involve the next generation in dealing with this problem we have dumped on them.
|
<urn:uuid:c546ca54-b946-46af-8544-3dc0ddc0794a>
|
{
"dump": "CC-MAIN-2018-34",
"url": "http://www.carryoncomposting.com/142941483/4463100/posting/school-food-waste",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210413.14/warc/CC-MAIN-20180816034902-20180816054902-00012.warc.gz",
"language": "en",
"language_score": 0.9521176815032959,
"token_count": 309,
"score": 3.21875,
"int_score": 3
}
|
processors in demanding
applications? This is a vital
question, and its answer shines
light on why FPGAs are
increasingly proliferating in
electronic equipment these days.
For starters, programmable
logic devices are capable of
working at much higher clock
rates than eight- and even 32-bit
FPGAs easily clock at 200 to 300
MHz, and more specialized
devices such as the UltraSCALE™
products from Xilinx (see Figure
2) can handle clock frequencies
close to a GHz. This brings with
it raw power which is even more
magnified when one realizes that
FPGAs do not require so many
clock cycles to execute
algorithms as processors do.
Instead, they are simply an
interconnection of gates,
transforming bits and bytes on-the-fly in real time. Thus,
complicated algorithms that require many instructions can
be profitably implemented in an FPGA, where processing
is only limited by the propagation delay of logic gates (and
a small overhead from gate interconnections). These
features can and do make FPGAs blazingly fast when
compared to microcontrollers.
Another advantage that FPGAs have over
microcontrollers is in the sheer profusion of I/O pins they
have. Whereas a typical microcontroller may give you 20
to 30 I/O pins, even a low-end FPGA typically provides
around 50 I/O pins. Larger FPGAs sport hundreds of pins
(see Figure 3), so you almost never run out of
connectivity with the external world.
As an aside, the large number of pins means that
FPGAs come in advanced multi-pin packages such as high
pin count quad flat packs and dense ball grid arrays. This
makes it difficult to solder them to PCBs (printed circuit
boards), but as we will see in Part 2, there are ways
Added to that is the flexibility with which one can use
the I/O pins. Thus, one set of pins can handle signals at
3.3V logic levels, whereas another set handles signals at
2.5V or even 1.8V. You can mix and match I/O banks
such that it is easy to perform logic level translation from
one standard to another. Note that 5V logic levels are
usually not supported on current FPGA chips.
Yet another attractive feature of FPGAs is the large
amount of logic resources they make available at very
reasonable costs. Even a small FPGA costing about $5
comes with the logic equivalent of tens of thousands of
gates. Larger devices easily give
you access to millions of gates
inside a single chip. The logic
resources can, of course, be
interconnected (configured) in
any way you like to realize
almost any digital function; be it
as simple as a basic AND-OR-NOT logic expression or an
entire Ethernet controller.
What is more, given the large
number of I/O pins and logic
elements, you can put a lot of
completely different digital
designs in a single FPGA. This is
very useful where one wants to
consolidate a number of different
digital ICs into a single physical
package. This approach saves
money and space, making
systems smaller, lighter, and
cheaper. Logic consolidation is a
big reason for the increasing
sales of FPGAs for commercial,
industrial, and military products.
Finally, over time, FPGAs have grown into veritable
SoCs so that other useful functions (besides just an array
of uncommitted logic elements) now come integrated on
the chip. Commonly available features include phase
locked loops (PLLs) for generating almost any frequency
on a chip, embedded memory blocks, fixed (and even
floating point) multipliers, and analog-to-digital converters.
High-end FPGAs also feature integrated high speed
serial data transceivers. Using these integrated functional
blocks with user defined logic, one can build extremely
complex digital systems on a single chip.
When first introduced, FPGAs may look daunting.
How do you go about building useful systems with
something that is basically just an array of logic elements?
It turns out that this is simpler than one might imagine.
You can certainly use a gate-level circuit diagram for
simple circuits, but that will probably not end up using
even 1% of the logic resources available in modern
FPGAs. Proper utilization of the power of FPGAs demands
a radically different approach to describing circuit behavior.
The secret lies in using a hardware description
language (HDL) to describe the desired functionality.
There are several of these around but two — VHDL and
Verilog — are, by far, the most common HDLs; they are
employed by 99% of FPGA users around the world. Here
is the fun part — an HDL allows a developer to describe
the desired system in a high-level form without worrying
about its exact implementation in terms of logic gates.
FIGURE 3. Top and bottom of a ball grid
array (BGA) FPGA package showing large
numbers of mainly I/O pins.
(Courtesy of Lattice Semiconductor Corporation.)
|
<urn:uuid:edb7a428-c6bb-4a24-879c-544d66234fcd>
|
{
"dump": "CC-MAIN-2020-16",
"url": "http://nutsvolts.texterity.com/nutsvolts/201710/?pg=40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370515113.54/warc/CC-MAIN-20200403154746-20200403184746-00417.warc.gz",
"language": "en",
"language_score": 0.8969808220863342,
"token_count": 1148,
"score": 3.90625,
"int_score": 4
}
|
Energy and Power
If an embedded system is to make anything happen in the world, it will have to deploy energy to effect this. Moreover an essential part of the specificiation of an embedded system is that it should cause events to happen at the right time. So the energy required to effect the change will have to be deployed sufficiently fast, that is the embedded system will have to have the appropriate amount of power available. Now operating at low power is a key ingredient of modern processors, so typically there will not be enough power available at the output terminals of a processor to cause the required effects of the system on the world. Thus an embedded system will typically need to make use of power-amplifiers. Likewise, the inputs of most processors are designed to accept digital signals. Many of the devices that an embedded system can use to sense the state of the world do not directly produce digital signals compatible with a processor. So an embedded system may need to embody signal-processing devices to convert the raw output from sensors to a suitable form for input to a processor.
Since a processor is electronic in its nature, we will need to have a firm understanding of electricity. But first we must begin by refreshing our knowledge of some basic concepts of physics.
The scientific unit of time is the second. For many years scientists measured time in terms of the earth's rotation relative to the "fixed" stars. However the earth's rotation is actually slowing down, and astronomers' clocks have now become good enough to notice. Consequently the time-standard is now atomically based.
In our electronic systems things happen rather fast, so it's desirable to subdivide the second.
Time can be measured with great precision, a fact that can be very useful in electronics. The best clocks in use by astronomers today have a precision better than one part in 10^12. To put it another way, such a clock would lose or gain a few seconds in a million years. While we will not need such precision, it's worth remembering that the most precise electronic component we can buy for a modest couple of dollars is a crystal oscillator that provides an excellent time-reference.
A mnemonic for getting a feel of the newton as a unit is that one newton is the weight of an apple - think of it as the force exerted on the palm of your hand by an apple lying on it.
The standard physical unit of energy is the joule. It is the energy released when a force of one newton acts through a distance of one metre. Being a mechanical unit, it is quite convenient when we come to think about how electricity can cause mechanical effects, such as turning an electric motor.
However it is also basic to understanding the electrical unit the volt, as we shall see.
A homely way of thinking about a joule is that it is roughly the energy released when an apple falls one metre.
Heat energy is usually measured in terms of the calorie, or the big calorie (Cal) which is the amount of energy required to raise the temperature of one kilogram of water by one degree centigrade.
Power is the rate of transfer of energy. The standard physical unit of power is the watt which is a rate of energy transfer of one joule per second.
Energy is conserved. That is to say the total amount of energy in a closed system remains constant (unless we're doing high-energy physics experiments). This means that energy is a very useful unifying concept for thinking about systems, such as embedded systems, which have both electrical and mechanical components.
Just as money is the universal currency in economic systems, energy is the universal currency of physical systems - electrical, mechanical, hydraulic etc.. Unless you grace the tables of very fancy establishments, you will always have a good idea how much your lunch ought to cost, and be outraged if you are presented with a bill for 50$ when you expected something around 5$. Likewise you should get in the habit of estimating the energy cost of performing a given action (for example switching the track-switches for the train).
Analysing engineered systems in terms of the flow of energy provides a unifying principle. Generally any electronic system can be improved by careful attention to energy-flow. A system that wastes energy is likely to be larger and more expensive than one which conserves it. Nowhere is this more apparent than in computation, where the amount of energy required to represent a bit of information in any form has plummeted over the last 40 years.
Thus the early vacuum-tube computers consumed several watts of energy to represent the state of one bit of information in a CPU. Modern processor-chips operate at levels of a microwatt per bit. In achieving such high performance, power density, that is the power consumed per unit of volume, is a crucial factor - a modern chip would be instantly vaporised if individual bits-of-memory operated at 1 watt each!
If energy is conserved, why is energy supply an issue? The answer to this question is that energy, while it is conserved, is inexorably converted from a more useful form to the less useful form of heat. Generally, if a circuit is producing a lot of heat this is an indication that it might be improved by redesign, or that it has been wrongly designed or built. Beware! components that have been wrongly inserted in a circuit often get hot enough to burn you, so feel your components with circumspection.
It is sometimes helpful to think of the flow of electricity as being like the flow of a fluid, say water. While it is important not to take this analogy too far, it can often help us visualise the rather intangible phenomena associated with electricity in more tangible terms.
Electrons are the "atoms" of electricity. In fundamental physics the electron carries -1 units of electric charge, just as the hydrogen atom is one unit of mass for chemists
However, one electron is a rather small amount of charge to be working with. The macroscopic unit of charge is the coulomb. 1 coulomb is minus the charge carried by 0.624142×10^19 electrons. [Why a negative charge on the electron?]
Thus the coulomb is the basic measure of quantity of electricity, as the litre or gallon is a measure of quantity of water.
Electric charge is conserved - the total charge in a closed region of space remains constant. Indeed, in any work we shall be doing, the total number of electrons in a closed region of space remains constant [exceptions?].
Of course, positively charged particles exist. The protons that are a major constituent of the atomic nucleus are positively charged. However, the solid state is characterised by the immobility of atomic nuclei. Most of the devices we shall encounter use matter in the solid state.
In anything we do in this class, positive and negative charges will always be very nearly in balance. This arises from the fact that like charges repel each other, while opposite charges attract. This can be understood (in terms of classical physics) as the existence of a force between any two electrons which acts along the line joining them and which is inversely proportional to the square of the distance between them.
So, getting together a large number of like charges costs a lot of energy. This also accounts for the fact that the analogy between the flow of electricity and the flow of water breaks down. Water molecules attract each other moderately (moderately that is compared with the attraction between the atoms that make up the water molecule). So it doesn't take a lot of energy to collect a lot of water molecules together - in normal conditions water vapour will condense to form liquid water. On the other hand it would take an enormous amount of energy to bring a coulomb of electrons together into (say) a litre space that was devoid of positive charge.
Current is the rate of flow of electric charge. One ampere(A), or amp, is a flow of charge of one coulomb per second.
One milliamp is a thousandth of an amp: 1 mA = 10^-3 A. One microamp is a millionth of an amp: 1 µA = 10^-6 A. We also speak of nanoamps (10^-9 A) and picoamps (10^-12 A).
Ordinary matter consists of electrons (negatively charged) and atomic nuclei (positively charged). We can safely regard nuclei in a solid as remaining fixed in position and unchanged in nature. Almost all of the mass of matter is attributable to its nuclei. By contrast, some electrons in some solid forms of matter can move over long distances; in doing so, they constitute an electric current. Whether or not some electrons in a given solid material can move over long distances is a fundamental property of the material.
A metal is a material in which every atom has some electrons that are so loosely bound to the nucleus that they are free to move. Such electrons are called conduction electrons.
An insulator is a material in which all electrons are tightly bound to one specific atomic nucleus, so electric current cannot flow.
In a semiconductor electrons can be made to flow in particular circumstances, which property allows engineers to construct all manner of clever devices. Modern electronics depends greatly on the properties of semiconductors.
Since electrons repel each other, if a piece of material has excess electrons, energy is required to add even more electrons. Likewise, if a piece of material has a deficit of electrons, energy is required to take electrons away from it. We say that two pieces of metal A and B have a potential difference of one volt (1V) between them if it would require one joule to move a charge of one coulomb from A to B, or more precisely the energy cost of moving a small amount of charge from A to B would scale up to requiring one joule to move a charge of one coulomb.
Returning to our water analogy, we can think of a piece of metal as being like a trough of water. If we have two troughs A and B, with B being at a higher level, it costs energy to move a quantity of water from A to B.
It follows that when a current of I amps flows through a potential difference of V volts, the power required is given by:
P = V × I watts
This power may appear in the form of heat, light, or mechanical power. Or it may be stored to be released in electrical form later.
In a piece of metal free of outside influences the conduction electrons adjust themselves so that the potential difference across any two points is zero.
We can think of this in our hydraulic analogy by observing that if we have a trough of water free of outside influence, the water in the trough will be level - that is the energy cost of moving an infinitesimal quantity of water from one place in the trough to another is zero.
In the case of an insulator or semiconductor, we still have the concept of electric potential. But for a given piece of insulating material, the potential can vary widely across the surface, so we must always think of the potential at a point in the surface. For example, standard photocopying technology involves creating a pattern of charge on a drum coated with semiconductor material which exactly matches the pattern of black-and-white on the page being copied.
We have millivolts(mV), microvolts (µ V) in common use, and kilovolts (kV) too (but not exposed in our lab!).
Current goes through a wire or component Voltage exists between two points in a circuit, We also say voltage across two points. Voltage at a point means voltage between that point and "ground" "Ground" can be chosen arbitrarily; it is not necessarily related to the actual earth.
In electronics, information is carried in a signal, which is a time-varying value of some physical variable, usually voltage, sometimes luminous flux (in optoelectronics). It always requires energy to establish a signal at a point or transmit it to a remote point, because energy is always required to cause a change in any physical variable.
Power is the rate of transfer of energy. Generally, if we are processing information we want signals to have the lowest power consistent with their reliable transmission and processing. However, if we want a signal to make things happen in the big outside world, we will usually need to raise its power level by some kind of amplifier.
Among the factors putting a lower limit on usable power is noise, which can be thought of as an unwanted signal arising from sources internal to the circuit (e.g. thermal noise arises from that vibration of atoms which is thermal energy) or external (e.g. signals picked up from radio transmitters or the mains wiring).
Electronic circuits are classified as:
Kirchhoff's Current Law states that in any electrical circuit, the sum of the currents flowing into a node is zero. This is really a statement of the conservation of electrical charge - the number of electrons flowing into a junction must be the same as the number flowing out, so, if we are careful about using the right sign when referring to our currents, the law can be related to basic physics.
Kirchhoff's Voltage Law states that the sum of the voltages taken round any cycle in a circuit-graph is zero. Or, equivalently, suppose A and B are two conductors in a circuit. Then the potential difference between A and B is the sum of the potential differences across device terminals along any path linking A to B.
It should be noted that Kirchhoff's Laws, while they form the basis for analysis of circuits, are not perfectly exact, since they assume that the devices of which a circuit is made up are connected by perfect conductors which offer no resistance to the passage of electricity and have no capacity to store electrons. For most purposes this is an adequate assumption, though for very high speed circuits we need a more complete model.
One approach to dealing with this is to treat conductors themselves as being components in a more complicated circuit.
In particular, the voltage law will fail to be true if the circuit as a whole is exposed to intense and rapidly changing magnetic fields.
We build an electronic circuit out of conductors and devices. The simplest devices have two terminals. Two-terminal devices are characterised by the relationship between voltage V across the device and current I flowing through it.
- Resistors have the current proportional to the voltage.
- Capacitors have the current proportional to the rate of change of voltage.
- Diodes allow current to flow in only one direction.
- Inductors have the voltage proportional to the rate of change of current.
- Photoresistors are light-dependent resistors.
A resistor is a device in which the voltage across the device is proportional to the current through it. This is usually written:
V = IR
Here, R is the resistance in ohms. The equation is known as Ohm's Law. It is also useful at times to think of a resistor as a device in which the current through the device is proportional to the voltage across it. This can be written:
I = VC = V(1/R), where C = 1/R is the conductance of the resistor.
From the equations V = IR and P = VI, we get two more convenient forms for the power dissipated in a resistor by eliminating V and I respectively:
P = I^2 × R and P = V^2 / R
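To make these formulas concrete, here is a small Python sketch (the function name and the 5 V / 1 kilohm example values are mine, not from the notes) that picks whichever form of the power formula fits the quantities you know:

```python
def power_dissipated(v=None, i=None, r=None):
    """Power in watts dissipated in a resistor, given any two of
    V (volts), I (amps), R (ohms): P = VI, P = I^2 R or P = V^2 / R."""
    if v is not None and i is not None:
        return v * i
    if i is not None and r is not None:
        return i * i * r
    if v is not None and r is not None:
        return v * v / r
    raise ValueError("need at least two of v, i, r")

# 5 V across a 1000 ohm resistor dissipates 25 mW -- well inside
# the 0.25 W rating of a standard carbon film resistor.
print(power_dissipated(v=5.0, r=1000.0))  # 0.025
```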
Resistors are the cheapest electronic component. Of all components, they are readily available in the widest range of values, from 1 ohm up to ten megohms (10 million ohms), in a range of preferred values which are chosen to divide each decade so that each successive value is near to being a fixed multiple of the previous one.
The most commonly used resistor is a carbon film resistor capable of dissipating a maximum of 0.25W, with a value accurate to 5%. (Horowitz and Hill have a discussion on the extent to which you can believe in this accuracy). We have available in the lab a range of resistors with standard values from 10 ohms to 1 megohm, together with a supply of 10 megohm resistors. The nominal value of a carbon film resistor (in ohms) is indicated by coloured bands on the body of the resistor. Digits are colour-coded as follows:
Black 0, Brown 1, Red 2, Orange 3, Yellow 4, Green 5, Blue 6, Violet 7, Grey 8, White 9
You can make a model in your head to remember this if you see that the middle digits (2-7) are the colours of the rainbow and then try to hang the whole thing on the idea of a body heating up (when it's cold it's black, when it's very hot it's white). The explanation isn't systematic, but it serves quite well as a mnemonic. The colour code is sometimes used for other devices. It's also worth remembering if you are making decisions about wiring where there is an obvious number associated with some conductors. For example, the 4-bit parallel interface used in the model-train system peripherals makes use of 6-wire cable intended to interface modular-telephone style plugs and sockets. The conductor colours are thus pre-determined, but the convention I have adopted is
Black: ground, Red: Digit 0, Yellow: Digit 1, Green: Digit 2, Blue: Digit 3, White: +5 V power
It's easy to remember this convention if you think of the colour-coding convention. The value of the resistor is encoded as D1 D2 E where D1 and D2 are the first two digits of the value of the resistor, and E is the number of zeros which follow. For example a resistor of 4700 ohms is encoded as yellow, violet, red. In addition resistors have a band which indicates its precision. These days this is almost certainly a gold band indicating 5% precision.
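As an illustration of the D1 D2 E encoding, here is a hypothetical Python helper (the function name and dictionary are mine) that turns the three value bands into a resistance in ohms:

```python
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}

def resistor_value(d1_band, d2_band, multiplier_band):
    """Nominal resistance in ohms from the D1, D2 and E (number of zeros) bands."""
    d1 = DIGITS[d1_band.lower()]
    d2 = DIGITS[d2_band.lower()]
    zeros = DIGITS[multiplier_band.lower()]
    return (10 * d1 + d2) * 10 ** zeros

print(resistor_value("yellow", "violet", "red"))  # 4700, matching the example above
```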
From suppliers, a range of 0.125W resistors is also available - it is primarily of interest for those who are building circuits with a high density of components - to exploit such resistors we would need to be designing professionally made printed circuit boards.
It is also possible to purchase a full range of 0.5W resistors. Generally, if you find that a circuit design calls for such a resistor, it's worth taking a look to see if it can be redesigned to use a lower value, or possibly to use another kind of component entirely. You can also obtain a very limited range of 1W and 10W resistors. Generally, if your circuit requires a low power resistor you can reckon to find just the right value of resistor to meet your requirements, whereas if you need a high power resistor you may have to adjust your circuit to fit the available values.
The stability of a resistor is also an important consideration. Resistance varies with temperature (increasing with temperature in the case of metal and carbon resistors). Soldering a resistor in a circuit can cause a permanent change in its value.
High precision resistors are available at a price from specialist suppliers. However if you need a resistance whose value can be specified to better than 5%, the combination of a fixed resistor with a "trimmer" potentiometer (q.v.) is often the best option. The time that you will really need to use a precision resistor is when you need a resistor whose value changes very little with temperature or ageing. For example, if you are wanting a precision voltmeter, you will typically be able to buy a 200mv meter quite cheaply, which can be adjusted to register in the range you require by using precision resistors.
Let's now consider what happens when we form simple combinations of resistors.
Firstly, suppose R1 and R2 are two resistors connected in series, as shown in the diagram.
In this configuration the current I through each resistor is the same, while if V1 and V2 are the potential differences across R1 and R2 respectively, from Kirchhoff's Voltage Law, the potential difference across the two resistors in series is given by
V = V1 + V2
From the application of Ohm's Law, we have V1 = IR1 and V2 = IR2. Hence V = I(R1 + R2), so the two resistors in series behave like a single resistor R = R1 + R2.
Now let's consider the case in which the two resistors are connected in parallel. In this case, the voltage V across each resistor is the same, but by Kirchhoff's Current Law, the current I into the network of 2 resistors is equal to the sum of the currents through each resistor.
From Ohm's Law, we have V = I1R1 and V = I2R2, so that
R = (R1 × R2) / (R1 + R2)
One can sum up these results by saying that for resistors in series the total resistance is the sum of the individual resistances, while for resistors in parallel the total conductance is the sum of the conductances.
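A minimal pair of Python helpers (the names are mine) expresses the same two rules; note how the parallel case is most naturally written in terms of conductances:

```python
def series(*resistors):
    """Resistors in series: the total resistance is the sum of the resistances."""
    return sum(resistors)

def parallel(*resistors):
    """Resistors in parallel: the total conductance is the sum of the conductances."""
    return 1.0 / sum(1.0 / r for r in resistors)

print(series(4700, 1000))    # 5700 ohms
print(parallel(1200, 6000))  # 1000 ohms -- a 1.2K resistor in parallel with a pot set near 6K
```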
Assuming we draw no current from the output, the current through the divider is given by I = Vin / (R1 + R2), so the output voltage (taken across R2) is Vout = Vin × R2 / (R1 + R2).
Thus the potential divider acts as a circuit which scales down a signal by the above factor. By itself it is of limited use as a signal processor, since it only works as calculated if an insignificant current is taken from the output, which means in effect that it can only drive a load with insignificant power.
What does "insignificant" mean here? It can best be related to the accuracy with which the circuit works. It's fairly clear, for example, that if we want 1% accuracy from the circuit we'd better draw from the output less than 1% of the current flowing through the two resistors.
When we come to consider power amplifying devices such as transistors, we'll see how we can buffer a circuit like this one so that its output-voltage is almost independent of load.
For example, suppose you want a resistance of 1K ohms with a 1 percent accuracy. An economical way to do this is to put a 1.2K ohm resistor in parallel with a 10K potentiometer. To set the circuit up, the potentiometer can be adjusted until the resistance of the two in parallel is equal to 1K.
Why not use just the potentiometer? Well, the above arrangement requires you to set the potentiometer with an accuracy of 10%, which is a great deal easier than setting it to 1%. And it's hard to be sure that a cheap potentiometer will hold its setting to 1%, while you can be reasonably sure of 10%.
In general, the idea of combining a fixed but inaccurate component with a "trimming" device is of considerable utility in electronics, though of course it's important to have only those adjustments that are strictly necessary in a circuit.
Another variation on this theme is the use of potentiometers as position transducers. For this application it is possible to purchase high-accuracy potentiometers.
A perfect voltage source is a 2 terminal "black box" that maintains a fixed voltage drop across its terminals, regardless of how much current is drawn out of it. Real voltage sources have a limit on maximum current, and behave at best like a perfect voltage source with a small resistor in series.
Any combination of voltage sources and resistors can be reduced to a single voltage source in series with a single resistor. This resulting simple circuit is called "Thevenin's equivalent circuit".
D-size flashlight cell: 1.5 volts, 1/4 ohm internal resistance, roughly 10,000 joules (watt-seconds) of stored energy. At end of life: 1.0 volt, resistance = several ohms.
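Treating the cell as its Thevenin equivalent - a perfect source in series with its internal resistance - makes it easy to estimate how the terminal voltage droops under load. The figures below reuse the flashlight-cell numbers above; the function itself is only an illustration:

```python
def terminal_voltage(emf, r_internal, i_load):
    """Terminal voltage of a real source modelled as an EMF in series with R_internal."""
    return emf - i_load * r_internal

print(terminal_voltage(1.5, 0.25, 1.0))  # 1.25 V: a fresh D cell sags a little at 1 A
print(terminal_voltage(1.0, 3.0, 0.2))   # 0.4 V: a worn-out cell collapses under load
```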
So we can regard resistors as devices which have a fixed small signal resistance.
A diode lets current flow in one direction only. There is a small forward voltage drop, usually around half a volt for general purpose silicon diodes.
Usually a diode is realised as a cylindrical body with a lead emerging at each end. The direction in which current flows through such a diode is indicated by a band around the body which is nearest to the negative end of the diode when current is flowing through it. You can remember this as "the bar on the package is the bar in the diode-figure".
Small signal diodes are typically capable of passing 100 mA in the forward direction. One amp diodes are quite cheap, and about the size of a 1/4 watt resistor. Power diodes can handle thousands of amps, for example to rectify AC current for full-size trains.
In the non-conducting direction, diodes are limited by their peak reverse voltage, abbreviated PRV. A PRV of 80V is common for a run-of-the-mill diode, but hundreds of volts of PRV are possible. Exceed the PRV and you knacker your diode. Notice that the PRV is the PEAK reverse voltage, which is higher than the Root Mean Square voltage that is usually quoted for alternating current supplies.
A diode may allow current to flow in the reverse direction. This current is typically very small. Any diode that passes a reverse current you can measure with an ordinary instrument is knackered.
Diodes also have a (variable) capacitance when biased in the reverse direction.
These are clever diodes that let current go through backwards while maintaining a near-constant voltage across them. So, if you want to build an electronic voltage-source, a zener diode is likely to be the component which provides a reference voltage. You could think of a Zener diode as being like a weir with water.
They also vary with temperature. The most temperature-stable zener diodes are those around 5-6 volts. Given the limitations of zener diodes, they are seldom used directly as voltage sources, but are incorporated in circuits where they are fed with a reasonably constant current. Amplification of what is in effect a signal from the zener is used to provide as much current as needed from the output of the electronic power source. It's also possible to design a circuit which uses a 5-6 volt zener independent of the required output voltage. We'll see how to do this when we learn about the magic of transistors.
Electrically these are diodes with a rather high forward voltage drop (say 2V). If you pass current through them (in the forward direction), by magic, they emit light! The rule for the forward drop is that the bluer the light the higher the forward voltage - blue LED's may take as much as 5V to drive them.
There are various versions - arrays of LED's are used to form displays, but usually with limited resolution. Another configuration that is seen is to have two LED's back-to-back in a single package - pass the current through in one direction and you get red light, in the other direction you get green light. You can get any colour between red and green by alternating the direction at a speed the eye can't follow (say 100 Hz).
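One everyday use of Ohm's Law with an LED is choosing the series resistor that sets the operating current; the notes don't spell this out, so treat the sketch below (and its 5 V / 10 mA figures) purely as an illustration:

```python
def led_series_resistor(v_supply, v_forward, i_led):
    """Series resistor (ohms) that drops the excess supply voltage at the chosen LED current."""
    return (v_supply - v_forward) / i_led

# Red LED with a ~2 V forward drop, run at 10 mA from a 5 V supply:
print(led_series_resistor(5.0, 2.0, 0.010))  # 300 ohms -- round up to the preferred value, 330
```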
You should then try to orient your components so that the most positive terminal is highest, with the least positive terminal being lowest.
This can be done for the "DC analysis" of the circuit, that is an estimation of the average voltage levels. Of course, when a circuit is doing something, typically levels will be changing as signals pass through it.
It would be a mistake to follow this "highest = most positive" rule slavishly, because it may result in circuit drawings which are unnecessarily large. But it's a good rule to follow in general, and certainly at first, until you are used to reading circuit drawings.
Only a limited range of metals can be soldered. You can't solder stainless steel or aluminium with standard soldering techniques, because these metals have a thin coating of oxide which gives them their bright appearance. You can solder copper, tinned copper, brass, iron, most steels.
It is vital when soldering to heat up both surfaces to be joined to the melting point of solder, and then feed fresh solder from the reel into the joint. Do not attempt merely to transfer solder to the joint using the iron. The solder we use appears to be a thin solid wire. Actually it has a core of flux which helps to clean the metal to be joined.
However, if you're soldering a component such as a transistor or resistor directly into the circuit, you should beware of overheating it. We do have available little heat-sinks that can be clamped on the leads of a component to protect it during soldering. Typically only a few seconds of heat need to be applied.
While most electronic components and all printed circuit boards have small amounts of metal to be joined, if you are trying to solder heavy conductors or other objects, you will need to feed much more heat in the joint to make it "take".
In a properly soldered joint, the solder wets the metal being joined. You can tell that the metal is wetted by the shape of the meniscus - if the solder looks like a little spherical blob on the metal, the joint is a bad "dry" joint. If the surface of the solder is concave as it joins the metal, then you have a good joint. Bad joints are a source of intermittent circuit failures which are hard to find.
Some magnifying lenses are available in the lab to allow you to examine soldered joints carefully.
Both solid and flexible wire can be soldered. Solid wire is probably easier to work with when making connections in a circuit board. It should not be used for any connection that is subject to flexing, so should not be used for connecting between circuit boards (unless these are rigidly mounted in some kind of card-cage). For solid wire subject to flexing will eventually fail by metal fatigue, giving rise to an intermittent connection that causes malfunctions that are hard to locate.
Flexible wire can be used for wiring up a circuit board. A thin gauge is most suitable. The most satisfactory way to make a joint is to tin the end of the wire (which may need to be twisted to make it suitably compact) with solder before attempting to make the joint. If there's a fat blob of solder left at the very end, just snip it off.
When you solder flexible wire, the end of the wire in effect becomes rigid. Where the remaining flexible part joins this rigid part the wire is likely to fail, so it must be supported in some way to ensure that most flexing takes place well away from the soldered joint. A short length of heat shrink tubing over the end of the wire is usually enough to prevent this happening. It also serves as insulation if needed.
Metal to be soldered should be clean. It can be worth it to burnish up a printed circuit board with a fine-grade abrasive paper before soldering. Old flexible (multi-strand) wire may not take solder easily, even if it was originally tinned. Typically you will have to get it hotter than you might expect for the solder to "take".
Care is needed when soldering, because the solder is hot enough to burn you.
The most useful instrument for checking your work is the Digital Volt Meter (DVM). Ours are bright yellow. Set the dial to the continuity/diode check position.
You should also do a basic check before you apply power to a circuit. Most semiconductor devices will die instantly if you connect the power the wrong way round. One former student in this course destroyed 80$ worth of components by making a wrong connection. Note also that the larger kind of capacitors (above 1 microfarad) are polarised - they also will die if you connect them wrongly. Watch out for excessive current consumption by the circuit. This is usually indicative of a component wrongly wired. Such components can get quite hot enough to burn you, so be careful...
It's difficult to remove devices that have many legs, for example most integrated circuits. For this reason, you should always put integrated circuits in sockets when making an experimental board. And you should regard all boards built in this course as experimental. Sockets also make it easier to debug a board, because you can check that you have provided the right "living conditions" for an integrated circuit before you put it in the board, and if necessary you can pull it out and check again, or perhaps replace it if you have knackered it.
Occasionally you may need to increase the temperature of the iron to melt enough solder to modify a board as required.
Don't forget that desoldering may heat components up enough to knacker them.
Using desoldering wick is generally less satisfactory than using a pump. It's difficult to get it hot enough to work, and it's liable to leave odd bits of wire around to cause shorts, or to solder itself to the circuit.
When making a board it's desirable to require each connector to be different from the others on the board, so that the board cannot be accidentally connected wrongly, possibly resulting in the total destruction of the components on the board, or even of the board itself. This is something like a hardware equivalent of type-declarations when writing software, with the difference that if you get a type error your program doesn't have its functions shrivel up and die.
Unfortunately when we are making prototypes it's difficult to find many different connectors that can be used on a board without drilling a distinctive pattern of holes. Our general purpose boards from Radio Shack are drilled with a pattern of holes on 0.1" centres: most connectors won't fit this pattern. So typically we use male and female headers to connect to boards. Probably the easiest system to work with is to put female headers on the board, and use male headers as plugs to fit into them. You can wire up these male headers as described in the Handyboard manual. Sometimes, particularly with double-row male headers which you'll need for the SSI interface, it's worth putting a blob of epoxy to hold all the pins in position once the basic wiring is done, since the pins of a male header are likely to slip through the plastic.
We can try to make these connectors distinctive by using ones that have spare pins and sockets. If one of these spare pins is snipped off and put in the corresponding socket, you've created a connector that will only go in the right way round. With a bit of care we can create conventions which make it difficult to plug a system up wrongly.
In some assignments the connections with the circuit board will be specified precisely. You must follow this specification, since this should ensure that boards built by different groups are interchangeable. Specifying "pin-out" is as much a part of specifying hardware as specifying a ".h" file is in specifying C, or specifying an interface is in Java.
|
<urn:uuid:67c512ca-aaaf-4f00-b915-2fdd2872ca91>
|
{
"dump": "CC-MAIN-2014-42",
"url": "http://www-robotics.cs.umass.edu/~grupen/503/SLIDES/electricity.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646352.2/warc/CC-MAIN-20141024030046-00033-ip-10-16-133-185.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9494128227233887,
"token_count": 7164,
"score": 3.875,
"int_score": 4
}
|
November 8, 2014
Sometimes, words just complicate things. What if our brains could communicate directly with each other, bypassing the need for language?
University of Washington researchers have successfully replicated a direct brain-to-brain connection between pairs of people as part of a scientific study following the team’s initial demonstration a year ago. In the newly published study, which involved six people, researchers were able to transmit the signals from one person’s brain over the Internet and use these signals to control the hand motions of another person within a split second of sending that signal.
At the time of the first experiment in August 2013, the UW team was the first to demonstrate two human brains communicating in this way. The researchers then tested their brain-to-brain interface in a more comprehensive study, published Nov. 5 in the journal PLOS ONE.
“The new study brings our brain-to-brain interfacing paradigm from an initial demonstration to something that is closer to a deliverable technology,” said co-author Andrea Stocco, a research assistant professor of psychology and a researcher at UW’s Institute for Learning & Brain Sciences. “Now we have replicated our methods and know that they can work reliably with walk-in participants.”
Collaborator Rajesh Rao, a UW associate professor of computer science and engineering, is the lead author on this work.
The research team combined two kinds of noninvasive instruments and fine-tuned software to connect two human brains in real time. The process is fairly straightforward. One participant is hooked to an electroencephalography machine that reads brain activity and sends electrical pulses via the Web to the second participant, who is wearing a swim cap with a transcranial magnetic stimulation coil placed near the part of the brain that controls hand movements.
Using this setup, one person can send a command to move the hand of the other by simply thinking about that hand movement.
The UW study involved three pairs of participants. Each pair included a sender and a receiver with different roles and constraints. They sat in separate buildings on campus about a half mile apart and were unable to interact with each other in any way — except for the link between their brains.
Each sender was in front of a computer game in which he or she had to defend a city by firing a cannon and intercepting rockets launched by a pirate ship. But because the senders could not physically interact with the game, the only way they could defend the city was by thinking about moving their hand to fire the cannon.
Across campus, each receiver sat wearing headphones in a dark room — with no ability to see the computer game — with the right hand positioned over the only touchpad that could actually fire the cannon. If the brain-to-brain interface was successful, the receiver’s hand would twitch, pressing the touchpad and firing the cannon that was displayed on the sender’s computer screen across campus.
Researchers found that accuracy varied among the pairs, ranging from 25 to 83 percent. Misses mostly were due to a sender failing to accurately execute the thought to send the “fire” command. The researchers also were able to quantify the exact amount of information that was transferred between the two brains.
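The paper reports the exact figure, which is not reproduced here; as a rough illustration of how information per trial relates to accuracy, one could model each fire/no-fire decision as a binary symmetric channel. The Python sketch below is only that - an outside estimate, not the calculation the authors used:

```python
import math

def bits_per_binary_trial(accuracy):
    """Information per binary decision for a symmetric channel: 1 - H(error rate)."""
    p = min(max(1.0 - accuracy, 1e-12), 1.0 - 1e-12)  # error probability, clamped
    entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return 1.0 - entropy

print(round(bits_per_binary_trial(0.83), 3))  # ~0.342 bits per trial at the best reported accuracy
print(round(bits_per_binary_trial(0.25), 3))  # ~0.189 bits (a 25%-accurate link is anti-correlated)
```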
Another research team from the company Starlab in Barcelona, Spain, recently published results in the same journal showing direct communication between two human brains, but that study only tested one sender brain instead of different pairs of study participants and was conducted offline instead of in real time over the Web.
Now, with a new $1 million grant from the W.M. Keck Foundation, the UW research team is taking the work a step further in an attempt to decode and transmit more complex brain processes.
With the new funding, the research team will expand the types of information that can be transferred from brain to brain, including more complex visual and psychological phenomena such as concepts, thoughts and rules.
They’re also exploring how to influence brain waves that correspond with alertness or sleepiness. Eventually, for example, the brain of a sleepy airplane pilot dozing off at the controls could stimulate the copilot’s brain to become more alert.
The project could also eventually lead to “brain tutoring,” in which knowledge is transferred directly from the brain of a teacher to a student.
“Imagine someone who’s a brilliant scientist but not a brilliant teacher. Complex knowledge is hard to explain — we’re limited by language,” said co-author Chantel Prat, a faculty member at the Institute for Learning & Brain Sciences and a UW assistant professor of psychology.
Other UW co-authors are Joseph Wu of computer science and engineering; Devapratim Sarma and Tiffany Youngquist of bioengineering; and Matthew Bryan, formerly of the UW.
The research published in PLOS ONE was initially funded by the U.S. Army Research Office and the UW, with additional support from the Keck Foundation.
- Rajesh P. N. Rao, Andrea Stocco, Matthew Bryan, Devapratim Sarma, Tiffany M. Youngquist, Joseph Wu, Chantel S. Prat. A Direct Brain-to-Brain Interface in Humans. PLoS ONE, 2014; 9 (11): e111332 DOI: 10.1371/journal.pone.0111332
|
<urn:uuid:3fac1f53-9348-4b90-b5f5-2620beefd42b>
|
{
"dump": "CC-MAIN-2021-31",
"url": "https://scienceofsingularity.com/2014/11/08/direct-brain-interface-between-humans/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154500.32/warc/CC-MAIN-20210804013942-20210804043942-00043.warc.gz",
"language": "en",
"language_score": 0.953724205493927,
"token_count": 1103,
"score": 3.359375,
"int_score": 3
}
|
Copperheads prefer terrestrial to semi-aquatic habitats, which include rocky-forested hillsides and various wetlands. They have also been known to occupy abandoned and rotting slab or sawdust piles.
Overall, the species inhabits the Florida panhandle north to Massachusetts and west to Nebraska. The Northern Copperhead (A. c. mokasen) inhabits northern Georgia and Alabama north to Massachusetts and west to Illinois. The Southern Copperhead (A. c. contortrix) inhabits the Florida panhandle north to Southern Delaware and west to SE Missouri, SE Oklahoma and E Texas.
Georgia Wildlife Web
Reptiles and Amphibians of Eastern/Central North America. A Peterson Field Guide. Roger Conant and Joseph Collins 3rd Edition. Houghton Mifflin, New York, NY. 1998.
|
<urn:uuid:582d9942-b7aa-4afa-9ae6-ec5916a21435>
|
{
"dump": "CC-MAIN-2014-52",
"url": "http://www.fernbank.edu/STT/VertBio/pages/Reptilia/copperhead.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802778013.38/warc/CC-MAIN-20141217075258-00002-ip-10-231-17-201.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.863051176071167,
"token_count": 170,
"score": 2.78125,
"int_score": 3
}
|
A schematic of a launch loop as imagined by Lofstrom.
Advantages of launch loops
Unlike conventional rockets, launch loops can have many launches per hour, independent of weather, and are not inherently polluting. Rockets create pollution such as nitrates in their exhausts due to high exhaust temperature, and can also create greenhouse gases depending on propellant choices. Launch loops require power in the form of electricity and as such it can be clean. For example it can run on geothermal, nuclear, wind, solar or any other power source, even intermittent ones, as the system has huge built-in power storage capacity. Additionally, launch loops would be quiet in operation, and would not cause any sound pollution, unlike rockets.
Launch loops are also intended for human transportation. It gives a safe 3g acceleration which the vast majority of people would be capable of tolerating well, and would be a much faster way of reaching space than space elevators.
Unlike space elevators which would have to travel through the Van Allen belts over several days, launch loop passengers can be launched to low earth orbit, which is below the belts, or through them in a few hours. This would minimize radiation doses and keep them within safe levels.
Launch loops wouldn’t be subjected to the risks of space debris and meteorites, unlike space elevators. This is because they are to be situated at an altitude where orbits are unstable due to air drag. Therefore damage or collapse of loops in this way is expected to be extremely rare. Even if an accident does occur the consequences would be much less catastrophic than with a space elevator. If the launch loop is built over an uninhabited area such as a desert or an ocean, then there should be very little risk to human life in case of failure.
Finally, their low payload costs of about 3 USD per kg will open up large-scale commercial space tourism and even space colonization. Also the initial costs of construction of 10 billion USD are quite low compared to the other non-rocket spacelaunch methods. This is roughly equivalent to the cost of 20 space shuttle launches.
Difficulties of launch loops
A running loop would carry an extremely large amount of kinetic energy in its fast-moving cable (about 1.5×10^15 joules, or 1.5 petajoules). If a major failure did occur the energy release would be approaching a nuclear bomb explosion (or 350 kilotons of TNT equivalent), although not emitting any nuclear radiation. This is why the magnetic suspension system of the launch loop would be highly redundant, with failures of small sections having essentially no effect at all.
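For a sense of scale, the TNT-equivalent figure follows directly from the stored energy; this back-of-envelope check (using the conventional 4.184×10^12 J per kiloton) is mine, not from the article:

```python
stored_energy_joules = 1.5e15        # figure quoted above for a running loop
joules_per_kiloton_tnt = 4.184e12    # conventional definition of a kiloton of TNT

print(stored_energy_joules / joules_per_kiloton_tnt)  # ~358 kilotons, consistent with the ~350 kt quoted
```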
Even if the safeties fail and this large amount of energy is released, it is unlikely that it would destroy very much of the structure due to its very large size. Also most of the energy would be deliberately dumped at preselected places when the failure is detected. Steps might need to be taken to lower the cable down from 80 km altitude with minimal damage, such as parachutes.
A few technical issues involving the cable’s instability problems would have to be solved before constructing a launch loop. Lofstrom’s design requires electronic control of the magnetic levitation to minimize power dissipation and to stabilize the otherwise under-damped cable.
In conclusion, the launch loop concept does have a few technical difficulties, but if those can be solved, the advantages of such a spacelaunch method are great. Lowering the costs of space travel and opening the final frontier to tourism and eventually colonization are just a few of them.
Links to the other articles in this series
- Non-Rocket Spacelaunch – Space Elevator
- Non-Rocket Spacelaunch – Space Elevator Safety Issues
- Non-Rocket Spacelaunch – Extraterrestrial Space Elevator Concepts
- Non-Rocket Spacelaunch – Space Elevators in Fiction
- Non-Rocket Spacelaunch – Tether propulsion
- Non-Rocket Spacelaunch – Tether satellite missions
- Non-Rocket Spacelaunch – Tether propulsion safety issues
- Non-Rocket Spacelaunch – Tether propulsion in fiction
|
<urn:uuid:70b02a5c-c353-460a-9c15-c04436274cc3>
|
{
"dump": "CC-MAIN-2017-09",
"url": "http://astroblog.cosmobc.com/2011/03/19/non-rocket-spacelaunch-advantages-and-difficulties-of-a-launch-loop/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174167.41/warc/CC-MAIN-20170219104614-00281-ip-10-171-10-108.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9454928040504456,
"token_count": 845,
"score": 3.859375,
"int_score": 4
}
|
You are one of two people on a malfunctioning airplane with only one parachute.
you refuse the parachute because you might die in the jump anyway.
you refuse the parachute because people have survived jumps just like this before.
you play a game of Monopoly for the parachute.
you order them to conduct a feasibility study on parachute use in multi-engine aircraft under code red conditions.
you charge one parachute for helping them sue the airline.
you tell them you need to run more tests, then take the parachute in order to make your next appointment.
you sell them the parachute at top retail rates and get the names of their friends and relatives who might like one too.
Internal Revenue Service
you confiscate the parachute along with their luggage, wallet, and gold fillings.
you strip-tease while singing that what they need is a neon parachute with computer altimeter for only $39.99.
you make them another parachute out of aisle curtains and dental floss.
you give them the parachute and ask them to send you a report on how well it worked.
you refuse to accept the parachute without proof that it will work in all cases.
you ask how they know the parachute actually exists.
you explicate simile and metaphor in the parachute instructions.
Literature: you read the parachute instructions in all four languages.
you design a machine capable of operating a parachute as well as a human being could.
you plot a demand curve by asking them, at regular intervals, how much they would pay for a parachute.
you ask them what the shape of a parachute reminds them of.
you tie them down so they can watch you develop the character of a person stuck on a falling plane without a parachute.
you hang the parachute on the wall and sign it.
as you jump out with the parachute, you tell them to work hard and not expect handouts.
you ask them for a dollar to buy scissors so you can cut the parachute into two equal pieces.
After reminding them of their constitutional right to have a parachute, you take it and jump out.
you tell them not to worry, since it won't take you long to learn how to fix a plane.
you issue a warning that skydiving can be hazardous to your health.
Association of Tobacco Growers
you explain very patiently that despite a number of remarkable coincidences, studies have shown no link whatsoever between airplane crashes and death.
National Rifle Association
you shoot them and take the parachute.
you beat them unconscious with the parachute.
you refuse to use the parachute unless it is biodegradable.
your only rational and moral choice is to take the parachute, as the free market will take care of the other person.
you start betting on how long it will take to crash.
as long as you are looking at the plane engine, it works fine.
|
<urn:uuid:3342004f-3cc1-4ec4-a532-e539a3da4efb>
|
{
"dump": "CC-MAIN-2023-23",
"url": "https://www.janko.at/Humor/Gesellschaft/Das%20Fallschirm-Paradigma.htm",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656788.77/warc/CC-MAIN-20230609164851-20230609194851-00394.warc.gz",
"language": "en",
"language_score": 0.9160186052322388,
"token_count": 694,
"score": 2.734375,
"int_score": 3
}
|
Until 1772 Abgunstkampe was located in what was known as Royal Prussia (also known as Polish Prussia) in the Kingdom of Poland. The First Partition of Poland in 1772 resulted in the creation of a new province in 1773, called West Prussia, in which Abgunstkampe was located. The village was situated in the district (Kreis) of Marienburg until the establishment of the Free City of Danzig in 1920. The village came under the control of Nazi Germany during World War II until February 1945, when it was occupied by Soviet forces and returned to Poland.
In 1820 Abgunstkampe had 14 inhabitants, of which nine were Mennonite.
Mennonites who were residents of Abgunstkampe were members of the Tiegenhagen Mennonite Church.
Wolf, Hans-Jürgen. "Familienforschung in Westpreußen." Web. 21 February 2013. http://www.westpreussen.de/cms/ct/ortsverzeichnis/details.php?ID=8412.
Author(s): Richard D Thiessen
Date Published: February 2013
Cite This Article
Thiessen, Richard D. "Abgunstkampe (Pomeranian Voivodeship, Poland)." Global Anabaptist Mennonite Encyclopedia Online. February 2013. Web. 6 Mar 2015. http://gameo.org/index.php?title=Abgunstkampe_(Pomeranian_Voivodeship,_Poland)&oldid=79136.
Thiessen, Richard D. (February 2013). Abgunstkampe (Pomeranian Voivodeship, Poland). Global Anabaptist Mennonite Encyclopedia Online. Retrieved 6 March 2015, from http://gameo.org/index.php?title=Abgunstkampe_(Pomeranian_Voivodeship,_Poland)&oldid=79136.
©1996-2015 by the Global Anabaptist Mennonite Encyclopedia Online. All rights reserved.
|
<urn:uuid:fdb0596e-7e05-4596-9a5d-4809a78edec3>
|
{
"dump": "CC-MAIN-2015-11",
"url": "http://gameo.org/index.php?title=Abgunstkampe_(Pomeranian_Voivodeship,_Poland)&direction=prev&oldid=94015",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936465456.40/warc/CC-MAIN-20150226074105-00310-ip-10-28-5-156.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9148427844047546,
"token_count": 458,
"score": 2.578125,
"int_score": 3
}
|
What is in this article?:
- Over the long-run will supply out-pace demand in the years ahead?
- Crop agriculture may once again face long periods of low prices.
- Times of excess production could result in low price problems.
In the absence of two critical issues, farm bill commodity programs would make little sense. Issue one: crop production can change considerably from year to year due to weather and disease, and issue two: over the long-haul supply increases faster than demand.
Governments as far back as second millennium BC Egypt and late first millennium BC-early first millennium AD China have recognized that crop failures can have a negative impact on their citizens in terms of food availability and prices and disruption to the economy. China also recognized that times of excess production could result in low price problems for its farmers.
With the opening up of the Western Hemisphere to European markets, the problem of supply increasing at a faster rate than demand began to rear its head in the U.S. and elsewhere in the New World. At first, the rate of increase of new agricultural land being brought into production was the cause of this rate differential. By the twentieth century, investment in agricultural research, education, and extension—much of it by governments—became significant factors allowing supply to increase faster than demand and leaving farmers to face long periods of low prices.
The recent drought has reminded us that issue one is still in play. But what about issue two? Over the long-run will supply out-pace demand in the years ahead, as it has typically done for centuries? In recent columns, we have described developments that suggest that despite this year’s massive drought in the U.S. and maybe as a consequence of the resulting high prices, crop agriculture may once again face long periods of low prices. But some other analysts see the future differently.
In a paper delivered at a Farm Science Review on September 18, 2012, in Ohio, one of our ag econ colleagues, Luther Tweeten, argued that “the era of falling real prices of food is over.” Tweeten wrote, “two ‘megatrends’ are underway, one on the food supply side and another on the food demand side.”
Tweeten begins by looking at food supply, writing that “U.S. excess production capacity totaled 6 percent in 1962 and averaged near that proportion throughout the 1960s. In sharp contrast, excess production capacity in U.S. agriculture today is near zero.” While seeing the 2012 drought as “transitory,” he says if global warming is underway, we may see “unusual weather events such as storms and drought.”
|
<urn:uuid:c63fc689-5b5c-4e53-9002-01afa38cbe30>
|
{
"dump": "CC-MAIN-2015-11",
"url": "http://southwestfarmpress.com/management/commodity-programs-not-needed-if-demand-outpaces-supply?page=1",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463093.76/warc/CC-MAIN-20150226074103-00016-ip-10-28-5-156.ec2.internal.warc.gz",
"language": "en",
"language_score": 0.9596503376960754,
"token_count": 557,
"score": 3.296875,
"int_score": 3
}
|
In a New Zealand first, breweries Lion and DB will add nutritional information panels to beer bottles, cans and packaging.
These labels will give consumers information about the sugar, calories, dietary fibre, protein and carbohydrate content of their beers.
Throughout the year 450 million nutritional labels will be rolled out onto packaging on a wide range of beers including Steinlager, Macs, Speights, Tui and Monteiths.
It's the first step of a Brewers Association promotion campaign called 'Beer the Beautiful Truth' that aims to "bust myths and communicate nutritional facts about beer".
Brewers Association external relations director Kevin Sinnott said while DB and Lion are on board so far, they welcome other New Zealand breweries to join the campaign.
He says the initiative will help keep consumers informed about beer. "Beer gets a bad rap at times, often due to misconceptions around what's in it.
"With nutrition information panels, everything is out in the open. And we think people might be pleasantly surprised by the beautiful truth about beer."
A press release from the campaign was keen to push the fact that beer is low sugar.
"Most people don't know that most beer is 99 percent sugar free. Malted barley is one of the key ingredients in beer, and in the brewing process the starch from malted barley is converted into sugar. This sugar, or sugar from any other ingredient such as wheat, other grains, or natural cane sugar, is then converted into alcohol by the yeast during the fermentation process. As a result, the finished beer is very low in sugar."
Janet Weber, senior lecturer at Massey University's institute of food and science, said that focussing on low sugar is "very misleading to the consumer".
"Obviously they've chosen to focus on the sugar, it's the buzz thing now isn't it - everyone wants sugar free. But they're sort of ignoring the fact that alcohol provides quite a lot of kilojoules."
Ms Weber says it's important for people to think about the total kilojoules in what they are drinking, in addition to the food they're eating.
"Alcohol itself contributes more kilojoules per gram than sugar does."
She says "it seems a bit unusual" that nutrition labels weren't listed on beers already, but thinks the labels will be useful for people to compare across brands.
"Alcohol is very important to our society but it is not actually safe at the levels we drink. We can't make it healthy by saying it's low in sugar".
All the nutritional information included on participating beers is also available to view on the website beerthebeautifultruth.co.nz.
|
<urn:uuid:0ae65658-b515-4c33-910c-2a6556b64583>
|
{
"dump": "CC-MAIN-2017-39",
"url": "http://www.newshub.co.nz/home/health/2017/02/beer-to-get-nutritional-labels.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685850.32/warc/CC-MAIN-20170919145852-20170919165852-00042.warc.gz",
"language": "en",
"language_score": 0.954749345779419,
"token_count": 553,
"score": 2.5625,
"int_score": 3
}
|
What is the meaning of radioactive dating
This document discusses the way radiometric dating and stratigraphic principles are used to establish ages, by looking at the ratio between the original radioactive isotope and its decay products. Learn about different types of radiometric dating, such as carbon dating; understand how decay and half-life work to enable radiometric dating; and play a game that tests your ability to match the percentage of the dating element that remains to the age of the object. Radiometric dating methods are among the strongest direct lines of evidence for the ages assigned in radioactive-series dating. Learn more about radioactive decay: what is meant by the half-life of a radioactive isotope, how carbon-14 dating works, and a hands-on model to demonstrate the meaning of half-life. Radioactive dating (radiometric dating, rā'dē-ō-mět'rĭk): a method for determining the age of an object based on the concentration of a particular radioactive isotope contained within it. The radioactive atoms used in dating techniques have been subjected to heat, cold, pressure, vacuum, acceleration, and strong chemical reactions without any significant change in their decay rates. Radiometric dating is a technique used to date materials such as rocks, usually based on a comparison between the observed abundance of a naturally occurring radioactive isotope and its decay products, using known decay rates. Radioactive: having or producing the energy that comes from the breaking up of atoms.
Start studying radioactive dating: learn vocabulary, terms, and more with flashcards, games, and other study tools. Definitions of radioactive dating in chemistry cover isotopes such as carbon-14 and 36Cl and the uses of radioactive dating for fossils. Radioactive dating is a method for calculating the age of rocks and fossils by considering the concentrations of certain elements.
Looking for the meaning or definition of the term 'radioactive dating'? Here are some definitions. The nitty-gritty on radioisotopic dating: radioisotopic dating relies on the process of radioactive decay. Definition of radioactive dating in the financial dictionary, from the free online English dictionary and encyclopedia: what radioactive dating means as a finance term.
Radiometric dating is a means of determining the age of a mineral specimen by determining the relative amounts present of certain radioactive elements. By age we mean the elapsed time from when the mineral specimen was formed; radioactive elements decay (that is, change into other elements) at known rates. Radiometric dating is used to estimate the age of rocks and other objects based on the fixed decay rate of radioactive isotopes.
Radioactive: of, caused by, or exhibiting radioactivity; how to use 'radioactive' in a sentence. Definition of radiocarbon dating in the audioenglish.org dictionary: the meaning and proper usage of the word 'radiocarbon dating'. Radioactive dating works by measuring the percentage of a radioactive isotope present in a sample; if 50% of the radioactive isotope is left, one half-life has passed.
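The half-life idea in the last sentence generalises to a simple formula: the fraction of the isotope remaining is N/N0 = (1/2)^(t/T), where T is the half-life. Below is a minimal, illustrative sketch of solving that for the age t; the ~5,730-year half-life of carbon-14 is a standard textbook value rather than a figure quoted in this text.

```python
import math

# Standard textbook half-life of carbon-14, in years (an assumed reference value).
C14_HALF_LIFE_YEARS = 5730.0

def age_from_remaining_fraction(remaining_fraction: float,
                                half_life_years: float = C14_HALF_LIFE_YEARS) -> float:
    """Estimate an age from the fraction of the original radioactive isotope left.

    Solves N/N0 = (1/2) ** (t / half_life) for t.
    """
    if not 0.0 < remaining_fraction <= 1.0:
        raise ValueError("remaining_fraction must be in (0, 1]")
    return half_life_years * math.log(remaining_fraction) / math.log(0.5)

print(round(age_from_remaining_fraction(0.5)))   # 5730  -> one half-life has passed
print(round(age_from_remaining_fraction(0.25)))  # 11460 -> two half-lives
```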
What is the meaning of radioactive dating
Radiocarbon dating, also known as the C14 dating method, is a way of telling how old an object is. It is a type of radiometric dating; the method uses the radioactive isotope carbon-14. This document discusses the way radiometric dating and stratigraphic principles are used, by looking at the ratio between the original radioactive isotope and its decay products. What is the meaning of radioactive carbon dating? Living things absorb C14 at a certain rate, and they stop when they die; that is what radiocarbon dating is based on.
Half-life, in dating: the half-life is defined as the time period that must elapse in order to halve the initial number of radioactive atoms. Radioactivity is the emission of radiation from an unstable atomic nucleus; this emission of energy is called radioactive decay. The radiation can be emitted in the form of a positively charged alpha particle (α), a negatively charged beta particle (β), or gamma rays (γ). Radioactive dating: definition, meaning, and synonyms in the English dictionary; see also 'radioactive decay', 'radioactive series', 'radioactive tracer', 'radioactive waste' (Reverso dictionary). Definition of radioactive dating in the definitions.net dictionary: the meaning of radioactive dating, with information and translations in a comprehensive dictionary definitions resource on the web.
Radioactive dating: no problem for the Bible. On its own the result would mean nothing; every radioactive date has to be interpreted before anyone can say what it means. In the classroom, students gain a better understanding of radioactive dating and half-lives: what does it mean when we say an atom has 'decayed'? How to use 'radioactive' in a sentence: example sentences with the word radioactive.
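The dictionary definitions above describe radiometric dating as comparing the abundance of a naturally occurring radioactive isotope with its decay products at a known decay rate. As a rough sketch of that parent/daughter version of the calculation (assuming, purely for illustration, a closed system that started with no daughter atoms and a hypothetical parent isotope with a 1.25-billion-year half-life):

```python
import math

def age_from_parent_daughter(daughter_to_parent_ratio: float,
                             half_life_years: float) -> float:
    """Age implied by the ratio of daughter (decay-product) atoms to parent atoms.

    Assumes the sample began with no daughter atoms and remained a closed system,
    so D/P = 2 ** (t / half_life) - 1, which gives t = half_life * log2(1 + D/P).
    """
    if daughter_to_parent_ratio < 0:
        raise ValueError("daughter_to_parent_ratio must be non-negative")
    return half_life_years * math.log2(1.0 + daughter_to_parent_ratio)

# Hypothetical parent isotope with a half-life of 1.25 billion years:
print(age_from_parent_daughter(1.0, half_life_years=1.25e9))  # 1.25e9 -> one half-life
print(age_from_parent_daughter(3.0, half_life_years=1.25e9))  # 2.5e9  -> two half-lives
```

Real methods add corrections for initial daughter content, branching decays, and open-system behaviour, which is why the interpretation step mentioned above matters.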
|
<urn:uuid:62dc6216-3668-4d48-abb7-a721345fb6d2>
|
{
"dump": "CC-MAIN-2018-43",
"url": "http://erhookuponlinelgos.theatre-tak.info/what-is-the-meaning-of-radioactive-dating.html",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512693.40/warc/CC-MAIN-20181020101001-20181020122501-00525.warc.gz",
"language": "en",
"language_score": 0.9017165899276733,
"token_count": 1000,
"score": 3.1875,
"int_score": 3
}
|
Starting at $75
About Craniosacral Therapy
Craniosacral Therapy traces its origins to 1970, when, during a neck surgery in which he was assisting, osteopathic physician John E. Upledger first observed the rhythmic movement of what would soon be identified as the craniosacral system. None of his colleagues nor any of the medical texts at the time could explain this discovery, however.
His curiosity piqued, Dr. Upledger began searching for the answer. He started with the research of Dr. William Sutherland, the father of cranial osteopathy. For some 20 years beginning in the early 1900s, Sutherland had explored the concept that the bones of the skull were structured to allow for movement. For decades after, this theory remained at odds with the beliefs of the scientific and medical communities. Dr. Upledger believed, however, that if Sutherland’s theory of cranial movement was in fact true, this would help explain, and make feasible, the existence of the rhythm he had encountered in surgery.
It was at this point that Dr. Upledger set out to scientifically confirm the existence of cranial bone motion. From 1975 to 1983 he served as clinical researcher and Professor of Biomechanics at Michigan State University, where he supervised a team of anatomists, physiologists, biophysicists and bioengineers in research and testing. The results not only confirmed Sutherland’s theory, but led to clarification of the mechanisms behind this motion — the craniosacral system. Dr. Upledger’s continued work in the field ultimately resulted in his development of CranioSacral Therapy.

Few structures have as much influence over the body’s ability to function properly as the brain and spinal cord that make up the central nervous system. And, the central nervous system is heavily influenced by the craniosacral system – the membranes and fluid that surround, protect and nourish the brain and spinal cord.
Every day your body endures stresses and strains that it must work to compensate for. Unfortunately, these changes often cause body tissues to tighten and distort the craniosacral system. These distortions can then cause tension to form around the brain and spinal cord resulting in restrictions. This can create a barrier to the healthy performance of the central nervous system, and potentially every other system it interacts with.
Fortunately, such restrictions can be detected and corrected using simple methods of touch. With a light touch, the CST practitioner uses his or her hands to evaluate the craniosacral system by gently feeling various locations of the body to test for the ease of motion and rhythm of the cerebrospinal fluid pulsing around the brain and spinal cord. Soft-touch techniques are then used to release restrictions in any tissues influencing the craniosacral system.
By normalizing the environment around the brain and spinal cord and enhancing the body’s ability to self-correct, CranioSacral Therapy is able to alleviate a wide variety of dysfunctions, from chronic pain and sports injuries to stroke and neurological impairment.
Jennifer Bull, Michon Lowe, Jennifer Rowe, Amber Gilderoy, and Colette French are the Practitioners at Unlocking The Body who utilize Craniosacral Therapy in their sessions. Each Provider has completed a different level in their Craniosacral training. For more information check out the FAQ's below or read the Bio Page of each Therapist, then email your favorite at the address provided on their page.
FAQ's about Craniosacral Therapy:
In what way is Craniosacral Therapy different from other forms of Massage?
CranioSacral Therapy (CST) is a gentle, hands-on approach that releases tensions deep in the body to relieve pain and dysfunction and improve whole-body health and performance. It was pioneered and developed by Osteopathic Physician John E. Upledger after years of clinical testing and research at Michigan State University where he served as professor of biomechanics. Using a soft touch which is generally no greater than 5 grams – about the weight of a nickel – Practitioners release restrictions in the soft tissues that surround the central nervous system. CST is increasingly used as a preventive health measure for its ability to bolster resistance to disease, and it's effective for a wide range of medical problems associated with pain and dysfunction.
How long are the sessions?
Each session is typically 1 hour. However, clients may be seen from ½ hour up to 2 hours.
What is a typical session like?
First, a history is usually taken and the client’s condition is reviewed by the Practitioner. The client stays fully clothed and usually lies on a treatment table. The Practitioner then evaluates the client by placing their hands on various points on the body to locate the areas of restriction. Once the specific areas are located, the Practitioner will begin working on one of the places of greatest restriction. During the session, the Practitioner may work exclusively in this one area or move around the body as required to release and ease the competing tensions and restrictions within the body.
How does it feel, should I feel anything?
During the session clients may feel nothing except the therapist’s hands on their body, or they may feel like things are moving inside their body. They may also feel tingling, pains arising in different parts of their body that quickly disappear, heat, pulls within the body, or pulses. Following the session some clients feel a deep sense of relaxation while others may find immediate relief of pain. Others may at times feel nothing or an increase of pain for a period of time up to 24-48 hours, after which a relief of pain or discomfort improves significantly.
What conditions can Craniosacral Therapy help with?
- Migraines and Headaches
- Chronic Neck and Back Pain
- Stress and Tension-Related Disorders
- Motor-Coordination Impairments
- Infant and Childhood Disorders
- Brain and Spinal Cord Injuries
- Chronic Fatigue
- TMJ Syndrome
- Central Nervous System Disorders
- Learning Disabilities
- Post-Traumatic Stress Disorder
- Orthopedic Problems
- And Many Other Conditions
How many treatment sessions will I need?
Response to CST varies from individual to individual and condition to condition. Your response is uniquely your own and can't be compared to anyone else's — even those cases that may appear to be similar to your own. The number of sessions needed varies widely — from just one up to three or more a week over the course of several weeks. Your Practitioner will discuss your specific plan with you at the end of your first session.
What should I feel after a CST treatment session?
Just as individual experiences can differ, so can the immediate results. Some people feel deeply relaxed, and this relaxed state may cause them to sleep for many hours after a session. Others may experience an increase in energy. Some people report that they feel as if somebody had moved things about in their body. Reduction of pain or an increase in function may occur immediately after the session or it may develop gradually over the next few days. For some there may be a reorganization phase as the body adapts to the release of previously held patterns. Occasionally, certain CST techniques will dredge up “old emotions, memories, or pains” that had gone away. This is a good thing! The old emotions or pains hadn't disappeared; they were simply dormant and waiting to reappear at another time. Our bodies help us by storing away traumatic events so that they can be processed at a safe time later on. Allowing these old emotions and pains to resurface opens the path to healing and restored function.
Here is a video about brain trauma and the benefits of Craniosacral Therapy in NFL players specifically.
For More Information About Craniosacral Therapy Check Out This Link to the Upledger Institute's Website: www.upledger.com
|
<urn:uuid:e0d62764-12d3-4c7b-a60c-212f9753504c>
|
{
"dump": "CC-MAIN-2018-43",
"url": "https://www.unlockingthebody.com/our-services/craniosacral-therapy/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583517628.91/warc/CC-MAIN-20181024001232-20181024022732-00553.warc.gz",
"language": "en",
"language_score": 0.947897732257843,
"token_count": 1647,
"score": 3,
"int_score": 3
}
|
Hearing problems are grouped in a number of ways. The exact section of the auditory system affected is what determines the categorization. The hearing loss may be sensorineural, conductive, central, functional or mixed. The starting point in designing a treatment plan is to accurately diagnose the kind of hearing impairment.
Sensorineural hearing loss
Sensorineural hearing loss is responsible for more than 90 percent of the cases in which a hearing aid is used. It is the result of damage in the interior of the ear or to the acoustic nerve, which keeps sound signals from reaching the brain. Also referred to as nerve deafness or retrocochlear hearing loss, the damage is generally speaking permanent, though advancements in technology have enabled some formerly untreatable cases to see some improvement.
The most common reasons behind sensorineural hearing loss are aging, extended exposure to noise, problems with circulation of blood to the interior of the ear, fluid disturbance in the inner ear, medications that cause injury to the ear, some diseases, genetics and issues with the auditory nerve.
Hearing aids are satisfactory for most people who have this type of hearing loss, but in more severe cases, a cochlear implant can help bring back hearing to those individuals for whom a conventional hearing aid is insufficient.
Conductive hearing loss
When sound waves are not completely conducted to the interior of the ear through the structures of the outer and middle ear, conductive hearing loss occurs. Conductive hearing loss is quite widespread and could be caused by a buildup of ear wax, an accumulation of fluid in the middle ear due to a blocked eustachian tube, which keeps the eardrum from moving properly, a middle ear infection, a perforated eardrum, disease of the tiny bones of the middle ear and other obstructions in the ear canal.
The majority of cases of this type of hearing loss are reversible, assuming there isn’t any permanent damage to the structures of the middle ear, and with treatment the trouble usually resolves in a short amount of time. In some instances surgery can help to correct the problem or a hearing aid may be fitted.
Central hearing loss
Central hearing loss occurs when an issue in the central nervous system prevents sound signals from being processed and interpreted by the brain. The person affected can ostensibly hear perfectly well, but can’t decode or decipher what is being said. Many cases involve a problem with the individual’s capacity to adequately filter rival sounds. For example, most of us can have a conversation while there is traffic noise in the background, but people with this problem have a really hard time with this.
Functional hearing loss
An infrequent situation, this type of hearing loss does not have a physiological explanation. Functional hearing loss is caused by a psychological or emotional problem in which the person’s physical ability to hear is found to be normal, but they do not seem to be able to hear.
Mixed hearing loss
As the term suggests, mixed hearing loss is a combination of different types of hearing loss, in this case the combination of sensorineural and conductive hearing loss. Though there are a few other types of hearing loss, the combination of these two is most frequent.
|
<urn:uuid:736e775b-4082-4c96-bacb-18ebf599a780>
|
{
"dump": "CC-MAIN-2024-10",
"url": "https://www.physicianhearingcenters.com/hearing-loss-articles/summary-of-the-5-major-forms-of-hearing-loss/",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.26/warc/CC-MAIN-20240227121318-20240227151318-00774.warc.gz",
"language": "en",
"language_score": 0.9497189521789551,
"token_count": 653,
"score": 3.796875,
"int_score": 4
}
|