1. Sexually Transmitted Diseases
2. What's the Story
Regular STD (Sexually Transmitted Disease) screening, detecting STDs early, and seeking consistent treatment for STDs are all essential to maintaining sexual health. Some of the most common STDs are gonorrhea, syphilis, and chlamydia. Many STDs are asymptomatic, but when left untreated, they can cause serious health problems.
3. By the Numbers
Black Americans have the highest prevalence of gonorrhea, syphilis, and chlamydia. Young people aged 15 to 24 and LGBTQ people also have disproportionately high rates of STDs.
4. Did you know?
Chlamydia, when left untreated, can cause infertility in both men and women. It can also pose health complications for infants during birth. A chlamydia infection also increases susceptibility to more serious infections such as HIV.
5. To Learn More
[Progress-toward-goal chart: STD rates per 100,000 population, rated from "Exceeding Goal" to "Negative progression toward Goal"; only one year of data currently available.]
Last Reviewed: 1/26/2017
|
<urn:uuid:62dc65de-3276-433f-870c-b3e16a04e17b>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.546875,
"fasttext_score": 0.05061250925064087,
"language": "en",
"language_score": 0.9062483310699463,
"url": "http://nj.gov/health/chs/hnj2020/chronic/std/"
}
|
Archive | History RSS feed for this section
John Lennon and Yoko Ono Start Their “Bed-In”
John Lennon and Yoko Ono in their hotel room at the Amsterdam Hilton Hotel.
On March 25, 1969, John Lennon and his new wife, Yoko Ono, staged their first “Bed-In For Peace.” These “Bed-Ins,” modeled on sit-in protests, were experimental attempts to promote peace and protest war. Beginning five days after their wedding, Lennon and Ono spent their honeymoon in Amsterdam sitting in their hotel room bed, discussing peace while the press was allowed into the room to ask questions and photograph the famous couple.
The couple knew their marriage would be a high-profile event that the press would latch on to, so they took the publicity opportunity to convey to the world their thoughts on peace. Starting on March 25 and lasting an entire week, until March 31, Lennon and Ono took up residence in Room 902 of the Amsterdam Hilton Hotel, spending the entire time in bed and allowing the press to visit from 9 AM to 9 PM daily. Because Lennon and Ono had previously used lascivious images of themselves as promotional material, most of the press expected something lewd upon visiting the hotel room of the two stars. Instead, they found Lennon and Ono in their pajamas, comfortably sitting up in their hotel bed beneath signs that read “Hair Peace” and “Bed Peace.” The two discussed with the press their visions of world peace and their opposition to the Vietnam War and the Cold War.
Most of the press that covered this protest/performance “peace” did not take it seriously, but Lennon insisted that was exactly what he and Ono wanted. “It’s part of our policy not to be taken seriously. Our opposition, whoever they may be, in all manifest forms, don’t know how to handle humour. And we are humorous,” said Lennon.
Seven days later, the couple flew to Vienna, Austria where they held a press conference to discuss Bagism, which was a term created by Lennon and Ono to satirize prejudice and stereotyping. Bagism literally involved encapsulating oneself in a bag, so that no judgement about the outward appearance of a person could be made, and people could only judge someone by the vocal messages they conveyed. It was viewed as a form of total communication.
The Amsterdam Bed-In was not the only one performed by Lennon and Ono. In May of 1969, the couple reprised their peaceful form of protest at the Queen Elizabeth Hotel in Montreal, where they recorded “Give Peace a Chance” with other notable individuals. Later that year, they further spread their message of peace with billboards in 11 major American cities reading “WAR IS OVER! If You Want It – Happy Christmas From John and Yoko”. A documentary film was also made of the couple’s Bed-Ins.
The impact of Lennon and Ono’s Bed-Ins can be seen in the many popular culture references to them, and protest groups and artists around the world have reenacted the famous “peace” ever since.
Sources: Wikipedia, TIME, The Guardian, NPR
Bob Dylan Releases Debut Album
On March 19, 1962, American folk singer Bob Dylan released his first album, titled Bob Dylan. Dylan’s now famous debut was very different from any pop music of the time. Little did critics know that Dylan would help to popularize and define the folk music of his era.
In the early 1960s, “The Twist” was at the height of its popularity, with many charting songs homing in on the dance craze and using it as their focal point. The Beach Boys had also started to peak in popularity with their charged surf-rock tunes. The Kingston Trio was the best-known folk group at the time, and Dylan sounded nothing like them. He had been performing in New York City coffee shops for the past year, singing traditional folk songs in a nasal voice that most didn’t believe would work on the radio.
Legendary talent scout John Hammond saw great potential in the young singer after he met Dylan at a recording session for Carolyn Hester in which Dylan was playing harmonica. Shortly afterward, Dylan received a rave review from music writer Robert Shelton in the New York Times. Upon seeing this, Hammond signed Dylan to a five-year contract and a month later, they were in the studio recording. Dylan’s whole album only took six hours to cut and cost $402.
The album contained a selection of old traditional folk songs that were standard in Dylan’s live sets at the time. The only two original songs on the album were “Talkin’ New York” and “Song to Woody,” the latter a tribute to one of Dylan’s biggest inspirations and favorite folk singers, Woody Guthrie. Dylan later reported that he wrote the song a few weeks after moving to New York. He had made the trip to New York in part to meet his musical hero, who was living at Greystone Park Psychiatric Hospital in New Jersey.
Dylan’s first album was the only one not to make it onto the Billboard charts, and some in the record industry began referring to Dylan as “Hammond’s Folly.” Though the album sold only 5,000 copies in its first year, Hammond was not discouraged and soon brought Dylan back into the studio to begin recording his second album. By this point, Dylan had more original songs under his belt and had shifted to writing about political topics. His songs spoke of the social unrest of the world, and Dylan became a cultural figurehead of the 1960s, chronicling the historical and political happenings of the time in his lyrics.
Sources: Wikipedia, Rolling Stone, The Guardian
Julius Caesar Dies
On March 15, 44 BC, Julius Caesar, Roman Consul, statesman, general, and Latin prose author was assassinated. He played a crucial role in the events leading up to the fall of the Roman Republic and the subsequent rise of the Roman Empire.
During Caesar’s time, Romans were deeply averse to the idea of kingship. Caesar was a powerful member of the Roman senate, and although he turned down the title of king when it was offered to him, he held firmly to the position of “dictator for life.” This is what turned many against Caesar, and plots for his assassination began to take hold. Resentment deepened when Caesar’s face appeared on Roman coinage, an honor usually reserved for deities.
The conspirators behind the attack on Caesar were called “the liberators.” At the head of this group was Marcus Brutus, who was torn in his relationship with Caesar. Caesar had spared Brutus’s life and promoted him in office even though Brutus had fought against him in the Roman civil war. Brutus’s family, however, was known for defying the power hungry, and so Brutus’s animosity toward Caesar grew.
Cassius Longinus was also a main conspirator and worked to get Brutus to join him in plotting against the “dictator for life.” Caesar was scheduled to leave Rome on March 18 to lead a military campaign, so the conspirators knew they had to work fast. Upon entering a Senate meeting, Caesar was reportedly handed a note warning him of his fate, but he failed to read it. He was soon surrounded by senators holding daggers and was stabbed 23 times. In all, 60 conspirators were involved in the attack.
The “Ides of March” has been marked in history as the famous day when Caesar met his demise.
Sources: Wikipedia, National Geographic
The Cat in the Hat Published
Sources: Wikipedia, PBS, Seussipedia, NPR
Barbie Day
On March 9, 1959, the first Barbie doll was released at the American Toy Fair in New York City. Barbie was the first mass-produced doll made in the United States with adult features and has since become a cultural icon as well as a subject of much controversy.
The idea for Barbie came from the mind of Ruth Handler, who co-founded Mattel, Inc. with her husband in 1945. She noticed that her young daughter had stopped playing with her baby dolls in lieu of playing with paper dolls that looked like adults. Handler realized that this specific niche of dolls modeled after adults was something that had yet to be tapped into. Playing with these adult dolls allowed children to imagine the future of themselves as grown-ups.
Barbie’s design and inspiration came from a German doll named Lilli. Lilli was a comic strip character who was turned into a doll sold in tobacco shops as a gag gift for men. Lilli unexpectedly became a popular toy with children, and Mattel bought the rights to her so that it could create its own version. The name “Barbie” came from Handler’s little girl, who was named Barbara. In 1955, Mattel had become the first toy company to broadcast commercials targeted at children through its sponsorship of The Mickey Mouse Club.
After her introduction at the American Toy Fair, and with the help of those commercials, Barbie’s popularity skyrocketed. The demand for the doll was so great that Handler soon created a boy version and named it Ken after her son, Kenneth.
Along with Barbie’s popularity came a significant amount of controversy. Some thought that Barbie’s mass of material possessions, such as her dream house, her multiple cars, and her huge closet of “designer” outfits, gave children the idea that being materialistic was normal and good. What caused the most outrage, though, was the size of her waist and breasts, which scores of people thought gave children a negative view of body image, equating skinny with pretty.
Even with all this criticism, Barbie has remained a popular and well-known figure in the world of children’s toys, and her impact on the toy market has gone, and will go, down in history. She has become a global phenomenon.
Sources: Wikipedia
Fight of the Century
On March 8, 1971, two World Heavyweight Champion boxers, Joe Frazier and Muhammad Ali, faced off at Madison Square Garden in the “Fight of the Century” to determine the true world champion.
At the time of the fight, both boxers had a legitimate claim to the title of World Heavyweight Champion. Ali had won the title from Sonny Liston in 1964 and held an undefeated record, but he was stripped of the title when he refused to be drafted in 1967. He returned to the ring in late 1970 while appealing his conviction and five-year prison sentence, which the Supreme Court overturned in 1971. During Ali’s hiatus, Frazier had fairly won the title, and a match between the two champions soon received considerable hype and was billed the “Fight of the Century.” Surprisingly, the fight lived up to its name.
Ali had become well-known over the years for his speed and dexterity despite his large size. Frazier was known for his unmatched left hook and the way he would ferociously attack his opponent’s body. At a time when the country felt divided, the two fighters came to represent the two politically and socioeconomic sides of America. Ali represented the anti-establishment left-wing liberals, while Frazier was seen as a symbol for the blue collar pro-war conservatives. This parallel symbolism of the two fighters added to the hype of the highly anticipated fight.
Both fighters were guaranteed a $2.5 million purse for the fight, a record for any single prizefight at the time. Madison Square Garden had a raucous atmosphere on the night of the highly publicized fight, with scores of police officers on hand to keep the crowd under control and countless celebrities in attendance. Among them were Norman Mailer, Woody Allen, and Frank Sinatra, who was taking photos for Life magazine because he had been unable to obtain a ringside seat.
Unexpectedly, the fight lasted the full 15 rounds. Ali was on top for the first three rounds, delivering several quick jabs to Frazier’s face and causing it to welt up. Things turned around at the end of the third round, though, when Frazier struck Ali’s jaw with one of his famous hooks, snapping Ali’s head backwards. Frazier followed up by ferociously attacking Ali’s body while he was stunned. The bodily blows wore Ali out, and Frazier began to dominate the match in the fourth round.
By the sixth round, Frazier had attacked Ali with a flurry of his famous left hooks, and Ali began to look noticeably run down. Ali still had a speed and combination advantage that kept the match close until the eleventh round, when Frazier cornered Ali and pummeled him with another left hook that nearly floored him. Ali survived that round and the next three, though Frazier led in all of them. At the beginning of round 15, Frazier once again struck Ali with a left hook, sending him to the floor on his back. Refusing to give up, Ali stood up with a swollen jaw and lasted the rest of the round despite the barrage of blows issued by Frazier. The judges unanimously declared Frazier the winner, and Ali faced the first loss of his career.
The fight no doubt lived up to its name, and is still considered one of the greatest boxing matches in the history of the sport.
Sources: Wikipedia, LIFE, ESPN Boxing
“Minnie the Moocher” Recorded
Cab Calloway performing.
On March 3, 1931, jazz singer and bandleader Cab Calloway recorded his hit song “Minnie the Moocher.” The song was the first jazz recording to sell over 1 million copies.
Calloway was one of the United States’ most popular big-band leaders in the 1930s and 1940s and was well known for his scat-singing prowess. “Minnie the Moocher” is a sordid tale revolving around Minnie, a character who later made appearances in other Calloway songs. Minnie is referred to in the song as a “red hot hoochie coocher.” The hoochie coochie was a type of belly dancing considered provocative and scandalous at the time, so Minnie was known for her raucous and enticing behavior. Minnie falls into trouble when she meets a man named Smokey, who introduces her to a dangerous world of drugs and flashy riches. Though the song was riddled with lascivious language, much of it went unrecognized by white audiences because the lyrics were written in jive, an African American urban slang that became popular in the ’40s. The song ends with Calloway lamenting, “Poor Min!”, leading listeners to believe that Minnie’s tale did not have a happy ending.
The song shot Calloway and his band into fame, and it continued to be an extremely popular and well-known tune for the next decade. A big factor in the song’s success was due to Calloway’s impeccable scatting talent. Scatting refers to a vocal improvisation using nonsensical syllables, using the voice as an instrument. When performed live, Calloway would perform a call and response with audience members, asking them to repeat each of his scat phrases. He would start out with simple scat variations that the audience could easily mimic, and made them increasingly difficult as he went on until the audience would give up in a fit of laughter. Because of his scatting abilities, he became known as the “Hi De Hi De Hi Man,” as that was the first scat he would sing when performing the song.
“Minnie the Moocher” has endured over the years, and Calloway reprised his famous song in the 1955 movie Rhythm and Blues Revue and the 1980 film The Blues Brothers. The song has been covered and remade by a variety of later artists, such as the ’60s Australian band The Cherokees, hip hop artist Tupac Shakur, and British actor Hugh Laurie. In 1999, 68 years after its original release, “Minnie the Moocher” was inducted into the Grammy Hall of Fame.
If you’ve never heard the famous song, it is well worth a listen!
Sources: Wikipedia, Song Facts, Populist
Thriller Album Hits #1
The New Yorker Debuts
On February 21, 1925, The New Yorker debuted with its first issue. The New Yorker is an American magazine that includes serious reportage, social commentary, satire, fiction, poetry, and essays. Though mostly centered on the life of New Yorkers, the magazine has a broad international fanbase, and because it is produced weekly, it is known for its highly topical covers and its commentary on American popular culture.
The magazine was founded by Harold Ross and his wife, New York Times reporter Jane Grant. Tired of the “corny” content that filled other humorous publications at the time, Ross strove to create something sophisticated yet entertaining. The magazine started out as a glorified society column centered on life in New York, its first cover featuring a now famous dandy gentleman staring at a butterfly through a monocle. The dandy, later given the name ‘Eustace Tilley,’ was drawn by The New Yorker‘s first art director, Rea Irvin.
Tilley’s appearance on the first cover was meant to be a joke, but confused readers did not know what to make of it, or of the magazine, at first. Was it supposed to be an accurate portrayal of The New Yorker’s readers? And if so, what did it mean? Were readers cosmopolitan individuals closely studying life’s small beauties, or haughty beings concerned only with their own existence? The perplexing first cover seemed to mirror the equally befuddling content inside. Filled with gossip and writing targeted at in-the-know Manhattanites, the publication soon outgrew that niche, and those involved decided a broader scope should be its natural evolution.
Still holding on to its humorous roots, The New Yorker gradually established a base for serious fiction writers and journalists to publish their work. After World War II came to an end, the magazine began to print short stories, poems, essays, and other contemplative and stimulating writing by some of the 20th and 21st centuries’ most renowned writers. Such famed names as Haruki Murakami, Vladimir Nabokov, John O’Hara, Philip Roth, J. D. Salinger, Irwin Shaw, James Thurber, John Updike, and E. B. White have appeared with bylines in the publication.
The New Yorker’s circulation is now well over one million, and its audience is made up mostly of well-educated and liberal-minded individuals who seek the detailed coverage and commentary on Americana the magazine provides. Its combination of journalism, creative pieces, reviews, and art has made The New Yorker one of the most revered magazines in the world.
Sources: The New Yorker, Wikipedia
First Teddy Bear Sold
On February 15, 1903, the first Teddy bear went on sale at a toy store in Brooklyn. The name ‘Teddy’ was borrowed from the nickname of the president at the time, Theodore Roosevelt. Teddy bears have since become one of the most popular stuffed-animal gifts to signify love, congratulations, or sympathy.
Roosevelt traveled to Mississippi in November of 1902 to help settle a border dispute between Mississippi and Louisiana. After the issue was resolved, Roosevelt went on a hunting expedition to relieve stress. In the famous incident, Roosevelt’s hunting guides had tied an injured black bear to a tree for him to kill. Though details of the incident are unclear, such as the age of the bear and the exact reason behind Roosevelt’s reaction, the most popular account is that upon seeing the bear, Roosevelt said he did not hunt prey that could not fight back, and he let the bear go. Clifford Berryman, a political cartoonist, witnessed the incident and based what was to become an extremely popular cartoon on the event. He titled it “Drawing the Line in Mississippi,” and it featured Roosevelt in hunting garb ordering a small bear cub to be released. It was published in the Washington Post a few days later, and the name ‘Teddy bear’ was spawned from this.
The original teddy bear cartoon featured in the Washington Post.
Morris and Rose Michtom were toy inventors who owned a small store in Brooklyn. Inspired by the popular cartoon, the two decided to create a replica of the bear it depicted and dubbed it ‘Teddy’s bear.’ Because they feared the president would be offended by the use of his name in connection with a stuffed toy, they wrote and asked his permission. Several months later, the president finally responded, giving the Michtoms permission but expressing doubt that the name would actually boost sales. He was wrong.
They displayed two bears Rose had sewn in the store window, and both were snatched up in no time. People were soon requesting more, and the ecstatic Michtoms promised to produce them. After a while, they began producing the popular toy exclusively. Roosevelt and the Republican Party adopted the bear as their campaign symbol in 1904, and teddy bears were displayed at all White House functions. The original teddy bear is now on display in the Smithsonian.
Sources: Examiner, Wikipedia
|
<urn:uuid:6d695c63-6d48-40e7-85fc-59b45ca746b1>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5,
"fasttext_score": 0.08108067512512207,
"language": "en",
"language_score": 0.9770083427429199,
"url": "http://blog.calendars.com/category/history-2/"
}
|
Friday, August 24, 2012
Creating a Halocline
When discussing estuaries with your children, the concept of a "halocline" may come up, and it can be hard for many children to picture or understand from a definition alone. Even if it does not arise in the study work presented to your child, this particular noun can feel to a young elementary child like a "big kid word" they can understand and impressively define for others. Since it is easy to grasp with a pretty simple demonstration, I recommend diving right in and giving your kid the chance.
A halocline occurs where salt water and freshwater come together without mixing well. The halocline is a rapid change in salt content as one moves vertically through the water column. Haloclines have many causes, and all kinds of factors play a part in whether one exists in a certain area and how deep this "line" occurs. The important thing is simply to help your child understand what a halocline is.
This simple activity is pretty cool and will clarify the term on a basic level for your little naturalist or scientist. Simply mix plenty of sea salt and food coloring into a clear container. An old jar, such as a peanut butter jar, works well, as does the glass brownie pan you see above.
This colored salt water represents the ocean water.
Fill a measuring cup, or anything with a lip for pouring, with fresh water tinted a contrasting color to your ocean water. As you slowly pour the freshwater into your ocean (as a river often runs into the ocean in shallows such as estuaries), you will see a line form when the container is viewed from the side. This line is your mini halocline. If one does not form, you may need to pour more slowly, more carefully, or from a lower height above the surface of the "ocean" water. Simply adjust and try again.
Haloclines happen because the density of salt water is generally greater than that of freshwater, so in areas where there isn't a lot of mixing (where wave action and currents are at a minimum), a halocline can form. Temperature also has a big impact on haloclines and their stability, because temperature directly affects the density of all substances, saltwater and fresh alike. Because of this, sharp haloclines are commonly found in fjords, still estuaries, caves, and places where the water is either still or very cold (the Arctic Ocean and the Antarctic).
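If you want to show an older child the numbers behind the demonstration, here is a minimal Python sketch using a simplified linear equation of state; the coefficients are rough assumptions for illustration, not precise oceanographic values:

```python
# Rough illustration of why salt water sinks below freshwater.
# Simplified linear equation of state; coefficients are ballpark
# approximations, not precise oceanographic values.

RHO_FRESH = 1000.0  # density of freshwater, kg/m^3 (near 4 degrees C)
BETA = 0.78         # approx. density increase (kg/m^3) per gram of salt per kg of water

def water_density(salinity_g_per_kg):
    """Approximate water density (kg/m^3) as a linear function of salinity."""
    return RHO_FRESH + BETA * salinity_g_per_kg

river = water_density(0)    # the clear "river" water in the demo
ocean = water_density(35)   # typical seawater carries ~35 g of salt per kg

print(f"freshwater: {river:.1f} kg/m^3")
print(f"seawater:   {ocean:.1f} kg/m^3")
# The roughly 3% density difference is why the fresh layer floats on
# top of the colored salt water, creating the visible halocline.
```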
|
<urn:uuid:c7efd3fd-743c-4f4f-af3e-0db779ee1afa>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.8125,
"fasttext_score": 0.05390661954879761,
"language": "en",
"language_score": 0.9229604601860046,
"url": "http://pinchxeverything.blogspot.com/2012/08/creating-halocline.html"
}
|
Fantastic Fraction Flags
Guest Post written by Sasha
Grade 5/6E had to make a flag for a made-up country called Ruitangia. The flag had to be a particular way: it had to have 32 cells, and 5/8 of the flag had to be blue, 1/8 yellow, and 1/8 red. This was a very challenging task because we had to figure out how many cells there would be for each colour. We also learnt how to remove lines from the table so it would look like a proper flag. To make this flag we had to make a table 4 squares tall and 8 squares long. Then we had to create a stylish design without using too many cells for each colour. It was a very difficult task and it challenged everyone’s ICT skills. We had to extend this project by choosing 3-4 colours and choosing how many cells we wanted. Then we had to figure out how much each fraction would be and whether there would be too many cells or not enough. It was also a very challenging task, and there were many difficult ICT skills involved.
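As a quick check of the arithmetic described above, here is a minimal Python sketch; the 4 x 8 grid and the colour fractions come from the post, while the handling of the leftover 1/8 is an assumption, since the post doesn’t say what colour it should be:

```python
# Checking the cell counts for the Ruitangia flag. The 4 x 8 grid and
# the fractions come from the post; the leftover 1/8 is assumed to be
# a fourth colour or white (the post doesn't specify).
from fractions import Fraction

total_cells = 4 * 8  # the table is 4 squares tall and 8 squares long

colours = {
    "blue": Fraction(5, 8),
    "yellow": Fraction(1, 8),
    "red": Fraction(1, 8),
}

for colour, share in colours.items():
    print(f"{colour}: {share} of {total_cells} cells = {share * total_cells} cells")

leftover = 1 - sum(colours.values())
print(f"leftover: {leftover} of {total_cells} cells = {leftover * total_cells} cells")
# Output: blue gets 20 cells, yellow 4, red 4, and 4 cells remain.
```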
Fraction Flags
What do you think of our fraction flags?
|
<urn:uuid:34a49eb6-f3b6-4e00-a12c-a4a588574435>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.8125,
"fasttext_score": 0.2693026065826416,
"language": "en",
"language_score": 0.9881048202514648,
"url": "http://hunichenhollands.global2.vic.edu.au/2013/09/04/fantastic-fraction-flags/"
}
|
Saroma captures your attention with a historical tour of this land, unscathed by the changes of time.
The Cheras, Pallavas, Pandyas and Cholas ruled the region during the seventh, eighth and ninth centuries. The Pallavas ruled a large portion of South India with Kancheepuram as their capital. Dravidian architecture reached its peak during the Pallava rule. The Shore Temple, which is a UNESCO World Heritage Site, was built during this period.
The Pandyas ruled Tamilnadu in the 8th century with Madurai as their capital. Temples like Meenakshi Amman Temple at Madurai and Nellaiappar Temple at Tirunelveli are the best examples of Pandyan Temple architecture.
The Cholas came to power during the 9th century. The Brihadeshwara Temple in Thanjavur and the Chidambaram Temple in Chidambaram are examples of architecture of the Chola realm. The Brihadeshwara Temple is a UNESCO Heritage Site.
In the early 19th century, the East India Company consolidated most of southern India into the Madras Presidency. When India became independent in 1947, the Madras Presidency became Madras State, comprising today’s Tamil Nadu, coastal Andhra Pradesh, certain parts of Orissa, northern Karnataka and parts of Kerala. The state was subsequently split up along linguistic lines. In 1968, Madras State was renamed Tamil Nadu.
|
<urn:uuid:cc07792b-1e15-448f-b719-f651a107eba6>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.640625,
"fasttext_score": 0.10263550281524658,
"language": "en",
"language_score": 0.9544609189033508,
"url": "http://saromaholidays.com/Destination/India/South_India/Tamil_Nadu/History.aspx"
}
|
Life as a Gatherer
Two cavewomen collecting apples from a tree.
The popular view is that ancient gatherers had a tenuous existence, totally dependent on the surrounding environment, but early people were both healthier and better organized than previously believed.
Indeed, examining prehistoric living from a health perspective, we can learn a lot about the types of activity and foods that are necessary to live at optimal health even today.
Early hunter-gatherer people typically lived together in “bands.” Life was dedicated to food gathering and every single person participated.
Groups of early humans would base their social structure on age, and they would divide up work duties by sex.
It was not uncommon for the men in a band to take up the hunting tasks. They might travel up to ten miles in a single day of hunting, often sprinting in rapid pursuit of their prey for up to a mile over rough terrain—the forerunner of today’s cross-training.
On the other hand, foraging for food such as nuts and berries, fruit and vegetables was primarily the work of women. It was usually conducted within the immediate vicinity of the camp.
But that doesn’t mean their life was any less important or demanding. In fact, foraging would typically contribute between 75% and 80% of the total calories consumed by the band. The purpose of hunting was to provide the protein that might have otherwise been lacking in a gathering-intensive society.
Today, anthropologists refer to the limited region around a given camp or settlement that was exploited for food as the “site catchment area.” Cohabiting groups of individuals would migrate seasonally to more bountiful catchments.
Obviously, a lot of walking was involved. Estimates indicate that the average gatherer would cover a distance somewhere in the range of three to seven miles a day.
While a gatherer might not have needed to sprint as fast as she could, it is still likely that much of her day was spent hiking, climbing, bending, crouching, lifting, stretching, carrying and otherwise exercising rigorously. Such activity contributed to women’s overall health and stamina.
Perhaps more important, gathering was the activity that enabled plants to be identified not only for immediate food and medicinal purposes, but eventually for cultivation and the basis of farming, leading to the Neolithic Revolution.
Being able to identify, pick, choose and bring home useful items from the environment was a critical survival skill. And it still is today, although instead of calling it “gathering,” we now call it “shopping.”
|
<urn:uuid:0e549356-57fd-40ef-9eba-867ecdf06718>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.8125,
"fasttext_score": 0.06239676475524902,
"language": "en",
"language_score": 0.9674888253211975,
"url": "http://www.cavemenworld.com/explore/life-as-a-gatherer/"
}
|
Extracurricular activity
20.04.2017, Maksimova Svetlana Nikolaevna
Theme: London
Materials for the lesson: cards with exercises, pictures with views of London, slides with exercises and pictures.
Objectives: The students will be able to:
- tell about London as a capital of Great Britain;
- use definite and indefinite articles;
- speak about the sights of London and describe them.
Procedure of the lesson
a) Talk with the students.
Answer the questions:
Good morning, students! Glad to see you. How are you?
What date is it today? What season is it now?
What is the weather like?
What is the weekday today?
You know a lot about Great Britain. Today we will talk about its capital. You will take a virtual tour of London.
Checking the home task
Before starting our tour, we should check the home task.
a) Revise the words
1. London 16. part
2. Westminster 17. different
3. Stock Exchange 18. heart
4. Oxford street 19. financial centre
5. financial 20. goods
6. office 21. extend
7. hotel 22. area
8. restaurant 23. government
9. market 24. wealthy
10. capital 25. endless
11. large 26. attract
12. population 27. visitors
13. situated 28. poor
14. cover 29. rich
15. divide 30. narrow
b) Answer the questions on the text. The students take cards with questions and ask each other.
1. What can you say about London as the capital of Great Britain?
2. How many people live in London?
3. Where is London situated?
4. What is meant by Greater London?
5. What parts is London traditionally divided into?
6. What did you learn about the City?
7. How much does the City extend?
8. How many people work in the City?
9. How many people live there?
10. What is the West End?
11. What is the Westminster?
12. What is the East End?
I Presentation of the new material
a) Good morning, students and guests!
Welcome to our round-London sightseeing tour! London is situated in the south-east of England
on the river Thames. What is London? There is more than one answer to this question. We can
say that it is one of the largest cities in the world. We can say that it is one of the world’s largest ports. We can point out that it is the capital of the United Kingdom. Today you will visit the most popular places of interest in London.
The most famous and popular places of interest in London are: the National Gallery, the Tower of London, St. Paul’s Cathedral, Trafalgar Square, Piccadilly Circus, Westminster Abbey, Buckingham Palace, the Houses of Parliament, Big Ben, Hyde Park, Admiral Nelson’s Column, and 10 Downing Street.
b) Revise the information about articles.
II Practice
a) Look at the pictures: here is the thick, short and happy indefinite article a/an, and also the thin, tall and bored definite article the.
Open the books to page 128. Complete the sentences with definite or indefinite articles where necessary. Work in two groups, then check.
Right answers: 1) the 2) the, the, the, - 3) the, the 4) the 5) – 6) the 7) a, the 8) a
9) the 10) the 11) a, -, - 12) the, -
b) Look at the interactive board. Look at the words we are going to use. These are the most famous places of interest in London: St. Paul’s Cathedral, the National Gallery, Trafalgar Square, Admiral Nelson’s Column, the Tower of London, Buckingham Palace, Hyde Park, Big Ben, 10 Downing Street, Piccadilly Circus, London Zoo, the Houses of Parliament, William the Conqueror, Sir Christopher Wren.
Work with the map of London. Look at the map.
First we…
will visit the Houses of Parliament and see Big Ben; next to them stands Westminster Abbey.
Then we …
will see Buckingham Palace, but we won't visit it because the State Rooms of the Palace are open to visitors only in August and September.
Next we …
will come to Trafalgar Square, go to the National Gallery, and see one of the finest collections of pictures and portraits in the world. After that, we'll see the house where the Prime Minister lives.
Finally we …
will see Admiral Nelson's Column, go to Piccadilly Circus, and visit St. Paul's Cathedral and the Tower of London.
The teacher reads the information about London's sights and shows slides.
c) Choose the correct answer. Look at the slides and complete the sentences.
1. ________ is the work of the famous architect Sir Christopher Wren.
a) The chapel of King’s College
b) St. Paul’s Cathedral
c) Globe Theatre
2. The building of St. Paul’s Cathedral took ______ years.
a) 65
b) 45
c) 35
3. Admiral Nelson's Column is in
a) Hyde Park
b) Trafalgar Square
c) Downing Street
4. ___________ lives in Buckingham Palace.
a) The Prime Minister
b) The Queen
c) A famous writer
5. Big Ben is the largest ________
a) tower
b) bridge
c) clock
Right answers: 1) b 2) c 3) b 4) b 5) c
d) Match A with B
Right answers:
Speaker’s Corner – Hyde Park
Downing Street – The Prime Minister
Buckingham Palace – Residence of the Queen
The Tower of London – William the Conqueror
Westminster Abbey – 900 years old
St. Paul’s Cathedral – Sir Christopher Wren
Assessment of the students
Today we have visited the most famous places of interest in London. I think you enjoyed your trip greatly and learned a lot of interesting facts about London. All of you get good marks for the lesson.
Home task
Describe one of the sights. Write an essay or make a poster.
Thank you for your activities. The lesson is over. Good bye!
|
<urn:uuid:77f252c7-2984-416f-a3e2-2b2774f405f7>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5,
"fasttext_score": 0.3404949903488159,
"language": "en",
"language_score": 0.7958884835243225,
"url": "http://tak-to-ent.net/load/482-1-0-22049"
}
|
Harriet Beecher Stowe Additional Biography
(Masterpieces of American Literature)
Uncle Tom’s Cabin was so captivating and moving to its readers that its more subversive attacks on white male hegemony went largely unnoticed. Furthermore, positing the spiritual superiority of women and black people was not enough to disrupt the status quo. Coupled with the lack of a clear and intellectually convincing argument for the moral and intellectual identity between the races and genders, the focus on the religious vision of Christ as mother overshadows the book’s possible political impact. By postulating the moral and spiritual superiority of the two suppressed groups, women and black people, instead of a vision of equal adulthood, Stowe marred the political impact of the book.
(Survey of Novels and Novellas)
Harriet Beecher Stowe was born Harriet Elizabeth Beecher on June 14, 1811, the seventh child of Lyman and Roxana Beecher. By this time her father’s fame as a preacher had spread well beyond the Congregational Church of Litchfield, Connecticut. All seven Beecher sons who lived to maturity became ministers, one becoming more famous than his father. Harriet, after attending Litchfield Academy, a well-regarded school, was sent to the Hartford Female Seminary, which was founded by her sister, Catharine—in some respects the substitute mother whom Harriet needed after Roxana died in 1816 but did not discover in the second Mrs. Beecher. In later years, Harriet would consistently idealize motherhood. When Catharine’s fiancé, a brilliant young man but one who had not experienced any perceptible religious conversion, died in 1822, the eleven-year-old Harriet felt the tragedy. In 1827, the shy, melancholy girl became a teacher in her sister’s school.
In 1832, Lyman Beecher accepted the presidency of Lane Seminary in Cincinnati, Ohio, and soon Catharine and Harriet had established another school there. Four years later, Harriet married a widower named Calvin Stowe, a Lane professor. In the years that followed, she had seven children. She also became familiar with slavery, as practiced just across the Ohio River in Kentucky; with the abolitionist movement, which boasted several notable champions in Cincinnati, including the future chief justice of the United States, Salmon P. Chase; and with the Underground Railroad. As a way of supplementing her husband’s small income, she also contributed to local and religious periodicals.
Not until the Stowes moved to Brunswick, Maine, in 1850, however, did she think of writing about slavery. Then, urged by her brother, Henry,...
(Great Authors of World Literature, Critical Edition)
Harriet Beecher Stowe. Published by Salem Press, Inc.
Harriet Elizabeth Beecher Stowe presented two regional backgrounds in her fiction: the South before the Civil War and the rural areas of New England and Maine. Her novels of the antebellum South were less authentic as well as more melodramatic in style. They were more popular, however, because of the timeliness of their theme and the antislavery feeling they created.
Harriet Elizabeth Beecher was the daughter of a famous minister, the Reverend Lyman Beecher, and the sister of Henry Ward Beecher. She was educated in the school of her older sister Catharine, who encouraged her inclination to write. The family moved to...
(Masterpieces of American Literature)
Harriet Beecher was born in Litchfield, Connecticut, on June 14, 1811, the seventh child of Lyman and Roxana Foote Beecher. Two years after her mother’s death in 1816, Harriet’s father married Harriet Porter of Portland, Maine. Lyman Beecher, a minister in the tradition of eighteenth century preacher Jonathan Edwards, who had attempted to breathe life into old Calvinism, dominated the household. Daily family worship and religious instruction shaped the lives of all the Beecher children. All seven brothers who reached maturity became ministers, according to their father’s wishes, and the girls in the family were expected to marry ministers. Because of her father’s focus on his sons’ mental and intellectual preparation as...
(Novels for Students)
Stowe seemed destined to write a powerful protest novel like Uncle Tom's Cabin. Her father was Lyman Beecher, a prominent evangelical...
|
<urn:uuid:3e779e38-5be5-46c9-b3fe-83b389a7fa99>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 4.03125,
"fasttext_score": 0.07343095541000366,
"language": "en",
"language_score": 0.9677932262420654,
"url": "https://www.enotes.com/topics/harriet-beecher-stowe/biography/more"
}
|
[Lone Sentry: WWII Photographs, Documents and Research]
TM-E 30-480: Handbook on Japanese Military Forces
Technical Manual, U.S. War Department, October 1, 1944
Chapter V: Special Forces
Section I: Naval Land Forces
1. ROLE AND CHARACTER. Until several years after World War I, Japan had no separate permanent naval landing organization corresponding to the U.S. Marine Corps. Instead, naval landing parties were organized temporarily from fleet personnel for a particular mission and were returned to their ships at its conclusion. This practice was made possible by the fact that every naval recruit was given training in land warfare concurrently with training in seamanship. The results of such training, together with any special skills such as machine gunner, truck driver, etc., were noted on the seaman's service record to serve as a basis for his inclusion in a landing party. Normally, the fleet commander designated certain ships to furnish personnel for the landing party. This practice, however, depleted their crews and lowered their efficiency for naval action. Therefore, in the late 1920's Japan began to experiment with more permanent units known as Special Naval Landing Forces (Rikusentai). Those units were formed at the four major Japanese naval bases: Sasebo, Kure, Maizuru, and Yokosuka, and were given numerical designations as formed; for example, there is a Sasebo 2nd Special Naval Landing Force and a Kure 2nd Special Naval Landing Force. They are composed entirely of naval personnel with a naval officer, usually a commander, in charge. These forces, first used against China and later against the Allies, have gone through several stages of evolution as the general war situation has changed. As the present war progressed, and the Japanese Navy became more involved in the seizure and defense of Pacific islands, other naval land organizations came into existence. Examples of these are: the Base Force (Tokubetsu Konkyochitai), the Guard Force (Keibitai), the Pioneers (Setsueitai) and the Naval Civil Engineering and Construction Units (Kaigun Kenchiku Shisetsu Butai).
2. SPECIAL NAVAL LANDING FORCES. a. Use in China. Special naval landing forces were used extensively in landing operations on the China coast beginning with 1932, and often performed garrison duty upon capturing their objective. Their performance was excellent when unopposed, but when determined resistance was encountered they exhibited a surprising lack of ability in infantry combat. These early special naval landing forces were organized as battalions, each estimated to comprise about 2,000 men divided into 4 companies. Three companies each consisted of 6 rifle platoons and 1 heavy machine gun platoon; the fourth company, of 3 rifle platoons and a heavy-weapons platoon of four 3-inch naval guns, or two 75-mm regimental guns and two 70-mm battalion guns. Tank and armored car units were employed in garrison duty and, where the terrain and situation favored their use, in assault operations.
b. Offensive use in World War II. When the present war began, special naval landing forces at first were used to occupy a chain of Pacific island bases. Wake Island was taken by one such force, while another seized the Gilbert Islands. Later they were used to spearhead landing operations against Java, Ambon, and Rabaul, where the bulk of the attack forces consisted of army personnel. During this period the special naval landing forces, although heavily armed, were used as mobile striking units. They consisted of two rifle companies (each having a machine-gun platoon), and one or two companies of heavy weapons (antitank guns, sometimes antiaircraft guns, and tanks), a total of 1,200 to 1,500 men. A small number of special troops (engineer, ordnance, signal, transport and medical) was also included. Figure 78 illustrates the composition of this type of unit, and also the change to heavier fire power as compared with the organization of the earlier types of naval landing forces used in China from 1932 to 1937.
[Figure 78. Maizuru No. 2 Special Landing Force: organization as of 19 November 1941.]
c. Special naval landing forces in defense. Special naval landing forces, or similar organizations, are occupying a number of outlying bases, because the Army has been reluctant to take over the defenses of these outposts. Since Japan has lost the initiative in the Pacific, these forces have been given defensive missions, and the Japanese Navy has changed their organization accordingly. This point is strikingly illustrated by a comparison of the organization of the Yokosuka 7th Special Naval Landing Force (fig. 79), encountered on New Georgia, with that of the Maizuru 2nd (fig. 78). The Yokosuka 7th has a larger amount of artillery, and its guns are mainly pedestal-mounted naval pieces. As first organized, the Yokosuka 7th was deficient in infantry troops and infantry weapons for defense, but later it was reinforced by a second rifle company. This new company consisted of 3 rifle platoons of 1 officer and 48 enlisted men each (3 light-machine-gun squads and 1 grenade-discharger squad), and a heavy-machine-gun platoon of 1 officer, 58 enlisted men, and 8 heavy machine guns.
[Figure 79. Yokosuka 7th Special Landing Force (As Originally Organized)]
Figure 79.
[Figure 79--Continued. Yokosuka 7th Special Landing Force (As Reinforced)]
Figure 79—Continued.
Other special naval landing forces probably started with an organization similar to that of the Maizuru 2nd, but their gun strengths and organizations most probably have veered toward that of the Yokosuka 7th. This process was found to have occurred in the Gilberts and Marshalls. Under Allied pressure Japan has found it necessary to increase the defenses of some islands by reinforcing the special naval landing force, or by combining two or more special naval landing forces into a new organization known as a Combined Special Naval Landing Force. In New Georgia the Kure 6th, the Yokosuka 7th, and portions of the Maizuru 4th were combined into the 8th Combined Special Naval Landing Force. In the Gilberts a special naval landing force was combined with a guard or base force to form a Special Defense Force.
3. TRAINING OF SPECIAL NAVAL LANDING FORCE. The earlier special naval landing forces received extensive training in landing operations and beach defense, but their training in infantry weapons and tactics does not appear to have been up to the standard of the Japanese Army. More recently there has been a greater emphasis on infantry training for units already in existence. Tactical doctrine for land warfare follows that of the Army, with certain changes based on lessons learned during the current war. The platoon is the basic tactical unit, rather than the company. The Japanese Navy has not hesitated to cut across company lines in assigning missions within the landing force and in detailing portions of a landing force to detached missions.
4. UNIFORMS AND PERSONAL EQUIPMENT. Small arms and personal equipment are similar to that used by the Army. Dress uniform consists of navy blues with canvas leggings. The Japanese characters for "Special Naval Landing Force" appear on the naval cap in the manner in which the words "U.S. Navy" appear on the caps of U.S. enlisted men. Field uniforms are similar to the Army in cut and color, although the color is sometimes more green. The typical Army cloth cap and steel helmet are used, but the insignia is an anchor instead of the star of the Army (see ch. XI).
5. MISCELLANEOUS NAVAL ORGANIZATIONS. a. Base force or special base force (Tokubetsu Konkyochitai). This unit is the Naval Command echelon for the defense forces of a prescribed area. In addition to headquarters personnel, the base force has certain heavy coast artillery and also heavy and medium antiaircraft artillery. There appears to be no fixed organization, the size of the base force depending upon the importance and extent of the area to be defended. The following units may be found attached to base forces:
Small naval surface units (patrol boats).
One or more special naval landing forces.
One or more guard forces.
Pioneer units.
Navy civil engineering and construction units.
b. Pioneers (Setsueitai). The function of this unit is the construction of airfields, fortifications, barracks, etc. It is commanded by a naval officer, usually of the rank of captain or commander, has attached officers and civilians with engineering experience, and is semimilitary in character. There appear to be 2 types of organization, of 800 and 1,300 men respectively, depending on the size of the job. The unit contains from 1/4 to 1/3 Japanese, and the balance are Koreans or Formosans. The 15th Pioneers was such a unit.
c. Navy civil engineering and construction unit (Kaigun Kenchiku Shisetsu Butai). This unit appears to be used primarily for common labor, and is of little combat value. It is commanded by a Japanese civilian and is composed mainly of Koreans, with about 10 percent armed Japanese to serve as overseers. Its size appears to be around 1,000 men. In combat value, it is inferior to the pioneer unit since it contains fewer armed Japanese.
d. Guard force (Keibitai). This unit is used for the defense of small installations. It is composed of naval personnel, and has light and medium antiaircraft and heavy infantry weapons. Its size, armament, and organization vary, and several guard forces may be attached to a base force.
[Back to Table of Contents] Back to Table of Contents
|
<urn:uuid:fefe9f59-b24c-4bc3-8897-3e6b432b1398>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.578125,
"fasttext_score": 0.02011209726333618,
"language": "en",
"language_score": 0.9599003195762634,
"url": "http://lonesentry.com/manuals/handbook-japanese-military/navy-special-forces.html"
}
|
Sighted vs. Blind
Although the majority of people with Non-24-Hour Sleep-Wake Disorder (Non-24) are blind, it also occurs in a small number of sighted individuals. As many as half to three-quarters of totally blind patients (i.e., those having no light perception) are considered to have Non-24, representing approximately 65,000 to 95,000 Americans. The number of sighted people with Non-24 is unknown.
Most blind people have some light perception. As a result, their circadian rhythms are synchronized to a 24-hour day-night cycle, as in the sighted. However, it has been estimated that of the 1.3 million blind people in the United States, 10% have no light perception. These totally blind individuals are at the greatest risk for Non-24.
Non-24 is common in totally blind people due to a lack of light entering the eyes. Photoreceptors in the retina normally signal the brain and regulate the 24-hour day-night cycle. For a totally blind individual with Non-24, their inability to perceive light prevents the synchronization of their internal body clock to the day-night cycle. As a result, the internal body clock defaults to a non-24 hour cycle, causing fluctuating periods of good sleep followed by periods of poor sleep and excessive daytime sleepiness.
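To make the drift concrete, here is a toy Python sketch of a free-running clock; the 24.5-hour intrinsic period is an illustrative assumption, since actual periods vary from person to person:

```python
# A toy model of the free-running internal clock described above: if
# the intrinsic period ("tau") exceeds 24 hours, the preferred sleep
# time drifts later each day. The 24.5-hour tau is an illustrative
# assumption; real values differ from person to person.

TAU_HOURS = 24.5    # assumed intrinsic circadian period
ONSET_DAY0 = 23.0   # day 0: body wants to fall asleep at 11 PM

for day in range(0, 49, 7):
    onset = (ONSET_DAY0 + day * (TAU_HOURS - 24.0)) % 24
    hours, minutes = int(onset), round((onset % 1) * 60)
    print(f"day {day:2d}: preferred sleep onset ~ {hours:02d}:{minutes:02d}")

# With a 0.5-hour daily drift, the clock cycles all the way around in
# about 48 days: stretches of normal night sleep alternate with
# stretches of daytime sleepiness and night-time wakefulness.
```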
The exact cause of Non-24 in sighted people is unknown, but it is believed to involve neurological factors.
|
<urn:uuid:a54f6a03-f20a-4e70-a552-68d62af7de9c>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.6875,
"fasttext_score": 0.351576030254364,
"language": "en",
"language_score": 0.9483685493469238,
"url": "https://sleepfoundation.org/non-24/content/sighted-vs-blind"
}
|
Turret (architecture)
Turret (highlighted in red) attached to a tower on a baronial building in Scotland
A building may have both towers and turrets; the two may be similar in size or height, but a turret projects from the edge of a building rather than continuing to the ground. The size of a turret is therefore limited, since it puts additional stresses on the structure of the building. Turrets were traditionally supported by a corbel.
In modern times, a gun turret is a weapon mount that houses the crew or mechanism of a projectile-firing weapon, allowing the weapon to be aimed and fired in some degree of azimuth and elevation. It can be found on warships, combat vehicles, military aircraft, and land fortifications, and usually offers some degree of armour or protection.[3]
Notes
1. From Italian torretta, "little tower"; Latin turris, "tower".
2. Ontarioarchitecture.com
3. British Scaffold Tower Manufacturer
|
<urn:uuid:256b26a7-29c2-46c8-a383-e38b64c48e97>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.59375,
"fasttext_score": 0.5753572583198547,
"language": "en",
"language_score": 0.9515741467475891,
"url": "https://en.turkcewiki.org/wiki/Turret_(architecture)"
}
|
Posted in Religion
Vicars and Vicarages
The leader (‘incumbent’) of a Church of England parish church is known as a vicar, the equivalent of a priest, and his/her title is Reverend. The role is earned after several years’ training and at least three as a curate (assistant vicar). The Bishop of a diocese (region) appoints the vicar to a vacancy. A rent-free house, called a vicarage, is traditionally a substantial part of the job’s perks.
The vicar wears a long black cassock under a shorter robe called a surplice. The cassock has a stand-up collar with a front gap by which a white insert is displayed. Over the shoulders hangs a long fabric stole, sometimes richly embroidered. The vicar may have a team of helpers covering administration, music and some of the preaching and teaching. There are also the Church Wardens and the Verger.
Centuries ago the vicar frequently lived on an upper floor of the church but after clergymen were allowed to marry post-Reformation, more comfortable accommodation was deemed necessary; hence the building of vicarages either adjacent to the church or very close by. Today, large vicarages have generally been sold off and replaced with small, modern properties, while some vicars make independent arrangements.
(Image: The Diocese of York at Wikimedia Commons / CC BY-SA 2.0)
|
<urn:uuid:e7fe8843-adcd-4a14-83e8-f66b843fe0df>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.578125,
"fasttext_score": 0.0717816948890686,
"language": "en",
"language_score": 0.9577757120132446,
"url": "https://britdips.xyz/6409/"
}
|
How George Westinghouse Gave The Railroads a Brake
By Alexa Veselic
George Westinghouse
With rail travel booming in the 1800s, crashes and work accidents associated with trains were often fatal.
Under the slow and ineffective hand-brake system then in use, brakemen sat atop the train's cars and, as the brake whistle sounded, manually turned a hand wheel to apply the brakes to each individual car. The men would then leap onto the next freight car in the line and do the same.
The deadly system continued until 1868 when George Westinghouse developed the revolutionary air-brake, which allowed the train’s engineer to apply the brakes to all cars simultaneously using compressed air.
The air-brake system made possible the construction of longer and faster trains, eliminated the dangerous job of the brakeman, and made travel safer for passengers.
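A back-of-the-envelope comparison makes the advantage concrete. The sketch below is purely illustrative; the crew size and timings are assumptions, not historical measurements:

```python
# Illustrative comparison of sequential hand braking vs. Westinghouse's
# simultaneous air brake. Every figure is an assumed example value.

NUM_CARS = 30
BRAKEMEN = 3                  # assumed crew size
SECONDS_PER_HAND_BRAKE = 45   # assumed time to set one car's hand wheel,
                              # including the leap to the next car
AIR_SIGNAL_SECONDS = 5        # assumed time for the pressure change to act

# Hand brakes: each brakeman works through his share of the cars in turn.
hand_brake_seconds = (NUM_CARS / BRAKEMEN) * SECONDS_PER_HAND_BRAKE

# Air brake: one action by the engineer reaches every car at once.
air_brake_seconds = AIR_SIGNAL_SECONDS

print(f"Hand brakes: ~{hand_brake_seconds / 60:.1f} minutes to brake the train")
print(f"Air brake:   ~{air_brake_seconds} seconds to brake the train")
```

Under these assumed figures, braking drops from several minutes of dangerous car-to-car work to a few seconds of compressed air doing the same job everywhere at once.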
Westinghouse was born into industry in Central Bridge, New York in 1846. His father, George Westinghouse Sr., owned a company that manufactured farm equipment.
By his early twenties, Westinghouse had already developed two products to improve railroad travel, the car replacer and railroad switch. The steel needed for these technologies is what originally brought Westinghouse to Pittsburgh. At the time, Pittsburgh produced about 45% of the nation’s total iron and cast more steel than any other city.
While traveling the country selling his products, Westinghouse developed the concept for his air-brake. After patenting the design at the age of 23, Westinghouse formed the Westinghouse Air-Brake Company at 25th Street and Liberty Avenue in the Strip District.
Westinghouse was not merely known for his innovative mind, but also for his paternal attitude toward his employees. According to Ed Reis, Westinghouse historian at the Heinz History Center, this quality is what separated Westinghouse from other Pittsburgh industrialists.
“He had a very good rapport with his workers which wasn’t very common at the time,” Reis said. “There was never a strike, he paid better than the other industrialists and he treated his workers with respect.”
The fact that Westinghouse had only 361 patents credited to him, far fewer than his competitor, Thomas Edison, was due in part to the respect he had for his employees. “If you were working for Westinghouse and a product you were working on was selected for a patent, then your name was on the patent not Westinghouse,” Reis said. “He believed workers deserved the recognition.”
Westinghouse instituted Saturday half holidays at his plants, giving employees a five and a half day work week, which was uncommon at the time. He also introduced paid vacations and pension and disability programs. Westinghouse believed that employee satisfaction was what truly made a company successful.
By 1890, the Westinghouse Air-Brake Company had relocated to Wilmerding, Pa., an industrial town that provided housing for his employees' families and a safe factory for workers. In the first year, 6,000 workers at the Wilmerding plant were producing over 1,000 sets of brakes a day. Westinghouse died in 1914, but his legacy of innovation continues.
The original factory still stands in the Strip District as a reminder of one of the city's industrial giants; today it houses the offices of the Pittsburgh Opera. A statue honoring Westinghouse was erected in Schenley Park in 1930, funded through voluntary contributions from his employees.
|
<urn:uuid:a02538e9-a537-4718-a442-794115e99b0f>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.71875,
"fasttext_score": 0.11382526159286499,
"language": "en",
"language_score": 0.9828699827194214,
"url": "http://www.offthebluff.com/how-george-westinghouse-gave-the-railroads-a-brake/"
}
|
Conflict
A conflict is a struggle between people. The struggle may be physical, or between conflicting ideas. The word comes from the Latin confligere, which means to come together for a battle. Conflicts can either be within one person, or they can involve several people or groups. Conflicts arise because there are needs, values, or ideas that are seen to be different, and there is no means to reconcile the dispute.
Very often, conflicts lead to fights, or even wars (in the case where conflicts are solved with weapons). Conflict between ideas is usually fought with propaganda.
|
<urn:uuid:6d26497e-d4fd-4a24-8aaf-ab81b85c1340>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.640625,
"fasttext_score": 0.1167861819267273,
"language": "en",
"language_score": 0.934578001499176,
"url": "https://wiki.kidzsearch.com/wiki/Conflict"
}
|
The growing participation of African Americans in television, both in front of and behind the camera, has coincided with the radical restructuring of race relations in the United States from the end of World War II to the present day. Throughout this period, the specific characteristics of the television industry have complicated the ways in which these changing relations have been represented in television programming.
Television was conceived as a form of commercialized mass entertainment. Its standard fare (comedy, melodrama, and variety shows) favors simple plot structures, family situations, light treatment of social issues, and reassuring happy endings, all of which greatly delimit character and thematic developments. Perhaps more than any other group in American society, African Americans have suffered from the tendencies of these shows to depict one-dimensional character stereotypes.
Because commercial networks are primarily concerned with the avoidance of controversy and the creation of shows with the greatest possible appeal, African Americans were rarely featured in network series during the early years of television. Since the 1960s, the growing recognition by network executives that African Americans are an important group of consumers has led to greater visibility; however, in most cases, fear of controversy has led programmers to promote an unrealistic view of African-American life. Black performers, writers, directors, and producers have had to struggle against the effects of persistent typecasting and enforced sanitization in exchange for acceptance in white households. Only when African Americans made headway into positions of power in the production of television programs were alternative modes of representing African Americans developed.
Although experiments with television technology date back to the 1880s, it was not until the 1930s that sufficient technical expertise and financial backing were secured for the establishment of viable television networks. The National Broadcasting Company (NBC), a subsidiary of the Radio Corporation of America (RCA), wanted to begin commercial television broadcasting on a wide scale but was interrupted by the outbreak of World War II, and the television age did not commence in earnest until after peace was declared.
In 1948 the three major networks, the National Broadcasting Company (NBC), the Columbia Broadcasting System (CBS), and the American Broadcasting Company (ABC), began regularly scheduled prime-time programming. That same year, the Democratic Party adopted
a strong civil rights platform at the Democratic convention, and the Truman administration issued a report entitled To Secure These Rights, the first statement made by the federal government in support of desegregation. Yet these two epochal revolutions, television and the civil rights movement, had little influence on one another for many years. While NBC, as early as 1951, stipulated that programs dealing with race and ethnicity should avoid ridiculing any social or racial group, most network programming rarely reflected the turbulence caused by the agitation for civil rights, nor did activists look to television as a medium for effecting social change. The effort to obtain fair and honest representation of African Americans and African-American issues on television remains a complex and protracted struggle.
In the early years of television, African Americans appeared most often as occasional guests on variety shows. Music entertainment artists, sports personalities, comedians, and political figures of the stature of Ella Fitzgerald, Lena Horne, Sarah Vaughan, Louis Armstrong, Duke Ellington, Cab Calloway, Pearl Bailey, Eartha Kitt, the Harlem Globetrotters, Dewey "Pigmeat" Markham, Bill "Bojangles" Robinson, Ethel Waters, Joe Louis, Sammy Davis Jr., Ralph Bunche, and Paul Robeson appeared in such shows as Milton Berle's Texaco Star Theater (1948–1953), Ed Sullivan's Toast of the Town (1948–1955), the Steve Allen Show (1950–1952; 1956–1961), and Cavalcade of Stars (1949–1952). Quiz shows like Strike It Rich (1951–1958), amateur talent contests like Chance of a Lifetime (1950–1953; 1955–1956), and shows concentrating on sporting events (particularly boxing matches), like The Gillette Cavalcade of Sports (1948–1960), provided another venue in which prominent blacks occasionally took part.
Rarely did African Americans host their own shows. Short-run exceptions included The Bob Howard Show (1948–1950); Sugar Hill Times (1949), an all-black variety show featuring Willie Bryant and Harry Belafonte; the Hazel Scott Show (1950), the first show featuring a black female host; the Billy Daniels Show (1952); and the Nat "King" Cole Show (1956–1957). There were even fewer all-black shows designed to appeal to all-black audiences or shows directed and produced by blacks. Short-lived local productions constituted the bulk of the latter category. In the early 1950s, a black amateur show called Spotlight on Harlem was broadcast on WJZ-TV in New York City; in 1955, the religious Mahalia Jackson Show appeared on Chicago's WBBM-TV.
Comedy was the only fiction-oriented genre in which African Americans were visible participants. Comedy linked television with the deeply entrenched cultural tradition of minstrelsy and blackface practices dating back to the antebellum period. In this cultural tradition, the representation of African Americans was confined either to degrading stereotypes of questionable intelligence and integrity (such as coons, mammies, Uncle Toms, or Stepin Fetchits) or to characterizations of people in willingly subservient positions (maids, chauffeurs, elevator operators, train conductors, shoeshine boys, handypeople, and the like). Beginning in the 1920s, radio comedies had perpetuated this cultural tradition, tailored to the needs of the medium.
The dominant television genre, the situation comedy, was invented on the radio. Like its television successor, the radio comedy (self-contained fifteen-minute or half-hour episodes with a fixed set of characters, usually involving minor domestic or familial disputes, painlessly resolved in the allotted time period) lent itself to caricature. Since all radio comedy was verbal, it relied for much of its humor on the misuse of language, such as malapropisms or syntax errors; and jokes made at the expense of African Americans (and their supposed difficulties with the English language) were a staple of radio comedies.
The first successful radio comedy, and the series that in many ways defined the genre, was Amos 'n' Andy (1929–1960), which employed white actors to depict unflattering black characters. Amos 'n' Andy featured two white comedians, Freeman Gosden and Charles Correll, working in the style of minstrelsy and vaudeville. Another radio show that was successfully transferred to television was Beulah (1950–1953). The character Beulah was originally created for a radio show called Fibber McGee and Molly (1935–1957), in which Beulah was played by Marlin Hurt, a white man. These two shows, which adopted an attitude of contempt and condescending sympathy toward the black persona, were re-created on television with few changes, except that the verisimilitude of the genre demanded the use of black actors rather than whites in blackface and "blackvoice." As with Amos 'n' Andy (1951–1953), in its first season the thirteenth most-watched show on television, the creators of Beulah had no trouble securing commercial support; both television shows turned out to be as popular as their radio predecessors, though both were short-lived in their network television incarnations.
Beulah (played first by Ethel Waters, then by Louise Beavers) developed the story of the faithful, complacent Aunt Jemima who worked for a white suburban middle-class nuclear family. Her unquestioning devotion to solving familial problems in the household of her white employers, the Hendersons, validated a social structure that forced black domestic workers to profess unconditional fidelity to white families, while neglecting their personal relations to their own kin. When blacks were included in Beulah's personal world, they appeared only as stereotypes. For instance, the neighbor's maid, Oriole (played by Butterfly McQueen), was an even more pronounced Aunt Jemima character; and Beulah's boyfriend, Bill Jackson (played by Percy Harris and Dooley Wilson), the Henderson's handyperson, was a coon. The dynamics between the white world of the Hendersons and Beulah's black world were those of the perfect object with a defective mirror image. The Hendersons represented a well-adjusted family, supported by a strong yet loving working father whose sizable income made it possible for the mother to remain at home. In contrast, Beulah was condemned to chasing after an idealized version of the family because her boyfriend did not seem too interested in a stable relationship; she was destined to work forever because Bill Jackson did not seem capable of taking full financial responsibility in the event of a marriage. As the show could only exist as long as Beulah was a maid, it was evident that her desires were never to be fulfilled. If Beulah seemed to enjoy channeling all her energy toward the solution of a white family's conflicts, it was because her own problems deserved no solution.
Amos 'n' Andy, on the other hand, belonged to the category of folkish programs that focused on the daily life and family affairs of various ethnic groups. Several such programs, among them Mama (1949–1956), The Goldbergs (1949–1955), and Life with Luigi (1952–1953), depicting the lives of Norwegians, Jews, and Italians, respectively, were popularized in the early 1950s. In Amos 'n' Andy, the main roles comprised an assortment of stereotypical black characters. Amos Jones (played by Alvin Childress) and his wife, Ruby (played by Jane Adams), were passive Uncle Toms, while Andrew "Andy" Hogg Brown (played by Spencer Williams) was gullible and half-witted. George "Kingfish" Stevens (played by Tim Moore) was a deceiving, unemployed coon, whose authority was constantly being undermined by his shrewd wife Sapphire (played by Ernestine Wade) and overbearing mother-in-law, "Mama" (played by Amanda Randolph). "Lightnin'" (played by Horace Stewart) was a janitor, and Algonquin J. Calhoun (played by Johnny Lee) was a fast-talking lawyer. These stereotypical characters were contrasted, in turn, with serious, level-headed black supporting characters, such as doctors, business people, judges, law enforcers, and so forth. The humorous situations created by the juxtapositions of these two types of characters, stereotypical and realistic, made Amos 'n' Andy an exceptionally intricate comedy and the first all-black television comedy that opened a window for white audiences on the everyday lives of African-American families in Harlem.
Having an all-black cast made it possible for Amos 'n' Andy to neglect relevant but controversial issues like race relations. The Harlem of this show was a world of separate but equal contentment, where happy losers, always ready to make fools of themselves, coexisted with regular people. Furthermore, the show's reliance on stereotypes precluded both the full-fledged development of its characters and the possibility of an authentic investigation into the pathos of black daily life. Even though the performers often showed themselves to be masters of comedy and vaudeville, it is unfortunate that someone like Spencer Williams, who was also a prolific maker of all-black films, would only be remembered by the general public as Andy.
While a number of African Americans were able to enjoy shows like Beulah and Amos 'n' Andy, many were offended by their portrayal of stereotypes, as well as by the marked absence of African Americans from other fictional genres. Black opposition had rallied without success to protest the airing of this kind of show on the radio in the 1930s. Before Amos 'n' Andy aired in 1951, the National Association for the Advancement of Colored People (NAACP) began suing CBS for the show's demeaning depiction of blacks, and the organization did not rest until the show was canceled in 1953. Yet the viewership of white and black audiences alike kept Amos 'n' Andy in syndication until 1966. The NAACP's victory in terminating Amos 'n' Andy and Beulah also proved somewhat pyrrhic, since during the subsequent decade the networks produced no dramatic series with African Americans as central characters, while stereotyped portrayals of minor characters continued.
Many secondary comic characters from the radio and cinema found a niche for themselves in television. In the Jack Benny Show (1950–1965), Rochester Van Jones (played by Eddie "Rochester" Anderson) appeared as Benny's valet and chauffeur. For Anderson, whose Rochester had amounted to a combination of the coon and the faithful servant in the radio show, the shift to television proved advantageous, as he was able to give his character greater depth on the television screen. Indeed, through their outlandish employer-employee relationship, Benny and Anderson established one of the first interracial onscreen partnerships in which the deployment of power alternated evenly from one character to the other. The same may not be said of Willie Best's characterizations in shows like The Stu Erwin Show (1950–1955) and My Little Margie (1952–1955). Best tended to confine his antics to the Stepin Fetchit style and thereby reinforced the worst aspects of the master-slave dynamic.
African-American participation in dramatic series was confined to supporting roles in specific episodes in which the color-line tradition was maintained, such as the Philco Television Playhouse (1948–1955), which featured a young Sidney Poitier in "A Man Is Ten Feet Tall" in 1955; the General Electric Theater (1953–1962), which featured Ethel Waters and Harry Belafonte in "Winner by Decision" in 1955; and The Hallmark Hall of Fame (1952) productions in 1957 and 1959 of Marc Connelly's "Green Pastures," a biblical retelling performed by an all-black cast. African Americans also appeared as jungle savages in such shows as Ramar of the Jungle (1952–1953), Jungle Jim (1955), and Sheena, Queen of the Jungle (1955–1956). The television western, one of the most important dramatic genres of the time, almost entirely excluded African Americans, despite their importance to the real American West. In the case of those narratives set in contemporary cities, if African Americans were ever included, it was only as props signifying urban deviance and decay. A rare exception to this was Harlem Detective (1953–1954), an extremely low-budget, local program about an interracial pair of detectives (with William Marshall and William Harriston playing the roles of the black and white detectives, respectively) produced by New York's WOR-TV.
Despite the sporadic opening of white households to exceptional African Americans and the effectiveness of the NAACP's action in canceling Amos 'n' Andy, the networks succumbed to the growing political conservatism and racial antagonism of the mid-1950s. The cancellation of the Nat "King" Cole Show exemplifies the attitude that prevailed among programmers during that time. Nat "King" Cole had an impeccable record: his excellent musical and vocal training complemented his noncontroversial, delicate, and urbane delivery; he had a nationally successful radio show on NBC in the 1940s; and over forty of his recordings had been listed for their top sales by Billboard magazine between 1940 and 1955. Cole's great popularity was demonstrated in his frequent appearances as guest or host on the most important television variety shows. NBC first backed Cole completely, as is evidenced by the network's willingness to pour money into the show's budget, to increase the show's format from fifteen to thirty minutes, and to experiment with different time slots. Cole also had the support of reputable musicians and singers who were willing to perform for nominal fees. His guests included Count Basie, Mahalia Jackson, Pearl Bailey, and all-star musicians from "Jazz at the Philharmonic." Yet the Nat "King" Cole Show did not gain enough popularity among white audiences to survive the competition for top ratings; nor was it able to secure a stable national sponsor. After about fifty performances, the show was canceled.
African Americans exhibited great courage in these early years of television by supporting some shows and boycotting others. Organizations such as the Committee on Employment Opportunities for Negroes, the Coordinating Council for Negro Performers, and the Committee for the Negro in the Arts constantly fought for greater and fairer inclusion. During the height of the civil rights movement, the participation of African Americans in television intensified. Both Africans and African Americans became the object of scrutiny for daily news shows and network documentaries. The profound effects of the radical recomposition of race relations in the United States and the independence movement in Africa could not go unreported. "The Red and the Black" (January 1961), a segment of the Close Up! documentary series, analyzed the potential encroachment of the Soviet Union in Africa as European nations withdrew from the continent; "Robert Ruark's Africa" (May 1962), a documentary special shot on location in Kenya, defended the colonial presence in the continent. The series See It Now (1951–1958) started reporting on the civil rights movement as early as 1954, when the U.S. Supreme Court ruled to desegregate public schools, and exposed the measures that had been taken to hinder desegregation in Norfolk high schools in an episode titled "The Lost Class of '59," aired in January 1959. CBS Reports (1959) examined, among other matters, the living conditions of blacks in the rural South in specials such as "Harvest of Shame" (November 1960). In December 1960 NBC White Paper aired "Sit-In," a special report on desegregation conflicts in Nashville. "Crucial Summer" (which started airing in August 1963) was a five-part series of half-hour reports on discrimination practices in housing, education, and employment. It was followed by "The American Revolution of '63" (which started airing in September 1963), a three-hour documentary on discrimination in different areas of daily life across the nation.
However, the gains made by the airing of these programs were offset by the effects of poor scheduling, and they were often made to compete with popular series programs and variety and game shows from which blacks had been virtually erased. As the civil rights movement gained momentum, some southern local stations preempted programming that focused on racial issues, while other southern stations served as a means for the propagation of segregationist propaganda.
As black issues came to be scrutinized in news reports and documentaries, African Americans began to appear in the growing genre of socially relevant dramas, such as The Naked City (1958–1963), Dr. Kildare (1961–1966), Ben Casey (1961–1966), The Defenders (1961–1965), The Nurses (1962–1965), Channing (1963–1964), The Fugitive (1963–1967), and Slattery's People (1963–1965). These shows, which usually relied on news stories for their dramatic material, explored social problems from the perspective of white doctors, nurses, educators, social workers, or lawyers. Although social issues were seriously treated, their impact was much diminished by the easy and felicitous resolution with which each episode was brought to a close. Furthermore, the African Americans who appeared in these programs (Ruby Dee, Louis Gossett Jr., Ossie Davis, and others) were given roles in episodes where topics were racially defined, and the color line was strictly maintained.
The short-lived social drama East Side/West Side (1963–1964) proved an exception to this rule. It was the first noncomedy in the history of television to cast an African American (Cicely Tyson) as a regular character. The program portrayed the dreary realities of urban America without supplying artificial happy endings; on occasion, parts of the show were censored because of their liberal treatment of interracial relations. East Side/West Side ran into difficulties when programmers tried to obtain commercial sponsors for the hour during which it was aired; eventually, despite changes in format, it was canceled after little more than twenty episodes.
Unquestionably, the more realistic television genres that evolved as a result of the civil rights movement served as powerful mechanisms for sensitizing audiences to the predicaments of those affected by racism. But as television grew to occupy center stage in American popular entertainment, the gains of the civil rights movement came to be ambiguously manifested. By 1965, a profusion of top-rated programs had begun casting African Americans both in leading and supporting roles. The networks and commercial sponsors became aware of the purchasing power of African-American audiences, and at the same time they discovered that products could be advertised to African-American consumers without necessarily offending white tastes. Arguably, the growing inclusion of African Americans in fiction-oriented genres was premised on a radical inversion of previous patterns. If blacks were to be freed from stereotypical and subservient representation, they were nevertheless portrayed in ways designed to please white audiences. Their emergence as a presence in television was to be facilitated by a thorough cleansing.
A sign of the changing times was the popular police comedy Car 54, Where Are You? (1961–1963). Set in a rundown part of the Bronx, this comedy featured black officers in secondary roles (played by Nipsey Russell and Frederick O'Neal). However, the real turning point in characterizations came with I Spy (1965–1968), a dramatic series featuring Bill Cosby and Robert Culp as Alexander Scott and Kelly Robinson, two secret agents whose adventures took them to the world's most sophisticated spots, where racial tensions did not exist. In this role, Cosby played an immaculate, disciplined, intelligent, highly educated, and cultured black man who engaged in occasional romances but did not appear sexually threatening and whose sense of humor was neither eccentric nor vulgar. While inverting stereotypical roles, I Spy also created a one-to-one harmonious interracial friendship between two men.
I Spy was followed by other top-rated programs. In Mission: Impossible (1966–1973), Greg Morris played Barney Collier, a mechanic and electronics expert and member of the espionage team; in Mannix (1967–1975), a crime series about a private eye, Gail Fisher played Peggy Fair, Mannix's secretary; in Ironside (1967–1975), Don Mitchell played Mark Sanger, Ironside's personal assistant and bodyguard; and in the crime show Mod Squad (1968–1973), Clarence Williams III played Linc Hayes, one of the three undercover police officers working for the Los Angeles Police Department. This trend was manifested in other top-ranked shows: Peyton Place (1964–1969), the first prime-time soap opera, featured Ruby Dee, Percy Rodriguez, and Glynn Turman as the Miles Family; in Hogan's Heroes (1965–1971), a sitcom about American prisoners in a German POW camp during World War II, Ivan Dixon played Sergeant Kinchloe; in Daktari (1966–1969), Hari Rhodes played an African zoologist; in Batman (1966–1968), Eartha Kitt appeared as Catwoman; in Star Trek (1966–1969), Nichelle Nichols was Lieutenant Uhura; in the variety show Rowan and Martin's Laugh-In (1966–1973), Chelsea Brown, Johnny Brown, and Teresa Graves appeared regularly; and in the soap opera The Guiding Light (1952), Cicely Tyson started appearing regularly after 1967.
Julia (1968–1971) was the first sitcom in over fifteen years to feature African Americans in the main roles. It placed seventh in its first season, thereby becoming as popular as Amos 'n' Andy had been in its time. Julia Baker (played by Diahann Carroll) was a middle-class, cultured widow who spoke standard English. Her occupation as a nurse suggested that she had attended college. She was economically and emotionally self-sufficient; a caring parent to her little son Corey (played by Marc Copage); and equipped with enough sophistication and wit to solve the typical comic dilemmas presented in the series. However, many African Americans criticized the show for neglecting the more pressing social issues of their day. In Julia's suburban world, it was not so much that racism did not matter, but that integration had been accomplished at the expense of black culture. Julia's cast of black friends and relatives (played by Virginia Capers, Diana Sands, Paul Winfield, and Fred Williamson) appeared equally sanitized. Ironically, Julia perpetuated some of the same misrepresentations of the black family as Beulah, for despite its elegant trappings, Julia's was yet another female-headed African-American household.
As successful as Julia was the Bill Cosby Show (1969–1971), which featured Bill Cosby as Chet Kincaid, a single, middle-class high school gym teacher. In contrast to Julia, however, this comedy series presented narrative conflicts that involved Cosby in the affairs of black relatives and inner-city friends, as well as in those of white associates and suburban students. The Bill Cosby Show sought to integrate the elements of African-American culture through the use of sound, setting, and character: African-American music played in the background, props reminded one of contemporary political events, Jackie "Moms" Mabley and Mantan Moreland appeared frequently as Cosby's aunt and uncle, and Cosby's jokes often invested events from black everyday life with comic pathos. A less provocative but long-running sitcom, Room 222 (1969–1974), concerned an integrated school in Los Angeles. Pete Dixon (played by Lloyd Haynes), a black history teacher, combined the recounting of important events of black history with attempts to address his students' daily problems. Another comic series, Barefoot in the Park (1970–1971), with Scoey Mitchell, Tracey Reed, Thelma Carpenter, and Nipsey Russell, was attempted, but failed after thirteen episodes; it was an adaptation of the film by the same name but with African Americans playing the leading roles.
By the end of the 1960s, many of the shows in which blacks could either demonstrate their decision-making abilities or investigate the complexities of their lives had been canceled. Two black variety shows failed due to poor scheduling and lack of white viewer support: The Sammy Davis Jr. Show (1966), the first variety show hosted by a black person since the Nat "King" Cole Show; and The Leslie Uggams Show (1969), the first variety show hosted by a black woman since Hazel Scott. A similar fate befell The Outcasts (1968–1969), an unusual western set in the period immediately following the Civil War. The show, which featured two bounty hunters, a former slave and a former slave owner, and addressed without qualms many of the same controversial themes associated with the civil rights movement, was canceled due to poor ratings. Equally short-lived was Hawk (1966), a police drama shot on location in New York City, which featured a full-blooded Native American detective (played by Burt Reynolds) and his black partner (played by Wayne Grice). An interracial friendship was also featured in the series Gentle Ben (1967–1969), which concerned the adventures of a white boy and his pet bear; Angelo Rutherford played Willie, the boy's close friend. While interracial friendships were cautiously permitted, the slightest indication of romance was instantly suppressed: the musical variety show Petula (1968) was canceled because it showed Harry Belafonte and Petula Clark touching hands.
Despite these limitations, the programs of the 1960s, 1970s, and 1980s represented a drastic departure from the racial landscape of early television. In the late 1940s, African Americans were typically confined to occasional guest roles; by the end of the 1980s, most top-rated shows featured at least one black person. It had become possible for television shows to violate racial taboos without completely losing commercial and viewer sponsorship. However, greater visibility in front of the camera did not necessarily translate into equal opportunity for all in all branches of television: the question remained as to whether discriminatory practices had in fact been curtailed, or had simply survived in more sophisticated ways. It was true that the presence of blacks had increased in many areas of television, including, for example, the national news: Bryant Gumbel co-anchored Today (1952) from 1982 to 1997; Ed Bradley joined 60 Minutes (1968) in 1981; and Carole Simpson, who had started at ABC as a correspondent in 1982, was a weekend anchor for ABC World News Tonight from 1988 to 2003.
Nevertheless, comedy remained the dominant form for expressing black lifestyles. Dramatic shows centering on the African-American experience have had to struggle to obtain high enough ratings to remain on the air; the majority of the successful dramas have been those where blacks share the leading roles with white protagonists.
During the 1970s and 1980s, the number of social dramas, crime shows, or police stories centering on African Americans or featuring an African American in a major role steadily increased. Most of the series were canceled within a year. These included The Young Lawyers (1970–1971), The Young Rebels (1970–1971), The Interns (1970–1971), The Silent Force (1970–1971), Tenafly (1973–1974), Get Christie Love! (1974–1975), Shaft (1977), Paris (1979–1980), The Lazarus Syndrome (1979), Harris & Co. (1979), Palmerstown, USA (1980–1981), Double Dare (1985), Fortune Dane (1986), The Insiders (1986), Gideon Oliver (1989), A Man Called Hawk (1989), and Sonny Spoon (1988). The most popular dramatic series with African-American leads were Miami Vice (1984–1989), In the Heat of the Night (1988–1994), and The A-Team (1983–1987). On Miami Vice and In the Heat of the Night, Philip Michael Thomas and Howard Rollins, the black leads, were partnered with better-known white actors who became the most identifiable character for each series. Perhaps the most popular actor on a dramatic series was the somewhat cartoonish Mr. T, who played Sergeant Bosco "B.A." Baracus on The A-Team, an action-adventure series in which soldiers of fortune set out to eradicate crime. Although in the comedy Barney Miller (1975–1980) Ron Glass played an ambitious middle-class black detective, the guest spots or supporting roles in police series generally portrayed African Americans as sleazy informants, such as Rooster (Michael D. Roberts) on Baretta (1975–1978), or Huggy Bear (Antonio Fargas) on Starsky and Hutch (1975–1979).
In prime-time serials, African Americans appeared to have been unproblematically assimilated into a middle-class lifestyle. Dynasty (1981–1989) featured Diahann Carroll as one of the series' innumerable variations on the "rich bitch" persona; while Knots Landing (1979–1993), L.A. Law (1986–1994), China Beach (1988–1990), and The Trials of Rosie O'Neill (1991–1992) developed storylines with leading black roles as well as interracial romance themes. Later dramatic series featuring African Americans in regularly occurring roles included Homicide: Life on the Street (1993–1999), NYPD Blue (1993–2005), Oz (1997–2003), The Practice (1997–2004), Third Watch (1999–2005), Boston Public (2000–2004), and Six Feet Under (2001–2005), as well as ER (1994), "24" (2001), The Wire (2002), Without a Trace (2002), Law & Order (1990) and its spin-offs Law & Order: Special Victims Unit (1999) and Law & Order: Criminal Intent (2001), and CSI: Crime Scene Investigation (2000) and its spin-offs CSI: Miami (2002) and CSI: New York (2004).
MTM Enterprises produced some of the most successful treatments of African Americans in the 1980s. In their programs, which often combined drama and satire, characters of different ethnic backgrounds were accorded full magnitude. Fame (1982–1983) was an important drama about teenagers of different ethnicities coping with the complexities of contemporary life. Frank's Place (1987–1988), an offbeat and imaginative show about a professor who inherits a restaurant in a black neighborhood in New Orleans, provided viewers with a realistic treatment of black family affairs. Though acclaimed by critics, Frank's Place did not manage to gain a large audience, and the show was canceled after having been assigned four different time slots in one year.
African Americans have been featured in relatively minor and secondary roles on science fiction series. Star Trek's communications officer Lieutenant Uhura (played by Nichelle Nichols) was little more than a glorified telephone operator. Star Trek: The Next Generation (1987–1994) featured LeVar Burton as Lieutenant Geordi La Forge, a blind engineer who can see through a visor. A heavily made-up Michael Dorn was cast as Lieutenant Worf, a horny-headed Klingon officer, and Whoopi Goldberg appeared frequently as the supremely empathetic, long-lived bartender Guinan. In Deep Space Nine (1992–1999), the third Star Trek series, a major role was given to Avery Brooks as Commander Sisko, head of the space station on which much of the show's action takes place, while Star Trek: Voyager (1995–2001) featured Tim Russ as Vulcan security officer Tuvok. Enterprise (2001–2005), the fifth Star Trek series, featured Anthony Montgomery as Ensign Travis Mayweather.
Until recently, blacks played an extremely marginal role in daytime soap operas. In 1966, Another World became the first daytime soap opera to introduce a storyline about a black character, a nurse named Peggy Harris Nolan (played by Micki Grant). In 1968, the character of Carla Hall was introduced as the daughter of housekeeper Sadie Gray (played by Lillian Hayman). Embarrassed by her social and ethnic origins, Carla was passing for white in order to be engaged to a successful white doctor. Some network affiliates canceled the show after Carla appeared. Since then, many more African Americans have appeared in soap operas, including Al Freeman Jr., Darnell Williams, Phylicia Rashad, Jackée, Blair Underwood, Nell Carter, Billy Dee Williams, Cicely Tyson, and Ruby Dee. In most cases, character development has been minor, with blacks subsisting on the margins of activity, not at the centers of power. An exception was the interracial marriage between a black woman pediatrician and a white male psychiatrist on General Hospital in 1987. Generations, the only soap opera that focused exclusively on African-American family affairs, was canceled in 1990 after a year-long run. However, The Young and the Restless (1973) has featured such African-American actors as Kristoff St. John, Victoria Rowell, Shemar Moore, and Tonya Lee Williams in long-running storylines. In addition, black actor James Reynolds joined the cast of Days of Our Lives (1965) in 1982 as police commander Abe Carver, and continued in the role for more than twenty years, with a short break in the early 1990s to star in Generations. Reynolds's Abe Carver has become one of television's longest-running black characters.
The dramatic miniseries Roots (1977) and Roots: The Next Generations (1979), more commonly known as Roots II, were unusually successful. For the first time in the history of television, close to 130 million Americans dedicated almost twenty-four hours to following a 300-year saga chronicling the tribulations of African Americans in their sojourn from Africa to slavery and, finally, to emancipation. Yet Roots and Roots II were constrained by the requirements of linear narrative, and characters were seldom placed in situations where they could explore the full range of their historical involvement in the struggle against slavery. The miniseries Beulah Land (1980), a reconstruction of the southern experience during the Civil War, attempted to recapture the success of Roots, but ended up doing no more than reviving some of the worst aspects of Gone with the Wind. Other important but less commercially successful dramatic historical reconstructions include The Autobiography of Miss Jane Pittman (1973), King (1978), One in a Million: The Ron LeFlore Story (1978), A Woman Called Moses (1978), Backstairs at the White House (1979), Freedom Road (1979), Sadat (1983), and Mandela (1987). There are also a number of made-for-television movies based on the civil rights movement, including The Ernest Green Story (1993), Mr. & Mrs. Loving (1996), The Color of Courage (1998), Ruby Bridges (1998), Selma, Lord, Selma (1999), Freedom Song (2000), Boycott (2002), and The Rosa Parks Story (2002).
A number of miniseries and made-for-television movies about black family affairs and romance were broadcast in the 1980s. Crisis at Central High (1981) was based on the desegregation dispute in Little Rock, Arkansas, while Benny's Place (1982), Sister, Sister (1982), The Defiant Ones (1985), and The Women of Brewster Place (1989) were set in various African-American communities. Other more recent examples include The Josephine Baker Story (1990), The Temptations (1998), Introducing Dorothy Dandridge (1999), The Corner (2000), Carmen: A Hip Hopera (2001), The Old Settler (2001), Lackawanna Blues (2005), and Their Eyes Were Watching God (2005).
The 1970s witnessed the emergence of several television sitcoms featuring black family affairs. In these shows, grave issues such as poverty and upward mobility were embedded in racially centered jokes. A source of inspiration for these sitcoms may have been The Flip Wilson Show (1970–1974), the first successful variety show hosted by an
African American. The show, which featured celebrity guests like Lucille Ball, Johnny Cash, Muhammad Ali, Sammy Davis Jr., Bill Cosby, Richard Pryor, and B. B. King, was perhaps best known for the skits Wilson performed. The skits were about black characters (Geraldine Jones, Reverend Leroy, Sonny the janitor, Freddy Johnson the playboy, and Charley the chef) who flaunted their outlandishness to such a degree that most viewers were unable to determine whether they were meant to be cruel reminders of minstrelsy or parodies of stereotypes.
A number of family comedies, mostly produced by Tandem Productions (Norman Lear and Bud Yorkin), became popular around the same time as The Flip Wilson Show: these included All in the Family (1971–1983), Sanford and Son (1972–1977), Maude (1972–1978), That's My Mama (1974–1975), The Jeffersons (1975–1985), Good Times (1974–1979), and What's Happening (1976–1979). On Sanford and Son, Redd Foxx and Demond Wilson played father-and-son Los Angeles junk dealers. Good Times, set in a housing development on the South Side of Chicago, portrayed a working-class black family. Jimmie Walker, who played J.J., became an overnight celebrity with his "jive-talking" and use of catchphrases like "Dy-No-Mite." On The Jeffersons, Sherman Hemsley played George Jefferson, an obnoxious and upwardly mobile owner of a dry-cleaning business. As with Amos 'n' Andy, these comedies relied principally on stereotypes (the bigot, the screaming woman, the grinning idiot, and so on) for their humor. However, unlike their predecessor of the 1950s, the comedies of the 1970s integrated social commentary into the joke situations. Many of the situations reflected contemporary discussions in a country divided by, among other things, the Vietnam War. And because of the serialized form of the episodes, most characters were able to grow and learn from experience.
By the late 1970s and early 1980s, the focus of sitcoms had shifted from family affairs to nontraditional familial arrangements. The Cop and the Kid (1975–1976), Diff'rent Strokes (1978–1986), The Facts of Life (1979–1988), and Webster (1983–1987) were about white families and their adopted black children. Several comic formulas were also reworked, as a sassy maid (played by Nell Carter) raised several white children in Gimme a Break! (1981–1987), and a wise-cracking and strong-willed butler (played by Robert Guillaume) dominated the parody Soap (1977–1981). Guillaume later played an equally daring budget director for a state governor in Benson (1979–1986). Several less successful comedies were also developed during this time, including The Sanford Arms (1976), The New Odd Couple (1982–1983), One in a Million (1980), and The Redd Foxx Show (1986).
The most significant comedies of the 1980s were those in which black culture was explored on its own terms. The extraordinarily successful The Cosby Show (1984–1992), the first African-American series to top the annual Nielsen ratings, featured Bill Cosby as Cliff Huxtable, a comfortable middle-class paterfamilias to his Brooklyn family, which included his successful lawyer wife Clair Huxtable (played by Phylicia Rashad) and their six children. The series 227 (1985–1990) starred Marla Gibbs, who had previously played a sassy maid on The Jeffersons, in a family comedy set in a black section of Washington, D.C. A Different World (1987–1993), a spin-off of The Cosby Show, was set in a black college in the South. Amen (1986–1991), featuring Sherman Hemsley as Deacon Ernest Frye, was centered on a black church in Philadelphia. In all of these series, the black-white confrontations that had been the staple of African-American television comedy were replaced by situations in which the humor was provided by the diversity and difference within the African-American community.
Some black comedies, among them Charlie & Company (1986), Family Matters (1989–1998), Fresh Prince of Bel Air (1990–1996), and True Colors (1990–1992), followed the style set by The Cosby Show. Others like In Living Color (1990–1994) took the route of reworking a combination of variety show and skits in a manner reminiscent of The Flip Wilson Show. Other popular variety and sketch comedy series starring African-American comedians included HBO's The Chris Rock Show (1997–2000) and Dave Chappelle's Chappelle's Show (2003–2005) on Comedy Central. Much of the originality and freshness of these comedies is due to the fact that some of them were produced by African Americans (The Cosby Show, A Different World, Fresh Prince of Bel Air, and In Living Color). Carter Country (1977–1979), a sitcom that pitted a redneck police chief against his black deputy (played by Kene Holliday), inspired several programs with similar plot lines: Just Our Luck (1983), He's the Mayor (1986), The Powers of Matthew Star (1982–1983), Stir Crazy (1985), Tenspeed and Brown Shoe (1980), and Enos (1980–1981).
UPN, launched as the United Paramount Network in 1995, has made a staple of programming situation comedies featuring primarily African-American casts, including Moesha (1996–2001), The Parkers (1999–2004), Girlfriends (2000), One on One (2001), Half & Half (2002), All of Us (2003), Eve (2003), and Second Time Around (2004–2005). The actor Taye Diggs produced and starred as a hotshot attorney in the UPN dramatic series Kevin Hill (2004). The Fox network offered the comedy Living Single (1992–1998), starring Queen Latifah, and The Bernie Mac Show (2001), while the WB had actors Jamie Foxx in The Jamie Foxx Show (1996–2001) and Steve Harvey in Steve Harvey's Big Time (2003–2005). ABC's comedies included The Hughleys (1998–2002), starring D. L. Hughley, and My Wife and Kids (2001–2005), starring Damon Wayans, while cable station Showtime offered a series adaptation of the movie Soul Food (2000–2004). Reality series such as Survivor (2000), The Amazing Race (2001), American Idol (2002), and The Apprentice (2004) featured African Americans among their participants. The UPN's popular reality show America's Next Top Model (2001) also featured black participants, as well as an African-American host and producer, Tyra Banks.
Local stations, public television outlets, syndication, and cable networks have provided important alternatives for the production of authentic African-American programming. In the late 1960s, local television stations began opening their doors to the production of all-black shows and the training of African-American actors, commentators, and crews. Examples of these efforts include Black Journal, later known as Tony Brown's Journal (1968–1976), a national public affairs program; Soul (1970–1975), a variety show produced by Ellis Haizlip at WNET in New York; Inside Bedford-Stuyvesant (1968–1973), a public affairs program serving the black communities in New York City; and Like It Is, a public affairs show featuring Gil Noble as the outspoken host.
At the national level, public television has also addressed African-American everyday life and culture in such series and special programs as History of the Negro People (1965), Black Omnibus (1973), The Righteous Apples (1979–1981), With Ossie and Ruby (1980–1981), Gotta Make This Journey: Sweet Honey and the Rock (1984), The Africans (1986), Eyes on the Prize (1987), and Eyes on the Prize II (1990). The Public Broadcasting Service (PBS) documentary series American Masters (1986) featured a number of episodes on African-American artists, including Louis Armstrong, James Baldwin, Duke Ellington, Lena Horne, Sidney Poitier, and others. The American Experience (1988), another documentary series on PBS, included episodes on the careers of Ida B. Wells, Adam Clayton Powell, Malcolm X, Marcus Garvey, and other important African Americans, along with episodes on topics in black culture and history, including "Roots of Resistance: The Story of the Underground Railroad" (1995), "Scottsboro: An American Tragedy" (2000), and "The Murder of Emmett Till" (2003). In addition, black journalist Gwen Ifill became the moderator of Washington Week (1967) and senior correspondent for The NewsHour with Jim Lehrer (1995) on PBS in 1999. Ifill also moderated the televised debate between the candidates for vice president during the 2004 presidential campaign.
Syndication, the system of selling programming to individual stations on a one-to-one basis, has been crucial for the distribution of shows such as Soul Train (1971), Solid Gold (1980–1988), The Arsenio Hall Show (1989–1994), The Oprah Winfrey Show (1986), and The Montel Williams Show (1991). A wider range of programming has also been made possible by the growth and proliferation of cable services. Robert Johnson took out a personal loan of $15,000 in the early 1980s to start a cable business, Black Entertainment Television (BET), catering to the African Americans living in the Washington, D.C., area. At that time BET consisted of a few hours a day of music videos. By the early 1990s, the network had expanded across the country, servicing about 25 million subscribers, and had a net worth of more than $150 million. (Its programming had expanded to include black collegiate sports, music videos, public affairs programs, and reruns of, among others, The Cosby Show and Frank's Place.) The Black Family Channel, founded in 1999 as MBC Network, is a black-owned and operated cable network for African-American families with children's programs, sports, news, talk shows, and religious programming.
Children's Programming
As late as 1969, children's programming did not include African Americans. The first exceptions were Sesame Street (1969) and Fat Albert and the Cosby Kids (1972–1989). These two shows were groundbreaking in content and format; they emphasized altruistic themes, the solution of everyday problems, and the development of reading skills and basic arithmetic. Other children's shows that focused on or incorporated African Americans include The Jackson Five (1971), ABC After-School Specials (1972), The Harlem Globetrotters Popcorn Machine (1974–1976), Rebop (1976–1979), 30 Minutes (1978–1982), Reading Rainbow (1983–2004), Pee-Wee's Playhouse (1986–1991), Saved by the Bell (1989–1993), Saved by the Bell: The New Class (1993–2000), and Where in the World Is Carmen Sandiego? (1991–1996).
Although African Americans have had to struggle against both racial tension and the inherent limitations of television, they have become prominent in all aspects of the television industry. As we enter the twenty-first century, the format and impact of television programming will undergo some radical changes, and the potential to provoke and inform audiences will grow. Television programs are thus likely to become more controversial than ever, but they will also become an even richer medium for effecting social change. Perhaps African Americans will be able to use these technical changes to allay the racial discord and prejudice that persists off-camera in America.
This article primarily explores the racial issues that shaped television from its golden years up to the current century. The arrival of digital delivery systems, which have enhanced satellite, cable, DVD, and even internet distribution, has reduced the power and reach of broadcast television. Nevertheless, African Americans continue to be short-changed by the medium, even with the huge success of Oprah Winfrey, Chris Rock, and a few other black superstars. The more the technology changes, the more it stays the same.
See also Black Entertainment Television (BET); Carroll, Diahann; Cosby, Bill; Davis, Ossie; Dee, Ruby; Film in the United States; Gossett, Louis, Jr.; Minstrels/Minstrelsy; Poitier, Sidney; Radio; Tyson, Cicely; Wilson, Flip
charles hobson (1996)
chris tomassini (2005)
Television (rock band)
Selected discography
Marquee Moon, Elektra Records, 1977.
Adventure, Elektra Records, 1978.
The Blow Up, ROIR CD, 1978.
Television, Capitol Records, 1992.
Verlaine's solo albums
Flashlight, IRS Records.
Words From the Front, Warner Bros.
Warm and Cool, Rykodisc, 1992.
Lloyd's solo albums
Field of Fire, Moving Target, 1985.
Real Time, Celluloid, 1987.
Esquire, January 1993.
Guitar Player, January 1993.
Musician, September 1992; June 1995.
Pulse!, September 1992; November 1992.
Spin, November 1992; January 1993.
Joanna Rubiner
The experience of seeing movies is likely to conjure thoughts of going to a movie theater: the smell of popcorn at the concession stand, the friendly bustle of fellow moviegoers in the lobby, the collective anticipation as the auditorium lights dim, and the sensation of being enveloped by a world that exists, temporarily, in the theater's darkness. Anyone who enjoys movies has vivid memories of going out to see movies; the romance of the movie theater is crucial to the appeal of cinema. But what about all of the movies we experience by staying in? The truth is that most of us born since 1950 have watched many more movies at home, on the glowing cathode-ray tube of a television set, than on the silver screen of a movie theater.
It is not often recognized, but the family home has been the most common site of movie exhibition for more than half of the cinema's first century. In the United States this pattern began with the appearance of commercial broadcast television, starting with the debut of regular prime-time programming in 1948, and has grown with each new video technology capable of delivering entertainment to the home—cable, videocassette recorders (VCRs), direct broadcast satellites (DBS), DVD (digital video disc) players, and video-on-demand (VOD). Over much of this period, watching movies on TV represented a calculated tradeoff for consumers: television offered a cheap and convenient alternative to the movie theater at the cost of a diminished experience of the movie itself. With the introduction of high-definition (HDTV) television sets and high-fidelity audio in the 1990s, however, the humble TV set has grown to be the centerpiece of a new "home theater," which can offer a viewing experience superior in most ways to that of a typical suburban multiplex. In fact, with theaters desperate for additional income, going out to the movies now often involves sitting through a barrage of noisy, forgettable commercials for products aimed mostly at teenagers. In an odd twist, the only hope for avoiding commercials has become to stay in and watch movies on television.
We tend to think of film and television as rival media, but their histories are so deeply intertwined that thinking of them separately is often a hindrance to understanding how the film and television industries operate or how people experience these media in their everyday lives. Starting in the late 1950s, Hollywood studios began to produce substantially more hours of film for television (in the form of TV series) than for movie theaters, and that pattern holds to this day. Since the early 1960s, it has been apparent that feature films are merely passing through movie theaters en route to their ultimate destination on home television screens. As physical artifacts, films may reside in studio vaults, but they remain alive in the culture due almost entirely to the existence of television. Whether films survive on cable channels or on DVD, they rarely appear on any screens other than television screens once they have completed their initial theatrical release. Given the importance of television in the film industry and in film culture, why do we think of film and television separately?
First, when television appeared on the scene, there was already a tradition of defining the cinema in contrast with other media and art forms. Much classic film theory and criticism, for instance, sought to define film as an autonomous medium by comparing it with precedents in theater, painting, and fiction. In each case, the goal was to acknowledge continuities while highlighting the differences that made film unique. Within this framework, it seemed natural to look for the differences between film and television, even as the boundaries between the media blurred and television became the predominant site of exhibition for films produced in Hollywood.
Second, there is an inherent ambiguity in the way that the term "television" functions in common usage, and this complicates efforts to delineate the relationship between film and television. Depending upon the context of usage, the word "television" serves as convenient shorthand for speaking about at least four different aspects of the medium:
1. Technology: "Television" is used to identify the complex system of analog and digital video technology used to transmit and receive electronic images and sounds. While electronic signals are transmitted and received virtually simultaneously, the images and sounds encoded in those signals may be live or recorded. In other words, the "liveness" of television—a characteristic often used to distinguish television and film—is inherent in the acts of transmission and reception, but not necessarily in the content that appears on TV screens.
2. Consumer Electronics: "Television" also refers to the television set, an electronic consumer good that is integrated into the spaces and temporal rhythms of everyday life. While the movie theater offers a sanctuary, set aside from ordinary life, the TV set is embedded in life. Initially, the TV set was an object found mainly in the family home; increasingly, television screens of all sizes have been dispersed throughout society and can be found in countless informal social settings. As a consumer good, the HDTV set is also becoming a fetish object for connoisseurs of cutting-edge technology—independent of the particular content viewed on the screen.
3. Industry: "Television" refers also to the particular structure of commercial television, a government-regulated industry dominated by powerful networks that broadcast programs to attract viewers and then charge advertisers for the privilege of addressing those viewers with commercials. Using the airwaves to distribute content, the television industry initially had no choice but to rely on advertising revenue, which led to the peculiar flow of commercial television—the alternation of segmented programs punctuated regularly by commercials—as well as the reliance on series formats to deliver consistent audiences to advertisers.
4. Content: "Television" serves as a general term for the content of commercial television, particularly when comparing film and television. Considering the vast range of content available on television, this usage often leads to facile generalizations, suggesting that there is an inherent uniformity or underlying logic to the programs produced for television.
As a result of the ambiguity involved in the usage of the term "television," there is no sensible or consistent framework for thinking about the relationship of film and television. Instead, a single characteristic often serves as the basis for drawing a distinction between the two forms, even though it may obscure more significant similarities. For example, the common assumption that television is a medium directed at the home, while film is a medium directed at theaters, overlooks the importance of the TV set as a technology for film exhibition. Similarly, the emphasis on television's capacity for live transmission obscures the fact that most TV programs are recorded on film or videotape and that feature films make up a large percentage of TV programming.
Third, film has enjoyed a prestige that only recently has been accorded to television, and this status marker has encouraged people to view film and television separately. Every culture creates hierarchies of taste and prestige, and whether explicitly stated or implicitly assumed, film has had a higher cultural status than television. It has been a sign of success, for example, when an actor or a director moves out of television into movies. Similarly, film critics have enjoyed much greater prestige than any critic who has written about television. The scholarly field of film studies, and universities in general, were slow to welcome the study of television. All of this suggests that there has been an unrecognized, but nevertheless real, investment in a cultural hierarchy that treats film as a more serious and respectable pursuit than television, and this hierarchy supported the assumption that film and television are separate media. Of course, any hierarchy of cultural values is subject to change over time. When a television series like The Sopranos (beginning 1999) achieves greater critical acclaim than virtually any movie of the past decade, it is a signal that values are shifting.
By the time the networks introduced regular prime-time programs in 1948, television's arrival as a popular medium had been anticipated for nearly two decades, during which the public had followed news reports of scientific breakthroughs, public demonstrations, and political debates. Electronics manufacturers spearheaded research into the technology of television broadcasting, which was envisioned by them as an extension of the existing system of radio broadcasting in which stations linked to powerful networks broadcast programs to home receivers. The Radio Corporation of America (RCA), which operated the NBC radio network, dominated the electronics industry and lobbied heavily to see its technology adopted by the Federal Communications Commission (FCC) as the industry standard.
b. Philadelphia, Pennsylvania, 25 June 1924
Sidney Lumet's career began at an extraordinary and unique moment in the history of American television. For a few years during the first decade of television, the TV networks broadcast live theatrical performances from studios in New York and Los Angeles to a vast audience nationwide. These ephemeral productions—as immediate and fleeting as any witnessed in the amphitheaters of ancient Greece, yet staged in the blinding glare of commercial television—served as the training ground for a generation of American film directors, which also included Franklin Schaffner, George Roy Hill, Martin Ritt, Arthur Penn, and John Frankenheimer.
Before beginning a fifty-year movie career, Lumet worked at CBS, where he directed hundreds of hours of live television for such series as Danger (1950–1955), You Are There (1953–1957), Climax! (1954–1958), and Studio One (1948–1958). The craft of directing live television, invented through trial and error by pioneers like Lumet, required economy, speed, and precision: concentrated rehearsals with an ensemble of actors, brief blocking of the camera setups, followed by intense concentration on the moment of performance because retakes were out of the question.
Lumet's approach to filmmaking bears traces of this formative experience. Unlike many directors, Lumet begins each film with several weeks of rehearsal in which he and his actors come to a shared understanding of each scene, to ensure that the actual production runs like clockwork. On the set, Lumet works quickly, seldom shooting more than four takes of any shot. He often completes a shooting schedule in thirty days or less, and brings productions in under budget. In an age of superstar directors who may spend years on a single film, Lumet has worked steadily, building a career, scene by scene, film by film, through classics (Dog Day Afternoon, 1975) and clunkers (A Stranger Among Us, 1992).
Lumet's best films—Serpico (1973), Dog Day Afternoon, Running on Empty (1988), and Prince of the City (1981)—are blunt and immediate. What they lack in formal precision, they make up for in the vitality of the performances and the conviction of the storytelling. Lumet can be a superb visual stylist when orchestrating confrontations between actors in confined spaces, but he is generally indifferent to the visual potential of his material and has never seemed concerned with creating a signature style. His approach to filmmaking, with its emphasis on preparation, ensemble acting, and an unobtrusive camera that captures the spontaneity of performance, translates the values of live television into the medium of film.
Twelve Angry Men (1957), Long Day's Journey Into Night (1962), Fail-Safe (1964), The Pawnbroker (1964), The Hill (1965), Serpico (1973), Murder on the Orient Express (1974), Dog Day Afternoon (1975), Network (1976), Prince of the City (1981), The Verdict (1982), Running on Empty (1988), Q&A (1990)
Bogdanovich, Peter. Who the Devil Made It: Conversations with Legendary Film Directors. New York: Ballantine, 1998.
Cunningham, Frank R. Sidney Lumet: Film and Literary Vision. Lexington: University Press of Kentucky, 1991.
Lumet, Sidney. Making Movies. New York: Knopf, 1995.
Christopher Anderson
The Hollywood studios were far from passive bystanders during this period. Having already invested in radio, but seen the radio industry controlled by those companies able to establish networks, the studios hoped to command the television industry as they had dominated the movie industry, by controlling networks that would serve as the key channels of distribution in television. The studios also envisioned alternative uses for television technology that would conform more closely to
the economic exchange of the theatrical box office. These included theater television, in which programs would be transmitted to theaters and shown on movie screens, and subscription television, in which home viewers would pay directly for the opportunity to view exclusive programs.
The plans of studio executives were thwarted by the FCC, which stepped in following the Supreme Court's 1948 Paramount decision, to investigate whether the major studios, with their record of monopolistic practices in the movie industry, should be allowed to own television stations. While the studios awaited a decision, the established radio networks—CBS, NBC, and ABC—signed affiliate agreements with the most powerful TV stations in the largest cities, leaving the studios without viable options for forming competitive networks. Thwarted in their ambitions, the major studios withdrew from television until the mid-1950s. Theater television died in its infancy and subscription television would not become a major factor for years to come.
In the meantime, smaller studios and independent producers rushed to supply television with programming. The networks initially promoted the idea that television programs should be produced and broadcast live in order to take advantage of the medium's unique qualities. The networks supplied local affiliates with live programs for their evening schedules and a small portion of their daytime schedule, but each affiliate, along with the small group of independent stations that had chosen not to join a network, still needed to fill the long hours of a broadcast day—and there was not yet a backlog of television programs available. Television stations looked to feature films as the only ready source of programming, and the only features available to them came from outside the major Hollywood studios: British companies and such Poverty Row studios as Monogram Pictures and Republic Pictures Corporation. The theatrical market for B movies had begun to dry up after World War II, and these companies eagerly courted this new market for low-budget films, licensing hundreds of titles for broadcast. It has been estimated that 5,000 feature film titles were available to television by 1950.
Responding to the same demand for programs, small-scale independent producers in Hollywood also began to produce filmed series for television. The most visible early producers in the low-budget "telefilm" business (as it came to be known) were the aging cowboy stars William "Hopalong Cassidy" Boyd (1895–1972), Gene Autry (1907–1998), and Roy Rogers (1911–1998), but they were soon joined by veteran film producers like Hal Roach (1892–1992), radio producers like Frederick W. Ziv (1905–2001), and entrepreneurial performers like Bing Crosby (1903–1977) as well as Lucille Ball (1911–1989) and Desi Arnaz (1917–1986), whose Desilu Studio grew to become one of the most successful television studios of the 1950s.
By mid-decade, as the television audience grew and the demand for programming drove prices higher, the major Hollywood studios discovered their own financial incentives for licensing feature films to television and for entering the field of television production. RKO opened the market for the major studios in 1954 when its owner, Howard Hughes, sold the studio's pre-1948 features to General Teleradio, the broadcasting subsidiary of General Tire and Rubber Company that operated independent station WOR in New York. Warner Bros. followed in 1956 by selling its library of 750 pre-1948 features for $21 million. After this financial windfall was earned from titles locked away in studio vaults, the floodgates opened at all of the studios. Soon the television listings were filled with movies scheduled morning, noon, and night. The most famous of these movie programs was New York station WOR's Million Dollar Movie, which broadcast the same movie five evenings in a row. New York-bred filmmakers like Martin Scorsese have spoken fondly of discovering classic Hollywood movies for the first time while watching the Million Dollar Movie. In a very real sense, television served as the first widely available archive
of American movies, sparking an awareness of film history and creating a new generation of movie fans.
As the Hollywood studios began to release their films to television, they also began to produce filmed television series. Walt Disney (1901–1966) led the way in 1954 with the debut of Disneyland (1954–1990), the series designed to launch his new theme park. Warner Bros., Twentieth Century Fox, and MGM joined prime time the following year. By the end of the 1950s, Hollywood studios were the predominant suppliers of prime time programs for the networks. The transformation was most obvious at Warner Bros., which at one point in 1959 had eight television series in production and not a single feature film. In order to meet the demand for television programs, Warner Bros. geared up to produce the equivalent of a feature film each working day.
While the studios specialized in high-volume "telefilm" productions made with the efficiency of an assembly line, the most acclaimed television programs of the decade were anthology drama series that offered a new, original play performed and broadcast live each week. In the intensely creative environment required to produce a live production witnessed by millions of viewers, programs such as Studio One (1948–1958) and Playhouse 90 (1956–1961) served as the training ground for a new generation of writers (Paddy Chayefsky, Reginald Rose, Rod Serling), directors (Arthur Penn, Sidney Lumet, John Frankenheimer, Franklin Schaffner, George Roy Hill), and actors (Paul Newman, Rod Steiger, James Dean, Piper Laurie, Kim Hunter, Geraldine Page, and many more) who became the first in a long line of television-trained artists to make the transition into movies.
FROM 1960–1980
Diversifying into television may have seemed risky for a studio in the early 1950s, but within a decade television had become firmly entrenched in Hollywood, where the studios had come to depend for their very existence on the income provided by television. Networks and local stations leaned almost exclusively on Hollywood to satisfy their endless need for programming. By the end of the 1950s, 80 percent of network prime-time programming was produced in Hollywood; it had become nearly impossible to turn on a TV set without encountering a film made in Hollywood, whether a television series or a feature film.
The most significant development for the movie studios occurred in 1960, when they came to an agreement with the Screen Actors Guild that allowed them to sell the television rights to films made after 1948. NBC, the network most committed to color television, introduced Hollywood feature films to prime time in September 1961 with the premiere of the series NBC Saturday Night at the Movies (1961–1977). ABC added movies to its prime time schedule in 1962. As the perennial first-place network with the strongest schedule of regular series, CBS did not feel a need to add movies until 1965. Still, the networks embraced feature films so fervently that by 1968 they programmed seven movies a week in prime time, and four of these finished among the season's highest-rated programs.
As recent Hollywood releases became an increasingly important component of prime time schedules, the competition for titles quickly drove up the prices. In 1965 the average price for network rights to a feature film was $400,000, but that figure doubled in just three years. The networks publicized the broadcast premiere of recent studio releases as major events. A milestone of the period occurred in 1966, when ABC paid Columbia $2 million for the rights to the studio's blockbuster hit, The Bridge on the River Kwai (1957). Sponsored solely by Ford Motor Company to promote its new product line, the movie drew an audience of 60 million viewers.
As television became a crucial secondary market for the movie industry, movies needed to be produced with the conditions of commercial television in mind. Many of these concessions to the television industry of the 1960s and 1970s contributed to the impression of the cinema's superiority. In an era when a new generation of filmmakers and critics were promoting the idea that film was an art form, television stations and networks chopped movies to fit into 90- or 120-minute time slots and interrupted them every 12 or 13 minutes for commercials. Because of the moral standards imposed on commercial television by advertisers and the FCC, studios soon required directors to shoot "tame" alternate versions of violent or sexually explicit scenes for the inevitable television version. Studios began to balk when directors used wide-screen compositions in which key action occurred at the edges of the frame—outside the narrower dimensions of the television screen. As a reminder, camera viewfinders were etched with the dimensions of the TV frame. Studios also began to use optical printers to create "pan-and-scan" versions of widescreen films. Using this technique, scenes shot in a single take often were cut into a series of alternating closeups, or reframed during the printing process by panning across the image, so that key action or dialogue occurred within the TV frame.
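To make the geometry of pan-and-scan concrete, the following minimal sketch (in Python; the function name, frame dimensions, and pan parameter are illustrative assumptions, not a description of any studio's actual optical-printer workflow) computes the 4:3 window that would be extracted from a widescreen frame:

```python
def pan_and_scan_window(frame_w, frame_h, pan=0.5, tv_aspect=4 / 3):
    """Compute a hypothetical pan-and-scan crop window.

    frame_w, frame_h: dimensions of the widescreen source frame
    pan: horizontal position of the window, 0.0 (far left) to 1.0 (far right)
    tv_aspect: target aspect ratio (4:3 for standard-definition television)
    """
    # The crop keeps the full image height and trims the sides.
    crop_w = round(frame_h * tv_aspect)
    if crop_w > frame_w:
        raise ValueError("source frame is already narrower than the target")
    # Slide the window across the leftover width according to the pan position.
    x_offset = round((frame_w - crop_w) * pan)
    return x_offset, 0, crop_w, frame_h

# Example: a 2.35:1 frame scanned at 1880x800 pixels. Centered (pan=0.5),
# the television version retains only about 57 percent of the picture width.
print(pan_and_scan_window(1880, 800, pan=0.5))  # -> (406, 0, 1067, 800)
```

Animating the pan value across successive frames reproduces the "panning across the image" described above, which is how key action at the edges of a widescreen composition was kept inside the narrower TV frame.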
As the cost of television rights for feature films climbed during the 1960s, each of the networks began to develop movies made expressly for television. NBC partnered with MCA Universal to create a regular series of "world premiere" movies, beginning with Fame Is the Name of the Game in 1966. As the network with the lowest-rated regular series, ABC showed the greatest interest in movies made for television. The ninety-minute ABC Movie of the Week premiered in 1968. As executive in charge of the movies, Barry Diller (b. 1942) essentially ran a miniature movie studio at ABC. He supervised the production of 26 movies per year, each made for less than $350,000. Among the many memorable ABC movies during this period were Brian's Song (1971), a tearjerker about a football player's terminal illness starring Billy Dee Williams and James Caan that became the year's fifth highest-rated broadcast, and That Certain Summer (1972), a TV milestone in which Hal Holbrook and Martin Sheen played a gay couple. By 1973 ABC scheduled a Movie of the Week three nights per week. Director Steven Spielberg, whose suspenseful 1971 film Duel managed to sustain excruciating tension even with the commercial breaks of network television, has become the most celebrated graduate of the made-for-TV movie.
As a market for filmed series, theatrical features, and original movies, television contributed substantially to the economic viability of the movie studios during the 1960s and 1970s. In fact, the television market inspired the first round of consolidation in the movie industry, as the rising value of film libraries made the studios appealing targets for conglomerates looking to diversify their investments. As a subsidiary of the conglomerate Gulf + Western, Paramount became the model for the full integration of the movie and TV industries in the late 1970s, when Barry Diller moved from ABC to Paramount, accompanied by his protégé, Michael Eisner (b. 1942). Paramount produced many of the television series that led ABC to the top of the ratings in the 1970s (Happy Days [1974–1984], Laverne and Shirley [1976–1983], Mork and Mindy [1978–1982], and Taxi [1978–1983]), but also learned how to leverage the familiarity of TV stars and TV properties to create cross-media cultural phenomena. The signal event in this process was Paramount's successful transformation of John Travolta from a supporting player in the TV series Welcome Back, Kotter (1975–1979), into the star of the blockbuster hits
Saturday Night Fever (1977) and Grease (1978). The Diller regime also decided to transform the long-cancelled, cult-hit TV series Star Trek (1966–1969), into a movie franchise with Star Trek: The Motion Picture (1979), which revived the commercial prospects for a dormant studio property. The Paramount model spread throughout the industry in the 1980s, as Diller became the chairman of Twentieth Century Fox and Eisner became chairman of Walt Disney Studios.
The first three decades of network television in America represent a period of remarkable stability for the television industry. Once the basic structure of the television industry had been established, the television seasons rolled past with comforting familiarity. However, the rapid growth of cable television and home video in the 1980s, followed by a new round of consolidation in the media industries, disrupted the balance of power in the television industry and led to the complete integration of television networks and Hollywood studios.
Cable television began in the 1940s and 1950s as community antenna television (CATV), a solution to reception problems in geographically isolated towns where people had trouble receiving television signals with a home antenna. The turning point for cable television came during the 1970s, when several corporations began to distribute program services by satellite, making it possible to reach audiences on a national—and eventually international—scale without the need for local affiliate stations. Time, Inc. was the first company to launch a satellite-based service when it began distributing Home Box Office (HBO) by satellite in 1975. The service began on a small scale, with only a few hundred viewers for its initial broadcast, but it demonstrated that a subscription service for movies and special events could be a viable economic alternative to commercial broadcasting. By the end of the decade, other subscription-based movie channels, including Showtime, the Movie Channel, and HBO's own spinoff network, Cinemax, had followed suit. With these movie channels, and many other new cable channels, cable service expanded rapidly. In 1978, only 17 percent of American households had cable; by 1989, cable penetration had reached 57 percent. This new market was a boon for the studios, which benefited from the increased prices that accompanied the competition for television rights to recently released films, and also for viewers, who were finally able to see complete, unedited feature films in their homes.
Videocassette recorders (VCRs) became a common feature in American homes during the 1980s. Videotape was introduced in 1956, but it was initially used only within the television industry. Its widespread use by television viewers awaited the development of the videocassette by Sony during the 1970s. The consumer market for home VCRs developed slowly at first because Sony and its rival Matsushita developed incompatible systems (Betamax and VHS, respectively). The market also stalled because of a lawsuit filed in 1976 by Disney and Universal against Sony, charging that home videotaping represented a violation of copyright laws. The issue was settled in Sony's favor by a 1984 Supreme Court decision, and the consumer market for VCRs exploded. In 1982, only 4 percent of American households owned a VCR; by 1988, the figure had reached 60 percent.
As a result of the rise of cable and home video, the motion picture industry developed new release patterns that channeled movies from their debut in theaters to their eventual appearance on television through a carefully managed series of exclusive distribution "windows" designed to squeeze the maximum value from each stage of a movie's lifespan in the video age: theatrical release, home video, pay-per-view, pay cable, basic cable, and broadcast television. By the time a movie has made its way down the chain to broadcast TV, and is available for free to television viewers, it has received so much exposure that it is no longer a form of showcase programming.
As these technological developments shook the familiar patterns of the television and movie industries, a series of regulatory changes governing the television industry and relaxed enforcement of antitrust laws by the Reagan-era Justice Department heated up the media industries, subjecting them to a general trend of mergers and acquisitions that swept through corporate America in the 1980s. This climate gave rise to the series of mergers and acquisitions that saw the Big Three networks change hands in 1985 and 1986, which will be discussed in greater detail below. Regulatory changes also produced a sharp increase in the number of television stations, as corporations invested in chains of stations. In 1970, of the 862 stations in the country, only 82 operated independently of the three networks. The number of independent stations doubled in the 1980s. By 1995 there were 1,532 stations, of which 450 were independent of the three major networks. As the number of stations increased, it became possible to create new television networks.
In 1985, the media conglomerate News Corporation, owned by media tycoon Rupert Murdoch, purchased Twentieth Century Fox Studios. Then in 1986, Murdoch purchased six television stations which served as the foundation for launching the Fox Network, led by former Paramount chairman Barry Diller. Because Fox began by programming just a few nights each week, it technically did not meet the FCC definition of a full-fledged network, and therefore was not constrained by FCC rules that prohibited a network from producing its own programs. As a result, Fox served as the paradigm for a new era in the media industries, with a television network stocked with series produced by its corporate sibling, Twentieth Century Fox Television. Programs like The Simpsons (beginning 1989) and The X-Files (1993–2002) grew into network hits and lucrative commercial franchises within a perfect, closed loop of corporate synergy in which all profits remained within the parent company, News Corporation.
Pointing to the loophole that Fox had squeezed through in order to produce its own programs, the networks lobbied for an end to the FCC rules that had kept them from producing programs or sharing in the lucrative syndication market (where programs are sold to local stations and international markets) since the early 1970s. These Financial Interest and Syndication Rules were gradually repealed between 1991 and 1995. The policy change not only gave networks the opportunity to produce their own programs, but it also eliminated the last remaining barriers separating the movie and television industries. Studios quickly formed new television networks or merged with existing networks. Time Warner's WB Network and Viacom's United Paramount Network (UPN) debuted in 1995 (the two were merged into the CW in 2006). ABC came under the control of the Walt Disney Company in August 1995 when Disney acquired the network's parent company, Capital Cities/ABC Television Network, for $19 billion. Viacom purchased CBS in 1999, and NBC acquired Vivendi Universal in 2004. In this stage of consolidation, the boundaries between film and television are certainly not perceived as barriers; rather, they represent opportunities for diversifying a media conglomerate's product lines.
At the turn of the twenty-first century, the boundaries between the media blurred, thanks to the convergence of digital technologies and consolidation in the media industries. Many filmmakers use digital video in place of film throughout the entire filmmaking process, and it is only a matter of time before movies are distributed and projected in theaters using digital technology. The vast libraries of film and television titles that give the conglomerates much of their economic value are being digitized and stored on computer servers. The latest round of mergers in the media industries has created conglomerates that actively promote cross-media synergy. The enticement of extraordinary riches for anyone fortunate enough to be involved in the creation of a hit TV series means that talent no longer flows from TV to movies; many producers, directors, writers, and performers move eagerly between film and television.
b. Chicago, Illinois, 5 February 1943
Michael Mann is roughly the same age as Martin Scorsese, Francis Coppola, George Lucas, and the other directors of the film-school generation who revived American filmmaking in the 1970s, but he is seldom thought of as a member of that generation, despite the fact he too attended film school in the 1960s. Like the romantic loners who inhabit his films, Mann followed his own route to the film industry. He attended film school in London, instead of New York or Los Angeles, and while his peers traveled directly from film school to the movie industry, Mann detoured through television, where he learned his craft by writing for the police series Police Story (1973–1977) and Starsky and Hutch (1975–1979) and then by creating the series Vega$ (1978–1981).
Mann understood the potential for rich storytelling inherent in the series format and appreciated the creative authority of the writer-producer in television. In 1981 he directed his first feature film, the accomplished existential thriller Thief, yet returned to television to produce Miami Vice (1984–1989) and Crime Story (1986–1988), two of the most innovative series in television history. In the tradition of the great auteur directors of the studio era, Mann burrowed deeply into an exhausted genre; beneath the familiar façade of the police series, he discovered the darkest impulses of his age and his own voice as an artist. Returning to film, Mann hit his stride, directing at least two classics (The Last of the Mohicans [1992], Heat [1995]) and a number of other films (The Insider [1999], Ali [2001], and Collateral [2004]) that express his enduring theme: the challenges faced by a man (it is always a man) who attempts to live by a personal moral code in a capricious, corrupting world.
Mann spent his formative years in television drama during the 1970s, when one police series looked exactly like every other. Yet to accompany his narrative voice, he developed a powerful personal style that is as evident in his television series as in his films. When he returned to television with the unfortunately short-lived Robbery Homicide Division (2002–2003), he shot the entire series on digital video (DV). Other television producers and filmmakers have used DV because it is less expensive than film, or because it is easier to manipulate for post-production effects, but Mann discovered the expressive qualities of the medium's hyperrealism. The television series turned out to be a trial run for Collateral, which used DV to transform nighttime Los Angeles into a throbbing, spectral world. Thanks to a visual aesthetic first worked out in television, Mann was able to create one of the most visually striking movies of the time.
Films: Thief (1981), Manhunter (1986), The Last of the Mohicans (1992), Heat (1995), The Insider (1999), Ali (2001), Collateral (2004); Television Series: Miami Vice (1984–1989), Crime Story (1986–1988), Robbery Homicide Division (2002–2003); Other: AFI—The Director—Michael Mann (2002)
Fuller, Graham. "Making Some Light: An Interview with Michael Mann." In Projections 1, edited by John Boorman and Walter Donahue, 262–278. London: Faber & Faber, 1992.
James, Nick. Heat. London: British Film Institute, 2002.
Christopher Anderson
The two-way migration of talent between movies and television first took off in the 1980s, the decade when the director of a few stylish four-minute music videos on MTV could find him or herself with a contract to direct a feature film. Advances in television set technology and the reduced cost of larger screens made it possible for viewers to appreciate differences in visual styles on television. For the first time in the history of
television, competition gave producers and networks an incentive to create distinctive styles. The proliferation of cable channels and the habits of viewers armed with remote controls made a distinctive visual style as important as character and setting in creating an identity for a television series.
When critics praised the groundbreaking crime series Hill Street Blues (1981–1987) and Miami Vice (1984–1989) in the 1980s, they spoke not only about the stories but also about stylistic innovations: the documentary techniques of Hill Street Blues, the adaptation of a music video aesthetic in Miami Vice, a series created and produced by Michael Mann (b. 1943), who moved easily between TV and movies. David Lynch made a big splash with Twin Peaks (1990–1991), a series that brought Lynch's unique vision to television before losing focus in its second season.
Since then directors, writers, and producers have continued to alternate between movies and television. Some directors, such as Oliver Stone (with the mini-series Wild Palms [1993]) and John Sayles (with the series Shannon's Deal [1990–1991]) have made token appearances in television. Others have served as executive producers, including Steven Spielberg (with the miniseries Taken, 2002) and George Lucas (with the series The Young Indiana Jones Chronicles, 1992–1993). Several screenwriters have shifted into television because of the storytelling potential of the series format and the creative control of the writer-producer in television. These include Joss Whedon (Buffy the Vampire Slayer, 1997–2003), Aaron Sorkin (The West Wing, 1999–2006), and Alan Ball (Six Feet Under, 2001–2005). There are several writer-directors who move consistently between film and television, depending on the nature of the project, including Michael Mann, Edward Zwick and Marshall Herskovitz, and Barry Levinson. The most successful producer in Hollywood during this era may be Jerry Bruckheimer, who continues to produce blockbuster hits like Armageddon (1998) and Pirates of the Caribbean (2003), while his company produces the three CSI: Crime Scene Investigation television series for CBS.
In order to attract the young adult viewers most desired by advertisers, television networks must attempt to create programs that attract and reward a discriminating audience. In the past, this audience may have been dissatisfied with commercial networks for interrupting or otherwise interfering with a drama or a movie, but they could only dream of an alternative. Today a flick of the remote control takes them directly to movies and uninterrupted drama series available on HBO and Showtime, collected in DVD box sets, and soon via video-on demand—all experienced in theater-quality, high-definition and Surround Sound. Discerning viewers are still drawn to television, but they have acquired a taste for a viewing experience that is increasingly cinematic. In one portent of the future, the commercial networks have switched to widescreen framing for quality drama series like ER (beginning 1994) and The West Wing.
The experience of watching television at home is becoming more like the experience of watching movies on a big screen. The convergence of digital technologies is gradually eliminating the material distinction between film and video. Media corporations would like to move to a model of video-on-demand in which viewers select individual titles from the studio's library. With these changes on the horizon, it is possible to imagine a time in the not-too-distant future when the differences between film and television will be no more than a topic of historical interest.
SEE ALSO Studio System; Technology
Anderson, Christopher. Hollywood TV: The Studio System in the Fifties. Austin: University of Texas Press, 1994.
Caldwell, John Thornton. Televisuality: Style, Crisis, and Authority in American Television. New Brunswick, NJ: Rutgers University Press, 1995.
Hilmes, Michele. Hollywood and Broadcasting: From Radio to Cable. Champaign: University of Illinois Press, 1990.
Monaco, Paul. The Sixties: 1960–1969. Berkeley: University of California Press, 2003.
Mullen, Megan Gwynne. The Rise of Cable Programming in the United States: Revolution or Evolution? Austin: University of Texas Press, 2003.
Wasko, Janet. Hollywood in the Information Age: Beyond the Silver Screen. Austin: University of Texas Press, 1995.
Wasser, Frederick. Veni, Vidi, Video: The Hollywood Empire and the VCR. Austin: University of Texas Press, 2002.
Christopher Anderson
Pursuant to its power over interstate commerce, Congress passed the Communications Act of 1934, which expanded the definition of "radio communication" to include "signs, signals, pictures, and sounds of all kinds, including all instrumentalities, facilities, apparatus, and services … incidental to such transmission." With the advent of television in the late 1930s and its growth in popularity during the 1940s and 1950s, "radio communication" was eventually interpreted to encompass television broadcasts as well.
The rapid growth of telecommunications also prompted Congress to create the Federal Communications Commission (FCC), an executive branch agency charged with overseeing the telecommunications industry in the United States. The FCC has exclusive jurisdiction to grant, deny, review, and terminate television broadcast licenses. The FCC is also responsible for establishing guidelines, promulgating regulations, and resolving disputes involving various broadcast media. The FCC does not, however, typically oversee the selection of programming that is broadcast. There are exceptions to this general rule, including limits on indecent programming, the number of commercials aired during children's programming, and rules involving candidates for public office. Five commissioners, appointed by the president and confirmed by the Senate, direct the FCC. Commissioners are appointed for five-year terms; no more than three may be from one political party. Within the FCC, the Media Bureau develops, recommends, and administers the policy and licensing programs relating to electronic media, including cable and broadcast television in the United States and its territories.
The FCC enacts and enforces regulations addressing competition among cable and satellite companies and other entities that offer video programming services to the general public. This jurisdiction includes issues such as:
• Mandatory carriage of television broadcast signals
• Commercial leased access
• Program access
• Over-the-air reception devices
• Commercial availability of set-top boxes
• Accessibility of closed captioning and video description on television programming
Federal Regulation of Licenses, Content, and Advertising
Regulation of Television Broadcast Licenses
Pursuant to provisions in the Telecommunications Act of 1996, television in the United States must convert from analog to digital signal broadcasting. During the transition period, the FCC has temporarily assigned each television station a second channel on which to broadcast the digital signal, while continuing to broadcast the analog signal on the original channel. Total conversion is expected to be completed in 2006, unless the FCC approves an extension. The FCC is not accepting any applications for new stations until television broadcasting has completed the conversion to digital.
The FCC has broad discretion to establish the qualifications for applicants seeking a television broadcast license and for licensees seeking renewal. The FCC has exercised this discretion to prescribe an assortment of qualifications relating to citizenship, financial solvency, technical prowess, moral character, and other criteria the commission has deemed relevant to determine the fitness of particular applicants to run a television station. The FCC will also compare the programming content proposed by an applicant to the content of existing programming. The FCC favors applicants who will make television entertainment more diverse and competitive.
To limit the concentration of power in television broadcast rights, the FCC has promulgated rules restricting the number of television stations that a licensee may operate. An applicant who has reached the limit may seek an amendment, waiver, or exception to the rule, and no licensee may be denied an additional license until he or she has been afforded a full hearing on the competing public interests at stake. Applicants or licensees who are dissatisfied with a decision issued by the FCC may seek review from the U.S. Court of Appeals for the District of Columbia Circuit, which has exclusive jurisdiction over appeals concerning FCC decisions granting, denying, modifying, or revoking television broadcast licenses. Decisions rendered by the appellate court may be appealed to the U.S. Supreme Court.
The FCC is authorized to assess and collect a schedule of license fees, application fees, equipment approval fees, and miscellaneous regulatory assessments and penalties to cover the costs of its enforcement proceedings, policy and rulemaking activities, and user information services. The commission may establish these charges and review and adjust them every two years to reflect changes in the Consumer Price Index. Failure to timely pay a fee, assessment, or penalty is grounds for dismissing an application or revoking an existing license.
Content Regulation: The Fairness Doctrine
The original rationale for federal regulation of telecommunications was grounded in the finite number of frequencies on which to broadcast. Many Americans worried that if Congress did not exercise its power over interstate commerce to fairly allocate the available frequencies to licensees who would serve the public interest, then only the richest members of society would own television broadcast rights and television programming would become one-dimensional, biased, or slanted. Only by guaranteeing a place on television for differing opinions, some Americans contended, would the truth emerge in the marketplace of ideas. These concerns manifested themselves in the fairness doctrine.
First fully articulated in 1949, the fairness doctrine had two parts: it required broadcasters to (1) cover vital controversial issues in the community; and (2) provide a reasonable opportunity for the presentation of contrasting points of view. Violation of the doctrine could result in a broadcaster losing its license. Not surprisingly, licensees grew reluctant to cover controversial stories out of fear of being punished for not adequately presenting opposing views. First Amendment advocates decried the fairness doctrine as chilling legitimate speech. The doctrine came under further scrutiny in the 1980s when the explosion of cable television stations dramatically expanded the number of media outlets available.
In 1987 the FCC abolished the fairness doctrine by a 4-0 vote, concluding that the free market and not the federal government is the best regulator of news content on television. Individual media outlets compete with each other for viewers, the FCC said, and this competition necessarily involves establishing the accuracy, credibility, reliability, and thoroughness of each story that is broadcast. Over time the public weeds out news providers that prove to be inaccurate, unreliable, one-sided, or incredible.
Content Regulation: Rules Underlying the Fairness Doctrine
The personal attack and political editorial rules fell by the wayside in 2000, when the Court of Appeals for the District of Columbia ordered the FCC to either provide a detailed justification for their continued application or abandon them. Initially, the FCC suspended the rules on a temporary basis, but later formally repealed both rules.
Proponents of both the personal attack and political editorial rules, as well as the fairness doctrine, have sometimes called for reinstatement. For example, during the 2004 presidential campaign, a furor erupted when some stations decided to broadcast "Stolen Honor", a documentary critical of presidential candidate John Kerry. However, none of the rules have been reinstated.
Content Regulation: Obscene, Profane, and Indecent Broadcasts
The United States Code prohibits the broadcast of any material that is "obscene, indecent, or profane," but offers no definition for those terms. Instead, that task is left to the FCC through its rulemaking and adjudicatory functions. Essentially, it is illegal to air obscene programming at any time. To determine what is obscene, the U.S. Supreme Court crafted a three-prong test:
• An average person, applying contemporary community standards, would find that the material, as a whole, appeals to the prurient interest
• The material depicts or describes, in a patently offensive way, sexual conduct specifically defined by applicable law
• The material, taken as a whole, lacks serious literary, artistic, political, or scientific value
Federal law also prohibits the broadcast of indecent programming or profane language during certain hours. According to the FCC, indecent programming involves patently offensive sexual or excretory material that does not rise to the level of obscenity. Indecent material cannot be barred entirely, because it is protected by the First Amendment. The FCC has promulgated a rule that bans indecent broadcasts between the hours of 6:00 a.m. and 10:00 p.m. The FCC defines profanity as "including language so grossly offensive to members of the public who actually hear it as to amount to a nuisance". Profanity is also barred from broadcast between 6:00 a.m. and 10:00 p.m.
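The timing rule itself reduces to a simple interval test. The sketch below (in Python; the function and constant names are hypothetical, and only the 6:00 a.m.–10:00 p.m. boundary is taken from the rule described above) illustrates how a broadcast scheduler might flag material that cannot legally air at a given time:

```python
from datetime import time

# Indecent or profane material may not air between 6:00 a.m. and 10:00 p.m.;
# the overnight span from 10:00 p.m. to 6:00 a.m. is the "safe harbor."
RESTRICTED_START = time(6, 0)
RESTRICTED_END = time(22, 0)

def in_restricted_period(broadcast_time: time) -> bool:
    """Return True if indecent or profane material may NOT air at this time."""
    return RESTRICTED_START <= broadcast_time < RESTRICTED_END

print(in_restricted_period(time(14, 30)))  # 2:30 p.m.  -> True (restricted)
print(in_restricted_period(time(23, 0)))   # 11:00 p.m. -> False (safe harbor)
```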
In 1978, in FCC v. Pacifica Foundation, the U.S. Supreme Court upheld an FCC order finding that a pre-recorded satirical monologue constituted indecent speech with the repeated use of seven "dirty words" during an afternoon broadcast. The Supreme Court acknowledged that the monologue was not obscene and thus could not have been regulated had it been published in print. But the Court distinguished broadcast media from print media, pointing out that radio and television stations are uniquely pervasive in Americans' lives, and are easily accessible by impressionable children who can be inadvertently exposed to offensive materials without adult supervision. Print media, the Court said, do not intrude upon Americans' privacy to the same extent or in the same manner. Thus, the Court concluded that the FCC could regulate indecent speech on radio and television but cautioned that the commission must do so in a manner that does not completely extinguish such speech.
When a station airs obscene, indecent, or profane material, the FCC may revoke the station's license, impose a monetary forfeiture, or issue a warning. One of the highest profile cases in the last few years came after a half-time performance with Janet Jackson and Justin Timberlake at the 2004 Super Bowl. In August 2004, the FCC ordered CBS Broadcasting to pay $550,000 for its broadcast of indecent material. The FCC issued $7.9 million in indecency fines in 2004.
The FCC undertakes investigations into alleged obscene, profane, and indecent material after receiving public complaints. The FCC reviews each complaint to determine whether it appears that a violation may have occurred. If so, the FCC will begin an investigation. The context of the broadcast is the key to determining whether a broadcast was indecent or profane. The FCC analyzes what was aired, the meaning of it, and the context in which it aired. Complaints can be made online, via e-mail or regular mail, or by calling 1-888-CALL-FCC (voice) or 1-888-TELL-FCC (TTY).
As cable television gained prominence during the 1980s, it became unclear whether the FCC's rules on indecency and profanity applied to this burgeoning medium. Cable operators do not use broadcast spectrum frequencies, but they are licensed by local communities in the same way broadcast television station operators are licensed by the FCC. Moreover, cable operators partake in the same kind of First Amendment activities as do their broadcast television counterparts.
Congress tried to clarify the responsibilities of cable operators when it passed the Cable Television Consumer Protection and Competition Act of 1992 (CTCPCA). CTCPCA authorized cable channel operators to restrict or block indecent programming. The authorization applied to leased access channels, which federal law requires cable systems to reserve for lease by unaffiliated parties, and public access channels, which include educational, governmental, or local channels that federal law requires cable operators to carry. Cable operators claimed that the statute was fully consistent with the First Amendment because it left judgments about the suitability of programming to the editorial discretion of the operators themselves. But cable television viewers filed a lawsuit arguing that the statute violated the First Amendment by giving cable operators absolute power to determine programming content.
Regulation of Advertising
Society is not served by false, deceptive, or harmful advertisements, and thus regulations aimed at curbing such advertising are typically found to serve a substantial governmental interest. The best example involves the federal ban on cigarette advertising. In 1967 the FCC acted upon citizen complaints against the misleading nature of tobacco advertisements by implementing a rule that required any television station carrying cigarette advertisements to also air public service announcements addressing the health risks posed by tobacco. The rule withstood a court challenge. In addition, two years later Congress passed the Public Health and Cigarette Smoking Act of 1969, which banned all electronic advertising of cigarettes as inherently misleading and harmful. The act took effect in 1971 and survived a court challenge that same year. The law remains in effect today. No federal laws or FCC rules ban alcohol advertising, however.
Children and Television
The 1990 Children's Television Act (CTA) was passed to increase the amount of educational and informational television programming for children. CTA requires broadcast stations to serve the educational and informational needs of children through their overall programming, including programming specifically designed to serve these needs ("core programming"). Core programming is programming specifically designed to serve the educational and informational needs of children ages 16 and under. CTA requires that broadcasters:
• Provide parents and consumers with advance information about core programs being aired.
• Define the type of programs that qualify as core programs.
• Air at least three hours per week of core educational programming.
• Limit the amount of time devoted to commercials during children's programs.
Fueled in part by growing public sentiment against the increasingly violent nature of television programming, NTIA and FCC officials recommended that federal law give parents greater control over the programming viewed by their children. The Telecommunications Act of 1996 introduced a ratings system that requires television shows to be rated for violence and sexual content. The act also created the so-called V-chip, a receptor inside television sets that gives parents the ability to block programs they find unsuitable for their children. Under the act, authority to establish TV ratings is given to a committee comprised of parents, television broadcasters, television producers, cable operators, public interest groups, and other interested individuals from the private sector.
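The blocking decision a V-chip makes can be pictured as a threshold comparison against the parental limit stored in the set. The sketch below is a simplified illustration, not actual V-chip firmware: it assumes the familiar TV Parental Guidelines age categories and omits the content descriptors (such as V, S, L, and D) that the real system also transmits:

```python
# TV Parental Guidelines age ratings, ordered from least to most restricted.
RATING_ORDER = ["TV-Y", "TV-Y7", "TV-G", "TV-PG", "TV-14", "TV-MA"]

def should_block(program_rating: str, parental_limit: str) -> bool:
    """Return True if the program's rating exceeds the household's limit."""
    return RATING_ORDER.index(program_rating) > RATING_ORDER.index(parental_limit)

# A household limit of TV-PG blocks TV-14 and TV-MA programs.
print(should_block("TV-14", "TV-PG"))  # -> True (blocked)
print(should_block("TV-G", "TV-PG"))   # -> False (allowed)
```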
In 2004, the FCC imposed children's educational and informational programming obligations on digital multicast broadcasters. Effective January 1, 2006, digital broadcasters must provide at least three hours per week of core programming on their main programming stream. The minimum amount of core programming increases for digital broadcasters that multicast, in proportion to the amount of free video programming offered by the broadcaster on multicast channels. The FCC also limited the amount of commercial matter on all digital video programming, whether free or pay, that is aimed at an audience 12 years old and under.
Beginning January 1, 2006, the FCC also imposed rules governing and limiting the display of Internet web site addresses during programs directed at children 12 and under. The requirements apply to both analog and digital programming. Moreover, FCC rules prohibit "host-selling." According to the FCC, host-selling is any character endorsement that may interfere with a child viewer's ability to distinguish between program and non-program material.
Additional Resources
American Jurisprudence. West Group, 1998. U.S. Constitution: First Amendment.
West's Encyclopedia of American Law. West Group, 1998.
Federal Communications Commission
445 12th Street S.W.
Washington, DC 20554 USA
Phone: (888) 225-5322
Fax: (202) 835-5322
Primary Contact: Kevin J. Martin, Chairman
National Telecommunications and Information Administration
1401 Constitution Ave. N.W.
Washington, DC 20230 USA
Phone: (202) 482-7002
Fax: (202) 482-1840
Primary Contact: Michael D. Gallagher, Administrator
In his book Watching Race, Herman Gray describes television as a medium used to “engage, understand, negotiate, and make sense of the material circumstances of [everyday life]” (Gray 1995, p. 43). A person’s entire worldview is obviously influenced by many factors, but it remains evident that, in many parts of the world, television plays a significant role in shaping public perceptions of race and racial differences.
Racialized groups are relegated to the role of the “invisible other” in television programming in the United States. Viewers of most commercial programming are led to believe racial/ethnic groups are virtually nonexistent, and that those that do exist reside in racial worlds or on television channels of their own. Cultural programming is needed that expresses the range of experiences of African Americans, Native Americans, Asian Americans, and Latino Americans. In the early twenty-first century, however, the major television networks (CBS, NBC, ABC, and FOX) continue to marginalize and stigmatize racial minorities in their programming.
In early television programming, African-American performers occupied stereotypical, unflattering roles. These actors gradually became a part of mainstream society at the expense of self-degradation, and the visual images of blacks on network television created a false portrayal of African-American identity. Such controlling images as the Jezebel, mammy, servant, matriarch, buffoon, minstrel, and slave presented a distorted reality of racial identity for African Americans. Shows such as Amos and Andy (1951–1953), Beulah (1950–1953), and Jack Benny (1950–1965) portrayed African Americans as lacking intellect and seemingly enjoying their subservient and less powerful positions in the world. As an example, the show Beulah typified the "good old-fashioned minstrel show" (Haggins 2001, p. 250). The lead character, Beulah, was the stereotypical domestic servant who was "happy" with her lot in life serving her boss. Similarly, her friend Oriole was her queen-sized "childlike idiot friend," perpetuating the "pickaninny" stereotype of the bulging-eyed child with thick lips and unkempt hair, eating a slice of watermelon. The third character typified the "Uncle Tom" and "Coon" character. Beulah's boyfriend Bill embodied what it allegedly meant to be a black man. While in the presence of whites, Bill was hardworking, dependable, and content. In his "real" state, however, he was lazy and avoided the work and responsibilities he should have held as a man. Although more recent televisual depictions have not been as blatantly racist as this, shows such as Beulah paved the way for controlling images to be constructed, transformed, and perpetuated in all forms of media. Unfortunately, these images continue to create an illusion of African Americans as subservient and of less value, in addition to being criminal, predatory, and a threat to European Americans. These negative portrayals are a result of slavery and the objectification of African slaves as sexual creatures and servants to the whites who colonized North America, a stigma that remains in place in the twenty-first century.
Television portrayals have, to some degree, begun to challenge many of the long-standing controlling images associated with African Americans. Although they may not be as blatantly racist as they were in the past, networks, reporters, and television writers still perpetuate a subtle form of media racism that R. R. Means Coleman has dubbed "neominstrelsy." According to Coleman, neominstrelsy refers to contemporary versions of the minstrel images of African Americans that were pervasive in early television programming. The early images set the stage for current media images, which often still function to sustain the myth that African Americans are unscrupulous, lack morals, and are only capable of entertaining others through comedy. According to the media researcher Robert Entman, "images of Blacks are produced by network news [that] reinforce whites' antagonism toward Blacks," and these images perpetuate stereotypic depictions and contribute to this cycle of television racism (Entman 1994, p. 516). Because television is one of the most heavily used media for information and entertainment purposes, it is imperative that media outlets and creators begin to rethink how these distorted images create an unrealistic picture of African-American life.
Unlike other racial groups, African Americans have been depicted in many television shows in which they have made up the vast majority of the cast. These shows include, but are not limited to, Good Times (1974–1979), The Cosby Show (1984–1992), A Different World (1987–1993), Moesha (1996–2001), Sister, Sister (1994–1999), Girlfriends (2000–), Half & Half (2002–), and Everybody Hates Chris (2005–). These shows have portrayed the diversity that exists within the African-American community. Yet while different kinds of relationships and life experiences have been portrayed in these shows, the constant is that African Americans are generally confined to shows that have a neominstrelsy theme. The characters are comedic and, to varying degrees, embody the Jezebel, mammy, matriarch, buffoon, minstrel, and “Stepin Fetchit” stereotypes. These images sustain the myth that African Americans are unscrupulous, lack morals, and are only capable of entertaining others through comedy. The Cosby Show was an exception, however, for it attempted to debunk these stereotypes. Yet while the show was praised for communicating a positive image of African-American identity, it was also criticized for not being “black” enough.
In the 2006–2007 television season, there were fewer than seven programs with a predominately African-American cast. While this may be better than blacks having no visibility at all, little is being done to deconstruct societal beliefs about African Americans. Sadly, the shows are rarely fully developed and confine blacks to the genre of comedy, making it difficult to counter longstanding controlling images that impact real-world perceptions of the African-American community.
Native Americans are rarely portrayed in movies or on television, and when they are shown they are often wearing stereotypical attire (i.e., headdress) or armed with antiquated weaponry (i.e., bow and arrow), ready to fulfill the all too familiar image of the "noble" savage. These images perpetuate a negative image of racial/ethnic identity for First Nations people and instill the belief that being Native American is a "thing of the past." Audiences are led to believe that Native Americans either do not exist or are too small in number to be fairly represented. No matter what period of time in which a story is being told, "contemporary portrayals [of First Nations persons] are typically presented in an historic context" (Merskin 1998, p. 335).
This depiction is a visual representation of a linguistic image accepted as an accurate symbol of First Nations people. They are restricted to an image of being a homogenous group of people lacking any distinctive qualities (e.g., tribes) or heterogeneity. The most pervasive and troubling image is the "conventionalized imagery [that] depicts Indians as wild, savage, heathen, silent, noble, childlike, uncivilized, premodern, immature, ignorant, bloodthirsty, and historical or timeless, all in juxtaposition to the white civilized, mature, modern (usually) Christian American man" (Meek 2006, p. 119). Other stereotypes include the portrayal of Native Americans as drunkards, gamblers, and wards of the government, and these images are too often perceived as accurate representations of the original inhabitants of North America. Television programs with a periodic or a recurring Native American character include, but are not limited to, The Lone Ranger (1949–1957), Dr. Quinn, Medicine Woman (1993–1998), Walker, Texas Ranger (1993–2001), Northern Exposure (1990–1995), MacGyver (1985–1992), and Quantum Leap (1989–1993).
These shows portray First Nations people as either occupying a space on the Western frontier or being virtually invisible on television’s racial landscape. This portrayal is attributed to movie and television “Westerns,” which created the stereotypical genre of media representations. According to the sociologist Steve Mizrach, “Indians are shown as bloodthirsty savages, obstacles to progress, predators on peaceful settlers, enemy ‘hostiles’ of the U.S. Cavalry, etc. … the political context of the Indian Wars completely disappears” (Mizrach 1998). To this day, notes Duane Champagne of the Native Nations Law and Policy Center at the University of California, Los Angeles, “Hollywood prefers to isolate its Indians safely within the romantic past, rather than take a close look at Native American issues in the contemporary world” (Champagne 1994, p. 719). These archaic, inaccurate portrayals are further problematized by “images in which the linguistic behaviors of others are simplified and seen as deriving from those persons’ essences” (Meek 2006, p. 95) and “remind us of an oppressive past” (Merskin 2001, p. 160). Through these depictions, First Nations people are presented as “nonnative, incompetent speaker[s] of English” (Meek 2006, p. 96). This is a strategy used to emphasize “Indian civilized otherness by having the character speak English in monosyllables” (Taylor 2000, p. 375).
U.S. companies have long used images of American Indians for product promotion, mainly to “build an association with an idealized and romanticized notion of the past” (Merskin 2001, p. 160). Products such as Land O’ Lakes butter, Sue Bee honey, Big Chief sugar, and Crazy Horse malt liquor have stereotypic caricatures on their labels that are supposed to reflect Native American ethnicity, but which are actually “dehumanizing, one-dimensional images based on a tragic past” (Merskin 2000, p. 167).
Asian Americans are represented on television as a homogenous group of people whose ethnicity is Chinese, Korean, or Japanese. This worldview of a population that is in reality very ethnically diverse is both problematic and restricting. Stereotypes of Asian Americans emerged from efforts by whites to oppress racial groups deemed inferior, and from a nineteenth-century fear of an Asian expansion into white occupations and communities, often referred to as the “Yellow Peril.” These controlling images emerged in order to reduce Asian-American men and women to caricatures based on how the dominant society perceived their racial and gendered identities.
There are both general cultural stereotypes and gender-specific stereotypes of Asian Americans disseminated in the media. General cultural stereotypes include assumptions that Asian Americans are: (1) the model minority, (2) perpetual foreigners, (3) inherently and passively predatory immigrants who never give back, (4) restricted to clichéd occupations (e.g., restaurant workers, laundry workers, martial artists), and (5) inherently comical or sinister. Controlling, gender-specific images of Asian-American identity include Charlie Chan, Fu Manchu, Dragon Lady, and China Doll. Charlie Chan and Fu Manchu are emasculated stereotypes of Asian men as eunuchs or asexual. Charlie Chan, a detective character, was “effeminate, wimpy,” and “dainty,” (Sun 2003, p. 658), as well as “a mysterious man, possessing awesome powers of deduction” (Shah 2003). He was also deferential to whites, “non-threatening, and revealed his ‘Asian wisdom’ in snippets of ‘fortune-cookie’ observations.” Conversely, there is the Fu Manchu character, who is “a cruel, cunning, diabolical representative of the ‘yellow peril”’ (Sun 2003, p. 658).
Asian-American women are portrayed as being hypersexual–or as the opposite of "asexual" Asian men (Sun 2003). The Lotus Blossom (i.e., China Doll, Geisha Girl, shy Polynesian beauty) is "a sexual-romantic object," "utterly feminine, delicate, and welcome respites from their often loud, independent American counterparts" (Sun 2003, p. 659). Dragon Lady is the direct opposite of the Lotus Blossom. She is "cunning, manipulative, and evil," "aggressive," and "exudes exotic danger" (Sun 2003, p. 659). There are also the "added characteristics of being sexually alluring and sophisticated and determined to seduce and corrupt white men" (Shah 2003).
Shows with at least one recurring character of Asian descent include: The Courtship of Eddie's Father (1969–1972), Happy Days (1974–1984), Quincy, M.E. (1976–1983), All-American Girl (1994–1995), Ally McBeal (1997–2002), Mad TV (1995–), Half & Half (2002–), Lost (2004–), and Grey's Anatomy (2005–). Examples of how long-held stereotypes of Asian-American women are perpetuated include the character Ling Woo on Ally McBeal and Miss Swan on Mad TV. Ling Woo was an attorney (portrayed by the Chinese actress Lucy Liu) who was "tough, rude, candid, aggressive, sharp tongued, and manipulative" and hypersexualized (Sun 2003, p. 661). She was also a feminist, in stark contrast with past portrayals of Asian women. While some Asian Americans believed Ling was a stereotype breaker, she still perpetuated the Dragon Lady stereotype, especially when she "growl[ed] like an animal, breathing fire at Ally, walking into the office to the music of Wicked Witch of the West from The Wizard of Oz" (Sun 2003, p. 661).
Miss Swan is an Asian-American character on FOX's sketch comedy show Mad TV. She is played by the Jewish comedian Alexandria Borstein and represents an example of "yellowface," which is the Asian equivalent of blackface and refers to a non-Asian person "performing" an Asian identity. Miss Swan is "a babbling nail salon owner with a weak grasp of the English language" (Armstrong 2000), and she is always depicted as the perpetual foreigner, inherently predatory and restricted to the occupation of nail salon owner. She is also a comic character who speaks broken, unintelligible English. Borstein is not Asian, a fact that appears to have been accepted without question by the show's audience. This mixed casting of Asian characters was also a problem with the short-lived sitcom All-American Girl, which portrayed a Korean family but cast only one Korean actor (the comedian Margaret Cho). All the other actors were either Japanese American or Chinese American, thus perpetuating the assumption that Asians are interchangeable and must assimilate to mainstream (white) culture in order to "fit in."
The controlling images of Asian Americans distort what it means to belong to this very heterogeneous ethnic group. Attempts to diversify television programming were made with All-American Girl, but much more work is needed to accurately represent Asian Americans. Both Lost and Grey’s Anatomy have strong and visible Asian-American actors as part of the regular cast. As the journalist Donal Brown notes, UCLA researchers believe these shows and characters are complex and have great appeal across racial and ethnic groups, but they are “concerned that the Asian American characters on television [are] portrayed in high status occupations perpetuating the ‘model minority’ stereotype” (Brown 2006).
"Latino representation in Hollywood is not keeping pace with the explosion of the U.S. Hispanic population, and depictions of Latinos in television and film too often reinforce stereotypes" (Stevens 2004). Television shows purport to reflect reality in their programs, but they rarely, if ever, do so when casting characters. According to the advocacy group Children Now, Latinos make up over 12.5 percent of the U.S. population, yet only 2 percent of characters on television are Latino (Stevens 2004), not including those Latinos who are portraying white (non-ethnic) characters.
Latino Americans have been subjugated and oppressed as immigrants “invading” U.S. culture. Contemporary immigration issues notwithstanding, the most prevailing stereotypes associated with Latino-American males are the glorified drug dealer, the “Latin lover,” the “greaser,” and the “bandito” (Márquez 2004). Latina women are depicted as deviant, “frilly señoritas” or as “volcanic temptresses,” and Latino families, in general, are “unintelligent,” “passive,” “deviant,” and “dependent” (Márquez 2004). These depictions may be rare, but they can undoubtedly have a significant impact on perceptions and attitudes people develop about individuals of Latin descent. Images of Latino Americans do not reflect the “Latino explosion” in U.S. culture, and they ultimately reinforce the stereotypes that should be countered. These images may not be fully positive or fully negative, but their rarity makes it more problematic that these images are so restricting.
Notable television programs featuring or including a Latino-American character include: Chico and the Man (1974–1978), Luis (2003), The Ortegas (2003), NYPD Blue (1993–2005), Will & Grace (1998–2006), Popstar (2001), George Lopez (2002–), The West Wing (1999–2006), The Brothers Garcia (2000–2003), Taina (2001–2002), Dora the Explorer (2000–), Desperate Housewives (2004–), CSI: Miami (2002–), and Ugly Betty (2006–). Latino-American culture has had tremendous appeal in popular culture, yet members of the different ethnic groups within the Latino community remain marginalized in primetime television programming. One promising program that can potentially debunk these controlling images is Ugly Betty, starring the actress America Ferrera. Betty aspires to success in the fashion industry and faces opposition because she does not fit the industry (read: mainstream) cultural standard of beauty. Despite much opposition, she refuses to succumb to societal expectations and remains committed to not compromising her character and integrity.
Ugly Betty is based on an incredibly popular Colombian telenovela (soap opera), Yo soy Betty, la fea, that was very successful in Mexico, India, Russia, and Germany. Although the lead character defies conventional wisdom regarding televisual success, the show may presage an era in which issues concerning racial representation in television are dealt with onscreen as well as off.
Television demographics in the United States should mirror the racial demographics of both the country and the cities within which the television programs take place, but many analyses suggest they do not (see Márquez 2004). This becomes particularly salient for individuals who have had limited interpersonal contact with people from other racial groups. Diversifying television production teams and actors is an effective strategy for eradicating subtle and blatant racism, or "symbolic annihilation" (Merskin 1998). Community activism is also a powerful tool in this regard. Through research and the creation of "diversity development" programs at networks like FOX, the national Children Now organization offers practical approaches to addressing racial representation in the media. Efforts by Children Now and other organizations committed to addressing issues of fair racial and ethnic representation in the media are critical in bringing about such change. It is only through education and formal efforts that programmers, scriptwriters, and other pivotal players can be made aware of the exclusionary nature of television. Awareness of such racism will ideally prompt the television community to become proactive in redefining their role in perpetuating controlling images that continue to plague twenty-first-century portrayals of racial groups.
SEE ALSO Cultural Racism; Film and Asian Americans; Social Psychology of Racism; Stereotype Threat and Racial Stigma; Symbolic and Modern Racism.
Armstrong, Mark. 2000. “Mr. Wong, Miss Swan: Asian Stereotypes Attacked.” E!-Online News, August 11. Available from
Brown, Donal. 2006. “Asian Americans Go Missing When It Comes to TV.” Available from
Champagne, Duane, ed. 1994. Native America: Portrait of the Peoples. Detroit, MI: Visible Ink.
Chihara, Michelle. 2000. “There’s Something About Lucy.” Boston Phoenix, February 28. Also available from
Entman, Robert. 1994. “Representation and Reality in the Portrayal of Blacks on Network Television Shows.” Journalism Quarterly 71 (3): 509–520.
Gray, Herman. 1995. Watching Race: Television and the Struggle for “Blackness.” Minneapolis: University of Minnesota Press.
Haggins, B. L. 2001. “Why ‘Beulah’ and ‘Andy’ Still Play Today: Minstrelsy in the New Millennium.” Emergences: Journal for the Study of Media & Composite Cultures 11 (2): 249–267.
Inniss, Leslie B., and Joe R. Feagin. 1995. “The Cosby Show: The View from the Black Middle Class.” Journal of Black Studies 25 (6): 692–711.
Mayeda, Daniel M. "EWP Working to Increase Diversity in Television." Available from
Means Coleman, R. R. 1998. African American Viewers and the Black Situation Comedy: Situating Racial Humor. New York: Garland.
Meek, B. A. 2006. “And the Injun Goes ‘How!’: Representations of American Indian English in White Public Space.” Language in Society 35 (1): 93–128.
Méndez-Méndez, Serafín, and Diane Alverio. 2002. Network Brownout 2003: The Portrayal of Latinos in Network Television News. Report Prepared for the National Association of Hispanic Journalists. Available from
Merskin, Debra. 1998. “Sending up Signals: A Survey of Native American Media Use and Representation in the Mass Media.” Howard Journal of Communications 9: 333–345.
———. 2001. “Winnebagos, Cherokees, Apaches and Dakotas: The Persistence of Stereotyping of American Indians in American Advertising Brands.” Howard Journal of Communications 12: 159–169.
Mizrach, Steve. 1998. “Do Electronic Mass Media Have Negative Effects on Indigenous People?” Available from
Rebensdorf, Alicia. 2001. “The Network Brown-Out.” AlterNet. Available from
Shah, Hemant. 2003. “‘Asian Culture’ and Asian American Identities in the Television and Film Industries of the United States.” Studies in Media & Information Literacy Education 3. Also available from
Stevens, S. 2004. “Reflecting Reality: A Fordham Professor Testifies before Congress about the Dearth of Latinos on Television and in Film.” Available at
Sun, Chyng Feng. 2003. “Ling Woo in Historical Context: the New Face of Asian American Stereotypes on Television.” In Gender, Race, and Class in Media: A Text-Reader, 2nd ed., edited by Gail Dines and Jean M. Humez. Thousand Oaks, CA: Sage.
Taylor, Rhonda H. 2000. "Indian in the Cupboard: A Case Study in Perspective." International Journal of Qualitative Studies in Education 13 (4): 371–384.
Tina M. Harris
At the same time radio began to achieve commercial viability in the 1920s, the United States and Britain began experimenting with "television," the wireless transmission of moving pictures. Although Britain was initially somewhat more successful, both countries experienced a great deal of difficulty in the early stages. There were a variety of reasons for this. In America, many people whose livelihoods were tied to radio were also responsible for developing television. Accordingly, they were in no hurry to see radio, a sure money maker, usurped by the new medium. In addition, the Depression greatly slowed the development of television in the 1930s. There was also a tremendous amount of infighting between potential television manufacturers and the Federal Communications Commission (FCC) in trying to establish uniform technical standards. And finally, just as it seemed as though television was poised to enter American homes, the onset of World War II delayed its ascendancy until the war's end. However, in the late 1940s and early 1950s commercial television exploded on the American market, forever changing the way products are sold, people are entertained, and news events are reported. In the years immediately following World War II television quickly became America's dominant medium, influencing, shaping, and recording popular culture in a way no other medium has ever equaled.
Although televisions first appeared on the market in 1939, because there were virtually no stations and no established programming, it wasn't until just after World War II that TV began its meteoric rise to media dominance. As John Findling and Frank Thackeray note in Events That Changed America in the Twentieth Century, in 1946 only 7,000 TV sets were sold. However, as television stations began appearing in an increasing number of cities, the number of sets sold rose dramatically. In 1948 172,000 sets were sold; in 1950 there were more than 5,000,000 sets sold. By 1960 more than 90 percent of American homes had TV sets, a percentage which has only climbed since. Before television, Americans had spent their leisure time in a variety of ways. But as each new station appeared in a particular city, corresponding drops would occur in restaurant business, movie gates, book and magazine sales, and radio audiences. By the early 1960s Americans were watching over 40 hours of TV a week, a number that has remained remarkably stable ever since.
Television originally had only 12 Very High Frequency (VHF) channels—2 through 13. In the late 1940s over 100 stations were competing for transmission on VHF channels. Frequency overcrowding resulted in stations interfering with one another, which led to the FCC banning the issuance of new licenses for VHF channels for nearly four years, at the conclusion of which stations receiving new licenses were given recently developed Ultra High Frequency (UHF) channels (14 through 83). However, most TV sets needed a special attachment to receive UHF channels, which also had a worse picture and poorer sound than VHF channels. Unfortunately, it was mostly educational, public access, and community channels that were relegated to UHF. Because of the FCC's ban, the three major networks, ABC, CBS, and NBC, were able to corner the VHF channels and dominate the television market until well into the 1980s.
From its introduction in American society, television has proven itself capable of holding its audience riveted to the screen for countless hours. As a result, people who saw television as a means through which to provide culturally uplifting programming to the American public were gravely disappointed. Instead, TV almost immediately became an unprecedentedly effective means of selling products. Most Americans weren't interested in "educational" programming, and if people don't watch, advertisers don't pay for air time in which to sell their products. TV shows in the late 1940s followed the model established by the success of radio: single advertisers paid for whole shows, the most common of which were half-hour genre and variety shows. But in 1950 a small lipstick company named Hazel Bishop changed forever the way companies sold their products.
When Hazel Bishop first began advertising on TV in 1950 they had only $50,000 a year in sales. In two short years of television advertising, and at a time when only 10 percent of American homes had TV sets, that number rose to a stunning $4.5 million. As advertisers flocked to hawk their products on TV, TV executives scrambled to find a way to accommodate as many companies as possible, which would result in astronomical profits for both the advertisers and the networks. TV executives realized that single product sponsorship was no longer effective. Instead, they devised a system of longer breaks during a show, which could be split up into 30 second "spots" and sold to a much larger number of advertisers. Although this advertising innovation led to television's greatest period of profitability, it also led to advertising dictating television programming.
In the early 1950s television advertisers realized they had a monopoly on the American public; they were competing with each other, not with other mediums, such as books or magazines. Americans watched regardless of what was on. Advertisers discovered that what most people would watch was the least objectionable (and often the most innocuous) show in a given time slot; hence the birth of the concept of "Least Objectionable Programming." A TV show didn't have to be good; it only had to be less objectionable than the other shows in the same time slot. Although more "serious" dramatic television didn't entirely disappear, the majority of shows were tailored to create the mood advertisers thought would result in their consumers being the most amenable to their products. By the mid-1950s lightweight sitcoms dominated the American television market. The relative success and happiness of television characters became easily measurable by the products they consumed.
Prior to the twentieth century, "leisure time" was a concept realized by generally only the very wealthy. But as the American middle class grew astoundingly fast in the post-World War II boom, a much larger population than ever before enjoyed leisure time, which helped contribute to television's remarkable popularity. Perhaps even more important was the rise of the concept of "disposable income," money that people could spend on their wants rather than needs. Advertisers paying for the right to influence how people might spend their disposable income largely funded television. As a result, television was ostensibly "free" prior to the late 1970s. Nevertheless, television's cost has always been high; it has played perhaps the single largest role in contributing to America's becoming a consumer culture unparalleled in world history. Countless television shows have achieved an iconic stature in American popular culture, but none have had as powerful an effect on the way Americans live their day-to-day lives as have commercials.
By the mid-1950s it became clear that television's influence would not be confined to the screen. Other forms of media simply could not compete directly with television. As a result, they had to change their markets and formats in order to secure a consistent, although generally much smaller than pre-television, audience. Perhaps the most far-reaching consequence of the rise of television is that America went from being a country of readers to a country of watchers. Previously hugely popular national magazines such as Collier's, Life, and The Saturday Evening Post eventually went out of business. Likewise, as television, an ostensibly more exciting visual medium than radio, adapted programming previously confined to the radio, radio shows quickly lost their audience. Magazines and radio stations responded similarly. Rather than trying to compete with TV they became specialized, targeting a singular demographic audience. Simultaneously, advertisers grew more savvy in their research and development and realized that highly specific consumer markets could be reached via radio and magazine advertising. Strangely, television's rise to prominence secured the long-term success of radio and magazines; their response to the threat of television eventually resulted in a much larger number of magazines and radio stations than had been available before television. In the late 1990s audiences can find a radio station or magazine that focuses on just about any subject they might want.
Perhaps the industry struck hardest by the advent of television was the American film industry. Hollywood initially considered TV an inferior market, not worthy of its consideration. And why not, for in the late 1940s as many as 90 million people a week went to the movies. But television's convenience and easy accessibility proved stiff competition for Hollywood. By the mid-1950s the industry's audience had been reduced to half its former number. Hollywood never recovered as far as actual theater audiences are concerned. In fact, by the late 1990s only 15 million people a week attended the cinema, and this in a country with twice the population it had in the late 1940s. However, Hollywood, as it seemingly always does, found a way to adapt. Rather than relying exclusively on box-office receipts, Hollywood learned to use television to its advantage. Now aftermarket profits, including the money generated from pay-per-view, cable channels, premium movie channels, and, most of all, video sales and rentals, are just as important to a film's success, if not more so, than a film's box-office take.
In addition, television's media domination has contributed greatly to blurring the lines between TV and Hollywood. Most Hollywood studios also produce TV shows on their premises. Furthermore, just as radio stars once made the jump from the airwaves to the silver screen, TV stars routinely make the jump from the small screen to the movies. As a result, many American celebrities can't be simply categorized in the way they once were. Too many stars have their feet in too many different mediums to validate singular labels. Take, for example, Oprah Winfrey, a television talk show maven who has also been involved in several successful books, promoted the literary careers of others, frequently appeared on other talk shows and news magazines as a guest, and has acted in and produced a number of both television and Hollywood films. Although hers is an extreme example, television's unequaled cultural influence has resulted in turning any number of stars who would have formerly been confined to one or two mediums into omnipresent multimedia moguls.
If radio ushered in the era of "broadcast journalism," TV helped to further define and legitimize it. In addition, television newscasts have changed the way Americans receive and perceive news. By the late 1950s TV reporters had learned to take advantage of emerging technologies and use them to cover breaking news stories live. Television broadcasts and broadcasters grew to hold sway over public opinion. For example, public sentiment against the Vietnam War was fueled by nightly broadcasts of its seemingly senseless death and destruction. Walter Cronkite added further fuel to the growing fire of anger and resentment in 1968 when he declared on-air that he thought the war in Vietnam was a "terrible mistake." When most Americans think of the events surrounding the assassinations of JFK, Martin Luther King, and Bobby Kennedy, the civil rights movement, the first moon walk, the Challenger space shuttle disaster, the Gulf War, and the 1992 riots in Los Angeles after the Rodney King trial verdict, it is the televised images that first come to mind.
Unfortunately, by the late 1990s many Americans had come to rely on television news as their main source of information. In addition to nightly news and news oriented cable networks, cheap-to-make and highly profitable "news magazines" such as Dateline NBC, 20/20, and 48 Hours have become TV's most common form of programming. Rarely do these shows feature news of much importance; instead, they rely on lurid and titillating reports that do nothing to enrich our knowledge of world events but nevertheless receive consistently high ratings, thus ensuring the continuing flow of advertising dollars. That most Americans rely on television for their information means that most Americans are underinformed; for full accounts of a particular story it is still necessary to seek out supporting written records in newspapers, magazines, and books. The problem with relying on television for information is, as Neil Postman writes, "not that television presents us with entertaining subject matter but that all subject matter is presented as entertaining." Accordingly, it is Postman's contention that America's reliance on TV for information is dangerous, for when people "become distracted by trivia, when cultural life is redefined as a perpetual round of entertainments, when serious conversation becomes a form of baby-talk, when, in short, a people become an audience and their public business a vaudeville act, then a nation finds itself at risk."
Because of news broadcasting and the fact that television is the best way to reach the largest number of Americans, television has helped shape American politics in the second half of the twentieth century. Effective television advertising has become crucial to the success or failure of nearly any national election. Unfortunately, such advertising is rarely completely factual or issue oriented. Instead, most such advertisements are used to smear the reputation of a particular candidate's opponent. Perhaps the most famous example of such advertisements occurred in the 1988 Presidential campaign, during which Republican George Bush ran a series of slanted and inflammatory spots about his Democratic opponent, Michael Dukakis. Furthermore, the careers of several presidents have become inextricably intertwined with TV. For example, President Ronald Reagan, a former minor movie star and television product pitchman, used his television savvy so effectively that he came to be known as "the Great Communicator." Conversely, President Bill Clinton, whose initial effective use of television has reminded some of JFK's, became victim to his own marketability when in August of 1998 he admitted in a televised speech to the nation that he had lied about his affair with a young intern named Monica Lewinsky. His admission, which was meant to put the incident behind him, instead spawned a virtual television cottage industry, with literally dozens of shows devoting themselves to continual discussion about his fate, which was ultimately decided in a televised impeachment trial.
Strangely, considering the man was wary of the medium, perhaps no politician's career has been more tied to television than Richard Nixon's. As a vice presidential candidate on Dwight Eisenhower's 1952 Republican presidential ticket, Nixon came under fire for allegedly receiving illegal funding. A clamor arose to have Nixon removed from the ticket. On September 23, 1952, Nixon went on TV and delivered a denial of the accusations, which has since become known as "the Checkers speech." More than 1 million favorable letters and telegrams were sent supporting Nixon; he remained on the ticket, and he and Eisenhower won in a landslide. Conversely, only eight years later TV would play a role in Nixon's losing his own bid for the White House. Nixon agreed to a series of televised debates with his much more telegenic opponent, John Fitzgerald Kennedy. Nixon's pasty face and sweaty brow may have cost him the election. Many historians believe that Kennedy "won" the debates as much by his more polished appearance and manner as by anything he said. Nixon learned from his error. While again running for the presidency in 1968, Nixon hired a public relations firm to run his campaign. The result was a much more polished, image-conscious, and TV-friendly Nixon; he won the election easily. In the last chapter of Nixon's political career, broadcast and print media helped spur investigations into his involvement with the Watergate affair. The hearings were broadcast live on TV, which helped to turn public opinion against the president, who resigned from office as a result. One of the most famous images in the history of television is that of Nixon turning and waving to the crowd as he boarded the helicopter that removed him from power forever.
Prior to World War II, baseball was widely recognized as America's national pastime. Games were often broadcast live, and the country blissfully spent its summers pursuing the on-the-field exploits of larger-than-life figures such as Babe Ruth. However, after the war other sports grew to prominence, largely because of television, beer, and, until 1970, cigarette advertisers, who saw in sports audiences a target market for their products. Individual sports such as golf and tennis grew in popularity, but team sports such as hockey, basketball, and, most of all, football had the greatest increases. By the late 1960s and with the advent of the Super Bowl, annually America's most viewed broadcast, football surpassed baseball as America's favorite pastime. Because of their nature, team sports have a built-in drama that escalates in intensity over the duration of a season. Americans are drawn to the players in this drama, which has resulted in athletes hawking products perhaps more than any other cultural icons. In fact, as advertising revenues increasingly fund sports, athletes have become perhaps the highest paid workers in America. Michael Jordan earned a reported $30 million to play for the Chicago Bulls in the 1997-98 season. In the fall of 1998 pitcher Kevin Brown signed with the Los Angeles Dodgers for a seven-year deal worth $105 million. Accompanying their paychecks is a rise in media scrutiny. Elite athletes are hounded by paparazzi in a way once reserved for movie stars and royalty. Such is the price of television fame in America.
Despite the ridiculous salaries and the accompanying out-of-control egos of many athletes and owners, it could be argued that television has made sports better for most people. Seeing a game live at a venue remains thrilling, but TV, with its multiple camera angles and slow-motion instant replays, is by far a better way to actually see a game. In addition to better vision, one has all the comforts of home without the hassle of inclement weather, expensive tickets, heavy traffic, and nearly impossible parking. And because of TV and its live transmission of sports, certain moments have become a part of America's collective cultural fabric in a way that never would have been possible without television. Heroes and goats achieve legendary status nearly immediately. Just as important to our culture as the televising of events of political and social importance is the broadcast of sporting events. Although not particularly significant in their contribution to human progress, because of television the images of San Francisco 49er Joe Montana's pass to Dwight Clark in the back of the end zone in the 1981 NFC championship game to beat the Dallas Cowboys or of a ground ball dribbling between Boston Red Sox first baseman Bill Buckner's legs in the sixth game of the 1986 World Series are just as much a part of American culture's visual memory as the image of Neil Armstrong walking on the moon.
As the twentieth century careens to a close and America prepares to embark on a new century, debates over television's inarguable influence continue to rage. Is TV too violent? Is television's content too sexually oriented? Has television news coverage become vacuous and reliant on the superfluous and tawdry? Is TV damaging our children and contributing to the fraying of America's social fabric? Regardless of the answers to these and countless other questions about television's influence, the inarguable fact is that television is the most important popular culture innovation in history. Our heroes and our villains are coronated and vanquished on television. Sound bites as diverse in intent and inception as "where's the beef," "read my lips," and "just do it" have become permanently and equally ensconced in the national lexicon. Television is not without its flaws, but its accessibility and prevalence have created what never before existed: shared visual cultural touchstones. As one, America mourned the death of JFK, argued about the veracity of the Clarence Thomas/Anita Hill hearings, recoiled in horror as Reginald Denny was pulled from his truck and beaten, and cheered triumphantly as Mark McGwire hoisted his son in the air after hitting his 62nd home run. For better or for worse, in the second half of the twentieth century television dominated and influenced the American cultural landscape in an unprecedented fashion.
—Robert C. Sickels
Further Reading:
Baker, William F., and George Dessart. Down the Tube: An Inside Account of the Failure of American Television. New York, Basic Books, 1998.
Barnouw, Erik. Tube of Plenty: The Evolution of American Television. New York, Oxford University Press, 1990.
Caldwell, John Thornton. Televisuality: Style, Crisis, and Authority in American Television. New Brunswick, Rutgers University Press, 1995.
Comstock, George. The Evolution of American Television. Newbury Park, Sage Publications, 1989.
Findling, John E., and Frank W. Thackeray. Events that Changed America in the Twentieth Century. Westport, Connecticut, Greenwood Press, 1996.
Himmelstein, Hal. Television Myth and the American Mind. Westport, Praeger Publishers, 1994.
Postman, Neil. Amusing Ourselves to Death. New York, Penguin Books, 1985.
Stark, Steven D. Glued to the Set: The 60 Television Shows and Events that Made Us Who We Are Today. New York, Free Press, 1997.
Sturcken, Frank. Live Television: The Golden Age of 1946-1958 in New York. Jefferson, North Carolina, McFarland & Co., 1990.
Udelson, Joseph H. The Great Television Race: A History of the American Television Industry 1925-1941. The University of Alabama Press, 1982.
Television is a telecommunication device for sending (broadcasting) and receiving video and audio signals. The name is derived from the Greek root tele, meaning "far," and the Latin root visio, meaning "sight"; combined, the word means "seeing at a distance." Broadly speaking, television, or TV, is thus the overall technology used to transmit pictures with sound using radio frequency and microwave signals or closed-circuit connections.
Television programming was regularly broadcast in such countries as the United States, England, Germany, France, and the Soviet Union before World War II (1939–1945). However, television in the U.S., for instance, did not become common in homes until the middle 1950s. By the early 2000s, over 250 million television sets were in use in the U.S., nearly one TV set per person.
German engineer and inventor Paul Julius Gottlieb Nipkow (1860–1940) designed the scanning mechanism in 1884 that laid the groundwork for television. Nipkow placed a spiral pattern of holes onto a scanning disk (later called a Nipkow disk). He turned the scanning disk in front of a brightly lit picture so that each part of the picture was eventually exposed. This technology was later used inside cameras and receivers to produce the first television images.
The invention of the cathode ray tube in 1897 by German inventor and physicist Karl Ferdinand Braun (1850–1918) quickly made possible the technology that today is called television. Indeed, by 1907, the cathode ray tube was capable of supplying images for early incarnations of the television.
English inventor Alan Archibald Campbell-Swinton (1863–1930) invented electronic scanning in 1908. He used an electron gun to neutralize charges on an electrified screen. Later, he wrote a scientific paper describing the electronic theory behind the concept of television. Russian-born American physicist Vladimir Kosma Zworykin (1889–1982) added to Campbell-Swinton's idea when he developed an iconoscope camera tube later in the 1920s.
American inventor Charles Francis Jenkins (1867–1934) and English inventor John Logie Baird (1888–1946) used the Nipkow scanning disk in the early 1920s for their developmental work with television. Baird demonstrated his working television model to an audience in 1926. By 1928, he had set up an experimental broadcast system. In addition, Baird demonstrated a color transmission of television. Then, Philo Taylor Farnsworth (1906–1971), an American inventor and engineer, invented a television camera that could convert elements of an image into an electrical signal. Farnsworth demonstrated the first completely electronic television system in 1934, which he eventually patented.
Within 50 years, television had become a dominant form of entertainment and an important way to acquire information. This remains true in the mid-2000s, as the average U.S. citizen spends between two and five hours each day watching television.
Television operates on two principles that underlie how the human brain perceives the visual world. First, if an image is divided into a group of very small colored dots (called pixels), the brain is able to reassemble the individual dots to produce a meaningful image. Second, if a moving image is divided into a series of pictures, with each picture displaying a successive part of the overall sequence, the brain can put all the images together to form a single flowing image. The technology of the television (as well as computers) utilizes these two features of the brain to present images. The dominant basis of the technology is still the cathode ray tube.
Operation of the cathode ray tube
A cathode ray tube contains a positively charged region (the anode) and a negatively charged region (the cathode). The cathode is located at the back of the tube. As electrons exit the cathode, they are attracted to the anode. The electrons are also focused electronically into a tight beam, which passes through the central region of the tube toward the screen. The interior of the tube is almost free of air, so there are few air molecules to deflect the electrons from their path. The electrons travel to the far end of the tube, where they encounter a flat screen. The screen is coated with a molecule called phosphor. When an electron hits a phosphor, the phosphor glows. The electron beam can be focused in a coordinated way on different parts of the phosphor screen, effectively painting the screen (a raster pattern). This process occurs very quickly, about 30 times each second, producing multiple images each second. The resulting pattern of glowing and dark phosphors is what is interpreted by the brain as a moving image.
Black and white television was the first to be developed, as it utilized the simplest technology. In this technology, the phosphor is white. Color television followed, as the medium became more popular, and demands for a more realistic image increased. In a color television, three electron beams are present. They are called the red, green, and blue beams. Additionally, the phosphor coating is not just white. Rather, the screen is coated with red, green, and blue phosphors that are arranged in stripes. Depending on which electron beam is firing and which color phosphor dots are being hit, a spectrum of colors is produced. As with the black and white television, the brain reassembles the information to produce a recognizable image.
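To make the raster-and-pixel model concrete, here is a minimal Python sketch of a frame as a grid of red, green, and blue intensities, mirroring the three-beam arrangement described above. The Frame class, its default dimensions, and its method names are illustrative inventions, not part of any broadcast standard.

```python
class Frame:
    """A frame modeled as a grid of (red, green, blue) pixel intensities."""

    def __init__(self, lines=525, dots_per_line=500):
        self.lines = lines
        self.dots_per_line = dots_per_line
        # Each pixel holds intensities from 0 to 255, one per phosphor color.
        self.pixels = [[(0, 0, 0) for _ in range(dots_per_line)]
                       for _ in range(lines)]

    def paint(self, line, dot, red, green, blue):
        # Analogous to the three electron beams striking one screen location.
        self.pixels[line][dot] = (red, green, blue)

frame = Frame()
frame.paint(100, 250, 255, 255, 0)    # red + green at full strength: yellow
frame.paint(100, 251, 255, 255, 255)  # all three at full strength: white
```

Redrawing such a grid roughly 30 times per second is what lets the brain fuse the individual frames into continuous motion.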
High definition television
High definition television (HDTV) is a format that uses digital video compression, transmission, and presentation to produce a much crisper and more lifelike image than is possible using cathode ray tube technology. This is because more information can be packed into the area of the television screen. Conventional cathode ray tube screens typically have 525 or 625 horizontal lines of dots, with each line containing approximately 500 to 600 dots (or pixels). Put another way, the information possible is roughly 525 × 500 pixels. In contrast, HDTV formats use from 720 to 1,080 lines, with up to 1,920 pixels per line. The added level of detail produces a visually richer image.
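The jump in detail is easy to quantify. The back-of-the-envelope Python calculation below uses the approximate dots-per-line estimate quoted above for conventional screens and the standard 1280 x 720 and 1920 x 1080 HDTV formats.

```python
conventional = 525 * 500   # ~262,500 pixels, per the estimate in the text
hdtv_720 = 1280 * 720      # 921,600 pixels
hdtv_1080 = 1920 * 1080    # 2,073,600 pixels

print(f"conventional:   {conventional:,} pixels")
print(f"720-line HDTV:  {hdtv_720:,} pixels ({hdtv_720 / conventional:.1f}x)")
print(f"1080-line HDTV: {hdtv_1080:,} pixels ({hdtv_1080 / conventional:.1f}x)")
```

Even the lower HDTV format carries roughly three and a half times the pixel information of a conventional frame, which is where the crisper image comes from.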
Televisions of the 1950s and 1960s utilized an analog signal. The signals were beamed out into the air from the television station, to be collected by an antenna positioned on a building or directly on the television (commonly called rabbit ears). Today, the signal is digitized. This allows the electronic pulses to be sent through cable wire to the television, or to a satellite that then beams the signal to a receiving dish, in a format known as MPEG-2 (exactly like the video files that can be loaded onto a computer).
The digital signal is less subject to deterioration than is the analog signal. Thus, a better quality image reaches the television.
Cable television
Television signals are transmitted on frequencies that are limited in range. Only persons residing within a few dozen miles of a TV transmitter can usually expect clear and interference-free reception. Community antenna television systems, often referred to as CATV, or simply cable, developed to provide a few television signals for subscribers far beyond the service area of big-city transmitters. As time passed, cable moved to the big cities. Cable's appeal, even to subscribers able to receive local TV signals without an outdoor antenna, is based on the tremendous variety of programs offered. Some systems provide subscribers with a choice of hundreds of channels.
Cable systems prevent viewers from watching programs they have not contracted to buy by scrambling or by placing special traps in the subscriber's service drop that remove selected channels. The special tuner box that descrambles the signals can often be programmed by a digital code sent from the cable system office, adding or subtracting channels as desired by the subscriber.
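The add-or-subtract behavior of that digital code can be sketched as a simple channel-entitlement set. The Python below is a hypothetical illustration of the idea only; it does not model any real cable system's signaling protocol.

```python
class Descrambler:
    """Hypothetical set-top box that passes only entitled channels."""

    def __init__(self):
        self.entitled = set()

    def apply_code(self, add=(), remove=()):
        # Analogous to the digital code sent from the cable system office.
        self.entitled.update(add)
        self.entitled.difference_update(remove)

    def tune(self, channel):
        return f"channel {channel}" if channel in self.entitled else "scrambled"

box = Descrambler()
box.apply_code(add={2, 7, 30})   # subscriber's basic package
print(box.tune(7))               # -> channel 7
print(box.tune(44))              # -> scrambled
box.apply_code(add={44})         # a pay-per-view purchase unlocks one channel
print(box.tune(44))              # -> channel 44
```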
Wired cable systems generally send their programming from a central site called a head end. TV signals are combined at the head end and then sent down one or more coaxial-cable trunk lines. Along the trunk, signals split off onto shorter branches called spurs to serve individual neighborhoods.
Coaxial cable, even the special type used for CATV trunk lines, is made of material that dissipates the electrical signals passing through it. Signals must be boosted in power periodically along the trunk line, usually every time the signal level has fallen by approximately 20 decibels, the equivalent of the signal having fallen to one-hundredth (1/100th) of its original power. The line amplifiers used must be very sophisticated to handle the wide bandwidth required for many programs without degrading the pictures or adding noise. The amplifiers must adjust for changes in the coaxial cable due primarily to temperature changes. The amplifiers used are much improved over those used by the first primitive community antenna systems, but even today trunk lines are limited in length to about a dozen miles. Not much more than about one hundred line amplifiers can be used along a trunk line before problems become unmanageable.
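The decibel rule of thumb above can be verified with a few lines of arithmetic. In the sketch below, the 20-decibel figure comes from the text; the 10 dB-per-kilometer trunk loss is a purely hypothetical number chosen to illustrate amplifier spacing.

```python
import math

def db_drop(p_out, p_in):
    # Decibels relate powers logarithmically: dB = 10 * log10(p_in / p_out).
    return 10 * math.log10(p_in / p_out)

def remaining_power_ratio(drop_db):
    # A 20 dB drop leaves 10 ** (-20 / 10) = 1/100 of the original power.
    return 10 ** (-drop_db / 10)

print(remaining_power_ratio(20))   # 0.01, i.e., one-hundredth of the power
print(db_drop(0.01, 1.0))          # 20.0, matching the rule of thumb

# With a hypothetical trunk loss of 10 dB per kilometer, the signal must be
# re-amplified every 20 / 10 = 2 km to stay within the 20 dB budget.
print(20 / 10, "km between line amplifiers")
```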
Cable's program offerings are entirely confined within the shielded system. The signals provided to subscribers must not interfere with over-the-air radio and television transmissions using the same frequencies. Because the cable system's offerings are confined within the shielded system, pay-per-view programs can be offered on cable.
Cable is potentially able to import TV signals from a great distance using satellite or terrestrial-microwave relays. Cable systems are required to comply with a rule called Syndex, for syndication exclusivity, under which an over-the-air broadcaster can require that imported signals be blocked when the imported stations carry programs the local broadcaster has paid to broadcast.
The current step in CATV technology is the replacement of wire-based coaxial systems with fiber optic service. Fiber optics is the technology in which electrical signals are converted to light signals by solid-state laser diodes. The light waves are transmitted through very fine glass fibers, so transparent that a beam of light will travel through the fiber for miles.
Cable's conversion to fiber optics results in an enormous increase in system bandwidth. Virtually the entire radio and TV spectrum can be duplicated in a fiber optic system. As an example, every radio and TV station transmitting over the airwaves can be carried in a single thread of fiberglass; 500 separate television channels are easily handled on a single cable system. A fiber optic CATV system can be used for two-way communication more easily than can a wire-cable plant with electronic amplifiers. Fiber optic cable service thus provides much greater support for interactive television services.
Latest television
Plasma television
Plasma televisions have been available commercially since the late 1990s and became popular in the early 2000s. Plasma televisions do not have a cathode ray tube, so the screen can be very thin. Typically, plasma television screens are about 6 in (15 cm) deep but can be as thin as 1 in (2.5 cm). This allows the screen to be hung from a wall. Along with plasma television, flat panel LCD television is also available. Both are considered flat panel televisions. Some models are used as computer monitors.
In a plasma television, tiny fluorescent lights take the place of the electron guns of a conventional picture tube. Red, green, and blue fluorescent lights enable a spectrum of colors to be produced, in much the same way as with conventional television. Each fluorescent light contains a gas in the plasma state. A plasma consists of electrically charged atoms (ions) and free electrons (which carry a negative charge). When an electrical signal is applied to the plasma, the added energy sets the particles bumping into one another, and each collision can release a packet of energy called a photon. The ultraviolet photons released strike a phosphor coating, which then glows.
Rear projection
Rear-projection televisions send the video signal and its images to a projection screen through a lens system. They were introduced in the 1970s but declined in the 1990s as better alternatives became available. Many used large screens, some over 100 in (254 cm). These projection systems are divided into three groups: CRT-based (cathode ray tube-based), LCD-based (liquid crystal display-based), and DLP-based (digital light processing-based). Their quality has improved drastically since the 1970s, so that in the 2000s they remained a viable type of television.
Chrominance Color information added to a video signal.
Coaxial cable A concentric cable, in which the inner conductor is shielded from the outer conductor; used to carry complex signals.
Compact disc Digital recording with extraordinary fidelity.
Field Half a TV frame, a top to bottom sweep of alternate lines.
Frame Full TV frame composed of two interlaced fields.
Parallax Shift in apparent alignment of objects at different distances.
Phosphor Chemical that gives off colored light when struck by electrons.
ATV stands for advanced television, a name created by the U.S. Federal Communications Commission (FCC) for digital TV (or DTV). It is the television system replacing the analog system in the United States, and television technology is rapidly moving toward it. In ATV, the aspects that produce the image are processed as computer-like data. Digitally processed TV offers several tremendous advantages over analog TV methods. In addition to sharper pictures with less noise, a digital system can be much more frugal in its use of spectrum space. ATV includes high-definition television (HDTV), a format for digital video compression, transmission, and presentation developed in the 1980s. HDTV uses 1,080 lines and a wide-screen digital format to provide a very clear picture when compared to the traditional 525- and 625-line televisions.
Most TV frames are filled with information that has not changed from the previous frame. A digital TV system can update only the information that has changed since the last frame. The resulting picture looks to be as normal as the pictures seen for years, but many more images can be transmitted within the same band of frequencies.
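The update-only-what-changed idea can be sketched as simple frame differencing. The toy example below compares two frames pixel by pixel and counts how much actually needs to be retransmitted; real digital TV compression is far more sophisticated, but the saving comes from the same observation.

// Toy model of temporal compression: transmit only the pixels that have
// changed since the previous frame.
public class FrameDiff {
    public static int countChangedPixels(int[] previous, int[] current) {
        int changed = 0;
        for (int i = 0; i < current.length; i++) {
            if (current[i] != previous[i]) {
                changed++; // only these values would be sent
            }
        }
        return changed;
    }

    public static void main(String[] args) {
        int[] frame1 = {10, 10, 10, 10, 10, 10};
        int[] frame2 = {10, 10, 99, 10, 10, 10}; // one pixel changed
        System.out.println("Pixels to retransmit: "
                + countChangedPixels(frame1, frame2) + " of " + frame2.length);
    }
}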
TV audiences have been viewing images processed digitally in this way for years, but the final product has been converted to a wasteful analog signal before it leaves the television transmitter. Satellites have long relayed TV as digitally compressed signals to maximize the utilization of the expensive transponder equipment in orbit about the Earth. The small satellite dishes offered for home reception receive digitally encoded television signals.
ATV will not be compatible with current analog receivers, but it will be phased in gradually in a carefully considered plan that will allow older analog receivers to retire gracefully over time. Television in the 2000s includes such features as DVD (digital versatile disc) players, computers, VCRs (video cassette recorders), computer-like hard drives (to store programming), Internet access, video game consoles, pay-per-view broadcasts, and a variety of other advanced add-on features.
See also Digital recording; Electronics.
Donald Beaty
The device known as the television is actually a receiver that is the end point of a system that transmits pictures and sounds at a distance. The process starts with a television camera that converts all image information into electrical signals, which are delivered to homes and businesses through a television antenna, underground fiber optic cable, or satellite. The function of the receiver, or television set, is to unscramble the electrical signals, converting them into sounds and pictures.
Television is one of the greatest technological developments of all time. It did not happen overnight, but developed over a number of years, taking advantage of advances in the sciences and technologies of the time. Television has not only been a source of entertainment worldwide, but it has also linked people through their common experience of witnessing events that are happening in different parts of the world and beyond. For example, on July 20, 1969, about 720 million people all over the world watched on television as astronaut Neil Armstrong walked on the moon.
In the United States, about 99 percent of households own at least one television set. Over 60 percent subscribe to cable television. The average household watches approximately seven hours of television each day.
A group effort
No single person invented the television. Instead, it is the result of scientific research in various countries over several decades. In 1817, Baron Jöns Jakob Berzelius (1779–1848), a Swedish chemist, identified selenium as a chemical element. He found that selenium could conduct electricity and that this ability to conduct electricity varied with the amount of light hitting it. In 1878, Sir William Crookes (1832–1919), a British chemist and physicist, demonstrated cathode rays (beams of electrons in a glass vacuum tube). These scientific findings occurred separately and would take many years to be applied to the making of television.
In 1884, German engineer Paul Nipkow (1860–1940) built the first crude television with the help of a mechanical scanning disk. Small holes on the rotating disk picked up pieces of images and imprinted them on a light-sensitive selenium tube. A receiver then recreated the image pieces into a whole picture. Nipkow's mechanical invention, crude as it was, employed the scanning principle that would be used by future television cameras and receivers to record and recreate images for a television screen.
Television goes electronic
In 1911, while some scientists were trying to improve on Nipkow's mechanical scanning disk, Scottish electrical engineer Alan Archibald Campbell Swinton (1863–1930) discussed his idea of a "distant electric vision," using cathode rays. Although Swinton never built the electronic television that he so accurately described, other scientists brought into reality his idea of the television set as we know it today.
In 1897, German scientist Ferdinand Karl Braun (1850–1918) invented the cathode ray tube. Inside the glass tube, cathode rays could produce pictures by hitting the fluorescent (glowing) screen at the end of the tube. Boris Rosing of Russia demonstrated in 1907 that the cathode ray tube could serve as the receiver of a television system.
In England, John Logie Baird (1888–1946) experimented with Nipkow's scanning disk in the early 1920s. At around the same time, in the United States, Charles Francis Jenkins (1867–1934) was performing the same experiment. In 1926, Baird was the first to demonstrate the electrical transmission of images in motion.
The invention of television cameras during the 1920s further contributed to the development of television. Philo Farnsworth (1906–1971) of Idaho was only fifteen years old when he figured out the workings of an electronic television system. Farnsworth invented the image dissector tube, an electronic scanner. In 1927, he gave the first public demonstration of the electronic television by transmitting the image of a dollar sign. Along with another American, Allen B. Dumont (1901–1965), Farnsworth developed a pickup tube that helped make the home television set a reality by 1939.
At the same time, Russian immigrant Vladimir Zworykin (1889–1982) invented an electronic camera tube called the iconoscope. Both television cameras invented by Farnsworth and Zworykin used a cathode ray tube as the television receiver for recreating the original images.
Color television
The earliest mention of color television was in a German patent in the early 1900s.
In 1928, John Logie Baird used Nipkow's mechanical scanning disk to demonstrate color television. A color television system developed by Hungarian-born American Peter Goldmark (1906–1977) in 1940 did not receive wide acceptance because it did not work in black-and-white television sets. It took almost twenty years for color television to be commercially available.
Raw Materials
The television is made up of four principal sets of parts: the exterior part or housing, the picture tube, the audio (sound) reception and stereo system, and the electronic components. These electronic parts include cable and antenna input and output devices, a built-in antenna in most television sets, a remote control receiver, computer chips, and access buttons. The remote control, popularly called a "clicker," is an additional part of the television set.
The television housing is made of injection-molded plastic. In injection molding, liquid plastic is forced into a mold with the help of high pressure. The plastic takes on the shape of the mold as it cools. Some television sets may have exterior wooden cabinets. The audio reception and stereo systems are made of metal and plastic.
The picture tube materials consist of glass and a coating made of the chemical phosphor, which glows when struck by electrons. Other picture tube materials include electronic attachments around and at the rear of the tube. Brackets and braces hold the picture tube inside the housing.
The antenna and most of the input-output connections are made of metal. Some of the input-output connections are coated with plastic or special metals to improve the quality of the connection or to insulate it. (The insulation material prevents the escape of heat, electricity, or sound.) The chips (also called microchips) are made of silicon, metal, and solder (a metal that is heated and used to join metals).
Different types of engineers are responsible for designing a television set. These include electronics, audio, video, plastics, and fiber optics engineers. The engineering team may design a bigger television set patterned after an existing model. They may also design new features, including an improved picture, better sound system, or a remote control that can work with other devices, such as a DVD player.
The team members discuss ideas about the new features, redrawing plans as they develop new ideas about the design. After the engineers receive initial approval for manufacturing the set, they make a prototype, or a model, after which the other sets will be patterned. A prototype is important for testing out the design, appearance, and functions of the set. Having a prototype also enables the production engineers to determine the production processes, machining (the cutting, shaping, and finishing by machine), tools, robots, and changes to existing factory production lines.
When the prototype passes a series of strict tests and is finally approved for manufacture by management, the engineers draw detailed plans and produce specifications for the design and production of the model. Specifications include the type of materials needed, their sizes, and workmanship involved in the manufacturing process.
Television uses a process called scanning to capture and then recreate an image. When recording an image, the television camera breaks it down into 525 horizontal lines. Electron beams in the camera tube scan (read) the lines thirty times every second. (In Europe, Australia, and most countries in Asia, each image is separated into 625 lines, with the scanning done at twenty-five times per second.) The television receiver, which is the television set, recreates the images on the screen by using the same electrical signals recorded by the television camera. The picture tube inside the television contains three electron guns that receive the video (image) signals. The electron guns shoot electron beams at the phosphor-coated dots on the screen, scanning the screen in the same pattern that the images were recorded by the camera.
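The scanning rates described above multiply out simply; a quick arithmetic check:

// Quick check of the line rates implied by the scanning figures above.
public class ScanRates {
    public static void main(String[] args) {
        int linesUS = 525, framesPerSecondUS = 30;
        int linesEU = 625, framesPerSecondEU = 25;
        // 525 x 30 = 15,750 and 625 x 25 = 15,625 lines per second.
        System.out.println("US system: " + (linesUS * framesPerSecondUS) + " lines/second");
        System.out.println("European system: " + (linesEU * framesPerSecondEU) + " lines/second");
    }
}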
Raw materials and components that are manufactured by other persons are ordered. The production line is constructed and tested. Finally, the components that would go into the new television sets are put together in the assembly line.
The Manufacturing Process
1 Television housings are mostly made of plastic. Using a process called injection molding, high pressure is applied to liquid plastic to force it into molds. The plastic is allowed to cool and harden. The formed solid plastics are released from the molds, trimmed, and cleaned. They are then assembled to make up the television housing. The molds are designed so that brackets and supports for the various parts of the television set are part of the housing.
Picture tube
2 The television picture tube, also called a cathode ray tube (CRT), is shaped like a funnel. The widest part of the funnel is a slightly curved plate made of glass. The glass is the television screen on which pictures are viewed. A dark tint may be added to the glass plate to improve color. The inside of the screen is covered with tiny dots of phosphors, or chemicals that glow when hit by electrons. The phosphor dots come in the primary colors red, green, and blue.
3 Immediately behind the phosphor layer is a thin metal shadow mask with thousands of small holes. Some shadow masks are made of iron. The better-quality shadow masks are made of a mixture of nickel and iron called Invar that lets the picture tube operate at a higher temperature. Higher temperatures result in brighter pictures.
4 The narrow end of the color picture tube contains three electron guns. Their job is to shoot electron beams at the phosphors, with each gun responsible for a specific color. The shadow mask makes sure the electron guns each shoot at one color of phosphors in a process called scanning (see the scanning sidebar). When hit by electrons, the phosphors light up, creating the pictures on the television screen.
After the electron guns are placed inside the picture tube, air is removed from the tube to prevent it from interfering with the movement of the electrons. Then, the end of the tube is closed off with a fitted electrical plug that will be placed near the back of the set.
5 A deflection yoke, consisting of two electromagnetic coils, is fitted around the neck of the picture tube. The electromagnetic coils cause pulses of high voltage to guide the direction and speed of the electron beams as they scan the television screen.
Audio system
6 The speakers, which go into the housing, are typically made by another company that works closely with the television manufacturer. They are made according to certain characteristics specified by the manufacturer. Wiring, electronic sound controls, and integrated circuitry are assembled in the television set as it travels along the assembly line. An integrated circuit, also called a chip or microchip, is a tiny piece of silicon on which electronic parts and their interconnections are imprinted.
A moving scene that we see on television is actually a series of still (nonmoving) images that are shown in rapid succession. The physical phenomenon called "persistence of vision" enables the retina of the eye to hold on to an image for a fraction of a second longer after the eye has seen it. The brain, which works with the eye, puts these still images together so that the eye perceives them as a single moving scene.
Electronic parts
7 After the picture tube and the audio system are assembled in the set, other electronic parts are added to the rear of the set. The antenna, cable jacks, other input and output jacks, and the electronics for receiving remote control signals are prepared as subassemblies on another assembly line or by specialty contractors hired from outside the company. These electronic components are added to the set, and the housing is closed.
Quality Control
Like other precision products, the television requires strict quality control during manufacture. Inspections, laboratory testing, and field testing are constantly conducted during the development of prototypes. The manufacturer has to be sure the resulting product is not only technologically sound but also safe for use in homes and businesses.
The Future
Researchers continue to find new ways to improve television sets. The high-definition television (HDTV) system that we have today is a digital television system. The conventional television system transmits signals using radio waves. During transmission, these waves can become distorted, for example, by bad weather. The television set, unable to distinguish between distorted and good-quality waves, converts all the radio waves it receives into pictures. Therefore, the resulting images may not all be of good quality.
Digital television, on the other hand, while also using radio waves, assigns a code to the radio waves. When it comes time to recreate the picture, the television set obtains information from the code on how to display the image. HDTV offers clearer and sharper images with its 1,125-line picture. Compared to the traditional 525-line picture, HDTV offers a far better picture because more lines are scanned by the television camera and receiver. This means more details of the images are included.
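One way to picture the "code" is that digital transmission lets the receiver verify each chunk of data before using it. The following is a minimal checksum sketch, illustrative only; broadcast systems use much stronger error-detecting and error-correcting codes than a one-byte sum.

import java.util.Arrays;

// Toy illustration of why coded digital data resists distortion: the
// receiver can check a packet and reject it if corrupted, instead of
// displaying whatever arrives, as an analog set must.
public class PacketCheck {
    static int checksum(byte[] data) {
        int sum = 0;
        for (byte b : data) sum = (sum + (b & 0xFF)) & 0xFF;
        return sum;
    }

    public static void main(String[] args) {
        byte[] packet = {12, 34, 56, 78};
        int sentChecksum = checksum(packet);

        byte[] received = Arrays.copyOf(packet, packet.length);
        received[2] = 57; // simulate distortion in transit

        boolean intact = checksum(received) == sentChecksum;
        System.out.println("Packet intact? " + intact); // false -> discard it
    }
}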
In the future, digital television could also allow the viewer to choose camera angles while watching a concert or a sports event. The viewer could also communicate with the host of a live program and edit movies on screen.
Flat-panel television screens, such as liquid-crystal display (LCD) and plasma screens, are being perfected to achieve the kind of picture and sound seen in movie theaters. They are also seen as replacements for the present bulky television sets made of cathode ray tubes. The flat screens are not only lightweight but are also energy-efficient. However, unless these state-of-the-art technologies become affordable, it will be a while before consumers convert to flat-screen televisions.
antenna:
A device used in television to send and receive electromagnetic waves, or waves of electrical and magnetic force brought about by the vibration of electrons.
cable television:
The transmission of television programs from television stations to home television sets through fiber optic cable or metal cable.
cathode ray tube (CRT):
A vacuum tube whose front part makes up the television screen. In the tube, images are formed by electrons striking the phosphor-coated screen.
chip:
Also called microchip, a very small piece of silicon that carries interconnected electronic components.
electron:
A small particle within an atom that carries a negative charge, which is the basic charge of electricity.
fiber optic cable:
A bundle of hair-thin glass fibers that carry information as beams of light.
fluorescent:
Giving off light when exposed to electric current.
persistence of vision:
A physical phenomenon in which the retina of the eye holds on to an image for a fraction of a second longer after it has seen the image. The brain, which works with the eye, puts these still images together so that the eye perceives them as a single movement.
phosphor:
A chemical that glows when struck by electrons.
satellite:
An object that is put into space and used to receive and send television signals over long distances.
scan:
To move electron beams over a surface in order to transmit an image.
shadow mask:
A metal sheet with thousands of small holes. It is found behind the phosphor layer of the color picture tube and through which three electron guns at the other end of the tube shoot electron beams. The shadow mask ensures that each gun shoots at the specific phosphor color.
silicon:
A nonmetallic material widely used in microchips because of its ability to conduct electricity.
|
<urn:uuid:4d1dea6d-51bf-4fc7-af23-2e5a57248b33>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5,
"fasttext_score": 0.048629581928253174,
"language": "en",
"language_score": 0.960943877696991,
"url": "https://www.encyclopedia.com/science-and-technology/computers-and-electrical-engineering/electrical-engineering/television"
}
|
During the Victorian era, chimney sweeps in the United Kingdom cleaned soot from the insides of chimney flues. From the 1700s through the 1800s, the trade relied on small children, many of them kidnapped or sold by their own parents. These children were kept under a master or an apprentice, slept in squalid quarters, and were forced to work under harsh conditions. Most were older than three but younger than ten, and cleaning inside chimneys put their lives at constant risk.

One requirement of being a flue sweeper was that the child had to be small. To obtain a climbing boy, a master or an apprentice would buy one from his parents or kidnap him off the streets. The boys were often starved so they could fit inside chimneys as narrow as 20 inches and clean the soot off the walls.

The boys' working conditions raised several controversies in the Victorian era: they were rarely paid, many died, others developed cancer or were disfigured for life, and the very practice of forcing a small child to clean inside a chimney was increasingly questioned. As the boys matured, many developed cancer of the scrotum and died of it, and their bones often grew differently from those of average children their age. Many were never paid at all; some were paid only in food. During the cleaning the children had no protective gear and had to work with a broom-like brush held above their heads to sweep the soot. Boys suffocated or became wedged inside the flues and died there; when that happened, the master's next action was to send yet another child up the chimney to remove the body, and frequently both boys ended up stuck.

In later years, a series of acts gradually stopped children from working at such young ages. After repeated deaths and appalling labor conditions were brought to the attention of Parliament in the United Kingdom, numerous acts were created to prevent children from dying in chimneys. The first was the Chimney Sweepers Act of 1788, which established that no boy younger than eight could work; it was the first of many measures to prohibit child labor in the United Kingdom. The Chimney Sweepers and Chimneys Regulation Act of 1840 was meant to stop anyone under the age of twenty-one from cleaning and working in chimneys. Ultimately, in 1875, an additional act building on the earlier ones, the Chimney Sweepers Act of 1875, was passed to forbid the practice, following the death of George Brewster, aged 12. Brewster's body had become jammed inside a flue while he was removing soot from the walls; once people realized young Mr. Brewster was stuck, several attempted to break down the wall and free him, but in the end he was declared dead. Shortly after his death, George's master, Mr. Wyer, was sent to jail and charged with manslaughter. George Brewster was the last child to perish inside a chimney while removing soot.

Today there are no longer chimneys that require anyone to climb inside and remove soot. Modern chimneys burn firewood and are not cleaned daily, and it is now illegal for any child to clean a chimney as a career.
In addition, children are no longer sold for money by impoverished parents, nor are they kidnapped into the trade. Once child labor in the chimney-sweeping industry became illegal, the design of future chimneys changed as well: modern chimneys are large, are built in different sizes, and burn wood, which makes cleaning them easier for everyone.
|
<urn:uuid:fd70f1fa-9883-43f9-95cb-2d3331344dac>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.78125,
"fasttext_score": 0.1879868507385254,
"language": "en",
"language_score": 0.9833986163139343,
"url": "https://youmademydayphotography.com/during-have-to-work-with-a-broom-like-shape/"
}
|
Tuesday, March 5, 2019
what is booting
If you have a smartphone or a computer, or if you follow technology at all, you have probably heard about booting. You may already have browsed many websites for more advanced information about it. In this article I have tried to bring together some of the important information about booting; hopefully you will find it as useful as those other sites.
-> Booting is the process, or set of operations, that loads and then starts the operating system, beginning from the moment the user presses the power button.
-> When a computer starts up (by pressing the power button), the first thing that happens is that a signal is sent to the motherboard, which in turn starts the power supply. After supplying the correct amount of power to each device, the power supply sends the BIOS a signal called the "POWER OK" signal.
Once the BIOS receives the "POWER OK" signal, it starts the booting process by first running a routine called the POST (Power-On Self-Test). The BIOS checks that every device associated with the computer is receiving the right amount of power and also checks that the memory is not corrupted. Then every device is initialized, and finally the BIOS is ready to control the boot.
Now the final stage of booting begins. The BIOS looks for a 512-byte image, called the MBR (master boot record) or boot sector, on one of the storage devices (hard disk, CD, floppy disk, pen drive, etc.) used for booting. The priority of the boot devices is set by the user in the BIOS settings.
Once the BIOS finds a boot sector, it loads it into memory and executes it. If it does not find a valid boot sector, it checks the next device, and this process continues until a valid boot sector is found. If the BIOS fails to find any valid boot sector, it generally stops execution and displays the error message "Disk BOOT Failure".
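The validity test the BIOS applies is concrete: a 512-byte boot sector counts as valid when its last two bytes are the signature 0x55 and 0xAA. The sketch below runs that same check against a disk image file; the file name disk.img is hypothetical, and a real BIOS reads sector 0 of the device directly rather than a file.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Sketch of the BIOS boot-sector validity test, applied to a disk image.
public class BootSectorCheck {
    public static boolean isValidBootSector(byte[] sector) {
        return sector.length == 512
                && (sector[510] & 0xFF) == 0x55
                && (sector[511] & 0xFF) == 0xAA;
    }

    public static void main(String[] args) throws IOException {
        byte[] image = Files.readAllBytes(Paths.get("disk.img")); // hypothetical image file
        byte[] mbr = java.util.Arrays.copyOf(image, 512);         // first 512 bytes = MBR
        if (isValidBootSector(mbr)) {
            System.out.println("Valid boot sector found; load it into memory and execute.");
        } else {
            System.out.println("Disk BOOT Failure"); // would try the next boot device
        }
    }
}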
Types of Booting :
Booting can be done in two ways:
1. Cold Booting
2. Warm Booting
1. Cold Booting :-
When the computer system has been turned off normally (by choosing the shut-down option) and is turned on again by pressing the power button, it checks that all devices and components are working properly, reads all the instructions from the ROM, and automatically loads the operating system into the system. This booting process is called cold booting.
2. Warm Booting :-
When a program encounters an error from which it cannot recover, the computer system is restarted (shut down and turned back on automatically) by choosing the Restart option or by pressing the CTRL + ALT + DEL keys at the same time. This booting process is called warm booting.
|
<urn:uuid:a235bdc4-7348-455b-aedd-290909c92c31>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.71875,
"fasttext_score": 0.5999745726585388,
"language": "en",
"language_score": 0.8859338164329529,
"url": "https://www.sumancomputerworld.com/2019/03/what-is-booting.html"
}
|
Static Keyword
The static keyword can be used with variables, methods, and inner classes. Anything declared as static is related to the class itself and not to any object.
Static Variable
A static variable is associated with the class itself. Only one copy of the variable is present in memory, making it memory-efficient. If an object of the class is created 10 times, all 10 objects share the same single copy, which is created when the class is first loaded.
public class Counter {
    private static int count = 0;
    private int nonStaticcount = 0;

    // Increments both the shared static counter and this object's own counter.
    public void incrementCounter() {
        count++;
        nonStaticcount++;
    }

    public static int getCount() {
        return count;
    }

    public int getNonStaticcount() {
        return nonStaticcount;
    }

    public static void main(String args[]) {
        Counter countObj1 = new Counter();
        Counter countObj2 = new Counter();
        countObj1.incrementCounter();
        countObj2.incrementCounter();
        System.out.println("Static count for Obj1: " + countObj1.getCount());
        System.out.println("NonStatic count for Obj1: " + countObj1.getNonStaticcount());
        System.out.println("Static count for Obj2: " + countObj2.getCount());
        System.out.println("NonStatic count for Obj2: " + countObj2.getNonStaticcount());
    }
}
Static count for Obj1: 2
NonStatic count for Obj1: 1
Static count for Obj2: 2
NonStatic count for Obj2: 1
Static Method
A static method can be accessed without creating any objects; it can be called using just the class name. A static method can directly access only static variables, not instance (non-static) variables, because it is not bound to any particular object. Static methods are normally accessed via the class reference, so there is no need to create an instance of the class. You can call them through an instance reference as well, but that behaves no differently from access via the class reference.
public class Test {
    public static void printMe() {
        System.out.println("Hello World");
    }
}
class MainClass {
    public static void main(String args[]) {
        Test.printMe(); // no Test object needed
    }
}
Hello World
Static Class
In Java, you can have a static class as an inner class. Just like other static members, nested classes belong to the class scope, so an inner static class can be accessed without having an object of the outer class.
public class InnerClass {
    static class StaticInner {
        static int i = 9;
        int no = 6;

        private void method() {}
        public void method1() {}
        static void method2() {}
        final void method3() {}
    }
}
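A short usage sketch, assuming both classes sit in the same package (the fields have default access): the static nested class can be created without ever creating an InnerClass object.

public class NestedDemo {
    public static void main(String args[]) {
        // No InnerClass instance is needed to create the static nested class.
        InnerClass.StaticInner inner = new InnerClass.StaticInner();
        System.out.println(InnerClass.StaticInner.i); // static field: prints 9
        System.out.println(inner.no);                 // instance field: prints 6
    }
}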
Static Block
Static blocks are portions of class initialization code wrapped with the static keyword. A static block runs only once, when the class is first loaded by the JVM, whereas the instance initializer block (the init {} block) runs every time an object of the class is created. The static block also runs before the instance initializer block.
public class LoadingBlocks {
    static {
        System.out.println("Inside static");
    }
    {
        System.out.println("Inside init");
    }
    public static void main(String args[]) {
        new LoadingBlocks();
        new LoadingBlocks();
        new LoadingBlocks();
    }
}
Inside static
Inside init
Inside init
Inside init
Recommended Reading
1. How java manages Memory?
2. Why is it advised to use hashcode and equals together?
3. Comparable and Comparator in java
4. How to create Singleton class in Java?
5. Difference between equals and ==?
6. When to use abstract class over interface?
7. Difference between final, finally and finalize
8. Why is it recommended to use Immutable objects in Java
|
<urn:uuid:3c0c061e-e154-46fb-8c35-c1031a4b6589>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.84375,
"fasttext_score": 0.6344777345657349,
"language": "en",
"language_score": 0.7745695114135742,
"url": "https://java-questions.com/static-keyword.html"
}
|
impact basin, Mercury
Caloris, prominent multiringed impact basin on Mercury. The ramparts of Caloris are about 1,550 km (960 miles) across. Its interior contains extensively ridged and fractured plains. The largest ridges are a few hundred kilometres long. More than 200 fractures comparable to the ridges in size radiate from Caloris’s centre.
Two types of terrain surround Caloris—the rim and the ejecta terrains. The rim is a ring of irregular mountains almost 3 km (2 miles) in height, the highest mountains yet seen on Mercury. A second, much smaller escarpment ring stands beyond the first. Smooth plains occupy the depressions between mountains. Beyond the outer escarpment is a zone of linear radial ridges and valleys that are partially filled by plains. Volcanism played a prominent role in forming many of these plains.
Caloris is one of the youngest large multiring basins. It probably was formed at the same time as the last giant basins on the Moon, about 3.9 billion years ago.
On the other side of the planet, exactly opposite Caloris, is a region of weirdly contorted terrain. It likely formed at the same time as the Caloris impact by the focusing of seismic waves from that event.
Clark R. Chapman
Your preference has been recorded
Step back in time with Britannica's First Edition!
Britannica First Edition
|
<urn:uuid:855bcdf1-a239-4345-976e-15da4bc90d7c>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.671875,
"fasttext_score": 0.050197482109069824,
"language": "en",
"language_score": 0.9265708923339844,
"url": "https://www.britannica.com/place/Caloris"
}
|
Javanese alphabet (Carakan)
The earliest known writing in Javanese dates from the 4th Century AD, at which time Javanese was written with the Pallava alphabet. By the 10th Century the Kawi alphabet, which developed from Pallava, had a distinct Javanese form.
For a period from the 15th century onwards, Javanese was also written with a version of the Arabic alphabet, called pegon (ڤَيݢَون).
By the 17th Century, the Javanese alphabet had developed into its current form. During the Japanese occupation of Indonesia between 1942 and 1945, the alphabet was prohibited.
Notable features
Used to write:
Javanese (baṣa Jawa), an Austronesian language spoken by about 80 million people in Indonesia and Suriname. In Indonesia Javanese is spoken in Java, particularly in central and east Java and on the north coast of West Java, and in Madura, Bali, Lombok, and the Sunda region of West Java. The Javanese alphabet can also be used to write Old Javanese.
Also used to write Sundanese and Madurese.
The Javanese alphabet
Akṣara Wyanyjana (Consonants)
Akṣara Carakan and Pasangan
Javanese consonants (Akṣara Carakan and Pasangan)
Akṣara murda consonants
Javanese Akṣara murda consonants
The pasangan (final consonants) are shown in red. ka, ta, pa, ga and ba are most commonly used. The others are rarely used.
Akṣara for writing Old Javanese
To write old Javanese some of the letters are aspirated. The arrangement of consonants is based on standard Sanskrit.
Old Javanese consonants
Akṣara Rekan (additional consonants)
Javanese Akṣara Rekan (additional consonants)
Vowels and vowel diacritics (Akṣara Swara & Saṇḍangan Swara)
Javanese vowels and vowel diacritics (Akṣara Swara)
Note: rê, rêu, lê, and lêu are also treated as consonants. So they have pasangan:
Javanese vowels and vowel diacritics (Akṣara Swara)
The long vowels (ā, êu, ī, ai, rêu, lêu, ū, and au) are no longer used in modern Javanese, but just for special purposes like writing old Javanese and transliterating foreign sounds.
Numerals (Wilangan)
Javanese numerals (Wilangan)
The first line of numbers is the native Javanese set; the second line is adapted from Sanskrit.
Punctuation (Pada)
Javanese punctuation (Pada)
Sample text in the Javanese alphabet
Text provided by Aditya Bayu, with corrections by Hafidh Ihromi and Arif Budiarto
(Article 1 of the Universal Declaration of Human Rights)
Another sample text in the Javanese alphabet (Lord's prayer)
Javanese sample text (Lord's Prayer)
Rama kaula ingkang wonten ing swarga. Wasta Sampeyan dadosa suci. Sajaman Sampeyan rawuha. Kars Sampean dadosa ying bumi kados ing swarga. Rejeki kaula kang sadinten-dinten sukani dinten puniki maring kaula. Ambi puntan maring kaula dosa kaula, kados kaula puntan maring satunggil-tunggil tiyang kang salah maring kaula. Ami sampun bekta kaula ing percoban. Tapi cuculaken kaula bari pada sang awon. Sabab sajamana ambi kowasa sarta kamukten Gusti kagunganipun dumugi ing awet. Amin.
Latin alphabet for Javanese
Javanese alphabet
Javanese pronunciation
Sample text in Javanese
Hear a recording of this text
(Article 1 of the Universal Declaration of Human Rights)
Arabic alphabet for Javanese (Pegon / ڤَيݢَون)
Arabic alphabet for Javanese (Pegon) / ڤَيݢَون
Download an alphabet charts for Javanese in Excel or PDF format
Information, corrections and additions provided by Wolfram Siegel, Nurrahim Dwi Saputra and Michael Peter Füstumum
Sample videos in Javanese
Information about Javanese | Phrases | Numbers | Family words | Tower of Babel | Learning materials
Online Javanese lessons
The Official Site of Akṣara Jawa - free fonts and a tutorial on how to write with the Javanese alphabet (in Javanese and Indonesian)
Javanese fonts
Malayo-Polynesian languages
Languages written with the Latin alphabet
Syllabic alphabets / abugidas
|
<urn:uuid:48ea8e37-6339-402c-85f8-44542b3bf5c4>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.796875,
"fasttext_score": 0.021230041980743408,
"language": "en",
"language_score": 0.7871272563934326,
"url": "https://omniglot.com/writing/javanese.htm"
}
|
Similar and Dissimilar Charges
Purpose: To compare the responses of objects with similar and dissimilar charges.
Summary: Students use a hand-crank Van de Graaff Generator to charge different objects to determine whether they attract or repel each other.
Question: Do objects with a similar charge attract or repel each other?
Method/Materials: Hand-crank Van de Graaff Generator, piece of Styrofoam packaging material, string, and glass rod.
Students should start by making sure the Van de Graaff generator is totally discharged by using the grounded discharge wand provided. Tie one end of the string around the Styrofoam ball (or packaging "peanut") and the other around the glass rod. Have each student determine what will happen to the ball when it comes in contact with the generator dome that has been charged. While one student turns the wheel to charge the dome of the generator, another student should hold the glass rod so that the ball is touching the dome.
After students have made their observations of what happens to the ball that comes into contact with the dome, allow them to make predictions about what will happen when the student holding the glass rod puts his hand near the ball. Will it be attracted or repelled by the student's hand? Encourage students to discuss their predictions with their classmates and provide a rationale for each prediction.
Students will note that the peanut will be repelled by the dome after a few seconds. Discuss what happened to the original charge on the ball and the subsequent charge from the dome. Why did they repel each other? Why was the ball attracted to the student's hand? Explain how electrostatics occurs when there is an exchange of charge on two or more different objects.
Required Equipment
Paul Hewitt"When showing electrostatics demonstrations, I quip with my students and ask who the Van de Graaff generator was named after. Then I whimsically answer 'Robert Generator,' which adds a bit of humor in class. I advise teachers do the same in my instructor manuals, which was with permission of Robert Van de Graff! One day a student asked if the device should have been named after "Robert Separator," for that's what it does. A Van de Graaff device is a separator; it separates negative charges from positive ones—what all charging devices do. So we keep learning, which adds to the delight of physics."
Paul Hewitt, former boxer, uranium prospector, sign painter, and cartoonist began college at the age of 28 and fell in love with physics. His name is synonymous with Conceptual Physics to physics educators everywhere. His textbook, the leading physics textbook for nonscientists since 1971, has changed the way physics is taught to both non-science and science majors as well.
Collin Wassilak
Share this
Leave a comment
Please note, comments must be approved before they are published
|
<urn:uuid:8cfacf1e-4eba-42b8-a522-16dd16c32702>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 4.1875,
"fasttext_score": 0.4461495876312256,
"language": "en",
"language_score": 0.9419142603874207,
"url": "https://www.arborsci.com/blogs/cool-labs/similar-and-dissimilar-charges"
}
|
History of Pamplona
In mediaeval times, the city grew as three distinct communities, each behind its own defensive walls and often in confrontation with one another:
First, what remained of the original Vascon and Roman settlements gradually came to form the city of La Navarrería. Its inhabitants, who were from Navarre; were mostly engaged in farming. The common language was Basque, the only Pre-Indo-European language still spoken in Europe.
Thanks to the Way of St James, many Franks started to come to Pamplona, lured by the advantages that the monarchs of Navarre offered as part of a repopulation policy. And so, the borough of San Cernin or San Saturnino was born. These new inhabitants were mostly craftsmen and merchants, and their chief language was Occitan.
Finally, immigrants from different parts of Navarre and other foreigners formed a third community, that of San Nicolás. As in the case of La Navarrería, its population both farmed the land and worked in trades. Each of these three communities was completely walled and separated from its neighbours by moats or ditches. Their churches were their defensive bastions. In 1276, in the War of La Navarrería, the communities of San Nicolás and San Cernin joined forces against La Navarrería, which was completely razed to the ground. Definitive peace did not come until 8 September 1423 when King Charles III the Noble enacted the Privilege of the Union: the three communities joined to form ‘a single university, a municipal district and indivisible community.’ The Jurería, now the City Hall, was built in what had been no man’s land, where the three boroughs met.
At the dawn of the 16th century, the Kingdom of Navarre was greatly coveted by the neighbouring crowns of Castile, Aragon and France. To make matters worse, infighting had begun which would eventually lead to civil war between contending lineages in Navarre. Charles III the Noble had created the title of Prince of Viana for his grandson, the future Charles IV, the son of Blanche of Navarre and John II of Aragon. On the death of the prince’s mother, however, John did not allow his son to reign.
As a result, two sides formed: the Agramonteses and the Beaumonteses. John II of Aragon remarried, this time wedding Juana Enríquez, and with her fathered Ferdinand the Catholic. When, years later, Ferdinand allied with the English crown against the French, the monarchs of Navarre chose to side with the latter. Given the situation, Ferdinand, with the support of a papal bull, sent his troops, with the Duke of Alba at their head, to seize the kingdom in 1512. The last monarchs of Navarre, John III and Catherine, Queen of Navarre, fled and moved the Court to their domains on the other side of the Pyrenees to try to recover the kingdom from there. After several attempts, a decisive battle took place in Noáin in June 1521 and the monarchs of Navarre were defeated. The city’s strategic position in relation to France meant that great lengths were gone to in order to fortify the city appropriately with its Renaissance walls and the Citadel
The 18th century was the city’s golden age. The Enlightenment and concern for new concepts such as social well-being led to substantial improvements in the city: the streets were cobbled, the sewer system was improved and public lighting arrived in the form of candles. But perhaps the most significant project consisted of providing the city with a water supply via the aqueduct of Noáin, designed by Ventura Rodriguez. As a result, Luis Paret y Alcázar, a painter to the Court, could create his emblematic Neoclassical fountains. The century also saw many people leave Navarre, some to the Court in Madrid and others to America. The latter were known as indianos. After thriving, many returned to the city and built magnificent residences for themselves and generations to come.
The 19th century was marked by war: the War of Independence (1808-1814), the Royalist War (1822-1823) and the Carlist Wars (1833-1840, 1872-1876). In 1841 Navarre ceased to be a kingdom, the Pacted Law demoting it to the status of province. This was also when the local bourgeoisie came into being and industrialisation took its first tentative steps. In 1860 the railway reached Pamplona. The 19th century was very important for cultural life. The city could boast the international success of both the violinist Pablo Sarasate and the Roncal-born tenor Julián Gayarre. Significant music institutions were created, such as the Orfeón Pamplonés, La Pamplonesa and Orquesta Santa Cecilia, Spain’s oldest orchestra.
The century also saw a great increase in the city’s population, inexorably at odds with its fortified layout. Overcrowding had turned Pamplona into an unhealthy city. The urgent construction of the first extension of the city called for the demolition in 1888 of the Citadel’s two interior bastions. Although the small neighbourhood created, Primer Ensanche, consisting of just 6 blocks, barely helped solve the housing problem, it did leave us some remarkable modernist buildings. Demolition of the Front of La Tejería, finally enabling construction of the second extension, Segundo Ensanche and expansion of the city to the south, was not approved until 1915. The Taconera Gardens and the Media Luna Park became the favourite places for recreation in the city.
Pamplona and its surrounding district gradually grew to become the city we know today, with a population of 203,000 (350,000 if the metropolitan area is included): a city with its sights firmly set on the future thanks to the significant industrial and service belt which surrounds it. The University of Navarre was founded in the 1950s and the Public University of Navarre in the 1980s. The city's medical and hospital services are also second to none, with the Hospital Complex of Navarre, which belongs to the Navarre Health Service – Osasunbidea, the University Clinic of Navarre and the Centre for Applied Medical Research (CIMA). Pamplona has a competitive secondary sector, chiefly driven by the automotive, pharmaceutical and renewable energy industries.
|
<urn:uuid:332e35d4-793c-4903-90e7-2db7f9a8bda0>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5,
"fasttext_score": 0.03913313150405884,
"language": "en",
"language_score": 0.9438192844390869,
"url": "https://www.pamplona.es/en/turismo/murallas/historiadepamplona"
}
|
Learning Exercise
The Circus in late 19th Century American History
The circus was once a cultural icon of the United States, providing entertainment for both large and small communities. This assignment will focus on how the circus fit into the 19th century culture of the United States and explain its importance to the American people.
Instructions: Read the following sections: Introduction; Circuses: P.T. Barnum, Ringling Brothers, Adam Forepaugh; Circus Posters, People, Animals, Acts, Music, Communication, Transportation, Venues, and Sounds of the Circus. Write a 5-7 page paper explaining the appeal of the circus to the American people in the 19th century and explain what the appeal of the circus tells us about the emerging American character before World War I.
Technical Notes
Software will have to be downloaded in order to view the film. The film is not required to complete the assignment.
United States Cultural History
Learning Objectives
To acquaint students with an important cultural icon of 19th century American life.
|
<urn:uuid:4d64abcd-acb7-4c19-9bcd-6e0a93c59127>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 4,
"fasttext_score": 0.9871835708618164,
"language": "en",
"language_score": 0.9025303721427917,
"url": "https://www.merlot.org/merlot/viewAssignment.htm?id=317667"
}
|
Dissolution of the United Arab Republic
A coup in Damascus led to the dissolution of the United Arab Republic.
The United Arab Republic was a sovereign state in the Middle East from 1958 to 1961. It was initially a political union between Egypt (including the occupied Gaza Strip) and Syria from 1958 until Syria seceded from the union after the 1961 Syrian coup d'état, leaving a rump state. Egypt continued to be known officially as the United Arab Republic until 1971.
Instead of a federation of two Arab peoples, as many Syrians had imagined, the UAR turned into a state completely dominated by Egyptians. Syrian political life was also diminished, as Nasser demanded that all political parties in Syria be dismantled. In the process, the strongly centralized Egyptian state imposed Nasser's socialist political and economic system on the weaker Syria, creating a backlash from Syrian business and army circles, which resulted in the Syrian coup of September 28, 1961, and the end of the UAR.
The immense increases in public sector control were accompanied by a push for centralization. In August 1961 Nasser abolished regional governments in favour of one central authority, which operated from Damascus from February through May and from Cairo for the rest of the year. As part of this centralization, Sarraj was relocated to Cairo, where he found himself with little real power. On September 15, 1961, Sarraj returned to Syria and, after meeting with Nasser and Amer, resigned from all his posts on September 26.
Without any close allies to watch over Syria, Nasser was unaware of the growing unrest of the military. On September 28 a group of officers staged a coup and declared Syria's independence from the UAR. Though the coup leaders were willing to renegotiate a union under terms they felt would put Syria on an equal footing with Egypt, Nasser refused such a compromise. He initially considered sending troops to overthrow the new regime, but chose not to once he was informed that the last of his allies in Syria had been defeated. In speeches that followed the coup, Nasser declared he would never give up his goal of an ultimate Arab union. However, he would never again achieve such a tangible victory toward this goal.
|
<urn:uuid:4e82509f-748c-48ff-ba20-52c0f25c62a2>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.84375,
"fasttext_score": 0.19919908046722412,
"language": "en",
"language_score": 0.9810205101966858,
"url": "https://www.merospark.com/spark/86/dissolution-of-the-united-arab-republic-september-8-1961/"
}
|
Glossary of Fencing Terms
The following is an explanation of some of the most common terms in fencing.
This is the basic move that brings you closer to your opponent.
Key Points to remember:
1)From On Guard, lift your front toe up.
2)Lift front foot and take a small step forward, landing on heel, then bringing toes down.
3)Lift back foot up, and step an equal distance (Do not drag your foot and keep feet at 90 degrees) returning to On Guard.
4)Maintain the same weight on each foot by not rocking the body or arms while advancing
5)Don't stand up or lean forward.
-Direct Parry
To be added soon...
Fencing Distance
The fencing measure, or distance, is the distance a fencer keeps in relation to his opponent. It is such that he cannot be hit unless his opponent lunges fully at him. It is maintained by using the Advance and the Retreat.
-Holding the Foil
Holding the foil is like holding a bird: don't squeeze it too hard, or you'll tire your muscles; don't hold it too lightly, or it may be knocked from your hands.
Key Points to Remember:
1)Thumb and index finger are the manipulators
2)Other fingers are just helpers
3)Thumb on top on foil, and foil is resting on second segment of index finger
4)Curve of the foil goes into your palm
5)Pommel resting next to wrist
-Lines of Defense
To be added soon...
-The Lunge
The basic form of attack after the simple thrust
Key Points to Remember:
1)Start by moving the weapon hand forward while lifting the toe of the front foot
2)Accelerate the hand while driving off the rear leg
3)Lead with the heal of the front foot
4)Throw the rear arm back parallel to the rear leg for momentum & balance
1)Weapon Arm in line with your shoulder
2)The trunk of your body should remain nearly straight up, not tilted left or right
3)Rear leg straight
4)Rear foot flat on the floor, still at 90 degrees to front foot
5)Front thigh parallel to the floor
6)Front knee directly over the base of the toes
-On Guard (En Garde in French)
This is the basic fencing stance (the stance you are in right at the beginning of a bout, and during a bout when no one is moving).
Key Points to Remember:
1)Front toes pointing straight forward towards your opponent (right foot for right handers, left foot for lefties)
2)Back foot is about a foot length behind front foot, with both heels in the same line
3)Back toes pointing at 90 degrees to front toes
4)Knees slightly bent
5)Torso is straight up with shoulders relaxed, do not lean forward, backwards, or to either side.
6)Sword arm is resting at your side in a position so that the foil tip is pointing at a valid target area (torso of opponent) and your hand is about chest high.
7)Sword arm's elbow is about a hand's distance in front of your side.
8)Back arm is held so that the elbow is at the same level as your front elbow, and your back hand is resting about shoulder high.
9)Head and chin up, looking at opponents upper body.
- The Retreat
The basic movement used to increase the distance between you and your opponent. The moves are basically the opposite of the advance.
Key Points to Remember:
1)Lift back foot and take a small step back (do not drag foot and keep feet at 90 degrees)
2)Lift front toe up
3)Lift front foot, and step an equal distance back, landing on heel, then bringing toes down, bringing you back to On Guard.
4)Maintain the same weight on each foot by not rocking the body or arms while retreating
5)Don't stand up or lean backwards.
Rules of Fencing
Key Points to Remember:
1)The foil is a thrusting weapon, points only awarded for hits with the tip
2)The valid target area is the torso (not including neck, but including groin)
3)Points are only awarded to the fencer with "RIGHT OF WAY"
4)No points are awarded for simultaneous touches
5)In non-electric (dry) fencing, points are only awarded if there is a slight bend in the blade
6)Most fencing bouts are for 5 touches, within 6 minutes.
7)You must wear proper protective equipment, including a plastron, jacket, mask, and a glove.
Don't forget to salute your opponent and the director before fencing, as well as shaking your opponent's hand after the match is completed.
Rule Infractions
Key Points to Remember:
1)Loss of balance
2)Body contact (corps-a-corps)
3)Turning the back and bringing rear shoulder forward
4)Covering the target area with unarmed hand, or catching blade
The 3 weapons and target areas: Foil, Epee, and Sabre.
The Fencing Strip (or piste in French) is 14 meters long (~46 feet) and 2 meters wide (~6.5 feet). On Guard lines are 2 meters from the center line and Warning lines are 5 meters from the center. The last 2 meters on each end of the strip should be marked differently.
|
<urn:uuid:5251bb56-bb4d-46ab-b021-6dda52a66689>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.515625,
"fasttext_score": 0.04533106088638306,
"language": "en",
"language_score": 0.8986193537712097,
"url": "https://owd.tcnj.edu/~fencing/glossary.htm"
}
|
The Transition from Old to New South
The rise and industrialization of the South began with the end of the Civil War. This aided in the transition from Old to New South, from a time of poverty and slave labor to a more progressive time. The decline of the Old South was often unaccepted and ignored by southerners as they tried to cling to their past ways. Faulkner highlights the cultural shift from Old to New South through character relationships and personalities in his short stories “A Rose for Emily,” “That Evening Sun,” and “Red Leaves.”
The main character in William Faulkner’s story “A Rose for Emily,” Miss Emily, is a representation of the Old South. While she is still alive, the townspeople have a certain respect for her because she has been there so long; they do not feel a need to change what has always been. Nevertheless, once she dies what is left of her, such as her house, is a disgrace to the town. “Only Miss Emily’s house was left, lifting its stubborn and coquettish decay above the cotton wagons and the gasoline pumps-an eyesore among eyesores... Alive, Miss Emily had been a tradition, a duty, and a care; a sort of hereditary obligation upon the town” (Faulkner, “A Rose for Emily” 119). In the same way, the people of the South followed tradition in their lifestyles. The Southerners were brought up with certain ideas and actions engrained in their minds, and they did not realize the shame behind what they did. After the transition to New Southern ways, however, the Southerners easily saw the disgrace behind these traditions. The inability to leave the past behind is a reoccurring theme in both the South and in “A Rose for Emily.” “Drawing on the tradition of Gothic literature in America, particularly Southern Gothic, the story uses grotesque imagery an... ... middle of paper ...
...tory Criticism. By Jelena Krstovic. Vol. 92. Detroit: Gale, 2006. 86-93. Literature Criticism Online. Web. 31 Mar. 2010.
Kazin, Alfred. “Old Boys, Mostly American: William Faulkner, The Short Stories.” Contemporaries (1962): 154-58. Rpt. in Short Story Criticism. Ed. Laurie Lanzen Harris and Sheila Fitzgerald. Vol. 1. Detroit: Gale, 1988. 161-2. Literature Criticism Online. Web. 31 Mar. 2010.
Parker, Robert Dale. “Red Slippers and Cottonmouth Moccasins: White Anxieties in Faulkner’s Indian Stories.” Faulkner Journal 18.1-2 (2002-03): 81-99. Rpt. in Short Story Criticism. Ed. Jelena Kristovic. Vol. 92. Detroit: Gale, 2006. 136-46. Literature Criticism Online. Web. 31 Mar. 2010.
"William Faulkner (1897-1962)." Short Story Criticism. Ed. Jelena Krstovic. Vol. 97. Detroit: Thomson Gale, 2007. 1-3. Literature Criticism Online. Gale. Hempfield High School. 31 March 2010.
|
<urn:uuid:cd150396-f250-4e15-a9f6-a5846bee29d6>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5,
"fasttext_score": 0.019831061363220215,
"language": "en",
"language_score": 0.9301934242248535,
"url": "https://www.123helpme.com/essay/The-Transition-from-Old-to-New-South-196422"
}
|
Logo der Bayerischen Akademie der Wissenschaften
Repertorium „Geschichtsquellen des deutschen Mittelalters“
Which Sources does the Repertorium include?
In the words of the still valid definition of historian Paul Kirn (1890-1965), historical sources are „all texts, objects or facts from which knowledge of the past can be gained”.
To satisfy this definition, our Repertorium would have to encompass an almost unlimited number of historical source materials, including archaeological finds, texts, art objects of all sorts, place names, and proverbs. Unfortunately, the scale and diversity of such an undertaking exceeds the resources of a single research project. We are thus obliged to significantly limit the scope of our coverage:
Only Texts
Firstly, the Repertorium is restricted to written source material. Our texts are almost exclusively transmitted in manuscripts, such as the Carolingian Annals of St. Emmeram in Regensburg, written 800/900 AD, depicted here to the right (see the corresponding Repertorium entry in:
www.geschichtsquellen.de (Opus 368).
Historical sources first began to appear in printed form in the late 15th century after the invention of the printing press. However, printing was only economically viable for works of general interest: This was rarely the case with historical accounts. A very beautiful example of a printed historical work is the so-called “Schedelsche Weltchronik” of 1493 depicted to the left,
cf. www.geschichtsquellen.de (Opus 4188).
Works of historical remembrance
Secondly, the Repertorium is mainly concerned with texts that were initially conceived and written with the intent of preserving historical remembrance, i.e. historiographical works in a narrower sense. Besides these, the Repertorium also encompasses texts such as letters, political treatises, legal texts, etc. By contrast, formalised documents such as charters and rent-rolls, inscriptions, and texts with a predominantly serial character or of purely religious or literary content are generally not included.
Written Between 750 and 1500
Thirdly, the Repertorium documents the Medieval history of the Holy Roman Empire with an approximate time-frame from 750 AD to 1500 AD, beginning with the rise to power of the Carolingians in the Frankish realm and ending with the reign of Emperor Maximilian I (†1519). Correspondingly, the Repertorium encompasses historiographical source material from the entire Frankish Realm until the end of the ninth century (i.e. including the later French Kingdom) and thereafter only from the East Frankish and German regions.
With the focus on the Holy Roman Empire
Finally, the focus on the Medieval Holy Roman Empire also delineates the geographical base of our source material. This area was considerably larger than that of the present-day Federal Republic of Germany, including the territories of the Netherlands and Belgium, Lorraine and Alsace, Switzerland, Austria, Bohemia, Silesia, and Pomerania. External sources are, however, taken into account if they are closely connected to German history, such as texts on the Teutonic Order in the Baltic lands, German kings in Italy, or the relationship between the Empire and the Papacy.
|
<urn:uuid:7f59c34b-3e4b-47ee-8f5e-fbf2e6a7f197>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.578125,
"fasttext_score": 0.25565481185913086,
"language": "en",
"language_score": 0.9212047457695007,
"url": "http://geschichtsquellen.badw.de/en/geschichtsquellen.html"
}
|
Hulton Archive/Getty Images
(1797–1878). One of the first great American scientists after Benjamin Franklin, Joseph Henry was responsible for numerous inventions and discovered several major principles of electromagnetism, including the oscillatory nature of electric discharge and self-inductance, an important phenomenon in electronic circuitry.
Joseph Henry was born in Albany, N.Y., on Dec. 17, 1797. He came from a poor family and had little schooling. In 1829, while working as a schoolteacher at the Albany Academy, he developed a method of greatly increasing the power of an electromagnet, and he went on to construct one of the first electric motors.
Although Michael Faraday is generally given credit for discovering electromagnetic induction in 1831, Henry had observed the phenomenon a year earlier (see Faraday, Michael; electricity). In 1831, before he assisted Samuel F.B. Morse in the development of the telegraph, Henry built and successfully operated a telegraph of his own design (see Morse, Samuel F.B.). Henry never patented any of his many devices because he believed that the discoveries of science were for the common benefit of all humanity.
Henry became a professor at the College of New Jersey (later Princeton University) in 1832. He continued his researches and discovered the laws on which the transformer is based. He conducted an experiment that was apparently the first use of radio waves across a distance. He also showed that sunspots radiate less heat than the general solar surface.
In 1846 Henry became the first secretary of the Smithsonian Institution in Washington, D.C., where he initiated a nationwide weather reporting system. He was a primary organizer of the National Academy of Science and its second president. He died in Washington, D.C., on May 13, 1878. In 1892 his name was given to the unit of electrical inductance, the henry.
|
<urn:uuid:f32293e0-1956-4b7d-845a-3292641f45df>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5625,
"fasttext_score": 0.042741477489471436,
"language": "en",
"language_score": 0.9747093319892883,
"url": "https://kids.britannica.com/students/article/Joseph-Henry/274842"
}
|
What is The National Flag of El Salvador?
The Flag of El Salvador was first adopted on September 4, 1908. It has three horizontal stripes. The blue stripes of the flag represent the two oceans that border Central America, the Atlantic and the Pacific.
The central white stripe symbolizes peace. The coat of arms shows a triangle which represents equality and the three branches of El Salvador‘s government.
Inside the triangle are five volcanoes which symbolize the five former members of the federation, flanked by the blue of the ocean and sea.
The triangle contains symbols of liberty, ideals of the people and peace, which are represented by a red cap, golden rays, and the rainbow.
|
<urn:uuid:45f09c4f-7d37-4106-a372-bf3d64ec195e>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.53125,
"fasttext_score": 0.27388930320739746,
"language": "en",
"language_score": 0.9155039191246033,
"url": "https://whatsanswer.com/what-is-the-national-flag-of-el-salvador/"
}
|
What Are the Four Movements in a Symphony?
A symphony is divided into four movements; the first movement is usually fast, the second one is slow, the third is medium, and the fourth movement is fast. This pace is intended to keep the listener invested and interested in the progression of the music.
The four symphony movements are classified according to rhythm, key, tempo and harmonization. They include an opening sonata or allegro, a slow movement called adagio, a minuet with trio, and an allegro, sonata or rondo.
A symphony typically tells a story. It's structured like this to create a narrative arc, with an attractive, interesting beginning and a slower, romantic middle section that's followed by a medium part that brings the story to its peak and leads to a fast ending.
|
<urn:uuid:488aac33-e218-48c2-9d1c-71696b1f8d38>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5625,
"fasttext_score": 0.5507358908653259,
"language": "en",
"language_score": 0.9459728002548218,
"url": "https://www.reference.com/world-view/four-movements-symphony-1b49d8f0d9869fcc"
}
|
Continuous & Normal Distributions in Business: Uses & Examples
Instructor: Scott Tuning
When an organization seeks to analyze data for the purpose of taking action, the data must meet certain criteria. This lesson explores one of these criteria called a normal distribution.
Assessing Your Market
How would the people around you react if you asked them a question like, ''Do you think that we should expel all illegal aliens from the U.S. by 2025?'' You're likely to get quite a few answers depending on where you are and who you're talking with.
If you ask some UC Berkeley students this question, you're likely to get answers that are very different from the ones from a group of 40-something, white males at the annual meeting of the National Rifle Association.
Businesses (and politicians) need accurate data about the market in which they operate so that they can meet the needs of their customers. Analyzing the distribution of that data helps ensure that business decisions are not made based on a group that doesn't actually represent the larger group as a whole.
When researchers choose a group to study, the selected group is called a sample and the larger group that the sample should represent is called a population. In the example, the college students and the NRA members are both samples and everyone in the United States is the population.
Continuous Distribution
For the major commercial airlines, fuel is often the most significant cost of doing business. Most airlines speculate and purchase fuel months in advance, hoping to secure as much fuel as they can at the lowest possible price.
The price of fuel is an example of a continuous distribution of data because it is never a fixed value. Analyzing continuous distributions, airlines look at industry trends over time in order to identify the best times to buy fuel. Using a continuous distribution in a statistical algorithm is a means of reducing the risk of price volatility.
The fact that fuel prices are constantly changing makes the price of fuel a continuous distribution.
The continuous distribution is essentially the price of fuel at any given moment in time. It is continuous because the price can take on any value in a range rather than a fixed set of steps. However, if we said that there were 1,000 gallons of fuel in a tank that sells for $5.00 per gallon, the 'gallon' becomes discrete, since there are only 1,000 gallons of fuel. If airline A buys 200 gallons, there are only 800 gallons left.
Normal Distribution
Now, it's very important for businesses to make sure their data is normally distributed, in other words, that it follows the familiar bell-shaped pattern and is verified to represent the population that is being studied.
Between 2000 and 2009, the price of jet fuel rose more than 260%, but toward the end of the decade, the price of jet fuel plunged as crude oil prices plummeted. If you were a financial executive at a commercial airline, it would be important to understand the impact of this short-term drop across the long-term backdrop.
If you were to create a running average of jet fuel prices year-over-year, it might look something like the following.
A plot of jet fuel prices over time.
You can see that the trend-line remains accurate despite the 2008-2009 price drop.
Imagine how this would change without the longer trend displayed. If an airline projected the upcoming year's costs based only on the year before, there would be years in which the airline lost tens of millions of dollars. These losses would be the result of calculations computing a simple average using a very limited data set that did not accurately characterize the price patterns.
An occurrence like this is a poignant example of why ensuring normal distributions is so important when making evidence-based decisions. The most common way to visually depict data that is normally distributed is using what is commonly referred to as the bell curve.
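To make this concrete, here is a minimal sketch in Python with NumPy. Everything in it is invented for illustration (the trend, the noise level, and the size of the price collapse are assumptions, not real fuel data); the point is only that an estimate built from the last twelve months alone is dragged down by a short-lived collapse, while a longer trailing average is not.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical monthly jet fuel prices (USD/gal): a slow upward trend
# plus random noise over ten years of data.
months = np.arange(120)
prices = 1.0 + 0.02 * months + rng.normal(0.0, 0.15, size=120)

# Simulate a short-lived, 2008-style price collapse in the final year.
prices[108:] -= 0.8

# Projecting next year's cost from only the most recent 12 months
# is skewed by the collapse...
naive_estimate = prices[-12:].mean()

# ...while a 36-month trailing average smooths the anomaly out.
window = np.ones(36) / 36
trailing_avg = np.convolve(prices, window, mode="valid")

print(f"last-12-month average:     ${naive_estimate:.2f}/gal")
print(f"36-month trailing average: ${trailing_avg[-1]:.2f}/gal")
```

Run it with a few different seeds and the gap between the two estimates persists, which is exactly the mistake that plotting the longer trend-line protects against.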
|
<urn:uuid:fe56959f-eead-432e-a243-d91032554042>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5625,
"fasttext_score": 0.7178043127059937,
"language": "en",
"language_score": 0.9636659026145935,
"url": "https://study.com/academy/lesson/continuous-normal-distributions-in-business-uses-examples.html"
}
|
• Ronald
Short Zampoña Manual
The Zampoña
The zampoña is a wind instrument typical from the Pre-Hispanic cultures from South America. It is considered one of the first musical instruments in human history.
There is no agreement on the origins of the word “zampoña,” but it is known that the name derives from Spanish. The original names of the zampoña are “antara” (in the Quechua language) and “siku” (in the Aimara language). The Pre-Hispanic Inca culture usually constructed their zampoñas with bamboo, reed, ceramic, wood, and bones.
In Peru, zampoñas were an important part of the cultural and religious life of Pre-Hispanic cultures such as the Mochicas, the Nascas, and the Waris. Archeologists have found more than 100 variations of zampoñas and similar wind instruments, made of a variety of materials, in archeological excavations all across Peru.
Today, zampoñas are still an important part of Peruvian culture. Commonly known as Sikus in the highlands of Peru, they are used in religious festivals, parties, and rites related to the Inca cosmology. The performer of the siku is known as a sikuri. There are many different types of sikus, and they are usually performed in ensemble.
Structure of the Zampoña
The zampoña is constructed by aligning a series of hollow tubes of different sizes. Each tube produces a sound depending on its size. Larger tubes produce a lower tone and shorter tubes produce a higher tone. There are different types of zampoñas that vary in size and distribution depending on the musician's needs.
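Why longer tubes sound lower can be sketched with the physics of a stopped pipe. Assuming an ideal pipe closed at the bottom and a speed of sound of roughly 343 m/s (real tubes also need a small end correction, so treat this only as an approximation), the fundamental frequency is:

```latex
\[
f \approx \frac{v}{4L},
\qquad \text{e.g. } L = 0.20~\text{m}
\;\Rightarrow\;
f \approx \frac{343~\text{m/s}}{4 \times 0.20~\text{m}} \approx 429~\text{Hz}
\]
```

Halving the tube length roughly doubles the frequency (one octave up), which is why each row of the instrument steps down in length as the pitch rises.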
The zampoña is structured with two rows of pipes. The first row of pipes has 6 tubes and is known as “Ira” (which means male in Aimara). The second row of pipes has 7 tubes and is known as “Arca” (which means female in Aimara). Both rows structure a complete zampoña instrument with an alternate progression.
The diagrams presented in this manual portray a traditional zampoña. However, as mentioned before, there are many different types of zampoñas that vary in size and number of rows. This diagram presents the two parts of a traditional G major zampoña:
Holding the Zampoña
To hold the zampoña, the Ira (6 tubes row) always has to be in the front or closer to the mouth and the Arca (7 tubes row) always is positioned behind.
The traditional distribution of the zampoña has the long tubes (low tones) on the right and the short tubes (high tones) on the left. The right hand of the musician must hold the long tubes of the instrument and the left hand must hold the short tubes.
Since there are different sizes of zampoñas, the way to hold it varies depending on the size. A small zampoña (chilli) can be held with just one hand or with both hands and a big zampoña like a Toyo is always held with both hands.
The position of the body also plays a critical role when playing the zampoña. It is important to keep the upper body straight, which allows the air to flow smoothly from the diaphragm to the mouth.
How to blow the zampoña
The zampoña is meant to be played in an alternating vertical pattern following the musical scale of the instrument. In order to produce sound from the tubes, place the tube slightly under your lower lip. As you learn to blow the zampoña, you can slightly press the pipe opening against the lower part of your lips; this will allow you to control the air flow better. Then, tighten your lips, creating a fine embouchure that allows you to control the air flow. It is important to direct the air flow into the pipe and prevent air from escaping outside the tubes.
The attack is the movement of the tongue that helps drive the air flow into the tube when playing a note that starts a phrase. To perform the attack, move your tongue from its idle state rapidly towards the mouth opening, touching the interior part of the lip (see diagram). The tongue will help you direct the air inside the pipe. Then, the tongue returns to its natural position. A way to do this is by pronouncing the letter “T” or “D” while creating the embouchure and performing the attack.
The zampoña is meant to be blown from the diaphragm. Take a breath through your nose, trying to raise your diaphragm, and then push that air out in a controlled manner, directing it inside the tube by using the attack. It is important that you practice controlling the air flow and its direction so you can produce sound from the tubes properly. The embouchure usually has to be tighter in the higher tones (shorter pipes) than in the low tones, so you will have to practice tightening your embouchure rapidly in order to perform.
Performing the attack
Since the zampoña is played in a vertical, alternating manner, the most important thing to do when learning to play it is to develop the skill of jumping between pipes while playing them properly. There are some exercises that will allow you to become familiar with your instrument and skillful at playing it. In order to perform those exercises, I recommend numbering the tubes of your zampoña as shown in the diagrams (you can also include the notes so you can become familiar with the note that each pipe plays).
• Important: The number of pipes and note distribution may vary depending on your zampoña
Exercise 1
This exercise consists of getting familiar with the double-row distribution of the zampoña. Start playing from the left side. First, you will play the number 3 pipe of the Ira. Then, you will go up to the number 4 of the Arca, and then come back down to the number 4 of the Ira. Continue this progression until you reach 6. The second progression consists of going in reverse. Practice this exercise multiple times. It is more important that you learn to blow the pipes properly and get a clear sound from them than to do it fast. Take your time and try to increase your speed progressively.
Exercise 2
This exercise consists of getting familiar with playing progressions on the same row and jumping to the other one. Again, try to focus on playing each note properly rather than playing fast. As you get familiar with the embouchure and the instrument, you can increase your speed. Try to finish each progression one by one. Then, you can try to mix them or play them in sequence.
|
<urn:uuid:19463b5a-84b5-488c-921d-133fa17acfd1>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.96875,
"fasttext_score": 0.06953036785125732,
"language": "en",
"language_score": 0.9263134598731995,
"url": "https://www.thepanflutestore.com/post/2019/06/17/short-zampo-c3-b1a-manual"
}
|
Area of a Triangle Using Determinants
Additional Resources:
In this lesson, we will learn how to find the area of a triangle given three vertices using determinants. This is done by labeling all the points as (x1, y1), (x2, y2), and (x3, y3). We then use this to set up a matrix where each row contains a given x as the first entry, a given y as the second entry, and a 1 as the final entry. The determinant is found and then multiplied by (+/-) 1/2. This will give us the area for our given triangle in square units. We will also prove that the order in which the points are chosen will not change our answer.
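As a quick worked example with hypothetical points (chosen for this page, not taken from the linked lessons), let the vertices be (1, 2), (4, 6), and (7, 2):

```latex
\[
\text{Area}
= \pm\tfrac{1}{2}
\begin{vmatrix}
x_1 & y_1 & 1 \\
x_2 & y_2 & 1 \\
x_3 & y_3 & 1
\end{vmatrix}
= \pm\tfrac{1}{2}
\begin{vmatrix}
1 & 2 & 1 \\
4 & 6 & 1 \\
7 & 2 & 1
\end{vmatrix}
= \pm\tfrac{1}{2}\bigl(1(6-2) - 2(4-7) + 1(8-42)\bigr)
= \pm\tfrac{1}{2}(-24)
= 12
\]
```

Listing the same points in a different order at most flips the sign of the determinant, which the ±1/2 factor absorbs, so the area comes out to 12 square units either way.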
Area of a Triangle Using Determinants:
YouTube - Video YouTube - Video
Text Lessons:
Richland EDU - Text Lesson Math Planet - Text Lesson
|
<urn:uuid:390131cb-9fa0-410b-84a9-a71e0c164762>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.953125,
"fasttext_score": 0.0322914719581604,
"language": "en",
"language_score": 0.874556839466095,
"url": "https://www.greenemath.com/College_Algebra/135/Finding-the-Area-of-a-Triangle-Using-Determinants.html"
}
|
Class Copy
Predator-Prey Simulation
To investigate how populations are affected by predator-prey relationships over multiple generations. Rubric
(1) lynx (6cm x 6cm square), 200 rabbits (2cm x 2cm square), 1 sheet of green paper (43 x 28 cm), Data Sheet, Graph Paper, Ruler, Pen or Pencil
1. Start with groups of 4 people: 1 data recorder, 1 lynx manager, 2 rabbit managers.
2. The meadow is the playing field. Keep all animals within the boundaries of the meadow.
3. Start the game with 3 rabbits evenly spaced in the meadow and 1 lynx.
4. To play a lynx, you must toss the lynx from outside the meadow boundaries.
5. All animals that are removed from the playing field are returned to the reserve pile.
6. Rabbit Managers:
1. Start with 3 rabbits evenly distributed on the playing field. If a rabbit card is touched by a tossed lynx card, the rabbit dies and is removed from the meadow. Return the rabbit to the reserve pile.
2. After all lynx have been tossed for the round, the remaining rabbits reproduce and are doubled. (i.e. if 4 rabbits remain, add 4 more rabbits).
3. Be sure to evenly distribute the rabbits over the meadow
4. The meadow has a carrying capacity of 200 rabbits. If the rabbit population doubles over 200 animals, the rabbits over 200 die of starvation. You can only have a maximum of 200 rabbits at one time.
5. If all rabbits die in a round, begin the next round with 3 rabbits that have migrated in from another area.
7. Lynx Managers:
1. Start the game with 1 lynx. Toss the lynx into the playing field. If the lynx lands on a rabbit, the rabbit is considered eaten. Remove the rabbit from the playing field. Use the following table for the requirements of the lynx’s survival.
Rabbits Caught Action Taken
0-2 Lynx Dies, new lynx migrates into area to start the next round
3-5 The lynx survives for the next round
6-8 The lynx reproduces 1 offspring, use 2 lynx in the next round
9-11 The lynx reproduces 2 offspring
12-14 The lynx reproduces 3 offspring
15-17 The lynx reproduces 4 offspring
For every 3 rabbits and so on The lynx reproduces an additional offspring
2. When you have more than 1 lynx playing at a time, toss the same card over and over until all lynx have been represented. Remove rabbits eaten after each toss.
3. If the lynx catches fewer than 3 rabbits, the lynx dies.
4. If all lynx die in a round, then begin the next round with 1 lynx that has migrated in from another area.
1. Record all data after each round until you have completed 25 rounds. To fill out the chart:
1. For Generation 1, start with 3 rabbits in the 1st column, 1 lynx in the 2nd column. After tossing the lynx into the meadow, record how many rabbits were eaten. In the following columns record the number of lynx that died, then the number of lynx that lived, any offspring of surviving lynx and the number of rabbits remaining.
2. Double the number of rabbits remaining and place that number into the 1st column of Generation 2.
3. Add surviving lynx to new baby lynx and record the number in the lynx column of generation 2
4. Continue recording in this manner until all generations have been completed.
Generation Rabbits Lynx Rabbits Caught Lynx Starved Lynx Surviving New Baby Lynx Rabbits Left
1 3 1 1 1 0 0 2
2 4 1
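If your class wants to check a tally sheet or extend the game beyond 25 rounds, the rules above are easy to simulate. Below is a minimal Python sketch; the number of rabbits each tossed lynx catches is modeled as a random draw from 0 to 6, which is an assumption standing in for where the card actually lands, so your classroom numbers will differ.

```python
import random

def lynx_fate(caught):
    """Survival table from the procedure: fewer than 3 rabbits means the
    lynx starves; 3-5 means it survives; every 3 rabbits beyond that
    yields one offspring (6-8 -> 1, 9-11 -> 2, and so on)."""
    if caught < 3:
        return 0, 0                      # lynx dies, no offspring
    return 1, max(0, (caught - 3) // 3)  # survives, maybe reproduces

def run_simulation(generations=25, capacity=200, seed=1):
    rng = random.Random(seed)
    rabbits, lynx = 3, 1
    print("Gen Rabbits Lynx Caught Starved Survived Babies Left")
    for gen in range(1, generations + 1):
        remaining = rabbits
        survived = babies = caught_total = 0
        for _ in range(lynx):
            # Assumption: each toss catches 0-6 rabbits, limited by
            # how many are still on the meadow.
            caught = min(rng.randint(0, 6), remaining)
            remaining -= caught
            caught_total += caught
            s, b = lynx_fate(caught)
            survived += s
            babies += b
        starved = lynx - survived
        print(f"{gen:3d} {rabbits:7d} {lynx:4d} {caught_total:6d} "
              f"{starved:7d} {survived:8d} {babies:6d} {remaining:4d}")
        # Remaining rabbits double, capped at the carrying capacity;
        # if a population dies out, migrants restart it (3 rabbits, 1 lynx).
        rabbits = min(remaining * 2, capacity) or 3
        lynx = (survived + babies) or 1

run_simulation()
```

Printing one row per generation mirrors the data table, so the output can be graphed the same way as the classroom results in the Analysis section.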
1. During the first round, it is probable that the lynx dies. How do you explain this?
2. Why is it important to continue the game for 25 rounds?
3. Explain why there is a maximum limit of 200 rabbits.
1. Graph your data (line graph) using the number of individuals as the dependent variable and the number of generations as the independent variable. Place both lynx and rabbits on the same graph. Study your graph lines for the two populations.
2. How are the lynx and rabbit populations related to each other? How do the sizes of each population affect one another?
3. Under what modifications can both populations continue to exist indefinitely?
4. What do you think would happen if you introduced an additional predator, such as a coyote that requires fewer rabbits to reproduce offspring?
5. What would happen if you introduced another type of rabbit, one that could run faster and escape its predators? (In the game, you could toss a coin for each rabbit caught to see if the rabbit escapes)
6. In the above question, which type of rabbit would predominate after many generations of predation?
1. How does this simulation relate to the human population and its interaction with its environment? Are there any predator-prey relationships?
2. What predator-prey relationships have you observed in your community?
3. If a population biologist visited your classroom, what are some questions about the human population you could ask?
Data table
Leonard, W.H. and Penick, J.E. 2003. Biology- A Community Context. Glencoe McGraw-Hill.
Predator-Prey Simulation Data Table
1 3 1
|
<urn:uuid:5a19444b-527a-42ff-98ad-b2782de3c771>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 4.0625,
"fasttext_score": 0.9956932067871094,
"language": "en",
"language_score": 0.8382053971290588,
"url": "http://ricksci.com/eco/ecoa_predator_prey_lynx_rabbit_revised.htm"
}
|
African women in history
This is a tribute to influential women in our history who have left their marks in their respective fields. These women were great. Their courage surpassed their fear, and they held steadfast in their fight for justice and equality for the human race.
The names of African women who made history are relatively unknown, or do not come readily to mind as those of African male heroes do. They should not become forgotten in the annals of Pan-African history. This article is in honor of the powerful and great women who helped shape the future of Africans.
Funmilayo Ransome-Kuti
Funmilayo Ransome-Kuti (1900–1978) was a leading activist during Nigerian women’s anti-colonial struggles. She founded the Abeokuta Women’s Union, one of the most impressive women’s organizations of the twentieth century (with a membership estimated to have reached up to 20,000 women) which fought to protect and further the rights of women.
Taytu Betul
Huda Shaarawi
Huda Shaarawi (1879–1947) was a pioneer Egyptian feminist leader and nationalist. She helped to organize Mubarrat Muhammad Ali, a women’s social service organization, in 1909 and the Intellectual Association of Egyptian Women in 1914. Her feminist activism was complemented by her involvement in Egypt’s nationalist struggle. She established the Egyptian Feminist Union in 1923, was founding president of the north of Africa Feminist Union and spoke widely on women’s issues and concerns throughout the Middle East and Africa.
Wangari Maathai
Nehanda Charwe Nyakasikana
Nzinga Mbandi
Njinga Mbandi (1581–1663), Queen of Ndongo and Matamba, defined much of the history of seventeenth-century Angola. A deft diplomat, skilful negotiator and formidable tactician, Njinga resisted Portugal’s colonial designs tenaciously until her death in 1663.
Yaa Asantewaa
The women soldiers of Dahomey
Aoua Keïta
Aoua Keita (1912 – 1980) was an award winning Malian independence activist and writer. Born in Bamako, she was admitted into Bamako’s first girls’ school in 1923. She later obtained a diploma in midwifery. She was a member of the African Democratic Rally (RDA) In 1959 she became a Member of Parliament, the first woman in Africa to be elected to the assembly governing her country.
Angie Elisabeth Brooks
Cesária Évora
Miriam Makeba
Queen Nanny
Queen Nanny was an eighteenth-century leader, warrior and spiritual adviser. Born in 1686 in present-day Ghana, Western Africa, she was sent as a slave to Jamaica, where she became leader of the Maroons, a group of runaway Jamaican slaves. She is believed to have led attacks against British troops and freed hundreds of slaves. She was also known as a powerful Obeah practitioner of folk magic and religion. She continues her legacy with her portrait gracing the Jamaican $500 bank note.
Mulatto Solitude
Luiza Mahin
Gisèle Rabesahala
As a celebrated Malagasy woman politician of the twentieth century, Gisèle Rabesahala (1929-2011) devoted her life to her country’s independence, human rights and the freedom of peoples. She was a journalist and political activist who founded the newspaper Imongo Vaovao. The first Malagasy woman to be elected as a municipal councilor (1956) and political party leader (1958), and to be appointed minister (1977), she is regarded as a pioneer in Malagasy politics.
Sojourner Truth
We salute all the women out there making a difference and trying to bring about effective change in their various societies.
|
<urn:uuid:603cbd98-1d04-4dd4-82f7-929bffa6d516>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.75,
"fasttext_score": 0.08907866477966309,
"language": "en",
"language_score": 0.9704415798187256,
"url": "https://54africa.com/tag/african-women-in-history/"
}
|
Grade 4 Module 6 Lesson 15
How are decimals connected to money? Join Miss DelFavero as she explores expressing money amounts given in various decimal forms. For this lesson, you'll need paper and a pencil. If you have access to a workbook or a printer, we also suggest using the Problem Set available using the link below the video.
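Here is one made-up example of the kind of conversion the lesson practices (the amount is invented, not taken from the video): a money amount is just a whole number of dollars plus fractions of a dollar.

```latex
\[
\$3.45
= 3~\text{dollars} + 4~\text{dimes} + 5~\text{pennies}
= 3 + \tfrac{4}{10} + \tfrac{5}{100}
= 3\tfrac{45}{100}~\text{dollars}
\]
```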
Student & Family downloads
Student Materials
Materiales para Estudiantes
Homework Helper
Ayuda para la tarea
|
<urn:uuid:696589f4-09e1-402f-86e5-bcdab8bcd8e3>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 4.0625,
"fasttext_score": 0.23534542322158813,
"language": "en",
"language_score": 0.6771283149719238,
"url": "https://gm.greatminds.org/kotg-em/knowledge-for-grade-4-em-m6-l15?hsLang=en-us"
}
|
Standing Waves and Resonance - Part 2
by TMW
Save 35%
This program covers the important topic of standing waves and resonance (part 2) in physics. We begin by discussing how waves can be made to reflect off of an object and interfere with the wave travelling in the opposite direction, leading to a standing wave. The entire lesson is taught by working example problems, beginning with the easier ones and gradually progressing to the harder problems. Emphasis is placed on giving students confidence in their skills by gradual repetition, so that the skills learned in this section are committed to long-term memory.
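As a reference for the kind of example problems described, here is the standard relation for standing waves on a string fixed at both ends; the numbers are invented for illustration and are not taken from the program itself.

```latex
\[
f_n = \frac{n v}{2L}, \quad n = 1, 2, 3, \ldots
\qquad \text{e.g. } v = 120~\text{m/s},\; L = 0.6~\text{m}
\;\Rightarrow\; f_1 = 100~\text{Hz},\; f_2 = 200~\text{Hz}
\]
```

Only these discrete frequencies fit a whole number of half-wavelengths on the string, which is why driving the string at one of them produces resonance.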
|
<urn:uuid:a0f76d2a-10b2-499f-b460-9109b67ffaaa>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.890625,
"fasttext_score": 0.2607700228691101,
"language": "en",
"language_score": 0.9296756982803345,
"url": "https://nestlearning.com/products/standing-waves-and-resonance-part-2"
}
|
6th Class English The Banyan Tree
10) Look at these sentences.
• The tree was older than Grandfather.
• Grandfather was sixty-five years old. How old was the tree? Can you guess?
• The tree was as old as Dehra Dun itself. Suppose Dehra Dun is 300 years old. How old is the tree?
When two things are the same in some way, we use as...as. Here is another set of examples.
• Mr Sinha is 160 centimetres tall.
• Mr Gupta is 180 centimetres tall.
• Mrs Gupta is 160 centimetres tall.
Mrs Gupta is as tall as Mr Sinha.
Use the words in the box to speak about the people and the things below, using as...as or -er than:
tall - taller, cold - colder, hot - hotter, strong - stronger, short - shorter
(Notice that in the word 'hot', the letter 't' is doubled when -er is added)
1. Heights: Zeba is as tall as Rani. Ruby is shorter than Zeba and Rani. Zeba and Rani are taller than Ruby.
2. Weightlifters: Anwar is stronger than Vijay and Akshay. Vijay is as strong as Akshay.
3. City temperatures: Shimla is as cold as Gangtok. Srinagar is colder than Shimla and Gangtok. Shimla and Gangtok are hotter than Srinagar.
4. Lengths: Romi's pencil is as long as Raja's pencil. Mona's pencil is longer than Romi's and Raja's pencils. Romi's and Raja's pencils are shorter than Mona's pencil.
5. City temperatures: Delhi is as hot as Nagpur. Chennai is colder than Delhi and Nagpur. Delhi and Nagpur are hotter than Chennai.
|
<urn:uuid:6f9700f6-28d2-4217-b157-a4d82bf5aa14>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.765625,
"fasttext_score": 0.38797080516815186,
"language": "en",
"language_score": 0.9587011337280273,
"url": "https://studyadda.com/ncert-solution/6th-english-the-banyan-tree_q10/199/25659"
}
|
The Paleogene (/ˈpæl.i.əˌdʒiːn, -i.oʊ-, ˈpeɪ.li-, -li.oʊ-/ PAL-ee-ə-jeen, -ee-oh-, PAY-lee-, -lee-oh-; also spelled Palaeogene or Palæogene; informally Lower Tertiary or Early Tertiary) is a geologic period and system that spans 43 million years from the end of the Cretaceous Period 66 million years ago (Mya) to the beginning of the Neogene Period 23.03 Mya. It is the beginning of the Cenozoic Era of the present Phanerozoic Eon. The earlier term Tertiary Period was used to define the span of time now covered by the Paleogene and subsequent Neogene periods; despite no longer being recognised as a formal stratigraphic term, 'Tertiary' is still widely found in earth science literature and remains in informal use.[2] The Paleogene is most notable for being the time during which mammals diversified from relatively small, simple forms into a large group of diverse animals in the wake of the Cretaceous–Paleogene extinction event that ended the preceding Cretaceous Period.[3] The United States Geological Survey uses the abbreviation PE for the Paleogene,[4][5] but the more commonly used abbreviation is PG with the PE being used for Paleocene.
This period consists of the Paleocene, Eocene, and Oligocene epochs. The end of the Paleocene (55.5/54.8 Mya) was marked by the Paleocene–Eocene Thermal Maximum, one of the most significant periods of global change during the Cenozoic, which upset oceanic and atmospheric circulation and led to the extinction of numerous deep-sea benthic foraminifera and on land, a major turnover in mammals. The term 'Paleogene System' is applied to the rocks deposited during the 'Paleogene Period'.
Climate and geography
The global climate during the Paleogene departed from the hot and humid conditions of the late Mesozoic era and began a cooling and drying trend which, despite having been periodically disrupted by warm periods such as the Paleocene–Eocene Thermal Maximum,[6] persisted until the temperature began to rise again due to the end of the most recent glacial period of the current ice age. The trend was partly caused by the formation of the Antarctic Circumpolar Current, which significantly lowered oceanic water temperatures. A 2018 study estimated that during the early Palaeogene, about 56-48 million years ago, annual air temperatures over land and at mid-latitude averaged about 23–29 °C (± 4.7 °C), which is 5–10 °C higher than most previous estimates.[7][8] For comparison, this is 10 to 15 °C higher than current annual mean temperatures in these areas; the authors suggest that the current atmospheric carbon dioxide trajectory, if it continues, could establish these temperatures again.[9]
During the Paleogene, the continents continued to drift closer to their current positions. India was in the process of colliding with Asia, forming the Himalayas. The Atlantic Ocean continued to widen by a few centimeters each year. Africa was moving north to meet with Europe and form the Mediterranean Sea, while South America was moving closer to North America (they would later connect via the Isthmus of Panama). Inland seas retreated from North America early in the period. Australia had also separated from Antarctica and was drifting toward Southeast Asia.
Flora and fauna
Mammals began a rapid diversification during this period. After the Cretaceous–Paleogene extinction event, which saw the demise of the non-avian dinosaurs, mammals transformed from a few small and generalized forms that began to evolve into most of the modern varieties we see today. Some of these mammals would evolve into large forms that would dominate the land, while others would become capable of living in marine, specialized terrestrial, and airborne environments. Those that took to the oceans became modern cetaceans, while those that took to the trees became primates, the group to which humans belong. Birds, which were already well established by the end of the Cretaceous, also experienced adaptive radiation as they took over the skies left empty by the now extinct pterosaurs.
Pronounced cooling in the Oligocene led to a massive floral shift and many extant modern plants arose during this time. Grasses and herbs such as Artemisia began to appear at the expense of tropical plants, which began to decline. Conifer forests developed in mountainous areas. This cooling trend continued, with major fluctuation, until the end of the Pleistocene.[10] This evidence for this floral shift is found in the palynological record.[11]
Oil industry relevance
The Paleogene is notable in the context of offshore oil drilling, and especially in Gulf of Mexico oil exploration, where it is commonly referred to as the "Lower Tertiary". These rock formations represent the current cutting edge of deep-water oil discovery.
Lower Tertiary rock formations encountered in the Gulf of Mexico oil industry usually tend to be comparatively high temperature and high pressure reservoirs, often with high sand content (70%+) or under very thick evaporite sediment layers.[12]
Lower Tertiary explorations include (partial list):
• Kaskida Oil Field
• Tiber Oil Field
• Jack 2
|
<urn:uuid:90c3ab51-c0f2-4905-9c86-b320777f8386>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.578125,
"fasttext_score": 0.19122850894927979,
"language": "en",
"language_score": 0.8672968745231628,
"url": "https://dinopedia.fandom.com/wiki/Paleogene"
}
|
Children's Literature - Fort Wayne
Predictable Books
Predictable books help early readers predict what the next sentences are going to say by using repetitive language, sequences, rhythms, and rhymes. These include chain/circular stories, cumulative sequences, familiar/known sequences, pattern stories, question and answer, phrase/sentence repetition, and rhyme/rhythm repetition.
• Chain or Circular Stories
In chain or circular stories, the ending leads right back to the beginning.
• Cumulative Stories
In cumulative stories, each part builds upon the parts that come before. As each new part is added, ALL the rest of the story is repeated, building a string of events or ideas that can help early readers recognize patterns and words.
• Familiar or Known Sequence
Stories with familiar or known sequences include a common, easily recognizable theme (such as the days of the weeks, months, counting).
• Pattern Stories
In a pattern story, the scenes or events are repeated with a variation.
• Question and Answer Stories
In question and answer stories, one question is repeated throughout the story.
• Repetition of a Phrase or Sentence
In a similar manner to question and answer stories, these stories have a single phrase or sentence which is repeated.
• Repetition of a Rhythm or Rhyme
Through these stories, a recognizable rhyme, rhythm, or refrain is repeated.
|
<urn:uuid:f8ee4b46-8b50-40f8-8ff8-7e8e612e0ace>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 4.5,
"fasttext_score": 0.08108001947402954,
"language": "en",
"language_score": 0.8999800682067871,
"url": "https://library.ivytech.edu/childrenslitfw/predictable_books"
}
|
Friday, June 28, 2019
By Katherine Jolliff Dunn, collections processor
The might of the Mississippi River has tested engineers for centuries. Few have approached its challenges more fearlessly than the self-taught James Buchanan Eads, who risked his career and even his life to exploit its potential.
Eads, whose family moved frequently until settling in St. Louis, worked as a clerk for a local merchant as a teenager and began his self-education reading from his employer's library. By 1842 the 22-year-old Eads had started his own business salvaging ships that had sunk in the river. He created a diving bell out of an open-ended barrel with an air hose that connected to a boat at the surface; it allowed him to descend to the bottom of the river in order to recover sunken goods that were previously unattainable due to the river’s strong current.
An 1880s photograph shows James Buchanan Eads shortly before his death. (THNOC, 1990.7)
In 1867 Eads joined the St. Louis and Illinois Bridge Company as chief engineer with the task of building a bridge across the Mississippi River. His proposed design was unprecedented in many ways: it was the first to exclusively use steel—not yet a common building material—employ cantilevers, and carry railroad tracks. Eads faced criticism, particularly from Army Corps of Engineers Chief Andrew Humphreys, a formally trained civil engineer who became a rival of the autodidact. Despite this, Eads’s bridge officially opened in 1874 as the world’s first all-steel construction.
An 1874 wood engraving shows Eads's bridge across the Mississippi River in St. Louis, the world's first all-steel construction. (THNOC, 1974.25.30.750)
Eads’s next feat took him to New Orleans where, once again, his unconventional ideas were disputed by Humphreys. Downstream of the city, the mouth of the Mississippi frequently silted in and formed sandbars that prevented ships from sailing in and out of the river basin. Humphreys and the Corps of Engineers proposed to Congress that a canal be built between the river and Gulf of Mexico to bypass the sandbars; Eads advocated for the construction of jetties.
An 1877 engraving shows the east jetty at the mouth of the Mississippi River under construction. Eads's jetty design eventually won out over Andrew Humphreys's plan for a canal. (THNOC, 1974.25.17.194)
After several years of persuasion, Eads made a deal with Congress: he would build jetties at his own expense, and Congress would pay him only as the channel reached depth milestones, with the ultimate goal being 30 feet. Eads built two parallel piers far out into the Gulf to extend the east and west banks of South Pass, building up each jetty wall with willow “mattresses”—trunks of willow trees bound together, stacked, and held down with stones and concrete. The crevices would fill with sand and mud as the river flowed past them, rendering the jetties impermeable.
Eads’s jetties were completed in 1879, with a channel at the targeted depth of 30 feet, eliminating the silt buildup, and permanently opening the mouth of the Mississippi to ships. The new jetties exponentially increased the flow of commercial traffic through New Orleans.
This map shows the location of the Eads jetties off of the South Pass of the Mississippi River delta. (THNOC, 1950.58.3 i)
Just a few years after completion of the jetties, Eads died at the age of 66. Today, the southernmost point in Louisiana, at the tip of the Mississippi River, is known as Port Eads, named after the man who died without knowing he had changed the course of the river's history.
A version of this article originally appeared in the Historically Speaking column of the New Orleans Advocate
Cover image: A wood engraving shows the entrance to the Eads jetties in 1884. (THNOC, 1974.25.30.732)
|
<urn:uuid:0cecc736-606a-4dd4-beef-1b4779bb8f7d>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.625,
"fasttext_score": 0.02183675765991211,
"language": "en",
"language_score": 0.9724805355072021,
"url": "https://www.hnoc.org/publications/first-draft/self-educated-engineer-who-helped-tame-mississippi-river"
}
|
Since the late seventies, scientists have been cloning mammals using cells taken from embryos. In July 1996, medical history was made when a sheep named Dolly was cloned. The only thing that set Dolly the sheep apart from the other clones was that she was cloned from an adult sheep cell. Before Dolly was cloned, scientists thought adult cell cloning was impossible. A clone is defined as a group of genetically identical cells. The goals and purposes for cloning range from making copies of those that have died to better engineering the offspring of humans and animals. Cloning could also offer a means of curing diseases. Cloning has many benefits that could be used as an aid to the human race, but there is also the ethical debate over cloning. Many people believe it will do damage to mankind. Some feel it is against God's will, while others fear deformity and emotional stress in the human clone. Despite the ongoing debate, cloning produces many advances for the human race.
Cloning has already produced advances through the cloning of animals. Embryo splitting was developed in the eighties and is used by ranchers and livestock breeders. Through this process, farmers can twin their best animals and plants for better production. "The Future of Cloning" states that "Mammalian cloning research would allow genetic manipulation to produce animals that are disease resistant" (1). Cloning techniques are also being used in agriculture. They are used to produce higher-yield, better-quality fruits and vegetables. "Ethical Concerns" states that "Transgenic animals (animals engineered to carry genes from species other than their own) can be made to produce a wide variety of protein that could be sold as drugs, as well as proteins, called enzymes, that can be used to speed up chemical reactions" (1). Animals could be mass produced to provide a much faster and leaner species using cloning technologies…
|
<urn:uuid:9de3b616-39d4-457c-8ea7-66465fbee09b>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.515625,
"fasttext_score": 0.6451886892318726,
"language": "en",
"language_score": 0.9728699326515198,
"url": "https://businessbooksummariesreview.com/cloning-74/"
}
|
Intelligent Design Icon Intelligent Design
Animals Set World Records
Evolution News
Photo: Mexican free-tailed bats, by dizfunkshinal, CC BY 2.0 , via Wikimedia Commons.
Animals of all phyla, many of them small and hardly noticeable, continue to astound researchers who study them. In 2019, scientists were amazed that the froghopper (an insect) could jump with an acceleration of 400 g’s, one of “the highest accelerations known among animals.” Nymphs of the related planthopper astounded everyone with the first discovery of gears in the animal world. It’s time to look at more animal champions that show design with Olympic flair.
A bat so small it could fit into a matchbox has flown 2,224 kilometers (1,382 miles) — and that could be an underestimate, says Nature. Weighing “less than a toothbrush,” the 10-gram bat Nathusius’ pipistrelles (Pipistrellus nathusii) flies between summer feeding grounds in Latvia and warmer climes in Spain.
Mexican free-tailed bats are not much bigger, and they migrate annually from Colorado to Mexico or as far as South America. That’s in addition to their nightly hunts for insects, which could cover many miles. These “jets of the bat world” live in colonies of up to 20 million individuals, and can live for 18 years, says the Arizona-Sonora Desert Museum website.
Gymnastics: Water Striders
Olympic gymnasts seem to defy gravity, but they know their limits. Their twists and turns must fit within the ballistic time of flight according to their body mass and the springiness of the mat. Years of experience lets them attempt more complex moves within what is physically possible. But could they flip like that from water? Water striders can. Researchers from Seoul National University, reporting in Nature, decided to see if the Korean water striders push their limits:
Can individuals modify their leg movements based on their body mass and locomotor experience? Here we tested if water striders, Gerris latiabdominis, adjust jumping behaviour based on their personal experience and how an experimentally added body weight affects this process. Females, but not males, modified their jumping behaviour in weight-dependent manner, but only when they experienced frequent jumping. They did so within the environmental constraint set by the physics of water surface tension. [Emphasis added.]
Why don’t the males train like this? The summary here explains that females have to bear the weight of males during mating on the water. To be prepared, they practice jumping as high as they can without breaking the water. When researchers added extra weight to them, the bugs adjusted their leg movements and jumping velocities. This is a human-like ability; “There are many examples of animals, including insects, adjusting their behavior to changing environmental conditions through developmental or behavioral plasticity,” the article says, “but this study clearly shows that they can do it through personal experience, just like we do.”
Slingshot Tongue: Chameleon
The chameleon, a reptile, has one of the fastest tongues known, and that requires one of the fastest muscles in the animal world. The Florida Museum of Natural History describes how the slingshot tongue works:
Now, the museum curators, looking at fossil albanerpetontids, “mercifully called ‘albies’ for short,” found this ability in these ancient amphibians. Said to have lived from 160 million years ago, albies have the skull and tongue-bone structure that suggests they had rapid-fire tongues 45 million years before chameleons “evolved” them — maybe even much earlier. The article shows a digital scan of the skull of an albie that was probably only two inches long in life.
Target Shooting: Spider
Speaking of slingshots, there’s a spider that, like Spiderman, flings its web over its victim instead of waiting for the prey to find it. We met it here beforeThe Scientist says that spiders in the Theridiosomatidae family build their orb webs with a trigger strand that launches spider and web over passing insects.
Leg over leg, a furry brownish-black spider tugs on a single silk thread, tightening the frame of its web. It pulls and pulls, as if removing slack from a slingshot, and then it waits. Minutes pass, sometimes hours. Then, when an unsuspecting insect flies by, the spider releases the thread, springing itself and its satellite dish–shaped web toward its prey. All of this happens in the blink of an eye, with the spider and its web hurtling through the air at more than 4 meters per second (9 miles per hour) with accelerations exceeding 130 g. That’s 130 times the acceleration experienced in freefall, and an order of magnitude greater than that of a sprinting cheetah.
This little 2-millimeter spider may not best the froghopper’s 400 g, but 130 g is not bad when it has to launch itself and its web accurately at a target. Researchers at Georgia Tech who observed the spiders in the Peruvian rainforest learned that snapping their fingers made the spider release the web, which they captured on a high-speed camera. Sheila Patek, a biologist from Duke who was not involved in this work, was humbled by watching the video:
“We humans always think we’re the best at everything, and in the natural world, those spiders are doing something that’s pretty difficult.” Other critters, including trap-jaw spiders and mantis shrimp, can move certain appendages such as claws or mouthparts at speeds of 30 to 80 kilometers per hour (20 to 50 miles per hour), she says. But slingshot spiders are using an external tool, a web, to snare their prey, and they’re working at speeds faster than their nervous systems can monitor, so they have to plan ahead and essentially let their spring and latch system control what happens after they let go of the tension thread. “It’s superpower-type stuff,”Patek says.
The spider, tricked by the finger snap, was able to reset and reload its slingshot “pretty quickly.”
A different species, the ogre-faced net-casting spider (Deinopis spinosa) has a similar but different method of casting nets. This species, 25 mm in size (10 times larger than the other one), builds a small orb web between its four front legs that it can cast over its prey as it dangles from a single silk line. New Scientist shows its ogre face in a color photo. It resembles a villain with sunglasses on, but during the day it disguises itself as a stick. Slow-motion video in the article shows it stretching out its net and quickly collapsing it around a potential meal. Researchers determined by blindfolding the spider that it “listens with its legs” at night for passing insects and has remarkable aim. It can even do backflips to catch prey overhead. Thankfully, this hunter that lives in the southern states and Latin America is relatively harmless to humans.
Obstacle Course: The Cockroach
Roaches are not welcome in the house, but one has to admire how they get around. They can push, climb, run, and slide around unfamiliar terrain with ease. That’s hard for robots to do. Researchers at Johns Hopkins University built an obstacle course for cockroaches so that they could film them in slow-mo and learn their tricks. The results are shown in a short video clip in the article. One trick they learned is the “leg jitter” (shake a leg, so to speak). The leg jitter “shook the body to give it enough energy to overcome the barrier from a more strenuous pitch to an easier roll movement, facilitating traversal.” Watch the bugs get around obstacles like Navy SEALs getting through fences; it’s impressive.
Inviting Darwin In
Here we are in 2020 and scientists are still finding animals doing amazing tricks. Evolutionary scientists and reporters try to invite Darwin into the story at times, but he has little to say, because most of these capabilities appear fully formed and already in use at high performance levels. Science can do fine without the old-fashioned, obligatory just-so stories about How the Olympic Animal Earned Its Medal. The simple awe of nature is sufficient to propel kids to want to become scientists, scientists to become biomimicry advisers, and engineers to take the knowledge gained by studying animal feats all the way to the patent office. Don’t ever forget that nature had it first.
|
<urn:uuid:e0457ce4-23ee-4665-ac12-2396b94699da>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.828125,
"fasttext_score": 0.1022384762763977,
"language": "en",
"language_score": 0.9404258728027344,
"url": "https://evolutionnews.org/2020/11/animals-set-world-records/"
}
|
How does the deep exploration of Julius Caesar develop and strengthen our appreciation of control in The Prince?
Control can only be maintained when an individual is able to master his ambition. The treatise The Prince by Niccolo Machiavelli, written in 1513, illustrates the need for control upon a new reign, reflecting the instability of an Italian government that lacked such control. The play Julius Caesar, written in 1599 by William Shakespeare, exemplifies the concept of control to highlight the issues faced in the Elizabethan era. Through the characters Brutus and Cassius, Shakespeare explores how control is used and why the two men are unable to maintain what they have gained, which connects with the idea of the Machiavellian villain, since both characters fail to live up to Machiavelli’s ideal leader.
An individual must be ruthless in doing what is necessary for his achievement. The Prince repeatedly dictates the qualities that a ruler must have in order to maintain control of what he has gained. Machiavelli highlights the particular importance of a ruler’s prowess in deciding the fate of his new conquest: “a ruler mustn’t worry about being labelled cruel when it’s a question of keeping his subjects loyal and united”. By being cruel and inflicting public punishment, the ruler is able to project his control and prevent chaos from occurring. Machiavelli insists that rulers must be able to establish their own abilities, which creates a stronger foundation and provides the opportunity for them to succeed. He is therefore stating that cruelty can be an act of compassion, while leniency can be an act of malice. This necessary act of control is emulated in Julius Caesar when Cassius manipulates Brutus. Cassius initially displays the qualities acceptable for Machiavelli’s prosperous ruler; however, further along in the play he begins to disintegrate, becoming impulsive and unscrupulous. Despite his intelligence and his knowledge of how politics is applied to success, his temper is shown to be his downfall, as is his greed: “I, an itching palm?” The tone is interpreted as irritated and defensive, which corresponds to the impetuous actions leading to his death. Brutus, meanwhile, is a character who remains inflexible to change and to receiving counsel, marking him as an ineffectual leader and explaining why he was incapable of controlling Rome. Both characters have the qualities to succeed; however, their major flaws drive them to their destruction. Hence one must have a ruthless quality to be able to control what one wants.
To be able to control a country, one must gain the respect of the people. The Prince explains that honour is necessary for a ruler to keep control of his gains. Machiavelli states, “Nothing wins a ruler respect like great military victories and a display of remarkable personal qualities”, emphasising that prestige and a display of prowess provide the means of securing one’s state and winning the goodwill of the people. A contemporary example he uses is the Spanish king Ferdinand, who uses his prowess to conquer and control his reign. Machiavelli’s historical references substantiate his argument about how a prince should rule; hence, throughout his treatise, his belief that control is a necessity shows its significance. In Julius Caesar, by contrast, Cassius mocks Caesar’s physical frailty out of jealousy, which is evident in his anecdote about “this weakling Caesar”, whom he had to rescue. He underlines the difference between Caesar the great public man and the petty private man, which rouses Brutus to fear that Caesar is not fit to have victory. The portrayal of Caesar as weak thus indoctrinates Brutus into abandoning the great leader, reinforcing Machiavelli’s argument that respect is needed for absolute control, as otherwise there is a risk of deviation. Caesar’s admirable qualities are offset by the contrast between his public and private personas. While to the public he is modest and a great leader, he is depicted with paradoxical traits that are embedded in his language and actions. His threefold refusal of the crown was seen through by the nobles, motivating them to deceive him. Consequently, rulers must be able to firmly gain the respect of their subjects in order to control them.
To sustain control, a leader must avoid being hated or held in contempt by his people. The Prince stresses the importance of avoiding the hatred and scorn of the subjects. Machiavelli instructs that “A ruler must avoid any behaviour that will lead to his being hated or held in contempt”, which shows that a ruler must be shrewd in his efforts to temper the hatred of the people and win the goodwill of the influential classes. Machiavelli references an example of this principle: the people rose up against the Canneschi, who had murdered Annibale Bentivoglio, because of the goodwill that existed for the House of Bentivoglio at that period. This proves that satisfying the subjects enables a ruler to retain a hold over them even after he has been overthrown. Concurrently, Julius Caesar displays a tension between Caesar and the nobles which leads to a group of conspirators being formed. While Caesar could have had the support of his people, the plebeians, the rejection of the crown confused them, as one recalls: “He would not take the crown. Therefore ’tis certain he was not ambitious.” By not satisfying their hopes, Caesar lost their support, and they happily accepted Brutus as their new leader. As a result, if a leader does not satisfy his people, then he will lose his control over them.
Ultimately, the power of control has been proven to be very important in achieving one’s goal of ruling. Machiavelli argues why control is a necessity, and through Julius Caesar audiences are able to see the justification: a lack of control can lead to tragedy. Julius Caesar therefore delves deeper into the need for control, and through its characters and plot readers are able to understand and appreciate why Machiavelli believes control is important.
|
<urn:uuid:d31608c7-38c8-4776-9a42-a53b4a1ef1ab>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.671875,
"fasttext_score": 0.7752043604850769,
"language": "en",
"language_score": 0.9673338532447815,
"url": "https://fr.scribd.com/document/327148237/English-MOD-a-Essay-About-Control"
}
|
Hoovervilles in Depression-Era New York
A photo of the Hooverville settlement on the Croton Reservoir in Central Park. Image © Bettmann/CORBIS
One of the most pressing issues during the Depression was the thousands of people who faced the struggle of finding shelter after being evicted from their homes and forced out onto the streets. Many New Yorkers took to living in makeshift huts and homes located in parks or in alleyways. Large settlements of these makeshift homes came to be referred to as a “Hooverville,” based on the idea that President Hoover’s inaction toward sheltering the people forced them to build these little settlements on their own, and that the fault for their existence was therefore his.[1] The largest Hooverville settlement was located in the heart of Central Park, near the Croton Reservoir. In an article titled Shantytown, U.S.A., two men named Delehanty and Bill lead a reporter named Boris Israel around the Hooverville next to the Croton Reservoir, explaining to him the dynamics of the settlement. Delehanty explains that many of the men who lived in the Hooverville were trade-workers. It became common for these trade-workers, such as masons, engineers and architects, to construct elaborate brick or wooden shacks within their settlements. Bill explains that “there are three hundred to three hundred forty men who have built themselves homes in this one Hooverville,” and that “men who can build houses like this from salvaged materials are the men who built the buildings now standing empty in all those cities.”[2] Despite being popular places for the indigent to settle, Hoovervilles were illegal settlements, and local and federal authorities often raided them, destroying shelters and scaring people out of the settlements to ward off crime. Despite the fact that their “homes” were illegal, many New Yorkers felt that Hoovervilles were the “foundation of our nation” during the Depression.[3]
[1] Israel, Boris. “Shantytown, U. S. A.” New Republic 75.964 (1933): 39. Points of View Reference Center. Web. 6 Oct. 2016.
[2] Ibid, 40
[3] Ibid, 41
|
<urn:uuid:6b4c567d-ff5d-4c44-a8b7-17227182e934>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.578125,
"fasttext_score": 0.01922839879989624,
"language": "en",
"language_score": 0.9742827415466309,
"url": "https://blogs.shu.edu/nyc-history/2016/12/12/hoovervilles-in-ny/"
}
|
IOI '07 - Zagreb, Croatia
A new pirate sailing ship is being built. The ship has N masts (poles) divided into unit-sized segments - the height of a mast is equal to the number of its segments. Each mast is fitted with a number of sails and each sail exactly fits into one segment. Sails on one mast can be arbitrarily distributed among different segments, but each segment can be fitted with at most one sail.
Different configurations of sails generate different amounts of thrust when exposed to the wind. Sails in front of other sails at the same height get less wind and contribute less thrust. For each sail we define its inefficiency as the total number of sails that are behind this sail and at the same height. Note that "in front of" and "behind" relate to the orientation of the ship: in the figure below, "in front of" means to the left, and "behind" means to the right.
The total inefficiency of a configuration is the sum of the inefficiencies of all individual sails.
This ship has 6 masts, of heights 3, 5, 4, 2, 4 and 3 from front (left side of image) to back.
This distribution of sails gives a total inefficiency of 10. The individual inefficiency of each sail is written inside the sail.
Write a program that, given the height and the number of sails on each of the N masts, determines the smallest possible total inefficiency.
The first line of input contains an integer N (2 ≤ N ≤ 100 000), the number of masts on the ship. Each of the following N lines contains two integers H and K (1 ≤ H ≤ 100 000, 1 ≤ K ≤ H), the height and the number of sails on the corresponding mast. Masts are given in order from the front to the back of the ship.
Output should consist of a single integer, the smallest possible total inefficiency.
Note: use a 64-bit integer type to calculate and output the result (long long in C/C++, int64 in Pascal).
Sample Input
6
3 2
5 3
4 1
2 1
4 3
3 2
Sample Output
10
Note: In test cases worth a total of 25% of the points, the number of ways to arrange the sails will not exceed 1 000 000.
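One way to attack the task (a sketch of a standard strategy, not the official solution text): a sail placed at a height that already holds c sails adds exactly c to the total, so the total inefficiency is just the number of same-height pairs and does not depend on the order of the masts. Process the masts in increasing order of height and give each mast's K sails to the K least-loaded of its H available height slots, keeping the slot counts sorted so that ties can be broken without destroying the order:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    if (scanf("%d", &n) != 1) return 0;
    vector<pair<int,int>> masts(n);          // (height, sails)
    int maxH = 0;
    for (auto& m : masts) {
        scanf("%d %d", &m.first, &m.second);
        maxH = max(maxH, m.first);
    }
    sort(masts.begin(), masts.end());        // ascending height
    vector<long long> cnt(maxH, 0);          // sails per slot, non-increasing
    long long total = 0;                     // 64-bit, as the note advises
    for (auto [h, k] : masts) {
        // The k smallest counts among the first h slots sit at [h-k, h).
        // To keep cnt sorted, entries tied with cnt[h-k] are incremented
        // at their leftmost occurrences instead.
        int pos = h - k;
        long long v = cnt[pos];
        int lo = lower_bound(cnt.begin(), cnt.end(), v,
                             greater<long long>()) - cnt.begin();
        int hi = upper_bound(cnt.begin() + pos, cnt.begin() + h, v,
                             greater<long long>()) - cnt.begin();
        // Adding a sail to a slot holding c sails creates c new pairs.
        for (int i = lo; i < lo + (hi - pos); ++i) total += cnt[i]++;
        for (int i = hi; i < h; ++i) total += cnt[i]++;
    }
    printf("%lld\n", total);
    return 0;
}
```

On the sample above this prints 10. The element-by-element updates make it O(sum of K) in the worst case, which should be enough for the smaller subtasks; replacing the two loops with range updates on a Fenwick tree brings it down to roughly O(N log H) for the full constraints.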
Point Value: 30 (partial)
Time Limit: 1.00s
Memory Limit: 64M
Added: Sep 09, 2009
Languages Allowed:
Comments
I tested my program using the downloaded IOI test data and it works well, but it gets WA on PEG. Any hint?
Use %llu or %Lu, then you will get perfect. %I64 is win32 platform specific, and the judge uses Linux.
|
<urn:uuid:a13bcd67-d836-4315-8ed0-9c8880c4e44d>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.9375,
"fasttext_score": 0.6613900661468506,
"language": "en",
"language_score": 0.8789331912994385,
"url": "https://wcipeg.com/problem/ioi0713"
}
|
Visit A Civil War Era Iron Furnace
At the Buckeye Furnace in southeastern Ohio you can see how pig iron was made during the Civil War era. The furnace is a recreated, charcoal fired blast furnace. This is just one of many that once operated in southeastern Ohio in the Hanging Rock Iron Region. Visitors will learn how these so-called iron making towns helped win the Civil War for the Union.
This 270-acre site contains lots of things to explore. The furnace is the main attraction. It was originally built in 1852 and went cold in 1894. There are other reconstructed buildings and a museum to visit. And if you’ve still got the energy the site has beautiful nature trails to explore.
After the decline of salt-making in the area (from about 1795 to 1826), the local economy defaulted to agriculture. Despite the fact that natural resources were abundant in the area, no one was taking advantage of them. Specifically, there were isolated parts of southeastern Ohio with iron deposits, which at first saw only a limited period of iron production. Between the 1830s and 1840s a total of sixteen furnaces were built to take advantage of these resources.
While several of these original furnaces still stand, Buckeye’s is the only one that remains as it was during its operation.
The Squirrel Hunters
In the second year of the Civil War, in September 1862, General Kirby Smith captured Lexington, Kentucky. Smith then sent General Henry Heth to capture Covington, Kentucky, and Cincinnati, Ohio, just across the river. This could have been the South’s first invasion of Ohio. On the side of the North, General Lewis Wallace was tasked with preparing both Covington and Cincinnati to defend themselves against Heth’s army.
Wallace immediately declared martial law upon arriving in Ohio as well as put out a call in Ohio, Indiana and Michigan for volunteer militia. Business owners were on order to close their businesses. Civilians were to report for duty in defense of Ohio’s border. Civilians helped build defensive structures like trenches.
David Tod, Ohio’s governor, came to Cincinnati from the state capital. Wallace called for all available troops not currently guarding the border to report to Cincinnati and for the Ohio quartermaster to send five thousand rifles to equip Cincinnati’s militia.
Some Ohio counties offered to send their able-bodied men to defend the southern border. Tod immediately accepted the offer on Wallace’s behalf. Wallace instructed that only armed men come to their aid and that the railroads should provide their transport at no cost (Ohio later paid for the transport). In total, 65 counties sent over fifteen thousand men. This state militia would soon become known as the Squirrel Hunters.
Their name came from the weapons these volunteers brought with them, most of which were outdated and best suited for hunting small game rather than warfare.
Heth reported a force of seventy thousand men along the border and the South’s advance was soon dispelled—with no direct conflict or bloodshed. By September 13th word came that the enemy forces were withdrawing and Cincinnati was no longer in danger.
|
<urn:uuid:7de00089-81a8-49f5-8af5-47f213863733>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.625,
"fasttext_score": 0.02013617753982544,
"language": "en",
"language_score": 0.9758813381195068,
"url": "https://jodyvictor.blogcreek.com/tag/civil-war/"
}
|
Scientists at the University of California, Los Angeles have discovered new Nazca lines, although not exactly at Nazca but in the Chincha Valley, in the vicinity of Nazca. The geoglyphs are associated with the Paracas culture, which was located around 350 kilometers south of Lima and is 300 years older than the Nazca culture (about 400-100 BC).
New Nazca Lines
In the “Proceedings” of the US National Academy of Sciences [1], Charles Stanish, the head of the academic team, reported on 71 geoglyphs, spread across platforms and pyramidal structures and running along lines said to be several kilometers in length. Phenomenal! And because the new lines are older than the Nazca network, the mystery is finally solved: the Nazca Indians simply copied what had already been done by their predecessors of Paracas.
This could be. There are several scientific datings of the Nazca geoglyphs, which point to the lines being fairly young (between 200 and 600 AD) [2]. But none has established which line is the oldest, the origin of all the lines, so to speak. Who began the lines, and when? And why did later generations repeat the exhausting work in the hot desert? Experts note that many of the new lines in the area of Palpa aim toward a point where the sun went down 2,300 years ago on December 21. It was part of ritual acts: the Paracas cultures created an artificial landscape in the desert to celebrate recurring social events.
New Nazca Lines 2
Once again, it comes down to the calendar. To operate their agriculture, the “social societies” needed to know when the climate changed. As if they could not have read this off a wooden stake, a marker on a rock wall, or simply the annually recurring changes in nature. And if you already have a kilometer-long line aligned with the angle of the winter sun or another turning point, why build hundreds more of them? Also, it is not only about lines, narrow in width and with similar slopes, but also figures that can only be seen from the air: spider, hummingbird, monkey, or a 29-meter-high figure carved into a hill. The latter is not far from the city of Ica, in the Paracas area, where the natives hammered a giant helmeted figure into a rocky plateau so that it is visible only from above. Lastly, many of the Nazca and Palpa lines are not on a calendrical point at all. So what were they doing?
Nazca Lines Monkey
The calendar option and the “social events” may be honorable proposals, but they are scientifically incomplete, for they ignore the other lines and skyward-oriented figures that are not included in the model. The most important part of Nazca and Palpa is excluded. Sure, there are some lines on calendrical points. I do not dispute that. But what about the skyward-oriented geoglyphs in Jordan, Saudi Arabia, the Aral Sea, or South Africa? What about the hundreds of figures from the Colorado River (United States) to Mexico? From the Rockies to the Appalachians on the north side of America? Worldwide, this involves tens of thousands of lines, characters, figures, and wheels [3]. Many cultures, which were not related and knew nothing of each other, created huge figures in the ground. Did they all have the same needs, the same whimsy? When will we finally understand the global nature of this phenomenon?
Surely it cannot be right that the thousands of carved drawings on other continents are never compared with those of Nazca. And how long will it take the academic world, how long until clever scientists finally take the ancient texts into account? Especially those texts, from many ancient cultures, that report prehistoric aviation [4, 5]. The images, lines, and figures on the ground are clearly reflected in ancient literature. The relationships are obvious and the texts compelling. I suppose the ideologies of our Stone Age ancestors were similar all over the world. These were always signs for the gods, meant to signal those who were moving in the sky.
Scientific statements should be binding, but with respect to the Nazca they are not. On the one hand, it is unscientific to claim a proper assessment by pointing to some lines on calendrical points while, on the other hand, excluding the question of why the Palpa Indians leveled an entire hilltop and then carved a zigzag line on the flattened surface. Additionally, this wide Pista zigzag line has no calendrical cardinal point. Years ago, geomagnetic measurements proved strong magnetic changes under this Pista along the angled lines. A quote from the scientific report states: “the geomagnetic measurements revealed clear indications of subsurface structures that differ from the surface geoglyphs. The high-resolution geoelectric images show unexpected resistivity anomalies underneath the geoglyphs down to a depth of about 2 meters.” [6] These additional scientific findings, published in Science magazine, find no resonance in the social-events explanation.
They leave us with calendar and social cults as explanations, while revealing something mysterious in the soil along several lines. We are left to wonder what methods and tools the indigenous peoples used to chip away at the mountaintop. What was their motive for the hard work, and where is the excavation material? And the same science that delivers the eternal calendar conclusion cannot tell us why lines were created that are entirely unrelated to the calendar.
In Nazca and Palpa, academic archaeology has always turned a blind eye when it came to the gods. Involving gods in their model simply does not fit with their ideological thinking; nor do flying craft that once actually existed and have nothing to do with their religious-psychological dream models. We are not spirits that merely nod and assent when the offered answers are unsatisfactory. We keep researching, and the drill continues.
[1] PNAS 1406501111/ 2014.
[2] Lambers, Karsten: The Geoglyphs of Palpa, Peru. German Archaeological Institute. Aichwald 2006.
[3] von Däniken, Erich: Impossible truths. Rottenburg 2013.
[4] Laufer, Berthold: The Prehistory of Aviation. Field Museum of Natural History, Anthropological Series Vol XVIII, No. 1. Chicago 1928.
[5] Kanjilal, Dileep Kumar: Vimanas in Ancient India. Calcutta 1985.
[6] Hartsch, Kerstin, et al: The Nasca and Palpa Geoglyphs: Geophysical and Geochemical data. Science, July 2009
|
<urn:uuid:f5ae3b9d-168f-47e7-875e-5d6d3e64a9ba>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5625,
"fasttext_score": 0.025231897830963135,
"language": "en",
"language_score": 0.9321181178092957,
"url": "https://www.americas-most-haunted.com/2016/08/10/new-nazca-lines-found-proof-pointing-prehistoric-aviation/"
}
|
Super Resolution
Dr Stefan van der Walt (PhD)
The images on the right, taken from the popular television series CSI, describe a typical situation: you are given a blurry photograph and asked to extract additional, not directly visible, information from it. With some knowledge about the degradation, it is possible to do what amounts to a deconvolution, thereby improving the image. What is shown here is nonsense; it cannot be done, not now, not ever. A photograph samples the scene, and there is a limit, the Nyquist limit, to the amount of information that can be encoded with a fixed number of samples.
We were talking about extracting information from a single image. What about extracting information from multiple images? The video shows a building, and we would like to read the name of the building. It is unreadable from any single frame, but we have several frames taken from slightly different positions. The problem now becomes one of extracting information from multiple views.
In order to understand how the situation changes if one has multiple images, let us think of a signal sampled at a fixed rate. According to Nyquist, we are limited in the amount of information that can be captured. Suppose we are allowed to sample a second time, at the same rate but shifted from the first samples. For example, if the second sampling takes place halfway between the first samples, the combined samples effectively provide us with twice the sampling rate, hence a higher Nyquist limit. This is the basic idea behind super resolution: using multiple images allows one to beat the single-image Nyquist limit.
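A tiny numerical illustration of this idea (a sketch; the 7 Hz test tone and the 10-samples-per-second rate are invented for the demonstration): each sample set alone has a Nyquist limit of 5 Hz and cannot represent the tone, but interleaving the two shifted sets yields an effective rate of 20 samples per second, whose 10 Hz limit can.

```cpp
#include <cstdio>
#include <cmath>

int main() {
    const double PI   = 3.14159265358979323846;
    const double rate = 10.0;  // samples per second in each set (assumed)
    const double f    = 7.0;   // test tone above the single-set Nyquist of 5 Hz
    const int    n    = 10;    // samples per set
    double combined[2 * n];
    for (int i = 0; i < n; ++i) {
        double t0 = i / rate;          // first sampling grid
        double t1 = (i + 0.5) / rate;  // second grid, shifted half a period
        combined[2 * i]     = sin(2 * PI * f * t0);
        combined[2 * i + 1] = sin(2 * PI * f * t1);
    }
    // The interleaved stream is a valid sampling at 2 * rate = 20 S/s,
    // so its Nyquist limit is twice that of either set alone.
    for (int i = 0; i < 2 * n; ++i)
        printf("t = %5.3f s   x = %+.3f\n", i / (2 * rate), combined[i]);
    return 0;
}
```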
The super resolution process consists of a number of steps:
1. Image acquisition.
2. Registration. This is a crucial step where one has to align the different images exactly. After alignment it is possible to do a vertical interpolation between the aligned images, resulting in a significantly improved image.
3. Reconstruction. This amounts to a deconvolution in which the image formation model is inverted. In practice one has to solve a large, sparse linear least-squares problem; a toy version is sketched below.
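Here is a toy version of that reconstruction step (everything in it is an illustrative assumption, not the actual system used for the building video: the blur is a two-pixel average, the second frame is shifted by one high-resolution pixel, and plain gradient descent stands in for a proper sparse solver):

```cpp
#include <cstdio>

const int N = 8;   // high-resolution unknowns
const int M = 7;   // low-resolution observations from two registered frames

int main() {
    // Each low-res pixel averages two adjacent high-res pixels (blur +
    // 2x downsampling); frame 2 is offset by one high-res pixel.
    double A[M][N] = {};
    int r = 0;
    for (int k = 0; k + 1 < N; k += 2, ++r) { A[r][k] = 0.5; A[r][k + 1] = 0.5; }
    for (int k = 1; k + 1 < N; k += 2, ++r) { A[r][k] = 0.5; A[r][k + 1] = 0.5; }

    double truth[N] = {1, 3, 2, 5, 4, 6, 3, 7};   // pretend high-res scene
    double b[M] = {};
    for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j) b[i] += A[i][j] * truth[j];

    // Minimize ||A x - b||^2 by steepest descent: x <- x - eta * A^T (A x - b).
    double x[N] = {};
    for (int it = 0; it < 5000; ++it) {
        double res[M];
        for (int i = 0; i < M; ++i) {
            res[i] = -b[i];
            for (int j = 0; j < N; ++j) res[i] += A[i][j] * x[j];
        }
        for (int j = 0; j < N; ++j) {
            double grad = 0;
            for (int i = 0; i < M; ++i) grad += A[i][j] * res[i];
            x[j] -= 0.5 * grad;            // fixed step size (assumed stable)
        }
    }
    for (int j = 0; j < N; ++j)
        printf("x[%d] = %6.3f   (truth %4.1f)\n", j, x[j], truth[j]);
    return 0;
}
```

With only two frames the system is still underdetermined (seven equations, eight unknowns), so the recovery is approximate; each additional registered frame adds rows to A and pins down more of the high-resolution detail, which is exactly why the method needs multiple views. At realistic sizes A is kept sparse and solved with iterative methods such as conjugate gradients.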
Let us now apply it to the video sequence of the building below. There is no point working with the whole image if we are only interested in reading the name of the building. We therefore make cut-outs from each frame, a typical one of which is shown on the left.
After alignment one can do a vertical interpolation, leading to a significant improvement. This is not super resolution yet, however. The final step is to do a deconvolution, i.e. to invert the image formation process. The improvement is again significant, and is probably about as good as it gets with current technology.
|
<urn:uuid:5060dbcb-930e-4f81-8444-61a8198fa9a8>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.71875,
"fasttext_score": 0.6008973121643066,
"language": "en",
"language_score": 0.9251978993415833,
"url": "http://appliedmaths.sun.ac.za/~herbst/super_resolution.html"
}
|
Getting Them Who-oked
Teacher: “Draw a picture of a spaceship.”
(Ten minutes later, the teacher walks around the classroom. The majority of students have drawn round flying saucers or Star Wars-type spaceships. However, one student has drawn a refrigerator.)
Teacher: “A refrigerator? That’s most certainly not a spaceship!”
Student: “Well, if the TARDIS, which is both a spaceship and time machine, looks like a police box, why can’t a spaceship look like a refrigerator?”
Teacher: *confused*
(This teacher didn’t know about Doctor Who, but after this incident she got hooked, the same as some of the other students!)
|
<urn:uuid:b03fa954-2586-4bb8-bb48-fb9cb36fc6eb>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.546875,
"fasttext_score": 0.04675978422164917,
"language": "en",
"language_score": 0.9442568421363831,
"url": "https://notalwaysright.com/getting-them-who-oked/33401/"
}
|
Ancient Latvian Festivals
Latvians belong to the Baltic group of peoples within the Indo-European stock. Since prehistoric times (about 6000 B.C.) they have inhabited the eastern shores of the Baltic Sea in northern Europe, where the present-day Latvia is found. Of the original Baltic group, only Latvians and Lithuanians, who inhabit the territory south of Latvia, have been able to maintain their ethnic and cultural identity. Latvian and Lithuanian languages are the only two living Baltic languages with roots in the original Indo-European family today.
The gathering of Latvian folklore began mainly in the 19th century. These materials represent a rich source of information about the ancient Latvian customs, religion and way of life. Especially valuable are the “dainas”, Latvian folk poetry and songs, which can be compared to the Vedas of India. The dainas depict every aspect of the ancient Latvian life, but more importantly they should be regarded as the expression of their creators’ religious beliefs and rituals. Until the 19th century, the dainas were passed from generation to generation by word of mouth. Because of its religious nature, some information contained in the dainas has remained unchanged and can be traced back to the Stone Age, e.g., some elements of the burial rituals or ways of conducting the name-giving ceremony.
The interpretation of the dainas requires the recognition of symbolism, which is widely used form of expression therein. The dainas constitute the main source of information for research here.
The ancient Latvians used the Solar Year as the basis for their time-reckoning system. The four ecliptical points provided recognizable clues that could be observed in nature, thus laying the foundation for the division of the year into smaller units. The four ecliptical points were observed by festivities, and further equally spaced divisions of the four periods of time coincided with the approximate beginnings of the four seasons, creating a system of eight festivals, recognized as the Annual Order of Festivals. Each festival was identified by its name and required specific rituals. The resulting eight periods of time between the festivals represented the largest divisional units of the year, the “laiks” (time), each of which was 45 days long. Their proper names were formed by the addition of a seasonal characteristic to the term “laiks”, e.g., Ziemas laiks (Wintertime) or Siena laiks (Hay time).
The positioning of festivals in the year was as follows:
Ziemassvētki – Winter Solstice
Lieldienas – Spring Equinox
Jāņi – Summer Solstice
Miķeļi – Autumn Equinox
The remaining four festivals were positioned among the above in the following manner:
Meteņi – between Winter Solstice and Spring Equinox, marking the beginning of Spring.
Ūsiņi – between Spring Equinox and Summer Solstice, marking the beginning of Summer.
Māras- between Summer Solstice and Autumn Equinox, marking the beginning of Autumn.
Mārtiņi – between Autumn Equinox and Winter Solstice, marking the beginning of Winter.
Ancient Latvians depended for their livelihood on farming. Their festivals were celebrated at the end of a 45-day period that was for the most part spent taking care of the farmstead and tending crops. The festivals were thus a welcome diversion from their labours, filling the need for recreation and relaxation. On the other hand, the festivities had another important function: they included activities and rituals with symbolic meaning that were performed to assure bountiful crops and the future welfare of the people and livestock, as well as to express appreciation for benefits received in the past.
Reflecting the differences imposed upon them by seasonal traits, the festivals appear to have little in common besides their recreational and ritualistic qualities mentioned above. However, closer scrutiny of the festivals reveals a number of common properties, including some very meaningful steps in their evolutionary process and a sharing of some qualities with the ancient Indo-European peoples in the remotest antiquity. Here we find that personification was used as a tool to bring the astronomical changes within the Solar Year down to the level of human perception.
In Latvian mythology celestial bodies and other natural phenomena were personified as mythical beings sharing a common name, Dieva Dēli (Sons of God), and some of them became an integral part of the Annual Order of Festivals. The designation Dieva Dēli is somewhat misleading and requires clarification. Ancient Latvian religion is based on the concept of one Supreme Being, known by the proper name Dievs (God). This God has nothing in common with the Christian god. The feminine part of his divine nature is represented by the Mother Goddess Māra (Mother of Earth), and another quality is expressed through the Goddess of Fate – Laima. However, Dievs has no sons or daughters; his solitary state is defined in many instances in the dainas. This apparent misuse of the term “Sons of God” can be explained by considering the ancient usage of the word “dievs”: at one time it designated both the Supreme Being and the heaven. Therefore, the celestial bodies and natural phenomena, arising in the sky, in their original concept were easily perceived as the “Sons of Heaven”. This assumption is reinforced by the fact that a small number of Latvian dainas still show this interchangeable property of the noun “dievs”.
The Dieva Dēli include such phenomena as Pērkons (Thunder) and celestial bodies: Mēness (Moon) and Auseklis (Morningstar). Some other Dieva Dēli, however, are characterized by their cyclical appearance in the midst of people, thus marking the times for the festivals with definite regularity. In this category are: Metenis, Ūsiņš, Jānis, Miķelis, Mārtiņš and the Four Brothers Ziemassvētki. All of them are said to arrive and depart over the hill, symbolizing the movement of the light in the sky. Eventually, manifold interpretations are assigned to their identities: their originally significant signalling of astronomical events is later diminished by another process of personification, whereby they come to represent the days of the festivals. Additionally, their images are endowed with a human appearance that reflects the circumstances of the changing seasons, and in this capacity they are treated as the guests of honour at the festivals. The Spring Equinox, Lieldienas, is also personified, but here the personification takes on female form, appearing as a big woman or three (or four) sisters. Māra’s Day is no exception in this order because it is observed in honour of the Mother Goddess Māra, and no personification is involved.
At the festivals the personified celestial beings arrive bearing gifts for everyone. In return people offer them food and gifts in order that the festival participants may receive Dievs’ (God’s) blessing through their intercession. Dievs, Māra and Laima also grace Latvian festivals with their presence although they never are the cause for the celebration.
Besides these common properties at the imaginary level of celestial personification, certain other similarities among the festivals can be found at the level of normal human activity. All festivals are preceded by anticipation and by preparations involving general house cleaning, making appropriate clothing, slaughtering of livestock, brewing of beer, baking of breads, and preparation of different dishes for the feast. The choice of foods is dictated by seasonal availability, but beer is brewed every time. Fire also has important meaning in the rituals, since it provides festive lighting, gives warmth, and helps ward off darkness, which represents all evil. Another important part of Latvian festival activities is singing and dancing.
Meteņi, which marks the end of winter and beginning of spring, is personified by Metenis. His appearance is not revealed in the dainas, but he is known to have five sons and five daughters. Metenis’ arrival at the festival is characterized by his sledding over the hill. The celebration centres on Metenis’ feast. Food includes pork – especially pig’s feet, head, snout, ears – and breads, rolls, barley dishes, and beer. The festival activities include sledding, sleigh riding, visiting distant relatives, and masquerading. In Metenis’ dainas, which describe these activities, the adjectives “long” and “far” predominate, assuming an analogical meaning in relation to next year’s crop of flax: the farther one travels to visit relatives and friends and the longer one stays in motion when sledding off a high hill, the taller the flax will grow next summer. The ceremonial dance performed by the farmer’s wife serves a similar purpose: it is meant to promote the breeding of livestock and the growth of flax.
The day following Meteņi, Pelnu Diena (Ash Day), is the first day of a new week. It also marks the beginning of the New Year. The name Pelnu diena is derived from the fact that fire is kept smouldering in ashes when people move to establish new households on this day.
The festival of Lieldienas is celebrated during the Spring Equinox. The name itself (lit. Big Day) conveys the cause for the celebration: on this day the light of the day begins to gain over the darkness of the night. Lieldienas is a festival that consists of a three-day unit (four days in a Leap Year): the first day is celebrated on Sunday. Lieldienas is personified by three (or four) sisters, denoting the number of days of this festival. These sisters are not directly identified by proper names except for one instance in the dainas, where Lieldienas is given the same name as one of Māra’s three (or four) daughters, suggesting that Māra’s daughters may be the personifications of the festival days. No description of their appearance is found in the dainas except for the adjective “big”.
The approach of Lieldienas is marked by outdoor singing performed by young maidens on quiet spring evenings. Their songs about love, spring, youth and happiness are usually sung on top of a hill so they are heard far away. The predominating activity during Lieldienas is swinging. The mythic source of this custom is suggested in the dainas: Dievs’ (God’s) cradle is said to have been hung on Lieldienas. Extensive care is taken in every detail of the performance of this activity: the selection of the site and the material for the swing, the choice of men building the swing, as well as the choice of partners for the actual performance. This activity is accompanied by songs delivered by onlookers, whereas the swinging parties are identified and described in a manner characteristic of the musical satire described previously.
On the first day of Lieldienas other activities include: getting up at sunrise and rinsing one’s face in running water; making a pretence of chastising each other, and especially children, with branches of pussy-willow while wishing good health and happiness in the future. The remnants of a very old custom can be identified in an activity that was meant to symbolically dispel evil: birds were chased by invading the surrounding fields and woods with a great quantity of noise-making and singing.
Eggs, which symbolize the Sun because of their shape, are the mainstay of the feast. For the same reason baked goods with round shapes are also included in the meal. Hard-boiled eggs are taken along to the hill where people gather to participate in the swinging. Here eggs are given as gifts to those who helped construct the swing and are now assisting the participants in the performance.
Most of these ancient customs are still known by present-day Latvians and are practised during the festivities of Lieldienas in addition to the rituals related to Christ’s resurrection. Such unlikely and stupid combinations of rituals can be observed throughout Europe’s history as the result of a treacherous policy started by Pope Gregory I in 601. For centuries thereafter Christendom tried to gain acceptance for its tenets by retaining the names of the native festivals but superimposing Christian values upon the existing customs. Luckily, people still remember their original past.
Ūsiņi is celebrated midway between Lieldienas (Spring Equinox) and Jāņi (Summer Solstice). This one-day celebration marks the beginning of summer. It is personified by Ūsiņš, whose celestial symbolism involves the concepts of dawn and light. In the Latvian Legend of the Sun, the source of which again is the dainas, Ūsiņš is the driver of the Sun’s horses. The translation of these celestial duties into earthly activities renders him the patron of horses, especially during summer. His phenomenal success with horses is his most outstanding quality described in the dainas, where he appears as an old man with a beard. He has a wife, twin sons, and an unspecified number of daughters. Ūsiņš himself and his family in many respects bear resemblance to the Goddess of Dawn (Ushas) and the twin Aswins in Vedic mythology.
The important activities of this day revolve around livestock and horses. Ūsiņi marks the first day of the year when cattle and sheep are herded out to pastures to graze and returned to their stables at sundown. In the evening, horses are taken out to pastures to graze all night, watched over by the younger male members of the household. On the first night of herding, the Eve of Ūsiņi, there is a celebration with a feast. While herding the horses, many people, as well as Ūsiņš himself, gather around the fire to feast, sing and dance. The mainstay of the Ūsiņi feast is chicken, eggs and beer. A specially selected rooster is considered an appropriate gift for Ūsiņš to solicit his help with the horses through the summer.
The festival of Jāņi, the celebration of the Summer Solstice, was and still is one of the most joyous occasions observed by Latvians. As the festival approaches, songs of Jāņi with the special refrain “līgo” resound everywhere, awaiting the arrival of Jānis, the Dieva Dēls, personifying the festival of Jāņi. Jānis’ arrival on the Eve of Jāņi is heralded by the sound of horns and drums signifying the importance of the occasion.
Jānis is pictured as a tall and handsome man dressed in beautiful garb and riding on a large horse. On his head he wears a wreath of oak leaves, the traditional adornment for the occasion. Jānis’ exceptionally beautiful wife and their children accompany him to the festival.
Jāņi is characterized by many activities that start on the preceding day. Much of the time is spent decorating people’s homes, yards, and livestock with garlands and wreaths made of flowers, foliage, oak leaves and branches, and all kinds of greens that are found in the fields and gardens. Various adornments are prepared for people as well: all through the celebration men traditionally wear wreaths made of oak leaves, while women select wreaths of clover or flowers. These activities, accompanied by the songs of Jāņi, are performed to assure good health, good luck and fertility.
The celebration starts on the Eve of Jāņi. It begins at the house with a feast and continues through the night at a previously selected central place, usually on top of a high hill. At this location a large bonfire is prepared that burns all night. An additional fire is made by burning a keg of tar or dry wood placed on top of a high post. Usually, neighbours gather around one of these fires bringing along food and drink to last through the night but very often guests come from far away. Traditional food is cheese and beer, and it is offered to everyone participating in the celebration.
The festivities are not limited to one location only: groups of people called “children of Jāņi” visit their neighbours and, gathering more participants, continue to go from house to house, finally ending their procession wherever there is a Jāņi fire burning. Such wandering through the night of Jāņi is enjoyed by young people, who take this opportunity to look for the ever-elusive fern blossom, which is said to bloom on this night. Whoever finds it will have good luck, love and happiness all year long. Many new romances that begin on this night lead to a wedding in fall.
The main celebrations, however, continue around the Fire of Jāņi with singing and dancing; cheese and other food is consumed in large quantities along with the traditional drink: beer. A ritual dance, symbolically led by Jānis, is performed around the fire or a special oak tree. The dance is completed by the farmer’s wife, who usually leads it into the house to bring good luck to the household and ward off evil forces that roam about in the night.
Since Jāņi is celebrated on the shortest night of the year, dawn comes quickly. Festivities are still in full swing when it is time for Jānis to depart. This is expressed in songs, which people sing at sunrise, bidding farewell to the departing Jānis and reminding him to return next year.
There are three separate holidays observed during summer, every one of them having a special meaning, but not necessarily involving a celebration. The first one of these is Pēteri, the day following Jāņi, which is the first day of the next 45-day period, called Siena laiks (Hay time). Since it is a workday, no celebration is involved. The personification of this day is Dieva Dēls Pēteris, who, quite appropriately, is primarily associated with hay-cutting and hay-gathering activities.
Laidene is observed on the Sunday following Jāņi, and may be regarded as a holiday. The celebration may have been similar to Jāņi but on a much smaller scale. It is primarily marked by various activities performed by young maidens considering marriage, who try to predict the future and their chances of getting married the same year. A single quality of the Goddess of Fate Laima, namely the choosing of marriage partners for girls, may be expressed in the personification of Laidene.
Jēkabi is a limited holiday associated with the beginning of the harvest. The reaping of rye usually precedes other crops, and this event can occur at different times depending upon the weather. Thus, the actual celebration date can change accordingly, but it is generally observed on the fourth Sunday of Siena laiks. Before beginning the reaping of rye, a brief ceremony is performed in the field by the farmer in gratitude to Dievs (God) for the crop. A feast is held on the following Sunday or when the harvesting of rye is finished. Food includes freshly baked rye bread and rye porridge representing the new harvest.
Māras or Māras’s Day is celebrated between Jāņi and Miķeļi and marks the beginning of autumn. This festival is not personified, but is celebrated in honour of Māra, who, as the divine extension of Dievs’ material characteristics, is also known as the Mother of Earth. This festival is also observed as the Bread Day, the Market Day, and the Cattle Day because all these endeavours are in Māra’s care. The celebration consists primarily of a meal prepared from the new harvest.
Miķeļi is celebrated during the Autumn Equinox. Here, two different causes for celebration are distinguishable. The first one is created by the arrival of Dieva Dēls Miķelis, whose celestial origin makes him the interpreter of astronomical events, even though his earthly image has changed to represent the bounty of the harvest season. The dainas depict him as a stout, prosperous man who has a rich wife.
The second cause for celebration, connected exclusively with the harvesting of the crops, consists of thanksgiving ceremonies and fertility rituals. Here the personification of the concept of life and reproduction, called Jumis (Fertility), is the centre of attraction. His projected image reveals a man of small stature, whose garments resemble ears of wheat, barley and hops. His wife has similar attire. His symbolic presence, however, is found in nature as double ears of wheat or other crops that have been joined together in the growing process.
These two personifications set the stage for this festival. Although the main celebration occurs on Sunday, the days directly preceding and following Miķeļi are included in the festival, thus causing it to be a three-day celebration. The first day is marked by activities in the fields finalizing the harvesting of crops. Jumis is caught hiding in the last bit of uncut crops. These stalks are used to make a wreath symbolising Jumis, which is brought home and placed in the granary until the next spring, when Jumis, along with the sown seeds, is returned to the fields. In the evening an outdoor fire is lit in honour of Miķelis, and singing and dancing takes place around it. The feast includes chicken, which is considered a fitting gift for Miķelis.
The second day of Miķeļi centres on a thanksgiving feast, where freshly baked bread, along with a multitude of other foods reflecting the bounty of the season, is served.
The third day of the Miķeļi festival is the Market Day. Besides the obvious purposes of buying and selling, the market provides a meeting place for young people. According to the ancient wedding customs, this day is also known as the last day when young men can come courting. If a proposal of marriage is not received by this day, then a girl has to wait until the next year.
Mārtiņi is a one-day festival marking the end of autumn and the beginning of winter. Mārtiņš, whose image is poorly developed in the dainas, personifies the festival. However, in some respects his image appears to be similar to that of Ūsiņš, and these similarities reach into their earthly activities: while Ūsiņš cares for horses in summer, Mārtiņš looks after them in winter. In addition, Mārtiņš has some qualities that reflect the need for protection against marauders, who are active during wintertime.
The Mārtiņi festival marks the conclusion of all preparations for the coming winter: mainly storing of grain and other crops. Therefore, the feast is essentially a thanksgiving dinner where food, drink, and other good cheer are plentiful. Mārtiņi also marks the beginning of characteristic winter activities, primarily masquerading, which lasts until Meteņi, reaching its culmination at Ziemassvētki (Winter Festival).
Of all Latvian festivals, Ziemassvētki (Winter Festival) is by far the most festive occasion: in the dainas Ziemassvētki is referred to as Dievs’ (God’s) time of birth. The return of light at the Winter Solstice is heralded by the arrival of the celestial beings Dieva Dēli, the Four Brothers Ziemassvētki, who represent the number of days allotted for the celebration. Their images remain undifferentiated, and stress is placed on their prosperity and bringing of gifts.
In anticipation of the four-day festival, a great quantity and variety of food is prepared, and the house is decorated with characteristic wintertime decorations. Candles are lit to welcome Dievs and the Four Brothers Ziemassvētki. Fire is an integral part of the festivities, especially visible in the log burning ritual, which symbolizes the destruction of all sorrows and misfortunes of the past year. Some references in Latvian dainas indicate a possibly older meaning of this ritual, suggesting that the log burning aids the ascent of the Sun. A gathering of people performs this ritual by pulling the log about the homestead during the day and burning it at night, accompanying the ritual with the performance of Ziemassvētki songs with the refrain “kaladū”. On the Eve of Ziemassvētki the feast starts when Dievs (God) is invited to sit at the head of the table. In the dainas his presence in the midst of people is stressed at this festival, and many songs praise his benevolence.
The table is set with an abundance of the traditional foods of Ziemassvētki: pork, pig’s snout and feet, a variety of breads and rolls, some with fillings. Included are dishes prepared of whole grains, beans, and peas. The traditional drink is beer.
During the festival people sing, dance, and engage in many indoor and outdoor activities and games. Almost all of these activities are accompanied by songs of Ziemassvētki, which have the characteristic refrain “kaladū”. Some of the games incorporate the idea of the return of the light; others are concerned with the prediction of the future. Ziemassvētki is also a time for visiting. A favourite pastime of children is sledding. A traditional occurrence during the Ziemassvētki season is the encounter of “kaladnieces”, groups of women who go from house to house performing songs of Ziemassvētki and receiving treats and gifts in return. Masquerading is also a typical festival activity that lasts throughout the winter, reaching its culmination at Ziemassvētki. Groups of people in different disguises go from house to house providing entertainment. They are made welcome and offered treats because masqueraders are regarded as well-wishers from the Realm of Shades (also known as the Otherworld, the World Beyond), and they are supposed to bring wealth and prosperity to the homestead.
Information about the ancient Latvian burial customs is preserved in the dainas. This source, in correlation with archaeological findings dating back to the late Stone Age (4000 – 1500 B.C.), provides the basis for reconstruction of the ancient Latvian burial rituals that prevailed for centuries in spite of Christianity and its forceful attempts to eliminate the old beliefs.
The Latvian concept of life and death is defined in the dainas. The human being is believed to consist of three components: the body, the soul, and the intangible astral body, called “velis”. Dievs (God) is the ultimate creator, who bestows the soul upon a human being at birth and retrieves it again after death. The care of the body in life and after death is attributed to Māra, the Mother of Earth, who is also known by many other names. She appears as the Mother of Shades (Veļu Māte) when maintaining the Realm of Shades (Veļu Valsts), where she presides over the astral bodies of the deceased. After death, the soul returns to Dievs, the body eventually disintegrates, but the astral body continues to lead a life very similar to the individual’s previous existence among the living. These astral bodies (Veļi) are invited to return once a year for a visit and a feast on a prearranged day during Veļu laiks in the fall. Traditionally, Veļu laiks falls in October: it starts at Miķeļi (29 September) and ends at Mārtiņi (10 November). Consequently, death is not feared by Latvians but is simply considered a transition from life on Earth (under This Sun) to life in the Realm of Shades (under the Other Sun).
The funeral is a two-day event, but sometimes it may be extended to three days. Relatives and friends of the deceased are summoned immediately, while the body is dressed and, along with the necessities of life, put in a coffin. The funeral is regarded as the last celebration in one’s lifetime; therefore, an invitation to attend is never turned down. Family members and friends start to arrive during the day, bringing along food for the feast, but the wake begins in the evening and lasts throughout the night. The velis of the deceased (his astral body) participates in the wake; indeed, he is considered the guest of honour. Questions are asked about the reasons for his departure, and he is offered beer and morsels of different kinds of food. Appropriate songs are performed throughout the night, and an abundance of tears is shed.
The burial takes place on the second day. The dainas stress the importance of stately horses for the last journey to the family burial ground. Three horses are considered an appropriate number. They are harnessed in a row one behind the other. The dainas show that some members of the family escort the deceased to the burial site, while others stay behind, watching the departure and closing the gate of the homestead behind them. The escorting party consists primarily of younger male members of the family, while the parents, the wife and sometimes sisters stay behind. They depart with the deceased before noon, because it is believed that the Mother of Shades (Veļu Māte) closes the gate in the afternoon.
At the burial site a farewell feast takes place, and the bedding of the deceased is burned in order to transfer it to the Realm of Shades for his use. Tools, weapons, and other useful objects, including a small jar filled with honey, are placed in the grave. The reason for the latter is explained in the dainas as the means of enticement exercised by the Mother of Shades (Veļu Māte) when inviting humans into her realm. The burial ritual is accompanied by songs, and on the way home a small fir tree is cut; it is meant to represent a substitute for the lost member of the family.
During the feast, which lasts all night long, a ritual dance is performed. This dance is supposed to obliterate the footprints of the deceased, symbolically erasing the sorrowful memories. Throughout the night songs are sung praising the pursuits and accomplishments of the departed. Excessive mourning is avoided, because the legacy of the Dead, as explained in the dainas, requires this very last event in one’s lifetime not only to be a time for reminiscing, but also a celebration worthy of one’s standing in life. The funeral of a single person is regarded as both a wedding and a funeral; therefore, dancing is required to re-enact the gaiety of the wedding.
It is customary to follow the wishes of the deceased when distributing his property among the survivors. This task is usually performed on the third day. Small gifts are given to the grave diggers and pall bearers. The remaining personal belongings are distributed among the relatives according to the instructions left by the deceased.
|
<urn:uuid:e1f9ba1b-9bf1-435b-b59c-f365a2a572c9>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.90625,
"fasttext_score": 0.022581636905670166,
"language": "en",
"language_score": 0.9609990119934082,
"url": "https://skyforger.lv/en/albums/stories/ancient-latvian-festivals/"
}
|
Uniform vs. particolored plumage
13 Jul 2020
Although carrion crows and hooded crows are almost indistinguishable genetically, they avoid mating with each other. LMU researchers have now identified a mutation that appears to contribute to this instance of reproductive isolation.
Carrion and hooded crows on the ground
© binah01/Adobe Stock
The carrion crow and the hooded crow are genetically closely related, but they are distinguishable on the basis of the color of their plumage. The carrion crow’s feathers are soot-black, while the hooded crow’s plumage presents a particolored combination of black and light gray. Although crosses between the two forms can produce fertile offspring, the region of overlap between their geographical distributions in Europe is strikingly narrow. For this reason, the two forms have become a popular model for the elucidation of the processes that lead to species divergence. LMU evolutionary biologist Jochen Wolf and his team are studying the factors that contribute to the divergence of the two populations at the molecular level. Genetic analyses have already suggested that differences in the color of the plumage play an important role in limiting the frequency of hybridization between carrion and hooded crows. The scientists have now identified a crucial mutation that affects this character. Their findings appear in the online journal Nature Communications, and imply that all corvid species were originally uniformly black in color.

The ancestral population of crows in Europe began to diverge during the Late Pleistocene, at a time when the onset of glaciation in Central Europe forced the birds to retreat to refuge zones in Iberia and the Balkans. When the climate improved at the end of the last glacial maximum, they were able to recolonize their original habitats. However, during the period of their isolation, the populations in Southwestern and Southeastern Europe had diverged from each other to such an extent that they no longer interbred at the same rate, i.e. became reproductively isolated. In evolutionary terms, the two populations thereafter went their separate ways. The Western European population became the carrion crow, while their counterparts in Eastern Europe gave rise to the hooded crow. The zone in which the two now come into contact (the ‘hybrid zone’) is only 20 to 50 km wide, and in Germany it essentially follows the course of the Elbe River. “Within this narrow zone, there is a low incidence of interbreeding. The progeny of such crosses have plumage of an intermediate color,” Wolf explains. “The fact that this zone is so clearly defined implies that hybrid progeny are subjected to negative selection.”

Wolf wants to understand the genetic basis of this instance of reproductive isolation. In previous work, he and his group had demonstrated that the two populations differ genetically from each other only in segments of their genomes that determine plumage color. Moreover, population genetic studies have strongly suggested that mate selection is indeed based on this very character – the two forms preferentially choose mating partners that closely resemble themselves. These earlier studies were based on the investigation of single-base variation, i.e. differences between individuals at single sites (base-pairs) within the genomic DNA. “However, we have never been able to directly determine the functional effects of such single-base variations on plumage color,” says Matthias Weissensteiner, the lead author of the study. “Even when we find an association between a single-base variant and plumage color, the mutation actually responsible for the color change might be located thousands of base-pairs away.”

To tackle this problem, the researchers have used a technically demanding method to search for interspecific differences that affect longer stretches of DNA.
These ‘structural’ variations include deletions, insertions or inversions of sequence blocks. “Up until recently, high-throughput sequencing technologies could only sequence segments of DNA on the order of 100 bp in length, which is not long enough to capture large-scale structural mutations,” says Wolf. “Thanks to the new methods, we can now examine very long stretches of DNA comprising up to 150,000 base pairs.”

The team applied this technology to DNA obtained from about two dozen birds, and searched for structural variations that differentiate carrion crows from hooded crows. The data not only confirmed the results of the single-base analyses, they also uncovered an insertion mutation in a gene which is known to determine plumage color by interacting with a second gene elsewhere in the genome. In addition, phylogenetic analysis of DNA from related species revealed that their common ancestor carried the black variant of the first of these genes. The variant found in the hooded crow represents a new mutation, which first appeared about half a million years ago.

“The new color variant seems to be quite attractive, because it was able to establish itself very quickly, and therefore must have been positively selected,” says Wolf. How the variant accomplished this feat is not yet clear. The evidence suggests that it first appeared in the region which now encompasses Iran and Iraq, and there are some indications that the lighter plumage confers a selective advantage in hot regions, because it effectively reflects sunlight. This supports the idea that the mutation might have initially been favored by natural selection. “Once it had reached a certain frequency within the local population, it would have been able to spread because parental imprinting, which enables nestlings to recognize their parents, also causes mature birds to choose mates that resemble their parents in appearance,” Wolf explains. However, other possible scenarios, such as random genetic drift in small populations or the involvement of selfish genes (which promote their own propagation), are also conceivable and have yet to be ruled out.

Nature Communications 2020
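As a rough, purely illustrative sketch of the population-genetic logic described above (not the authors' actual pipeline, and with invented genotype data), the snippet below compares the frequency of a candidate insertion allele between two population samples. A differential near 1.0 marks the variant as nearly fixed between the two forms, the pattern expected for a locus under divergent selection.

```python
# Hypothetical example: allele-frequency differential for one candidate
# structural variant between two population samples.
# Genotypes are diploid counts of the insertion allele (0, 1 or 2).

def allele_frequency(genotypes):
    """Frequency of the insertion allele in a diploid sample."""
    return sum(genotypes) / (2 * len(genotypes))

carrion_genotypes = [0, 0, 0, 1, 0, 0, 0, 0]  # mostly the ancestral (black) allele
hooded_genotypes = [2, 2, 1, 2, 2, 2, 2, 2]   # mostly the derived (gray) allele

diff = abs(allele_frequency(hooded_genotypes) - allele_frequency(carrion_genotypes))
print(f"Allele-frequency differential: {diff:.2f}")  # ~0.88 here
```

Scanning such differentials across every variant in the genome and keeping only those close to fixation is one simple way to shortlist candidates like the insertion described above.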
|
<urn:uuid:e0475b23-9683-4ec8-b752-d1101656b808>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.765625,
"fasttext_score": 0.05057084560394287,
"language": "en",
"language_score": 0.9611342549324036,
"url": "https://www.lmu.de/en/about-lmu/structure/central-university-administration/communications-and-media-relations/press-room/press-release/uniform-vs.-particolored-plumage.html"
}
|
Chrome Plating – a process of electroplating
Chrome plating is the process of electroplating a thin layer of chromium onto a metal or plastic object for decorative or practical purposes. It is a finishing treatment that uses the electrolytic deposition of chromium onto the surface of a workpiece. The most common form is thin, decorative bright chrome, typically deposited over an underlying nickel plate. With a greater thickness, hard chrome plating allows for the production of a strong and durable outer layer that’s naturally protected against degradation.
The chrome plating process is a method of applying a thin layer of chromium onto a substrate (metal or alloy) through an electroplating procedure.
It’s essentially an electroplating technique and, like other electroplating techniques, requires an electric current. It can serve decorative purposes or enhance the desirable properties of machine components. When plating on iron or steel, an underlying plating of copper allows the nickel to adhere. The pores (tiny holes) in the nickel and chromium layers work to alleviate stress caused by thermal expansion mismatch, but they also hurt the corrosion resistance of the coating. To apply a layer of chromium onto a workpiece or object, a manufacturing company must pass an electric current through a tank filled with a chromic anhydride (chromium trioxide) solution.
Chrome plating – cleaning and degreasing the metal workpiece
Chrome plating begins with cleaning and degreasing the metal workpiece or object. Corrosion resistance relies on what is called the passivation layer, which is determined by the chemical composition and processing, and is damaged by cracks and pores. Once the workpiece or object has been thoroughly cleaned so that there’s no lingering debris, it’s placed inside a tank filled with the chromic anhydride solution. A variety of industrial applications use hard chrome plating to increase the wear and corrosion resistance of equipment components. Also known as engineered chrome or industrial chrome, hard chrome plating reduces friction between machine parts and improves component durability. Next, an electric current is applied to the tank, triggering an electrochemical reaction that deposits chromium onto the workpiece or object.
Depending on the application, coatings of different thicknesses will require different balances of the aforementioned properties. Chrome plating is often categorized as either decorative or hard, depending on the thickness of the chromium layer it’s used to create. Thin, bright chrome imparts a mirror-like finish to items such as metal furniture frames and automotive trim. Decorative chrome plating typically ranges from just 0.05 to 0.5 micrometers thick. Thicker deposits, up to 1000 μm, are called hard chrome and are used in industrial equipment to reduce friction and wear. Chrome plating is used on workpieces and objects made of a variety of materials, including aluminum, low-carbon steel, high-carbon steel, plastic, copper and various alloys.
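The deposit thickness can be estimated from first principles with Faraday's law of electrolysis. The sketch below assumes hexavalent chemistry (six electrons per chromium atom) and a low cathode current efficiency of 15%, a rough figure chosen here for illustration since chromic-acid baths are notoriously inefficient; none of these numbers come from the article above.

```python
# Rough chromium deposit thickness via Faraday's law (illustrative only).
MOLAR_MASS_CR = 51.996   # g/mol
ELECTRONS_PER_ATOM = 6   # Cr(VI) -> Cr(0)
FARADAY = 96485          # C/mol
DENSITY_CR = 7.19        # g/cm^3

def chrome_thickness_um(current_density_a_dm2, minutes, efficiency=0.15):
    """Estimated deposit thickness in micrometres."""
    charge_per_cm2 = (current_density_a_dm2 / 100.0) * minutes * 60   # C/cm^2
    mass_per_cm2 = (charge_per_cm2 * efficiency * MOLAR_MASS_CR
                    / (ELECTRONS_PER_ATOM * FARADAY))                 # g/cm^2
    return mass_per_cm2 / DENSITY_CR * 1e4                            # cm -> um

# Example: one hour at 30 A/dm^2 gives roughly 20 um under these assumptions,
# i.e. hard-chrome territory rather than decorative chrome.
print(f"{chrome_thickness_um(30, 60):.1f} um")
```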
|
<urn:uuid:70fa180f-73df-422f-8a53-860ad212a259>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.6875,
"fasttext_score": 0.3837597966194153,
"language": "en",
"language_score": 0.897241473197937,
"url": "https://www.assignmentpoint.com/science/chemistry/chrome-plating-a-process-of-electroplating.html"
}
|
A Beautiful Bird “African Black Duck”
Often known as the black duck, black river duck, West African black duck, South African black duck, or the Ethiopian black duck, the African black duck (Anas sparsa) is a species of duck of the genus Anas. It is genetically closest to the mallard group, but displays some peculiarities in its behaviour and plumage (as far as they can be discerned); pending further research, it is therefore placed in the subgenus Melananas. It is a very different species from the duck of the same name in North America. Both sexes look alike, with a very plain, dark plumage and no eclipse. As they prefer flowing water, these ducks are normally found on streams or small rivers, and are typically seen in pairs, never in flocks. They can also dive quickly for food.

This species has an extremely large range and can be found in Angola, Botswana, Burundi, Cameroon, The Democratic Republic of the Congo, Equatorial Guinea, Ethiopia, Gabon, Guinea, Kenya, Malawi, Mozambique, Namibia, Nigeria, Rwanda, South Africa, South Sudan, Sudan, Tanzania, Uganda, Zambia, and Zimbabwe. African black ducks from South Africa have blue-black bills, while those from the north of the species’ vast range have pink bills. They have been found breeding at up to 14,000 ft in Ethiopia. It is a black duck with a dark bill, orange legs and feet, and pronounced white markings on its back. Sometimes, particularly in flight, a purple-blue speculum is visible. It lives in southern and central Africa. The subspecies A. s. leucostigma is also known as the black river duck, West African black duck, or Ethiopian black duck. It is a medium-sized duck, 48–57 cm in length, with the male the larger. This species prefers shallow, fast-flowing rivers and streams with rocky substrates, particularly in forested and mountainous areas up to 4,250 m (13,944 ft).
African Black Duck
These interesting ducks, uncommon in captivity, have a reputation for being shy and elusive. They are also hostile towards other ducks. With shallow water and plenty of shelter, they are best kept in an enclosure of their own. They are hard to breed but can use nest boxes at ground level. The species is diurnal, normally resting at night and feeding, sleeping, and preening during the daytime hours. The African black duck is primarily found in eastern and southern sub-Saharan Africa, from South Africa north to South Sudan and Ethiopia, with outlying populations in western equatorial Africa, in southeast Nigeria, Cameroon and Gabon. This shy but territorial duck's plumage is mainly black with white markings on the back. The feet are yellow-orange. It is a medium-sized duck, with the male being a little bigger than the female. Depending on location, this duck breeds throughout the year.

It typically feeds on larvae and pupae, which are usually found under rocks, as well as on aquatic animals, plant material, seeds, small fish, snails, and crabs. The typical clutch consists of 4 to 8 eggs, which the hen incubates for approximately 30 days. The young leave when they are around 86 days old; the father does not take part in raising the chicks. In the wooded hills of Africa, this duck keeps to the water and conceals its nest near flowing water, building its cup-shaped nest from driftwood and matted grass. While the nest is built near water, it is still on the ground and above flood level. The species is endangered in Kenya by deforestation. Since these ducks are river specialists, they are susceptible to habitat loss through river degradation, such as dam construction, water extraction, siltation, runoff, and the clearing of riparian vegetation.
|
<urn:uuid:ceacbb7b-bd10-4e97-a5dc-af62b7c2e7a6>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.59375,
"fasttext_score": 0.0589369535446167,
"language": "en",
"language_score": 0.9590190052986145,
"url": "https://www.assignmentpoint.com/other/a-beautiful-bird-african-black-duck.html"
}
|
How voles warn each other of danger
This 2018 video says about itself:
Ever wanted to know what the differences are between a Bank Vole and a Wood Mouse? In short, the vole has smaller eyes, smaller ears, a shorter tail and is a bit less agile than the mouse.
From the University of Jyväskylä – Jyväskylän yliopisto in Finland:
The smell of fear warns other voles
Both the direct predator odor and the alarm pheromone caused changes in the voles
April 20, 2020
Many encounters between predators and prey take place in dense vegetation. Predators lurk and wait for the best moment to attack, but are seldom visible. For a prey animal, the smell of a predator is one of many signals for danger. The studies in Thorbjörn Sievert’s dissertation showed that prey individuals can communicate with each other about the presence of a predator. An individual that has been attacked or chased by a predator can signal danger with its body odour, i.e. alarm pheromones. The studies showed that the alarm pheromone caused different responses in vole behaviour and reproduction compared to the direct predator odour. Fights for survival regularly take place in the wild: when hares smell a lynx preparing to ambush, they increase their vigilance and flee. When a bank vole detects the characteristic smell of the weasel, it changes its behaviour.
Studies in Thorbjörn Sievert’s dissertation compared the effects of a direct signal from a predator, the smell of the least weasel, and an indirect signal, the “smell of fear” secreted by a vole that has encountered a weasel, on behaviour and reproduction of the bank vole.
The studies showed that the alarm pheromone and predator odour had different effects on vole behaviour and reproduction. Thus, the alarm pheromone appeared to contain different information about the nature or quality of the threat. When a vole encounters a predator, the secreted odours signal an acutely increased threat level, resulting in changes in behaviour and reproduction in voles receiving the message. While both the direct predator odour and the alarm pheromone caused changes in the voles, the responses they elicited differed from each other.
When confronted with the “smell of fear”, female voles increased their reproduction despite the increased predation risk, and probably the decreased probability of survival. This is consistent with the so-called terminal investment hypothesis, which assumes that it is favourable for an individual to maximise its efforts to reproduce when its own survival chances are low.
In the biochemical part of the dissertation, the compounds conveying the information in bank vole alarm pheromone have been identified for the first time. The study provides new insights for the study of mammalian predator-prey interactions, especially for a more in-depth focus of the effects caused by the threat.
The research in this dissertation was carried out partly in laboratory conditions as well as in semi-natural outdoor enclosures at the Konnevesi Research Station, part of the University of Jyväskylä.
|
<urn:uuid:2ba695bc-253d-4107-90f8-0e2a863fe09a>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.609375,
"fasttext_score": 0.036578476428985596,
"language": "en",
"language_score": 0.9056267142295837,
"url": "https://dearkitty1.wordpress.com/2020/04/21/how-voles-warn-each-other-of-danger/"
}
|
Eremophila jucunda subsp. jucunda facts for kids
Quick facts for kids
Eremophila jucunda subsp. jucunda
E. jucunda jucunda leaves and flower
Scientific classification
Kingdom: Plantae
Clade: Tracheophytes
Clade: Angiosperms
Clade: Eudicots
Clade: Asterids
Order: Lamiales
Family: Scrophulariaceae
Genus: Eremophila
Subspecies: E. j. subsp. jucunda
Trinomial name
Eremophila jucunda subsp. jucunda
Eremophila jucunda subsp. jucunda is a plant in the figwort family, Scrophulariaceae, and is endemic to Western Australia. It is a small shrub with hairy leaves and white to violet flowers, often growing on stony hillsides. It is similar to subspecies pulcherrima but is distinguished from it by its yellow new growth and more southerly distribution.
Eremophila jucunda subsp. jucunda is a shrub which usually grows to a height of 0.2–1 metre (0.7–3 ft). The stems and branches are hairy and the leaves are densely arranged near the ends of branches, lance-shaped to egg-shaped, 8–20 millimetres (0.3–0.8 in) long and 2–6 millimetres (0.08–0.2 in) wide. The young leaves and branches are bright yellow.
The flowers are white or lilac to purple and occur singly in the leaf axils on flower stalks 3–9 millimetres (0.1–0.4 in) long. There are 5 sepals which are linear to lance-shaped, 9–17 millimetres (0.4–0.7 in) long and 1–3 millimetres (0.04–0.1 in) wide. The 5 petals form a tube 17–29 millimetres (0.7–1 in) long which is more or less hairy on the outer surface. Flowering occurs from July to September and is followed by fruit which are oval to cone-shaped and 5–9 millimetres (0.2–0.4 in) long.
E. jucunda jucunda (habit) growing 55 km east of Meekatharra
Taxonomy and naming
Eremophila jucunda subsp. jucunda is an autonym and therefore the taxonomy is the same as for Eremophila jucunda.
Distribution and habitat
Eremophila jucunda subsp. jucunda grows on stony flats or hillsides, often in mulga woodland. It occurs in a broad area between Sandstone and Mount Vernon.
Eremophila jucunda subspecies jucunda is classified as "not threatened" by the Western Australian Government Department of Parks and Wildlife.
|
<urn:uuid:6bde8969-7a29-47f9-a84c-ee908501dbbf>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.828125,
"fasttext_score": 0.03409498929977417,
"language": "en",
"language_score": 0.8530495166778564,
"url": "https://kids.kiddle.co/Eremophila_jucunda_subsp._jucunda"
}
|
Question: What Are Unique Features?
What are the elements of a poem?
Elements: Poetry.
What are some unique features?
15 unique body features almost no one has: fingerprints, light hair, a single palmar crease, sneezing, a hair whorl, Morton’s toe, blue eyes (it has been suggested that the blue eye color is due to a mutation in the HERC2 gene that leads to reduced melanin production in the iris), and no wisdom teeth.
What are the three unique features?
Three unique features of Earth are: (1) it is located at an optimum distance from the Sun, hence it is neither too cold nor too hot; (2) it is a watery planet, with 70% of the earth’s surface covered by water; (3) it has an atmosphere, which receives heat from the sun by solar radiation and loses heat by earth’s radiation.
What are the unique features of poetry?
One of the characteristics of poetry is that it is a unique language that combines and uses words to convey meaning and communicate ideas, feelings, sounds, gestures, signs, and symbols. It is a wisdom language because it relates the experiences and observations of human life and the universe around us.
How do you show your uniqueness?
Five ways to discover your uniqueness: ask your friends and family why they like hanging out with you; note which of your attributes others point out to you; write about the things you love to do; notice what makes you feel authentic; and imagine your perfect work day.
What is the unique feature of fiction?
Unlike poetry, it is more structured, following proper grammatical patterns and correct mechanics. A fictional work may incorporate fantastical and imaginary ideas from everyday life. It comprises some important elements, such as plot, exposition, foreshadowing, rising action, climax, falling action, and resolution.
What are the unique features of non fiction?
Unique Features of Creative Nonfiction Literary nonfiction is unique because it creates an interesting story with plot, setting, and characters through real events. This type of writing places emphasis on tone and storytelling rather than just conveying information.
What do unique features mean?
A distinctive feature is defined as something unique or different that sets someone or something apart from the rest. An example of a distinctive feature is striking blue eyes. Another example of a distinctive feature is an easy-to-use computer operating system.
What does unique mean?
What is the most unique feature of the earth?
Extensive continental structure; plate tectonic activity and volcanism; liquid water covering most of the surface; an oxygen-rich atmosphere; a relatively strong magnetic field; life; intelligent life!
What are body features?
Physical characteristics are defining traits or features of your body. These are aspects that are visually apparent, knowing nothing else about the person. To get good examples of physical characteristics, you should look at a person’s face, how tall they are, and what they are wearing.
What defines a poem?
How can I be unique?
Is unique a compliment?
You are unique. In a world full of copycats and wannabes, being unique is one of the best compliments you can get. It means that you are the kind of person who does not settle for the status quo. Instead, you are someone who is not afraid to be yourself, even if that makes you a little different from everyone else.
What are the unique features of the earth necessary to support life?
What are the rarest features?
What are the main features of a poem?
Terms in this set (5): Rhyme (some poems use rhyming words to create a certain effect); Rhythm (sometimes poets use repetition of sounds or patterns to create a musical effect in their poems); Figurative Language (figurative language is often found in poetry); Shape; Mood.
Does Unique mean special?
Hi Mia. If something is unique, there is only one of it; if things are special, there can be more than one. If someone says that you are, or something is, ‘unique’, they are telling you that there is no-one/nothing else like you/it – or even that they have never encountered anyone/anything like you/it before.
|
<urn:uuid:ac035afa-6cc6-4066-a48c-f5fcf359764f>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.546875,
"fasttext_score": 0.8086956739425659,
"language": "en",
"language_score": 0.9301028847694397,
"url": "https://webet4.com/qa/question-what-are-unique-features.html"
}
|
The science behind horses’ coats and what a good shine means *H&H Plus*
Dr Rebecca Hamilton-Fletcher MRCVS investigates the science behind horses’ coats
An equine coat is socially significant, with a lustrous, glossy appearance indicating the health, virility and genetic fitness necessary for herd dominance. The coat also has other physiological roles that are important for a horse’s survival and can be manipulated through adaptation and management.
Hair is classified into three types: permanent, such as the mane, tail and feathers; tactile, such as the hairs on the muzzle and inside the ears; and temporary, referring to everything else. Each hair originates from a follicle under the skin’s surface, which is supported by a sebum-producing sebaceous gland plus a vascular, sensory and muscular system.
|
<urn:uuid:c718d67d-8c60-4b8c-8ee4-b2f2fb4ce1be>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.65625,
"fasttext_score": 0.77968829870224,
"language": "en",
"language_score": 0.9061790108680725,
"url": "https://www.horseandhound.co.uk/plus/vet-clinic/the-science-behind-horses-coats-and-what-a-good-shine-means-hh-plus-706340"
}
|
Two grebes, breast to breast, holding weeds in their beaks
A pair of great crested grebes performing a weed dance © dim.vil/Shutterstock
Best foot forward: eight animals that dance to impress
There are relatively few animals that can truly move to a beat, but several do move in a dance-like way, sometimes showing off some complex choreography.
Size, colour and shape can all be reasons for animals to choose one individual over another as a mate. But in some species, behaviours that look like intricate dance steps play a very important role.
Greater sage-grouse
A greater sage-grouse male with its gular sacs inflated
A male greater sage-grouse with its bright yellow gular sacs on show © Stephen Torbit/USFWS via Flickr (CC BY 2.0)
The greater sage-grouse (Centrocercus urophasianus) is a mottled grey-brown bird found in shrubland in southeastern Canada and the western USA.
Male greater sage-grouse have an energetic courtship ritual. In spring, the birds congregate on open ground, known as a lek, where they compete for females by performing a strutting display. As the male inflates them, two yellow air sacs (gular sacs) emerge from his fluffy collar of white feathers, and he makes an unusual popping and whistling sound. The males repeat their body-popping display in bouts of movement and sound to attract nearby female grouse.
The mating season is short and female sage grouse typically only attend a lek for 2-3 days each spring. The hens visit several male grouse before choosing an acceptable mate and then leave for nesting grounds.
A Boleophthalmus boddarti mudskipper
Mudskippers are known for their lofty leaps, which they use to attract the attention of females © JJ Harrison via Wikimedia Commons (CC BY-SA 3.0)
While most fish are entirely dependent on water, mudskippers are unusual in that they are amphibious, typically spending their time in intertidal habitats.
Mudskippers live in muddy burrows, although their performances take place outside. Male mudskippers use several moves to try and impress females, including twirling their tails, arching their bodies, displaying their dorsal fin and snout touching (where the male touches the female on her side). They are also well known for their impressive leaps into the air. Body and fin colouration may also influence a female's interest.
While the female's role in courtship is passive, if she is suitably impressed by a male's display, she follows him back to the nest burrow he has built.
Great crested grebes
A pair of great crested grebes
A pair of great crested grebes © Ed Dunens via Flickr (CC BY 2.0)
The courtship dances of the great crested grebe (Podiceps cristatus) are sometimes described as water ballet.
The male and female meet on the water and perform a serene dance together, which is thought to strengthen their bond. They begin with head shakes and running their bills through the feathers on their backs, which is known as bob-preening.
Next they begin a weed dance. Both birds dive deep underwater, scooping up a beakful of weeds. When they emerge, they swim towards each other and rear up breast to breast, still clutching their weeds, before eventually settling back on the water for more head shaking.
There are 22 known species of grebe, and some have even more elaborate courtship dances.
A pair of Clark's grebes performing a courtship dance
A pair of Clark's grebes rushing across the water's surface together © Dave Menke/USFWS via Flickr (CC BY 2.0)
Clark's grebes (Aechmophorus clarkii) and western grebes (Aechmophorus occidentalis) are both found in North America and perform similar ceremonies. Like the great crested grebe, there is bob-preening and head shaking, but the North American birds' standout move is a dramatic side-by-side rush across the water's surface. As they move, they hold themselves upright, furiously paddling their feet and making it appear as though they are walking on water rather than swimming.
The hooded grebe (Podiceps gallardoi) is a critically endangered Patagonian species. Its dance is one of the more spectacular, comprising several unusual moves. Their frantic head-bobbing and bellyflop-like dives forward into the water may look especially bizarre to an outside observer.
A male red-capped manakin
Male red-capped manakins are known for their moonwalk-like dance © Ben Keen via Wikimedia Commons (CC BY-SA 4.0)
The red-capped manakin (Ceratopipra mentalis), found in humid forests from Mexico to Peru, has an instantly recognisable move in its repertoire: males will moonwalk along branches to impress females.
There are about 60 known species of manakins and many of them exhibit courtship behaviours, including dance-like movements and sounds, from pops to violin-like noises made with modified wing feathers.
In some manakin species, such as the blue-backed manakin (Chiroxiphia pareola), males don't work alone. Instead, they perform group routines of two or more males, singing and dancing in a coordinated way on display perches.
Like a number of other birds, a male manakin's traits and courtship dance are likely the only cues a female manakin has to assess his quality - after mating, male manakins don't offer any direct benefits. The female will build her nest and raise their young alone.
Smooth newts
A male smooth newt swimming
Male smooth newts fan their tails to attract a female's attention © Mark Hofstetter via Wikimedia Commons (CC BY-SA 3.0)
Smooth newts (Lissotriton vulgaris), also known as common newts, are found across Europe and western and central Asia, and are one of the UK's three newt species. For most of the year, males and females look quite alike, but in the breeding season, males develop a crest and their colours become more vivid.
To attract a female, a male newt has to first get her attention, so he places himself in front of her. If she doesn't swim away, the male folds his tail along his body in a U shape and waves it quickly. Although this may look like a fancy fan dance, this movement is actually used to waft pheromones toward the female behind him.
If the female newt is still engaged, the male spins around and backs up, depositing a spermatophore for the female to take up to fertilise her eggs. If this isn't successful, however, the male newt has to perform his routine from the beginning.
Lesser florican
A male lesser florican mid-leap
Lesser floricans can repeatedly leap up to two metres into the air © Koshy Koshy via Flickr (CC BY 2.0)
The lesser florican (Sypheotides indicus) is found in grasslands in India and is known for the males' dramatic, fluttering leaps to attract the attention of females. Lesser floricans stand at about 50 centimetres tall and will spring up to two metres into the air.
With a flurry of flapping, the bird leaps and throws its head back. At the top of its jump, it pulls in its wings and begins to fall, parachuting back to the ground. A 1985 study found that males typically leap once per minute when displaying. Nearby females appeared to cause them to jump more frequently.
The lesser florican breeds during monsoon season. Over a three-month period it can spend about a third of each day displaying, leaping around 400 times.
Weedy seadragons
A weedy seadragon
Weedy seadragons waltz serenely through the water together © Katieleeosborne via Wikimedia Commons (CC BY-SA 4.0)
The weedy seadragon (Phyllopteryx taeniolatus) gets its name from the leaf-like appendages on its body. These serve as camouflage in the rocky reef and seagrass habitats on the south coast of Australia.
Male and female seadragons perform their serene courtship ritual together, swimming side by side and mirroring each other's body movements. Their slow dance usually takes place in spring when the evening light begins to fade.
Like seahorses, the male seadragon will become responsible for carrying the female's eggs until they hatch. Male seadragons don't have a pouch like seahorses do, however. Instead the eggs are attached to an area on the underside of the tail.
A Lawes's parotia specimen
Male parotias are known for their ballerina dances, which they use to impress nearby females
Birds-of-paradise are a family of birds found in dense rainforests in Papua New Guinea and eastern Australia. The males are often eccentric-looking and many of the 42 species perform intricate sets of movements that look like dances. Each species has its own set of moves, and it can take years for the birds to fully master their choreography.
Parotias, also sometimes called six-plumed birds-of-paradise, such as Carola's parotia (Parotia carolae) or Lawes's parotia (Parotia lawesii), use the ground as a dancefloor and have some of the most complex sets of steps of any birds-of-paradise, with several elaborate moves performed in a specific order. Parotias perform what is sometimes called the ballerina dance. The male spreads his feathers out like a tutu, dances from side to side and waves his head to try and impress the females looking down at him.
A watercolour of a greater bird-of-paradise
A watercolour of a greater bird-of-paradise from the John Reeves Collection of Zoological Drawings from Canton, China. All rights reserved.
Greater birds-of-paradise (Paradisaea apoda) perform higher in the canopy using a collection of branches for their display court. They use a variety of calls during their display and complete several bows and poses before the dance. The dance involves shuffling and bouncing along the branches of the court with their long, colourful plumes cascading forwards over their backs.
A 2018 study found that the height that birds-of-paradise perform at may influence the complexity of their displays. Those that perform on the ground tend to have larger behavioural repertoires, whereas those that display in the canopy rely more on sounds.
|
<urn:uuid:6f4c5bb9-790c-4e38-ac50-cfc89ad79d48>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.578125,
"fasttext_score": 0.15675586462020874,
"language": "en",
"language_score": 0.9525764584541321,
"url": "https://www.nhm.ac.uk/discover/animals-that-dance-to-impress.html"
}
|
The flashcards below were created by user aphy101 on FreezingBlue Flashcards.
1. List the accessory structures of the eye.
Eyelids, eyelashes, superficial epithelium, and the structures of the lacrimal apparatus
2. This is the transparent area on the anterior surface of the eye through which light travels to enter the eye.
Cornea
3. This is the opening at the center of the colored iris through which light passes into the eye after it passes the cornea.
Pupil
4. This is the covering of the inner surface of the eyelids and outer surface of the eye.
Conjunctiva
5. This structure produces, distributes, and removes tears.
Lacrimal Apparatus
6. The lacrimal apparatus is composed of what 6 structures?
• 1) Lacrimal Gland
• 2) Lacrimal Puncta
• 3) Lacrimal Canaliculi
• 4) Lacrimal Sac
• 5) Nasolacrimal Duct
• 6) Tear Ducts
7. This structure produces tears which lubricate, nourish, and oxygenate the corneal cells.
Lacrimal Gland
8. This is an antibacterial enzyme found in tears.
Lysozyme
9. This structure delivers tears from the lacrimal gland to the space behind the upper eyelid.
Tear Ducts
10. This structure consists of 2 small pores that drain the lacrimal lake.
Lacrimal Puncta
11. This structure is a small canal that connects the lacrimal puncta to the lacrimal sac.
Lacrimal Canaliculi
12. This structure is a small chamber that nestles within the lacrimal sulcus of the orbit.
Lacrimal Sac
13. This structure originates at the inferior tip of the lacrimal sac and allows tears to pass through it to the nasal cavity.
Nasolacrimal Duct
14. What is the name of the condition involving inflammation of the conjunctiva?
Conjunctivitis (or Pinkeye)
15. The wall of the eye has 3 layers, what are they?
1) Fibrous 2) Vascular 3) Inner
16. The outermost layer of the eyeball, which consists of the cornea and sclera.
Fibrous Layer
17. What are the 3 main functions of the fibrous layer of the eyeball?
• 1) Supports and protects
• 2) Attachment site for the extrinsic eye muscles
• 3) Contains cornea, whose curvature aids in focusing, and light first enters the eye through it
18. The layer of the eyeball that contains numerous blood vessels, lymphatic vessels, and smooth muscles of the eye.
Vascular Layer
19. What are the 4 main functions of the vascular layer of the eyeball?
• 1) A route for blood vessels and lymphatics that supply tissues
• 2) Regulating the amount of light that enters
• 3) Secreting and reabsorbing aqueous humor fluid
• 4) Controlling shape of the lens for focusing
20. The vascular layer is composed of what 3 structures?
1) Iris 2) Ciliary Body 3) Choroid
21. This structure in the vascular layer contains pigmented cells which give eye color, and smooth muscle fibers which control the size of the pupil.
Iris
22. This structure in the vascular layer is a thickened region that bulges into the eye and acts as an anchor for the suspensory ligaments which hold the lens in place.
Ciliary Body
23. This structure in the vascular layer is covered by the sclera, and has an extensive capillary network that delivers oxygen and nutrients to the neural tissue within the neural layer.
Choroid or Choroid Coat
24. The layer of the eyeball that is the innermost layer of the eye and is where light energy is gathered.
Inner Layer (or Retina)
25. Cells that are sensitive to light; located in the inner layer.
Photoreceptors
26. The ciliary body and lens divides the eye into 2 substructures, what are they?
• 1) Anterior Cavity (in front of lens)
• 2) Posterior Cavity (behind lens)
27. The anterior cavity is divided into what 2 structures?
• 1) Anterior Chamber (in front of iris)
• 2) Posterior Chamber (between iris and lens)
28. What type of fluid is found in the anterior cavity?
Aqueous Humor
29. What type of fluid is found in the posterior cavity?
Vitreous Humor
30. This fluid provides a route for nutrient and waste transport as well as cushioning; it also helps retain the eye's shape and retina position. It goes by 2 names depending on which cavity it is in.
Aqueous Humor (in anterior cavity) or Vitreous Humor (in posterior cavity)
Card Set
Mod 15.13-15.14, Eye
|
<urn:uuid:4b3dd449-29fd-4152-93ed-7b9adeab7ff4>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.53125,
"fasttext_score": 0.40196317434310913,
"language": "en",
"language_score": 0.8523303866386414,
"url": "https://freezingblue.com/flashcards/273123/preview/15-13-15-14"
}
|
Position and Orientation: Level 6
The key idea of position and orientation at level 6 is that the interactions between loci can be used to solve real problems.
Loci can be used to describe relationships in the real world; for instance, the cost of producing a certain number of units of a product can be described as an equation, which can be represented as a graph. Problems involving multiple criteria can be solved by using multiple loci and finding their intersections, as in the sketch below.
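As a concrete illustration (a made-up exercise, not one from the curriculum): suppose we want the points that are equidistant from A(0, 0) and B(4, 0) and also exactly 3 units from C(2, 1). Each condition is a locus, and solving the pair of equations finds their intersection, for example with a computer algebra system:

```python
# Intersection of two loci with sympy (hypothetical classroom example).
from sympy import symbols, Eq, solve

x, y = symbols("x y", real=True)

# Locus 1: points equidistant from A(0, 0) and B(4, 0); squaring both
# distances reduces this to the perpendicular bisector x = 2.
locus1 = Eq(x**2 + y**2, (x - 4)**2 + y**2)

# Locus 2: points at distance 3 from C(2, 1), i.e. a circle.
locus2 = Eq((x - 2)**2 + (y - 1)**2, 9)

print(solve([locus1, locus2], [x, y]))  # [(2, -2), (2, 4)]
```

The two solutions are the two points where the perpendicular bisector crosses the circle; a problem with an extra criterion would simply add a third locus.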
Diagrams give us a way to visualize these loci and enable us to see how different loci intersect both point- and area-wise.
This thread develops from the key idea of position and orientation level 5 by moving from constructing loci to finding points of intersection and areas in common between two loci. There is no further extension of these ideas as far as achievement objectives are concerned. However, the concept of locus occurs from time to time in areas such as calculus.
|
<urn:uuid:09bc133d-cf9e-4480-a684-708ce04ada65>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.546875,
"fasttext_score": 0.19620323181152344,
"language": "en",
"language_score": 0.9591132402420044,
"url": "https://nzmaths.co.nz/position-and-orientation-level-6"
}
|
On a bare rocky area the following steps are bound to occur. The first organisms to inhabit the area would be lichens. Lichens are the only organisms that can subsist on bare rock; they break up the rock surface, which enables dust and humus particles to accumulate in crevices. This provides a foothold for mosses, and later ferns and grasses, which cause further breakdown of rock and build-up of humus. Small shrubs take root, followed by trees which can thrive on relatively poor soil (e.g. conifers); later, more exacting deciduous species come in.
At each stage a dominant species can be recognised; this species influences the environment in such a way as to make it suitable for another species, which then replaces the former as the dominant species, and so on. Eventually a climax community is established, and there is no further influx of new species. A climax community does not alter the environment in a manner injurious to itself. It remains balanced and self-perpetuating; however, major environmental changes may alter the climax, and such alteration might revert it to a more primitive stage.
|
<urn:uuid:8342bdef-28a0-4144-ab8e-e47f914d41db>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.984375,
"fasttext_score": 0.16934198141098022,
"language": "en",
"language_score": 0.952901303768158,
"url": "https://www.sulhazan.com/2017/05/major-steps-involved-in-succession-on.html"
}
|
Thopha saccata facts for kids
Quick facts for kids
Thopha saccata
T. saccata male specimen on display at the Australian Museum
Scientific classification
Thopha saccata range: most of the eastern coast of New South Wales, with some disjunct areas on the coast of Queensland
Synonyms:
• Tettigonia saccata Fabricius, 1803
• Cicada saccata (Fabricius, 1803)
Thopha saccata, commonly known as the double drummer, is the largest Australian species of cicada and reputedly the loudest insect in the world. Documented by the Danish zoologist Johan Christian Fabricius in 1803, it was the first described and named cicada native to Australia. Its common name comes from the large dark red-brown sac-like pockets that the adult male has on each side of its abdomen—the "double drums"—that are used to amplify the sound it produces.
Broad-headed compared with other cicadas, the double drummer is mostly brown with a black pattern across the back of its thorax, and has red-brown and black underparts. The sexes are similar in appearance, though the female lacks the male's tymbals and sac-like covers. Found in sclerophyll forest in Queensland and New South Wales, adult double drummers generally perch high in the branches of large eucalypts. They emerge from the ground where they have spent several years as nymphs from November until March, and live for another four to five weeks. They appear in great numbers in some years, yet are absent in others.
Danish naturalist Johan Christian Fabricius described the double drummer as Tettigonia saccata in 1803, the first description of an Australian cicada. The type locality was inexplicably and incorrectly recorded as China. It was placed in the new genus Thopha by French entomologists Charles Jean-Baptiste Amyot and Jean Guillaume Audinet-Serville in their 1843 work Histoire naturelle des insectes Hemipteres ("Natural History of Hemiptera Insects"). The generic name is derived from thoph (Hebrew: תּוֹף), meaning "drum". They maintained it as native to China. The specific name is derived from the Latin saccus, meaning "sac" or "bag", and more specifically "moneybag".
In 1838, Félix Édouard Guérin-Méneville pointed out that the double drummer is native to Australia and not China. John Obadiah Westwood designated it the type species of the genus in 1843, and it is also the type species for the tribe Thophini. The common name is derived from the male cicada's sac-like tymbal covers ("drums") on either side of its abdomen.
Face on, showing small red ocelli and eyes – southeast Queensland
Female T. saccata on carpet
The adult double drummer is the largest Australian species of cicada, the male and female averaging 4.75 and 5.12 cm (1.87 and 2.02 in) long respectively. The thorax is 2 cm (0.79 in) in diameter, its sides distended when compared with the thorax of other Australian cicadas. The forewings are 5–6.6 cm (2.0–2.6 in) long. The largest collected specimen has a wingspan of 15.1 cm (5.9 in), while the average is 13.3 cm (5.2 in). The average mass is 4.0 g (0.14 oz). The sexes have similar markings, but males have large dark red-brown sac-like structures on each side of their abdomens. These cover the tymbals—specialised structures composed of vertical ribs and a tymbal plate, which is buckled to produce the cicada's song. The head is much broader than that of other cicadas, and is broader than the pronotum behind it. The head, antennae and postclypeus are black, with a narrow broken pale brown transverse band across the vertex just behind the ocelli. The eyes are black in young adult cicadas upon emerging, but turn brown with black pseudopupils at the posterior edge of the eye. The ocelli are deep red. The proboscis is 1.26 cm (0.50 in) in length—very long compared with other Australian cicada species. The thorax is brown, becoming paler in older individuals. The pronotum is rusty brown with black anterior borders, while the mesonotum is a little paler with prominent black markings, with paired cone-shaped spots with bases towards the front on either side of a median stripe; lateral to these spots are a pair of markings resembling a "7" on the right hand side of the mesonotum and its reverse on the left. The abdomen is black between the tymbal covers and red-brown and black more posteriorly. The underparts of the double drummer are red-brown and black, and covered in fine silvery velvety hairs. The female's ovipositor is very long, measuring 1.76 cm (0.69 in). The wings are vitreous (transparent) with light brown veins. The legs are dark brown and have grey velvety hairs.
There is little variation in colour over its range, though occasional females are darker overall than average, with markings less prominent or absent. The double drummer is larger and darker overall than the northern double drummer (T. sessiliba); the latter has a white band on the abdomen, while the former has black markings on the leading edge (costa) of the forewing extending past the basal cell.
Male cicadas make a noise to attract females, which has been described as "the sound of summer". The song of the double drummer is extremely loud—reportedly the loudest sound of any insect—and can reach an earsplitting volume in excess of 120 dB if there are large numbers of double drummers at close range. Monotonous and dronelike, the song is said to resemble high-pitched bagpipes. The sound of the buckling of the tymbal plate resonates in an adjacent hollow chamber in the abdomen, as well as in the exterior air-filled sacs, which act as Helmholtz resonators.
Singing can cease and restart suddenly, either rarely or frequently, and often ends abruptly. The song has been described as "Tar-ran-tar-rar-tar-ran-tar-rar", and consists of a series of pulses emitted at a rate of 240–250 a second. The tymbal covers are much larger than other species and also make the call louder and send it in a particular direction. There are two distinct phases of song, which the double drummer switches between at irregular intervals. One phase is a continuous call that can last for several minutes; during this period the frequency varies between 5.5–6.2 kHz and 6.0–7.5 kHz 4–6 times a second. In the other phase, the song is interrupted by breaks of increasing frequency resulting in a staccato sound. These breaks can be mistaken for silence as the difference in volume is so great, though the song actually continues at a much lower volume. During this staccato phase, which lasts for several seconds, the frequency remains around 5.75–6.5 kHz. The frequency of the song is a high harmonic of the pulse repetition frequency, which makes for a particularly ringing sound. Double drummers congregate in groups to amplify their calls, which likely drives off potential bird predators. Male double drummers also emit a distress call—a sharp fragmented irregular noise—upon being seized by a predator.
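Taking the figures quoted above at face value (a carrier sweeping roughly 5.5–6.2 kHz a few times per second, pulsed at about 250 pulses per second), a crude imitation of the continuous phase of the song can be synthesized. This is an illustrative signal-processing sketch, not a bioacoustic model from the cited sources:

```python
# Crude synthesis of the double drummer's continuous song phase,
# using the frequency and pulse-rate figures quoted above.
import numpy as np

SAMPLE_RATE = 44100                                   # samples per second
t = np.arange(0, 2.0, 1.0 / SAMPLE_RATE)              # two seconds of signal

# Carrier sweeping between ~5.5 and ~6.2 kHz, five times per second.
inst_freq = 5850 + 350 * np.sin(2 * np.pi * 5 * t)    # instantaneous Hz
phase = 2 * np.pi * np.cumsum(inst_freq) / SAMPLE_RATE
carrier = np.sin(phase)

# Amplitude pulses at ~250 per second (the tymbal buckling rate).
pulses = (np.sin(2 * np.pi * 250 * t) > 0).astype(float)

song = carrier * pulses  # write to a WAV file or plot a spectrogram to inspect
```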
Life cycle
Pair of mating double drummers, Southeast Queensland
The narrow spindle-shaped eggs are laid in a series of slits cut by the mother's ovipositor in branches or twigs, usually of eucalypts. On average about twelve eggs are laid in each slit, for a total of several hundred. These cuts can cause significant damage to the bark of tender trees. The eggs all hatch around 70 days later—usually within a day or two of one another—but take longer in cold or dry conditions. The larvae then fall to the ground and burrow into the soil. Though the timing of the double drummer's life cycle is unknown, nymphs of cicadas in general then spend from four to six years underground. Unusually for Australian cicadas, double drummers emerge during the daytime. Generally emerging en masse, the nymphs are covered in mud. This mud remains on their exuviae, which emerging cicadas leave at the bases or in burnt-out hollows of eucalypts. Within a forest, successive broods may emerge in different locations each year. The cicada's body and wings desiccate and harden once free of the exuvia.
The adult lifespan of the double drummer is about four or five weeks. During this time, they mate and reproduce, and feed exclusively on sap of living trees, sucking it out through specialised mouthparts. Female cicadas die after laying their eggs.
Distribution and habitat
Blackbutt (Eucalyptus pilularis) in sclerophyll forest, Sydney
The double drummer has a disjunct distribution, found from northern tropical Queensland, near Shiptons Flat and Cooktown south to Ingham and Sarina, and then from Gympie in southeastern Queensland to Moruya in southern New South Wales. It is found in areas of higher elevation in the northern segment of its range, as the climate there is similar to that in southeast Queensland. Walter Wilson Froggatt and Robert John Tillyard erroneously included South Australia in its distribution.
Female T. saccata (behind, left) and male (front, right)
Adults are present from November to early March, prolific in some years and absent in others. They are found in dry sclerophyll forest, preferring to alight and feed on large eucalypts with diameters over 20 cm (7.9 in) and sparse foliage concentrated at a height between 10 and 25 m (33 and 82 ft), particularly rough-barked species, apples (Angophora) and Tristania. Associated trees include the grey box (Eucalyptus moluccana), snappy gum (E. racemosa) and narrow-leaved apple (Angophora bakeri) in a study at three sites in western Sydney. At Hawks Nest in coastal swampy sclerophyll woodland, adults were observed mainly on swamp mahogany (Eucalyptus robusta) and sometimes blackbutt (E. pilularis), as well as Allocasuarina littoralis and introduced pine (Pinus radiata). Nymphs feed primarily on the roots of eucalypts.
The double drummer has not adapted well to city life; distribution of the species in cities is limited to natural stands of large trees.
In hotter weather, double drummers perch on the upper branches of trees, while on overcast or rainy days, they may be found lower down on trunks near the ground. Double drummers on tree trunks are skittish, and can fly off en masse if disturbed. Relative to other Australian cicadas they have excellent perception, fly at a moderate cruising speed of 2.5 m/s (8.2 ft/s), with a similarly moderate maximum speed of 4.0 m/s (13 ft/s), and are exceptionally adept at landing. The double drummer has been known to fly out to sea, effectively on a one-way trip as their bodies have later been found washed up on beaches. A swarm of double drummers were reported 8 km (5.0 mi) off the coast of Sussex Inlet in January 1979, in and around the boat of a local fisherman.
As the adult cicadas emerge in the daytime, large numbers are consumed by birds. Thopha cicadas have also been found in the stomachs of foxes. The double drummer is one of the large cicada species preyed on by the cicada killer wasp (Exeirus lateritius), which stings and paralyses cicadas high in the trees. Their victims drop to the ground where the cicada-hunter mounts and carries them, pushing with its hind legs, sometimes over a distance of 100 m (330 ft). They are then shoved into the hunter's burrow, where the helpless cicada is placed on a shelf in an often extensive "catacomb", to form food-stock for the wasp grub growing from the eggs deposited within.
Interactions with humans
This illustration of Thopha saccata appeared in the 1885 Elementary Text-book of Entomology by William Forsell Kirby.
Schoolchildren climb trees to collect live cicadas and keep them as pets in shoeboxes. However, they cannot easily be kept for longer than a day or two, given that they need flowing sap for food. Live adults brought into classrooms by their captors would startle the class with their piercing sound. Poems dedicated to the double drummer appeared in the Catholic Press in 1933 and 1936, describing bird predation and its life cycle to children.
|
<urn:uuid:aef5b90a-46c4-4481-843a-4f0827343f8f>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.65625,
"fasttext_score": 0.07123172283172607,
"language": "en",
"language_score": 0.9351393580436707,
"url": "https://kids.kiddle.co/Thopha_saccata"
}
|
What is Peltandra?
L.K. Blackburn
Peltandra is a genus of flowering plants from North America in the Araceae family, known for their spiked inflorescence, meaning that the flowers all grow from one main stemmed axis. This is called a spadix, and it is usually contained within an overarching leaf, called a spathe. The genus includes three prominent species: Peltandra sagittifolia, Peltandra virginica and Peltandra primaeva.
The root system on these plants is rhizomatous or tuberous, horizontally spread out beneath the plant. The spathe enclosing the inflorescence can be brightly colored and contain small fruit. Collectively, as members of the Araceae family, these plants can be generally referred to in colloquial terms as aroids. Within the Araceae family, Peltandra belongs to the Aroideae subfamily and shares its defining trait of spiny pollen that lacks a tough outer wall.
One species of plant within the genus is Peltandra virginica, a marsh-dwelling plant that flowers from late spring to mid-summer. It is native to the northeastern and southeastern United States and to Canada, and it grows to about 2 feet (0.6 m) high. Within marshes, it favors shallow streams and muddy shorelines. It has a slim spadix within its spathe, typical of the plants of this genus. The common name of this species is arrow arum, and it produces green flowers.
Another species of the genus is Peltandra sagittifolia, or white arrow arum. This species is also sometimes known by the name Peltandra alba. It is found in the marshland, native to the eastern portion of the U.S. Like the Peltandra virginica, the leaves are shaped like arrows, but the flowers on Peltandra sagittifolia are white instead of green. As an aquatic plant with a tuberous root system, Peltandra sagittifolia is sometimes used in gardening to stabilize damp and unstable mud slopes.
Peltandra primaeva, another species, is thought to have been extinct since the Eocene epoch, 34 million years ago. Fossil remnants of the plant were found in North Dakota, and it is believed to have been native to North America like the rest of the genus. The fossil was placed in Peltandra as a result of vein structuring that is unique to plants of this genus.
These plants bloom generally in the summer months and need full sun to thrive. As marshland plants, they prefer to grow in standing or running water, with sand or clay soil. The root system on the plants allows a strong enough hold for growth in slow-moving streams. The leaves of the plants are coarse, and they often contain small berries.
|
<urn:uuid:89d4056e-a587-4acc-90a0-a277b83c7db2>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.578125,
"fasttext_score": 0.054208576679229736,
"language": "en",
"language_score": 0.937900960445404,
"url": "https://www.wise-geek.com/what-is-peltandra.htm"
}
|
Definitions for "opposite"
Keywords: twig, stem, whorl, stamen, buds
Arranged along a twig or shoot in pairs, with one on each side.
Situated on the other end of an imaginary line passing through or near the middle of an intervening space or object; -- of one object with respect to another; as, the office is on the opposite side of town; -- also used both to describe two objects with respect to each other; as, the stores were on opposite ends of the mall.
Extremely different; inconsistent; contrary; repugnant; antagonistic.
One who opposes; an opponent; an antagonist.
The additive opposite of b is -b. The additive opposite of -b is b.
The result of taking a number and changing its sign (e.g., the opposite of 5 is –5; the opposite of –12 is –(–12), or 12). A number and its opposite are equidistant from zero on a number line, but on opposite sides of zero. (See additive inverse.)
Keywords: plaza, stood, concert, hall, theater
Placed over against; standing or situated over against or in front; facing; -- often with to; as, a house opposite to the Exchange; the concert hall and the state theater stood opposite each other on the plaza.
Applied to the other of two things which are entirely different; other; as, the opposite sex; the opposite extreme; antonyms have opposite meanings.
Keywords: went, sue, direct, face, thought
en face — French for "facing; opposite", as used when asking for directions
Keywords: contestant, matched, you
a contestant that you are matched against
an action that interferes with you getting what you are fighting for
Keywords: sign, examples, number
The opposite of a number is the number with the opposite sign. Examples: The opposite of 3 is -3. The opposite of -5 is 5. The opposite of n is -n. The opposite of -2 is -(-2) = 2.
|
<urn:uuid:93f716ab-52a0-48a8-a8a4-23483d22ec18>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.578125,
"fasttext_score": 0.8517004251480103,
"language": "en",
"language_score": 0.9488128423690796,
"url": "https://metaglossary.com/define/opposite"
}
|
Cleveland Pioneers Persevered Through The Ague
Lorenzo Carter was one of Cleveland's pioneers who survived the ague. ["Sketches of Western Reserve Life" by Harvey Rice]
Long before today's COVID-19 crisis the people of Northeast Ohio had a different sickness to fight.
We're talking even before the Spanish flu of 1918 and the 1832 cholera epidemic.
Cleveland's earliest pioneers battled and eventually overcame a disease known as the ague.
When Moses Cleaveland and his team of surveyors arrived along the shores of Lake Erie in the summer of 1796, they stepped out near the mouth of the Cuyahoga River where Whiskey Island is today.
Sketch of the mouth of the Cuyahoga ca. 1800, drawn from memory by Allen Gaylord in 1860. [Western Reserve Historical Society]
The river channel that runs along Whiskey Island was silted up and closed off, according to John Grabowski, a Cleveland historian who works for both Case Western Reserve University and the Western Reserve Historical Society.
"The river was flowing through what is its current mouth, but it wasn't flowing very strongly. In the old channel they probably saw a lot of weeds, grass and whatever was growing at that time in this stagnant pool of water," Grabowski said.
Because the "crooked river" didn't run straight into Lake Erie, the river's mouth was like a swamp, which bred mosquitoes carrying malaria.
"It would be the chief impediment to the early settlement of Cleveland. That's the result of that water as people moved into the area very slowly really. In the late 1700s and early 1800s, many of them came down with something they called the ague, and it was a series of chills and sweats and fevers that would be cyclical. They would go on and on," Grabowski said.
Plan of the City of Cleveland by Seth Pease 1796 [Cleveland Memory Project]
It was as if the mouth of the river itself was sick and made anyone who lived near it gravely ill.
"We don't have good data on death rates or anything, but clearly quite a few people were taken by the malaria," said Bob Wheeler, a retired professor of history from Cleveland State University and co-author of "Cleveland: A Concise History." "Most of them did not die of it, but they were severely incapacitated."
In 1802, Connecticut landowner Jonathan Law visited the Western Reserve, and Grabowski said his diary explains how the anticipated growth of the settlement was stymied.
"Law looked around the young community, and he noted that there was a 'doleful cloud hanging over this settlement,'" Grabowski said.
Many of the city's earliest settlers moved away from the sluggish water at the river's mouth, but a few sturdy souls remained, led by a man named Lorenzo Carter.
"He began trading with Native Americans who lived on the west side of the Cuyahoga River, and he did a fairly robust business in that trade. He was a boat builder, and he owned land down there. Carter also, I think, had access to a whiskey still. He becomes sort of the John Wayne of Cleveland, if I can use that cliche? He hangs in there, that's why you can see a model of a replica, supposedly, of Lorenzo Carter's cabin down in the Flats today. So he stuck it out," Grabowski said.
A replica of Lorenzo Carter's cabin along the Cuyahoga River in Cleveland's Flats [Dave DeOreo / ideastream]
Carter had quite the reputation according to Wheeler.
"He was accused by the much more stalwart Connecticut Congregationalists of being sort of a rascal. But my sense is rascals often persist in this kind of environment if they're lucky enough to survive the disease, because they don't have the same demands that they make on their lives and their person," Wheeler said.
Grabowski agreed.
"If you want to make something special, and I think you'd probably be correct on this, of Lorenzo Carter... he was in the worst part of it, and he had a tough road to hoe," he said.
Carter and the other early settlers persisted and persevered through the ague to help found the city we know as Cleveland today, Wheeler said.
"They came out here, and they did their best to succeed by themselves, but persistence and also a strong constitution, which allowed many of them to escape or to persist after the ague bothered them or laid them down. In some cases by the way, sometimes these people would be in bed for six months with it. They couldn't work, they couldn't harvest, they couldn't do this or that. So it was a very challenging time," Wheeler said.
People also survived without medical attention, Grabowski said.
Meanwhile, Carter's log cabin in the Flats became the focal point of the settlement despite the ague.
"The first Fourth of July celebration is in Lorenzo Carter's cabin. The first dance in Cleveland is in Lorenzo Carter's cabin. It's kind of the social center of that time," Grabowski said.
But Cleveland could not prosper without changes to the Cuyahoga.
"What really needed to be done was the river needed to be cleaned up and straightened a little bit in the mouth so it would flow freely and the water wouldn't back up and continue to create this problem, and that took federal money. That's what the early power people in Cleveland lobbied for," Grabowski said.
Grabowski sees a parallel today.
"The only thing that really would clear this up for Cleveland though interestingly was federal action in the 1820s, which basically cleared the sand out of the mouth of the Cuyahoga River and began to allow it to be more free flowing or more rapidly flowing. That would begin to dispel some of the miasma that was around there," he said.
Wheeler sees a correlation, too.
"I would say almost everyone that I had contact with through diaries or letters was affected by the ague within the first two years of their arrival. Sometimes bouts of it would come back and then it would disappear, and then it would come back again. I would say I don't think there were many people who escaped it. I'm hoping more people escape COVID-19 of course," Wheeler said.
|
<urn:uuid:6522b476-f46d-4b1f-b3e8-22ec765ddea7>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.59375,
"fasttext_score": 0.2271362543106079,
"language": "en",
"language_score": 0.9847073554992676,
"url": "https://www.ideastream.org/news/cleveland-pioneers-persevered-through-the-ague"
}
|
The High German Consonant Shift
The High German Consonant Shift or Second Germanic Sound Shift (Zweite Deutsche Lautverschiebung) is a linguistic phenomenon which sets the High and Upper German Dialects apart from their Northern counterparts as well as from Dutch, English, Frisian and the Scandinavian Languages, or, in short, virtually all of the other Germanic languages.
German Dialects after WW2
German dialects after WW2. The yellow (Low German), blue (Dutch) and purple (Frisian) dialect areas were not affected by the second Germanic sound shift. Map by Rex Germanus, public domain, via Wikimedia Commons.
This article aims to deal with the origins and causes of this sound shift. To clarify, it should be stated at the outset that historical linguistics is a controversial field because of the high degree of reconstruction involved, meaning that some of the theories in the field are based on words that may have never existed. Nevertheless, we can assume that there is at least some degree of accuracy in the most well established of these reconstructions. Because the High German Consonant Shift occurred relatively recently (probably from around the year 600 CE onwards), and because of the comparatively large amounts of text available in this earliest form of the German language, this isn't a particularly big problem here. Caution is still advised, however: even if Old High German is known relatively well, its direct ancestor, Proto-Germanic, is mostly known by reconstruction and a few early runic inscriptions found across Northern and Central Europe. Furthermore, although sound shifts in general can be understood somewhat methodically and seem to follow rules (for the most part), it is rarely known what exactly "caused" any particular sound shift.
Thus it should be clear that the theories presented in this article are just that: Theories. And it remains doubtful, whether a consensus on this issue will ever be reached. Despite this, historical linguistics and how sound shifts shape a language remain fascinating topics, and theorizing about the causes that gave rise to the languages we speak today could yield a better understanding of the processes which led to the formation of these languages yet.
The High German Consonant Shift
As mentioned in the introduction, the HGCS occurred from about the year 600 onwards. Some of the major changes included a shift in pronunciation of the following sounds: p > f, pp > pf, t > ts (spelled "z" in modern German) or ss, k > x (spelled "ch" in modern German), b > p, d > t. Thus German has words like "Wasser" instead of English, Dutch and Low German "Water", or "Apfel" instead of "Appel/Apple", "sitzen" instead of "sit(en)", "Tag" instead of "Dag/Day".
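To make these correspondences concrete, here is a deliberately toy Python sketch that applies simplified, purely orthographic versions of the rules to a few unshifted West Germanic forms. Real sound change is conditioned by phonological context (position in the word, neighbouring sounds), so both the rule list and the spellings are illustrative assumptions, not a linguistic model.

```python
# Toy orthographic illustration of the High German Consonant Shift.
# Each rule maps an unshifted spelling fragment to its shifted counterpart.
SHIFTS = [
    ("pp", "pf"),  # appel  -> apfel   (pp > pf)
    ("tt", "tz"),  # sitten -> sitzen  (t > ts, spelled z/tz)
    ("t", "ss"),   # water  -> wasser  (t > ss between vowels)
    ("k", "ch"),   # maken  -> machen  (k > x, spelled ch)
    ("d", "t"),    # dag    -> tag     (d > t)
]

def shift(word: str) -> str:
    """Apply only the first matching rule -- enough for these examples."""
    for old, new in SHIFTS:
        if old in word:
            return word.replace(old, new)
    return word

for unshifted in ["appel", "water", "maken", "dag", "sitten"]:
    print(f"{unshifted:>7} -> {shift(unshifted)}")
```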
The Lombards
The HGCS's earliest attestation is found within Langobardian, an extinct Germanic language spoken by the Lombards or Langobards in the northern half of the Italian peninsula around 650 CE. Unfortunately, our knowledge about this language is too poor to even safely classify it as a West or East Germanic language, although most scholars nowadays believe it belongs to the former group and is probably most closely related to the Bavarian and Alemannic dialects of Southern Germany, Austria, and Switzerland. Interestingly, the Langobards, as well as the majority of the ancestors of the Bavarians and Alemanni, seem to have originated in the northeastern part of Magna Germania and probably belonged to a group known as Elbe Germanic. At first glance, it is tempting to assume that the HGCS was a strictly Elbe Germanic phenomenon and possibly was set in motion even before these peoples migrated southwards. This seems unlikely, however, seeing as runic finds from the Alemannic area dating to the 6th and early 7th century show no signs of this sound shift.
The Franks and Gallo-Romans
It has been postulated that the second sound shift originated with the Gallo-Roman population on the western banks of the River Rhine when the Franks settled there en masse early during the migration period. Apparently, these local Latin speakers had a hard time pronouncing the West Germanic language of the newcomers and changed some of the sounds subconsciously, as often happens as a result of creolization. With the increasing prestige and political influence as well as the military conquests of the Franks, these changes in pronunciation may have spread towards the territories of the Alemanni, Bajuwari, and Langobardi. If this scenario were true, however, we would expect to find the earliest attestations of the HGCS among the Franconians, followed by the Alemanni and Bajuwari, and finally the Langobardi. We have seen, however, that the opposite is true and the characteristic High German sound shifts are attested in Langobardian first.
Frankish Empire
The Frankish Empire and its provinces in the 8th century. Source: (48) Pinterest
The Origin of the High German Consonant Shift
It thus seems more likely that the sound shift in fact originated with the Langobardians in Northern Italy, especially seeing as the indigenous population is Gallo-Roman in origin as well. From there on it spread northwards and eventually came to a standstill in Northern Germany. The fact that there was a substantial Gallo-Roman population left in Southern Germany probably helped intensify these developments, although the proportion seems to have been smaller than in the far south, as can be seen by the absence of some of these changes. This would also explain why the Swiss regions closest to modern Lombardy show the greatest amount of change.
Intensity Map Consonant Shift
The intensity of the HGCS with dark green being the most intense and light yellow being the least intense. Source: Zweite Lautverschiebung in Deutsch | Schülerlexikon | Lernhelfer
The Lombards, alongside the Franks, were also one of the first Continental West Germanic tribes to convert to Christianity. This may have helped in the spread of the sound shifts to monasteries across the Germanic world as well, with the notable exception of the Saxons, who, at the time, still firmly believed in the old gods.
The political influence of the Lombards in the 6th and 7th centuries could have also played a role in the development of the HGCS in Southern Germany. Political unions such as weddings are attested between the Lombards and the Bajuwari, which may be one of the reasons why modern Bavarian retained the shift from b to p, other than the rest of the High and Central German dialects.
As I have stated at the beginning of this article, this theory can't be proven, as there simply isn't enough Langobardian material left. Given what little knowledge we possess, however, I assume this to be the most likely scenario. This may change with research over the coming years and decades. Especially interesting would be a genetic analysis of past and present populations of the southern German-speaking and northern Italian regions to establish an estimate of the Gallo-Roman vs. the Germanic populations. To my knowledge, there hasn't been any large-scale study in this field regarding Southern Germany as of yet, however. Thus we have to rely solely on the historical, archaeological, and linguistic evidence, a small part of which has been presented here.
|
<urn:uuid:052ce9f0-acf7-444c-ad20-0b0d71127775>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.90625,
"fasttext_score": 0.07759284973144531,
"language": "en",
"language_score": 0.9521412253379822,
"url": "https://european-origins.com/2020/12/23/high-german-consonant-shift/"
}
|
Science Worksheets and Study Guides First Grade. Life cycles
The resources above correspond to the standards listed below:
Maryland College and Career-Ready Standards
3.B.1. Cells: Describe evidence from investigations that living things are made of parts too small to be seen with the unaided eye.
3.B.1.a. Use magnifying instruments to observe parts of a variety of living things, such as leaves, seeds, insects, worms, etc. to describe (drawing or text) parts seen with the magnifier.
|
<urn:uuid:605888d9-dbe4-4e28-bbca-cf337cbce0c3>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.734375,
"fasttext_score": 0.9709178805351257,
"language": "en",
"language_score": 0.9022257924079895,
"url": "https://newpathworksheets.com/science/grade-1/life-cycles/maryland-common-core-standards"
}
|
What is the OSI Model? 7 layers explained in detail
By Amanda Fawcett · 6 min read
Networking is a vast topic, and the OSI model helps us make sense of it. The Open Systems Interconnection (OSI) model is a conceptual framework that describes the functions of a networking or telecommunication system in seven layers.
The OSI model describes how a network functions and standardizes the way that systems send information to one another. In this article, we will introduce you to the OSI model and discuss each layer in detail.
What is the OSI Model?
Developed in 1984 by the International Organization for Standardization (ISO), the Open Systems Interconnection (OSI) model is a seven-layer model used to describe networking connections. It is now a common framework for teaching and learning networking concepts.
The OSI model specifies how information is transmitted from a network device like a router to its destination through a physical medium, and how that information interacts with the application. In other words, it provides a standard for different systems to communicate with each other.
We will go through the different layers in detail below, but keep in mind that the lower layers (layers 1-4) deal with transport issues, like the physical characteristics of the network and data transmission.
The upper layers (layers 5-7) deal with application issues, like data formatting and user interfacing.
Why should you learn this?
Some people argue that the OSI model is obsolete because it is less important than the four layers of the TCP/IP model, but this is not true. The OSI model is essential theory for understanding modern computer network technology in a connection-oriented way.
Most discussions on network communication include references to the OSI model and its conceptual framework.
The purpose of this model is to enhance interoperability and functionality between different vendors and connectors. It describes the functions of a networking system. From a design point of view, it divides larger tasks into smaller, more manageable ones.
The OSI model allows network administrators to focus on the design of particular layers. It is also useful when troubleshooting network problems by breaking them down and isolating the source.
Layer 1: Physical Layer
At the lowest layer of the OSI reference model, the physical layer is responsible for transmitting unstructured data bits across the network between the physical layers of the sending and receiving devices. In other words, it takes care of the transmission of raw bit streams.
The physical layer may include physical resources such as cables, modems, network adapters, and hubs.
Layer 2: Data Link Layer
The data link layer corrects any errors that may have occurred at the physical layer. It ensures that any data transfer is error-free between nodes over the physical layer. It is responsible for reliable transmission of data frames between connected nodes.
The data is packaged into frames here and transferred node-to-node. The data link layer has two sub-layers:
• Media Access Control (MAC): The MAC sub-layer is responsible for flow control and for multiplexing device transmissions over the network.
• Logical Link Control (LLC): The LLC sub-layer provides error control and flow control over the physical medium and identifies line protocols.
Layer 3: Network Layer
The network layer receives frames from the data link layer and delivers them to the intended destination based on the addresses inside the frame. It also handles packet routing. The network layer locates destinations using logical addresses like the IP. Routers are a crucial component at this layer as they route information to where it needs to go between different networks.
The main functions of the Network layer are:
• Routing: The network layer protocols determine which routes from source to destination.
• Logical Addressing: The network layer defines an addressing scheme to uniquely identify devices. The network layer places the IP addresses from the sender and receiver in the header.
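As a concrete illustration of logical addressing, the short sketch below uses Python's standard ipaddress module to make the basic Layer 3 decision: is the destination on my local network, or does the packet have to be handed to a router? The addresses are made-up examples.

```python
# Layer 3 in miniature: route based on logical (IP) addresses.
import ipaddress

local_net = ipaddress.ip_network("192.168.1.0/24")  # example local network

for dst in ["192.168.1.42", "8.8.8.8"]:
    addr = ipaddress.ip_address(dst)
    route = "deliver on local network" if addr in local_net else "forward to router"
    print(f"{dst:>12} -> {route}")
```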
Layer 4: Transport Layer
The transport layer is responsible for the delivery, error checking, flow control, and sequencing of data packets. It regulates the sequencing, size, and transfer of data between systems and hosts. It gets the data from the session layer and breaks it into transportable segments.
Two examples of transport-layer protocols are UDP (User Datagram Protocol) and TCP (Transmission Control Protocol), both built on top of the Internet Protocol (IP), which operates at layer 3.
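From a programmer's point of view, Layer 4 is usually the first layer touched directly. Here is a minimal sketch using Python's standard socket module; the socket type is what selects the transport protocol.

```python
import socket

# SOCK_STREAM selects TCP: a reliable, ordered byte stream where the kernel
# handles sequencing, acknowledgements, and retransmission for you.
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SOCK_DGRAM selects UDP: independent datagrams with no delivery guarantees.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp_sock.type, udp_sock.type)  # SocketKind.SOCK_STREAM SocketKind.SOCK_DGRAM
tcp_sock.close()
udp_sock.close()
```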
Layer 5: Session Layer
The session layer will create communication channels, called sessions, between different devices. This layer is responsible for opening those sessions and ensuring that they're functional during data transfer.
In other words, the session layer is responsible for establishing, managing, and terminating communication sessions between the application-oriented layers above it and the transport layer below. It is also responsible for authentication and reconnections, and it can set checkpoints during a data transfer, so that if the transfer is interrupted, the session can resume from the last checkpoint rather than starting over.
Layer 6: Presentation Layer
The presentation layer is responsible for ensuring that the data is understandable for the end system or useful for later stages. It translates or formats data based on the application's syntax or semantics. It also manages any encryption or decryption required by the application layer. It is also called the syntax layer.
Layer 7: Application Layer
The application layer is where the user directly interacts with a software application, so it is closest to the end user. When the user wants to transmit files or pictures, this layer interacts with the application communicating with the network. The application layer identifies resources and communication partners and synchronizes communication.
Other functions of the application layer are the Network Virtual Terminal, FTAM (File Transfer, Access, and Management), and mail and directory services. The protocol used depends on the information the user wants to send. Some common protocols include:
• POP3 or SMTP for email
• FTP for file transfers
• Telnet for controlling remote devices
Examples of communications that use Layer 7 are web browsers (Chrome, Firefox, Safari).
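To see Layer 7 riding on Layer 4, here is a minimal sketch that speaks HTTP, an application-layer protocol, over a plain TCP connection instead of letting a browser or an HTTP library do it. The host is just an illustrative example and is assumed to be reachable.

```python
import socket

HOST = "example.com"  # illustrative host, assumed reachable

# Layers 3-4: open a TCP connection to port 80.
with socket.create_connection((HOST, 80)) as sock:
    # Layer 7: the payload is text that follows the HTTP/1.1 protocol.
    request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))

    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0])  # status line, e.g. b'HTTP/1.1 200 OK'
```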
Data flow example
Here is how data flows through the OSI model. Let's say you send an email to a friend. Your email passes through the application layer to the presentation layer. This layer will compress your data.
Next, the session layer initializes communication. The data is then segmented in the transport layer, broken up into packets in the network layer, and then into frames at the data link layer. It will then be sent to the physical layer, where it is converted to 0s and 1s and sent through a physical medium like cables.
When your friend gets the email through the physical medium, the data flows through the same layers but in the opposite order. The physical layer will convert the 0s and 1s to frames that will be passed to the data link layer. This will reassemble the frames into packets for the next layer.
The transport layer will reassemble the segments into data and pass it up to the session layer, which ends the communication session. The session layer then hands the data to the presentation layer, which decompresses it and passes it to the application layer. The application layer feeds the human-readable data to the email software that will allow your friend to read your email.
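The wrapping and unwrapping described above is called encapsulation and decapsulation. The toy sketch below models it with nested dictionaries; the header fields are simplified stand-ins, not real TCP/IP or Ethernet formats.

```python
# Sending side: each layer wraps the data from the layer above in its header.
def encapsulate(payload: str) -> dict:
    segment = {"src_port": 52100, "dst_port": 25, "data": payload}      # Layer 4 (25 = SMTP, matching the email example)
    packet = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
              "segment": segment}                                       # Layer 3
    frame = {"src_mac": "aa:aa:aa:aa:aa:aa",
             "dst_mac": "bb:bb:bb:bb:bb:bb", "packet": packet}          # Layer 2
    return frame

# Receiving side: unwrap in the opposite order, frame -> packet -> segment.
def decapsulate(frame: dict) -> str:
    return frame["packet"]["segment"]["data"]

frame = encapsulate("Hello, friend!")
print(decapsulate(frame))  # Hello, friend!
```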
What to learn next
Congratulations on making it to the end. I hope you now know what the OSI model is, how the OSI layers work, and why you need to know about it. These concepts are essential for understanding how networks function.
This was just the beginning, and there is a lot you can learn. You can start with:
• Access networks
• Protocols
• Socket programming
• and more
To get hands on with these concepts, check out Educative's course Grokking Computer Networking for Software Engineers. You will learn about networks, command-line tools, socket programming in Python, all in a hands-on environment. Any software engineer can benefit from a solid grasp on these concepts.
Happy learning!
Continue reading about networking
Discussion (3)
Scott Simontis
Thanks for a great writeup. I consider myself a lot more network-knowledgeable than many developers I have worked with, but I really only ever end up dealing with Layer 4 or Layer 7 issues. Does anyone else have a different experience?
Thorsten Hirsch
Layer 8 issues are the worst. ;-)
João Brandão
“All People Seem To Need Data Processing” is a good way to remember it! 🙂
|
<urn:uuid:8727abfe-44db-4e01-92af-5dce7087ecdf>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.8125,
"fasttext_score": 0.3875288963317871,
"language": "en",
"language_score": 0.8861923217773438,
"url": "https://practicaldev-herokuapp-com.global.ssl.fastly.net/educative/what-is-the-osi-model-7-layers-explained-in-detail-227i"
}
|
Tongatapu's astonishing burial mounds
Nuku'alofa, Tonga
Lithograph by Louis de Sainson, artist on the 1826 visit to Tongatapu by the French expedition under Jules Dumont d’Urville. This portrays a burial mound with grave house in Hihifo. Published in Voyage de la Corvette L’Astrolabe exécuté pendant les Années 1826-1827 - 1828-1829, Paris (1833).
By Travis Freeland and David V. Burley
Burial mounds are widely scattered across the Tongatapu landscape. When you land at Fua’amotu airport, they line the runway. When you drive into Nuku’alofa, they can be seen to either side of the road. And if you take a bicycle down any field track in the countryside, they are continuously encountered and occasionally ridden over.
All but a few are without name, and the individuals buried within are long lost to history. Yet these mounds represent a story that literally is inscribed on the Tongatapu landscape. It is a historical geography that we can now record in its entirety through the technological advances of airborne LiDAR.
Tongatapu LiDAR
The collection of Tongatapu LiDAR data was funded by the Australian government in 2011 and given to Tonga as part of a tsunami risk assessment project. Its intended use was not for archaeology, but it offers a powerful tool to study the archaeological past. The data provide a laser scan of the island’s land surface taken from an aircraft. Laser pulses of light were beamed to the ground with their reflection providing accurate measures of distance.
Landscape topography could then be mapped in 3-dimensional imagery, with vegetation cover and modern structures removed through computer manipulation. This reveals in detail an astonishing array of mounds, ditches, and other earthworks constructed in Tongan antiquity.
Burial mounds near Hoi. LiDAR imagery with shading to give 3-D visualization effect. Image: Travis Freeland and David Burley.
Landscape of mounds
Our immediate observation of the LiDAR images for Tongatapu was to note a stippled appearance, one comparable to the surface cover of a basketball. There were hundreds or even thousands of identifiable bumps spread across the island from east to west. These were burial mounds, some large, up to 60 metres in diameter and several metres high, others much smaller with heights of a half metre or less.
We did field checks in 2014 and 2015, visiting a sample of the sites to record more detail, or ensure they were what we thought they were. A computer application based on shape recognition was developed to identify, record and count the mounds. The accumulated total was astounding, including upwards of 10,000 of these sites. Given the land area of Tongatapu (257 km2), there are close to 40 mounds for every square kilometre.
Burial mound locations on Tongatapu as identified by feature recognition software applied to LiDAR. Image: Travis Freeland and David Burley.
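The authors' actual feature-recognition software is not described in detail here, so the following Python sketch only illustrates one common approach to the same task: flag raised, locally maximal bumps in a digital elevation model (DEM) as mound candidates. The window sizes and height threshold are assumptions for the demo.

```python
import numpy as np
from scipy import ndimage

def mound_candidates(dem, radius_px=15, min_height_m=0.5):
    """Boolean mask of cells that are local maxima and stand at least
    min_height_m above the smoothed surrounding terrain."""
    background = ndimage.uniform_filter(dem, size=4 * radius_px)  # local terrain level
    relief = dem - background
    local_max = dem == ndimage.maximum_filter(dem, size=2 * radius_px)
    return local_max & (relief >= min_height_m)

# Synthetic demo: flat terrain with one mound-like bump 2 m high.
yy, xx = np.mgrid[0:200, 0:200]
dem = 2.0 * np.exp(-((yy - 100) ** 2 + (xx - 100) ** 2) / (2 * 10.0 ** 2))
print(int(mound_candidates(dem).sum()))  # 1 -- the peak cell near (100, 100)
```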
Not all mounds were used for burial; some were constructed for pigeon hunting (sia heu lupe), some as sitting platforms (esi) and a few potentially were constructed as a house platform. The vast majority though were built for interment of the dead.
Increasing Use of Mounds from the 12th to 18th Centuries AD
There has been limited research conducted on Tonga’s ancient burial sites, save for the massive complex of langi at the former capital of Lapaha.
Excavations at two mounds in ‘Atele in the 1960s, and occasional disturbance of mounds by modern development, illustrate that the number of burials in each is large, perhaps ranging up to a hundred or more.
The beginnings of mound building occurred sometime in the first millennium AD with widespread use by the 12th or 13th century AD. The early mounds are not elaborate. They have no burial vaults, use of coral sand, or kilikili. Each of these features came later, as chiefly tombs became symbols of rank and authority.
Rectangular, squared and tiered burial mounds faced with stone for the Tongan élite reflect further upon the growing political landscape of an emerging Tongan state. These sites, too, are observed on the LiDAR images.
Overlay of LiDAR image showing burial mounds at ‘Alaki onto an aerial photograph with modern structures. Image: Travis Freeland and David Burley.
Different settlement landscape
The plot of burial mound locations on a map of Tongatapu tells us much about the pattern of settlement from 1200 AD into the 19th century. These features occur everywhere, from coastal margins across interior field systems. They represent a time when villages, if they existed at all, were rare.
The early capitals of Tonga at Toloa, Heketa and Lapaha are the exceptions. Chiefs established themselves on their estates and scattered their people across the land. The mounds are the resting places for these kainga. This pattern of settlement ensured a loyal following while preventing encroachment from chiefly rivals.
That arrangement continued to be present on Tongatapu in 1777, when Captain Cook was given to write - Here are no towns or villages; most of the houses are built in the plantations, with no other order than what conveniency requires. It was the time of fanongonongotokoto, where news could be shouted from one household to another, and travel from one end of the island to the other. The map of Tongatapu burial mounds stands as visible testimony to the structure of this settlement.
People missing from history
Where there are significant numbers of dead, we assume there was an equivalent number of living. The Tongatapu burial mounds provide a relative measure for gauging the distribution of people across the island over the past 900 years.
The map of burial mound locations was turned into a density distribution plot with colour codes used for mound concentrations or areas where mound occurrences are lower. The densest swath of mounds, not surprisingly, extends from Lapaha inland to the south, illustrating the extent of the Tu’i Tonga’s people.
What is surprising is an almost equally sized density cluster across the eastern end of Tongatapu. The relatively thin population here today raises the question of what happened in history.
Colour-coded density distribution plot of Tongatapu burial mounds based on LiDAR data. Image Travis Freeland and David Burley.
The answer may be far more involved, but events of the 19th century civil war provide at least a partial explanation. This was the 1801 battle of Poha, as later written about by the Missionary John Thomas. The western chief Vaha’i waged war across this area in revenge for the murder of Tu’i Kanokupolu, Tuku’aho. The dead became so numerous that they were laid across each other in large piles and eventually burned. The consequence was described as “Ko e Tunu ’o Vaha’i” (the broil of Vaha’i). Survivors fled quickly.
Pyramid of Nukunukumotu
LiDAR image of the Pyramid of Nukunukumotu. Image: Travis Freeland and David Burley.
Scrutiny of the LiDAR images for Tongatapu has identified one mound as standing out significantly. This is the Pyramid of Nukunukumotu, as we have begun to call it. It is located above the eastern shore at the entrance to Fanga 'Uta Lagoon.
The pyramid is formed as a squared mound with the base 60 m on a side. It has sloped sides rising to a height in excess of 6 metres with a flat platform top. It is not a burial mound, at least as could be determined in the field. Nor does it appear to have been constructed in the modern era. The pyramid has an estimated volume of 12,500 cubic metres of fill, making it one of the most substantial structures in all of Tonga.
We can only guess as to what it was used for. Its prominent position at the mouth of the lagoon suggests an observatory to identify incoming canoes. Perhaps it held a beacon, where the same canoes might identify the channel entrance to Lapaha. But whatever function it might have had, it must have been important. The scale and labour of its construction speak clearly to that.
Travis Freeland is a consulting archaeologist in Terrace, BC, Canada. He completed his PhD in 2018 at Simon Fraser University with a dissertation study on the mounds of Tongatapu.
Dr. David V. Burley is a Professor of Archaeology at Simon Fraser University in Burnaby, BC, Canada. He has carried out archaeological studies in Tonga since 1989.
|
<urn:uuid:102e30ed-11a2-4693-8975-d13b4e301a8f>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.734375,
"fasttext_score": 0.07645487785339355,
"language": "en",
"language_score": 0.9541385769844055,
"url": "https://matangitonga.to/2020/06/27/tongatapu-burial-mounds"
}
|
Billie Holiday Famous Singer
Billie Holiday was one of the most prolific, phenomenal, and influential forces of the twentieth century, and her music and voice were and remain a staple of jazz. Her resilience and refusal to conform to the norm, choosing instead to stay true to who she was as an artist, is something to be deeply admired and is without doubt what propelled her into the legend she is today. She used her voice as an instrument to convey deep emotions that others seldom could.
Her song "Strange Fruit", which is considered the pivotal point of her success, shed light on the racist practice of lynching in the South. With her rhythmic voice, phrasing, and melodic singing she exerted power with her voice as if she were simply breathing. Her laid-back and cool style on stage was a breath of fresh air compared to the often-expected choreographed style of jazz singers, creating a class of her own that remains an inspiration to new jazz singers even today.
Billie Holiday was a force that could not be swayed for she knew her voice and embraced it in a way that showcased her ability to bend notes and change up melodies along with rhythms to conform to her voice, an ability that was unmatched during her time. Her skill, influence, and life is one that can only be described as nothing less than legendary.
Determined "to never scrub floors or keep house for white folks," Billie began her singing career in Harlem towards the end of the Harlem Renaissance.
During the night she would make her way around local popular spots. Eventually she got her first professional gig at the Great Dawn, a cabaret in Queens. Her performance was so well loved that the audience threw money at her feet. From then on, she would work in clubs in both Brooklyn and Harlem. (Greene 2007) Influenced by jazz singers before her such as Louis Armstrong and Bessie Smith, Billie Holiday created her own individual style. Perhaps the best expression of this is a quote from jazz historian Gunther Schuller, found in Billie Holiday: The Musician and the Myth.
[Holiday's] art transcends the usual categorizations of style, content, and technique. Much of her singing goes beyond itself and becomes a humanistic document; it passed often into the realm that is not only beyond criticism but in the deepest sense inexplicable. We can, of course, describe and analyze the surface mechanics of her art: her style, her technique, her personal vocal attributes; and I suppose a poet could express the essence of her art or at least give us, by poetic analogy, his particular insight into it. But, as with all truly profound art, that which operates above, below, and all around its outer manifestations is what most touches us, and also remains ultimately mysterious. (98)
Even when she was first discovered by John Hammond it was more than evident that there was something special about Billie Holiday. He writes about seeing and hearing Billie perform for the first time.
“My discovery of Billie Holiday was the kind of accident I dreamed of…the sort of reward I received now and then by traveling to every place where everyone performed.” (23)
Billie's musical evolution began in the thirties when she first began recording with Benny Goodman. She sang songs by other artists such as "What A Little Moonlight Can Do" and "Them There Eyes", but her voice turned songs that were seldom remembered into high-selling and popular successes. Again tapping into the influence of Louis Armstrong, she improvised songs by improving and streamlining melodic lines, infusing a more freely swinging rhythm. A deeper description of this skill is found in the PBS article "Billie Holiday: The Long Night of Lady Day".
What makes this so significant is that despite drawing influences from artists who came before her, Billie was still able to distinguish her voice and isolate her own style. The forties became the years that Billie blossomed into her success. During that time her voice was considered at its richest and most expressive. It's no surprise that some of her best songs were also created during that time. She went from blues ballads to more heartfelt records such as "God Bless the Child" and "Strange Fruit." She also explored deeper emotions and issues in her songs, such as suicide in "Gloomy Sunday", infidelity in "Don't Explain", and lynching in "Strange Fruit." The forties were also a gloomy time for Billie Holiday as she began to battle her own personal demons, but despite it all her music continued, and in 1949 she achieved international success at Carnegie Hall. Despite continued battles, her resilience and control of her voice remained. She continued to record successful songs and remained in the public eye. Entering the fifties, Billie continued to deliver music. She returned to singing popular ballads, lighter material, and rhythm and blues with songs such as "Stormy Blues", "I've Got My Love to Keep Me Warm", "Rock Mountain Blues", "Yesterdays", "Come Rain Come Shine", "What a Little Moonlight Can Do", "Fooling Myself", and "Nice Work If You Can Get It." Her last recording before her death, made in March of 1959 in the lush, string-and-horn-backed style of Lady in Satin, was unlike any of her previous songs. (Radlauer n.d.)
Billie used vibrato to swing a note, setting it into motion by increasing the width of the vibrato just before moving on to the next note or phrase. In her smooth and cool melody lines Billie would often add small but effective turns, up-and-down movements, fades, and drop-offs. A perfect example is her version of the song "I'll Be Seeing You": knowing the strengths of her voice, instead of playing into the original performance she sings it lower and slower, making the song hers and giving it a classic nostalgia in a way only she could. Her style of performing was unlike any other jazz singer of her time. She embraced her stage presence by being herself. She had multiple voices, from a bright, clear nasal sound, to a younger-sounding middle register, to a low rasp or growl. That still wasn't the limit of her diverse sound, because her voice could change further depending on the song she was singing. Her stage presence engaged her audience; she sang directly to them and would snap her fingers, moving to the beat of the music. An arch of the eyebrow or a tilt of the head as she responded to the sound of a chord was never uncommon. She had a rhythmic knowledge that was deeper than what could be expressed. (Szwed 2015) Billie introduced an innovative way of using her voice as an instrument when performing and singing that we still see today, not only in jazz singers but in singers of other genres as well. Her use of vibrato backs this up, as she uses it the same way a violin does, to create a richer or warmer sound.
Considered one of the most significant performances of her career, Billie Holiday's "Strange Fruit" was not only one of her most emotionally charged performances, one that brought her audience to tears, but a vital vehicle that took an anti-lynching stance against the racism of her time. Written for her by Lewis Allan, the lyrics of "Strange Fruit" vividly depicted the picture of lynching:
Southern trees bear strange fruit
Blood on the leaves and blood at the root
Black bodies swinging in the southern breeze
Strange fruit hanging from the poplar trees
Pastoral scene of the gallant south
The bulging eyes and the twisted mouth
Scent of magnolias, sweet and fresh
Then the sudden smell of burning flesh
Here is fruit for the crows to pluck
For the rain to gather, for the wind to suck
For the sun to rot, for the trees to drop
Here is a strange and bitter crop (Clarke 1994)
Despite being afraid that her audience would hate it, Billie ultimately decided to sing it. While listening to "Strange Fruit", the way she uses her voice as an instrument to convey word painting can be heard as she stretches out and heightens the word drop. She not only intensifies suspense that the audience can feel but sings it as if the tree has literally dropped its fruit. Throughout the song Billie does this, and it's no surprise that the audience was often brought to tears when she performed. Billie had a way of evoking the sadness of the lyrics that made you visualize them happening around you. Her voice while singing "Strange Fruit" was both strong and quiet. Taking a deeper analysis, "Strange Fruit" consists of a simple AABB rhyme pattern. The music itself is in quadruple meter, and the tempo varied depending on Billie's performance, yet most of the time she would sing it at a slow-moving tempo. The texture is homophonic, with the piano repeating minor chords in B-flat in the background as Billie sings. Billie sings at a much higher octave than the piano, and to the listener her voice and lyrics are the most prominent part of the song. She again heightens essential lines such as "For the rain to gather, for the wind to suck" and "Here is a strange and bitter crop", forcing the listener to visualize what she is singing. The song overall has a low timbre and melancholy tone, and Billie uses her legato to pull out lines such as "for the trees to drop." The song's raw texture remains throughout until, towards the end, the instruments increase in volume during the final line; then all is quiet.
Club owner Barney Josephson describes one of Billie Holiday's staggering performances of "Strange Fruit" in Wishing on The Moon: The Life and Times of Billie Holiday:
The room was completely blacked out, service stopped- at the bar, everywhere. The waiters were not permitted to take a glass to the table, or even take an order. So, everything stopped- and everything was dark except for a little pin spot on her face. That was it. When she sang ‘Strange Fruit’, she never moved. Her hands were down. She didn’t even touch the mike. With the little light on her face. The tears never interfered with her voice, but the tears would come and just knock everybody in that house out. The audience would shout for ‘Strange Fruit’; those who’d never been down before and didn’t know her sets closed with it would shout for it when they felt her set was coming to a close. (Clarke 1994)
"Strange Fruit" became one of the first racial protest songs in jazz. Its force and deeply emotional lyrics challenged the Northern white audience by showcasing the brutality of Southern lynching. It is still considered one of the most powerful blows in the Civil Rights Movement, and it is one of Billie Holiday's bestselling records. (Margolick 2001)
From the early thirties to the late fifties, Billie Holiday left her mark on jazz music, and many amazing artists who came after her have credited her distinctive style as their influence, including Carmen McRae, Dinah Washington, Anita O'Day, Sarah Vaughan, Etta Jones, Pearl Bailey, Tony Bennett, Peggy Lee, and the great Frank Sinatra. Her distinctive voice and style made her unarguably one of the greatest jazz singers, not only of her era but of all time. She challenged the expected by creating her own innovative take on music with improvised style and technique. She wasn't afraid to be herself; she embraced who she was as an artist, used what she had, and turned it into something only she could create. She was a torch that illuminated and shifted the definition of jazz music. Her voice could morph into whatever she wanted it to be and was unforgettable to anyone who listened to it. Billie Holiday's legacy transcends time, it transcends talents, and it is still and will always remain a monument in the ever-changing world of jazz music.
|
<urn:uuid:aa6f7e9f-2b4d-4469-8510-c3ab2e94541d>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5625,
"fasttext_score": 0.10876566171646118,
"language": "en",
"language_score": 0.9743471145629883,
"url": "https://studymoose.com/billie-holiday-famous-singer-essay"
}
|
Naxi Ethnic Minority
The Naxi ethnic minority is one of the 56 ethnic groups in China and one of the ethnic groups unique to Yunnan. The Naxi are distinctive in the arts: their poetry, painting, sculpture, music and dance are famous at home and abroad. Naxi culture has been deeply influenced by Han culture.
History of Naxi Ethnic Minority
Scholars believe that the Naxi people originated from the ancient Qiang people who lived in the Yellow River region of China in ancient times. They migrated south to the upper reaches of the Minjiang river, southwest to the Yalong river basin, and then west to the upper reaches of the Jinsha river. In the Song dynasty, the Naxi area was under the jurisdiction of the Dali kingdom, while the chiefdoms of the Naxi ethnic group held their respective positions of power. The Dali Kingdom could not effectively control the area of the Naxi people, while the Tubo people in the north had a long history of civil unrest and could not invade southward. The Naxi area was therefore relatively stable, with its population, economy and culture developing. During the Yuan and Ming dynasties, due to the rise of the Mu family's tusi in Lijiang, the Naxi ethnic group expanded outward, and Naxi activities became more widespread: in the northeast they may have reached the area of today's Kangding in Sichuan and Nangongga Mountain, in the north present-day Batang, Litang and Qamdo, and in the west the Nujiang river basin. In the first year of Emperor Yongzheng of the Qing dynasty (1723), after the bureaucratization of native officers, the Tibetan tusi in the north moved south; together with the large number of Yi people moving into the Yalong river basin, the population of the Naxi ethnic group gradually decreased.
Distribution of Naxi Ethnic Minority
The Naxi ethnic group is an ethnic minority group in southwest China. Its settlement is mainly distribution in Lijiang and its neighboring areas at the border of Yunnan, Sichuan and Tibet. Yunnan province is the main distribution province of Naxi ethnic minority. Most Naxi people live in Lijiang and Diqing in northwest Yunnan, the rest are distributed in other counties and cities in Yunnan, as well as Yanyuan, Yanbian and Muli counties in Sichuan, and a few are distributed in Markam county of Tibet.
Language of Naxi Ethnic Minority
The Naxi language belongs to the Chinese-Tibetan language family. More than 1,000 years ago, the Naxi people had already created pictographic characters called the “Dongba” script and a syllabic writing known as the “Geba” script. With these scripts they recorded a lot of beautiful folklore, legends, poems and religious classics. However, they were difficult to master, and in 1957 the government helped the Naxi design an alphabetic script. Over the past few hundred years, as the Naxi people have come into closer contact with the people in other parts of China politically, economically and culturally, the oral and written Chinese has become an important means of communication in Naxi society.
Dongba Scripture
Dongba scripture is not only an encyclopedia of the ancient social life of the Naxi ancestors, but also a treasure house of the great achievements of Naxi classical literature. The ancient myths, epics, legends and proverbs of the Naxi ethnic minority are recorded in Dongba scriptures in hieroglyphics. The chanting of Dongba scriptures takes place in specific settings, namely the various sacrificial rituals of the Dongba religion, most of which combine religious and folk activities. All works of Dongba scripture are recited by a Dongba in a particular tune. Dongba scripture literature includes nature myths, flood myths, ancestor myths, war epics, love poems and many proverbs.
Religions of Naxi Ethnic Minority
Naxi people believe in many religions, such as the Dongba religion (the native religion of the Naxi ethnic minority), Tibetan Buddhism, Chinese Buddhism and Taoism. Dongba religion has the largest number of Naxi followers.
Dongba Religion
Dongba religion has a great influence on the social life, national spirit and cultural customs of the Naxi people, and it is the backbone of the Naxi's diverse religious beliefs. It developed on the basis of the original beliefs of the Naxi in the period of clan and tribal alliances. Later, in different historical periods, it gradually absorbed some elements of Tibetan Bon and Tibetan Buddhism, forming a distinctive ethnic religious form. It has its own ritual system and a huge pantheon of ghosts and gods. Animism, nature worship, ancestor worship, an emphasis on divination (chongbu, 重卜), and the basic idea that "nature and man are brothers" are the main features of the Dongba religion.
Tibetan Buddhism
Tibetan Buddhism was introduced into the Naxi area from Tibet via western Sichuan beginning in the late Yuan dynasty. After the early Qing dynasty, Tibetan Buddhism developed rapidly in the Lijiang and Weixi Naxi areas. During the roughly 180-year period from the Kangxi to the Daoguang reigns, 13 major temples of the Kagyu sect were successively built, with considerable religious and economic power. In the Qing dynasty, Chinese Buddhism also developed further in the Lijiang area; more than 60 temples of different sizes were built, distributed across urban and rural areas.
Taoism spread to Lijiang in the Ming dynasty, when the Mu tusi invited Taoist priests from the interior to propagate it there. In the first year of Emperor Yongzheng of the Qing dynasty (1723), Taoism developed further in Lijiang after the bureaucratization of native officers.
Features of Naxi Ethnic Minority
The Naxi areas, traversed by the Jinsha, Lancang and Yalong rivers, and the Yunling, Xueshan and Yulong mountain ranges, have a complicated terrain. There are cold mountainous areas, uplands, basins, rivers and valleys, averaging 2,700 meters above sea level. The climate varies from cold and temperate to subtropical. Rainfall is plentiful. Agriculture is the main occupation of the Naxi people. The chief crops are rice, corn, wheat, potatoes, beans, hemp and cotton. The bend of the Jinsha River is heavily forested, and Jade Dragon Mountain is known at home and abroad as a “flora storehouse”. The extensive dense forests contain Chinese fir, Korean pine, Yunnan pine and other valuable trees, as well as many varieties of herbs including fritillary bulbs, Chinese caterpillar fungus and musk. There are rich reserves of such non-ferrous metals as gold, silver, copper, aluminum and manganese, as well as abundant water resources.
Culture and Art of Naxi Ethnic Minority
Architecture Culture
Since the Ming dynasty, the Naxi people in Lijiang have built magnificent tile-roofed houses, but most of them were the houses of tusi and headmen, or temples. Since the Qing dynasty, along with the increase in cultural exchange and the development of the social economy and culture of the Naxi ethnic minority, the construction techniques of the Han, Bai and Tibetans have been continuously absorbed by the Naxi people, forming the architectural layouts of "three houses and one screen wall" (三坊一照壁) and "four houses and five courtyards" (四合五天井). This kind of tile-roofed building, of earth-and-wood or brick-and-wood construction, is popular in the Lijiang area and has produced a very characteristic residential courtyard. "Three houses and one screen wall" is the most basic and common folk dwelling among the Naxi people in Lijiang.
Painting and Mural
Dongba paintings can be divided into wooden board paintings (木牌画), bamboo-pen paintings (竹笔画), card paintings (纸牌画) and scroll paintings. Dongba sculpture includes dough sculpture, clay sculpture and wood sculpture. Dongba paintings and sculptures have a straightforward, natural and unsophisticated style. The famous Baisha murals in Lijiang are the product of the great opening up of Naxi society in the Ming dynasty. Their most prominent characteristic is that content from various religions, and from different sects within the same religion, is integrated and coexists, and the painting techniques of various ethnic groups are mixed together.
Song and Dance
The Naxi ethnic minority is famous for its singing and dancing. Representative Naxi music includes Lijiang ancient music and Lijiang Dongjing music. "Lijiang ancient music" is the artistic crystallization of the mingled cultures of the Naxi and Han; it comprises "Baisha fine music", Lijiang Dongjing music and Huangjing music (the last of which is now lost). Baisha fine music is one of the few large classical orchestral traditions in China.
Cultural Heritages of Naxi Ethnic Minority
Festivals and Activities of Naxi Ethnic Minority
|
<urn:uuid:21e584df-bb9b-4bfb-b8b6-13fa1ffff8e4>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.75,
"fasttext_score": 0.286821186542511,
"language": "en",
"language_score": 0.9478691220283508,
"url": "https://www.yunnanexploration.com/category/yunnan-ethnic-culture/yunnan-ethnic-minorities/naxi-ethnic-minority"
}
|
Red blood cells recover their shape in two ways after flowing through constricted channels. Credit: A. Amirouche, Université Lyon
Laboratory blood tests are often done by forcing samples through small channels. When the channels are very small, as in microfluidic devices, red blood cells (RBCs) are deformed and then relax back to their original shape after exiting the channel. The way the deformation and relaxation occur depends on both the flow characteristics and mechanical properties of the cell's outer membrane.
In this week's issue of the journal Biomicrofluidics, a method to characterize the recovery of healthy human RBCs flowing through a microfluidic constricted channel is reported. This investigation revealed a coupling between the cell's mechanical properties and the hydrodynamic properties of the flow. In addition, the method could distinguish between healthy RBCs and those infected by the malaria parasite. This suggests a possible new technique for diagnosing disease.
The microfluidic device consisted of a narrow channel interspersed with a succession of sawtooth-shaped wider areas. A solution of RBCs is pumped through the system by applying pressure at one end. As the cells travel through the channel, they are observed with a microscope. The images are captured with a high-speed camera and sent to a computer for analysis.
When an RBC enters a narrow channel, it takes on a parachutelike shape. When it exits into a wide region, it elongates in the direction of the flow until it meets the next widening and is again stretched by the flow.
At the final exit, two different shape recovery behaviors were observed, depending on the flow speed and viscosity of the medium. At high flow speed and viscosity, the cells get stretched upon their last exit from the channel and then recover their original shapes. At lower speed and viscosity, however, the parachutelike shape is recovered directly upon exiting.
The investigators found that the hydrodynamic conditions at which the transition between these two different recovery behaviors occurs depend on the elastic properties of the RBC.
Co-author Magalie Faivre said, "Although the time necessary for the cells to recover their shape after exiting the channel was shown to depend on the hydrodynamic conditions, we have demonstrated that, at a given stress, this recovery time can be used to differentiate healthy from Plasmodium falciparum-infected RBCs." Plasmodium falciparum is one of the parasites that cause malaria.
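The recovery time itself is typically extracted by fitting the cell's deformation as a function of time after it leaves the constriction. The sketch below is a minimal illustration of that idea, not the authors' actual analysis pipeline; the deformation-index values and the single-exponential model are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical deformation-index readings after a cell exits the
# constriction (times in ms). Real values would come from the
# high-speed camera images described above.
t_ms = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
deformation = np.array([0.42, 0.31, 0.24, 0.18, 0.14, 0.08, 0.05, 0.02])

def relaxation(t, d0, tau):
    """Single-exponential shape recovery: D(t) = d0 * exp(-t / tau)."""
    return d0 * np.exp(-t / tau)

(d0, tau), _ = curve_fit(relaxation, t_ms, deformation, p0=(0.4, 1.0))
print(f"Fitted recovery time: {tau:.2f} ms")
```

At a fixed stress, a longer fitted recovery time would flag a stiffer cell, which is the basis of the healthy-versus-infected comparison described above.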
The investigators are seeking to expand their study to find a way to detect "signatures" for other types of diseases.
"We are currently evaluating if our approach is able to discriminate the alteration of different structural components of the RBC membrane," said Faivre. "To do so, we are studying RBCs from patients with malaria, sickle cell anemia and hereditary spherocytosis."
More information: "Dual shape recovery of red blood cells flowing out of a microfluidic constriction," Biomicrofluidics (2020).
Journal information: Biomicrofluidics
|
<urn:uuid:60cec6f6-71b7-49e6-a3ee-9bc659cba356>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.671875,
"fasttext_score": 0.07636678218841553,
"language": "en",
"language_score": 0.9444580078125,
"url": "https://phys.org/news/2020-04-blood-cells-deform-recover-tiny.html?deviceType=mobile"
}
|
The chloride ion is formed when the element chlorine picks up one electron to form the negatively charged ion Cl⁻. The salts of hydrochloric acid HCl are also called chlorides. An example is table salt, which is sodium chloride with the formula NaCl. In water, it dissociates into Na⁺ and Cl⁻ ions.
Chloride ions have important physiological roles. For instance, in the central nervous system the inhibitory action of glycine and some of the action of GABA relies on the entry of Cl⁻ into specific neurons.
|
<urn:uuid:f67f186f-ca64-44d0-a096-9f4be4e6de05>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 4,
"fasttext_score": 0.43939101696014404,
"language": "en",
"language_score": 0.8868170976638794,
"url": "http://infomutt.com/c/ch/chloride.html"
}
|
Bronchiolitis is a common lower respiratory tract infection that affects babies and young children under 2 years old.
Most cases are mild and clear up within 2 to 3 weeks without the need for treatment, although some children have severe symptoms and need hospital treatment.
The early symptoms of bronchiolitis are similar to those of a common cold, such as a runny nose and a cough.
Further symptoms then usually develop over the next few days, including:
• a slight high temperature (fever)
• a dry and persistent cough
• difficulty feeding
• rapid or noisy breathing (wheezing)
When to get medical help
Most cases of bronchiolitis are not serious, but see your GP or call NHS 111 if:
• you're worried about your child
• your child has taken less than half their usual amount during the last 2 or 3 feeds, or they have had a dry nappy for 12 hours or more
• your child has a persistent high temperature of 38C or above
• your child seems very tired or irritable
A diagnosis of bronchiolitis is based on your child's symptoms and an examination of their breathing.
Dial 999 for an ambulance if:
• your baby is having difficulty breathing
• your baby's tongue or lips are blue
• there are long pauses in your baby's breathing
What causes bronchiolitis?
Bronchiolitis is caused by a virus known as the respiratory syncytial virus (RSV), which is spread through tiny droplets of liquid from the coughs or sneezes of someone who's infected.
The infection causes the smallest airways in the lungs (the bronchioles) to become infected and inflamed.
The inflammation reduces the amount of air entering the lungs, making it difficult to breathe.
Who's affected?
Around 1 in 3 children in the UK will develop bronchiolitis during their first year of life. It most commonly affects babies between 3 and 6 months of age.
By the age of 2, almost all infants will have been infected with RSV and up to half will have had bronchiolitis.
Bronchiolitis is most widespread during the winter (from November to March). It's possible to get bronchiolitis more than once during the same season.
Treating bronchiolitis
There's no medication to kill the virus that causes bronchiolitis, but the infection usually clears up within 2 weeks without the need for treatment.
Most children can be cared for at home in the same way that you'd treat a cold.
Make sure your child gets enough fluid to avoid dehydration. You can give infants paracetamol or ibuprofen to bring down their temperature if the fever is upsetting them.
About 2 to 3% of babies who develop bronchiolitis during the first year of life will need to be admitted to hospital because they develop more serious symptoms, such as breathing difficulties.
This is more common in premature babies (born before week 37 of pregnancy) and those born with a heart or lung condition.
Preventing bronchiolitis
It's very difficult to prevent bronchiolitis, but there are steps you can take to reduce your child's risk of catching it and help prevent the virus spreading.
You should:
• wash your hands and your child's hands frequently
• wash or wipe toys and surfaces regularly
• keep infected children at home until their symptoms have improved
• keep newborn babies away from people with colds or flu
• avoid smoking around your child, and do not let others smoke around them
Some children who are at high risk of developing severe bronchiolitis may have monthly antibody injections, which help limit the severity of the infection.
|
<urn:uuid:b45068d3-7e3e-4bb2-aa34-db0170d0b901>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.640625,
"fasttext_score": 0.050155460834503174,
"language": "en",
"language_score": 0.9580846428871155,
"url": "https://api-bridge.azurewebsites.net/conditions/index.php?uid=c3VwcG9ydEBzaXRla2l0Lm5ldA%3D%3D&category=H&p=bronchiolitis&index=1&index=1"
}
|
Introduction Of Molecular Sieve
Jan 03, 2019
A molecular sieve is a material containing tiny pores of precise and uniform size that can be used to adsorb gases or liquids. Molecules small enough to fit through the pore openings are adsorbed, while larger molecules are not. Unlike an ordinary sieve, it works at the molecular level: a water molecule, for example, is small enough to pass through a pore only slightly larger than itself. Molecular sieves are therefore often used as desiccants; a molecular sieve can absorb water up to 22% of its own weight. Molecular sieves are widely used in the oil and gas industry, especially to purify gases. For example, silica gel can be used to adsorb mercury from natural gas, which would otherwise corrode aluminum pipes and other liquefaction equipment.
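As a quick worked example of the 22% capacity figure quoted above (the numbers below are illustrative assumptions, not vendor specifications):

```python
# A molecular sieve is stated to adsorb water up to 22% of its own weight.
capacity = 0.22        # kg of water adsorbed per kg of sieve
water_to_remove = 1.5  # kg of water in a gas stream (assumed value)

sieve_needed = water_to_remove / capacity
print(f"Sieve required: {sieve_needed:.1f} kg")  # about 6.8 kg
```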
|
<urn:uuid:a7d8c43b-c402-4c44-89e4-f4476fb84e51>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 4.125,
"fasttext_score": 0.43126416206359863,
"language": "en",
"language_score": 0.9185885190963745,
"url": "http://m.fcchemicals.com/news/introduction-of-molecular-sieve-20155143.html"
}
|
The Wallace Book
Through his personality, ingenuity and ability, he initiated a resistance movement which ultimately secured the nation's freedom and independence. Yet, Wallace was reviled, opposed and eventually betrayed by the nobility in his own day to re-surface in the epic poetry of the fifteenth century as a champion and liberator. Eventually, his legend overtook the historical reality, a process which has continued for centuries as manifested in modern media and film. A team of leading historians and critics from both Scotland and England investigate what is known of the medieval warrior's career from contemporary sources, most of which, unusually for a national hero, were created by his enemies. His reputation, from the time of his horrendous execution to the present, is examined to ascertain what the figure of Wallace meant to different generations of Scots. Too dangerous perhaps for his own era, he became the supreme Scottish hero of all time; the archetypal Scot who would teach kings and nobles where their duty lay, and who would live free or freely die for the liberty of his nation.
|
<urn:uuid:4b083f32-9a38-4974-8ab0-469cc9d440f5>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5625,
"fasttext_score": 0.3796975016593933,
"language": "en",
"language_score": 0.988395631313324,
"url": "https://www.belfastbooks.co.uk/products/the-wallace-book"
}
|
Namaqualand is a desert region of southwestern Africa. From north to south it stretches from the Karas region of Namibia to the Northern Cape province of South Africa. From west to east it stretches from the Namib Desert to the Kalahari. The Namibian section, north of the Orange River, is sometimes called Great Namaqualand. The South African section, south of the Orange River, is sometimes called Little Namaqualand. The Richtersveld region is near the mouth of the Orange River in South African Namaqualand.
Namaqualand is very dry. For a large part of the year succulents are almost the only plants that can be seen on the vast plains. Succulents can hold water for long periods and can survive in droughts. Two of Namaqualand’s larger succulents are the quiver tree and the halfmens. Rain falls mostly in the winter. If there is enough rain, wildflowers cover Namaqualand for a few weeks during springtime.
Namaqualand is the traditional home of the Nama people. (Qua means “people” in the Nama language.) The Nama language is a Khoekhoe language. It is the only language in the Khoekhoe group that is still spoken. Nowadays, however, people in Namaqualand are more likely to speak Afrikaans.
There are large deposits of copper in Namaqualand. The Nama mined them for hundreds of years. They used the copper to make household items and decorations. In 1685 Simon van der Stel, the Dutch governor of the Cape Colony, found out about the copper. During the 1800s, European settlers opened copper mines and built railways to haul away the ore. Springbok, the most important town in South African Namaqualand, grew with the copper industry.
In the early 1900s diamonds were discovered in several places in Namaqualand, including Sperrgebiet in Namibia and the Richtersveld in South Africa. Part of the Namibian coastal region was declared a “restricted diamond area” and closed to the public. Mining is still very important to the economy.
|
<urn:uuid:1b32e07f-bbb6-447d-b28e-ece135728f58>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.515625,
"fasttext_score": 0.06912726163864136,
"language": "en",
"language_score": 0.9464021921157837,
"url": "https://kids.britannica.com/students/article/Namaqualand/608633"
}
|
Why Do Crickets Stop Chirping When Approached?
How a Cricket Knows a Predator Is Near
There's nothing more maddening than trying to find a chirping cricket in your basement. It will sing loudly and ceaselessly until the moment you approach when it abruptly stops chirping. How does a cricket know when to hush?
Why Do Crickets Chirp?
Male crickets are the communicators of the species. The females wait for the songs of the males to spur the mating ritual. Female crickets don't chirp. Males make the chirping sound by rubbing the edges of their forewings together to call for female mates. This rubbing is called stridulation.
Some species of crickets have several songs in their repertoire. The calling song attracts females and repels other males, and it's fairly loud. This song is used only during the day in safe places; crickets aggregate at dawn without the use of acoustic calling. These groupings are typically not courtship displays or leks because they don't assemble for the sole purpose of mating.
The cricket courting song is used when a female cricket is near, and the song encourages her to mate with the caller. An aggressive song allows male crickets to interact aggressively with one another, establish territory, and claim access to females in that territory. A triumphal song is produced for a brief period after mating and may reinforce the mating bond to encourage the female to lay eggs rather than find another male.
Mapping Cricket Chirping
The different songs used by crickets are subtle, but they do vary in pulse numbers and hertzes, or frequency. Chirp songs have one to eight pulses, spaced at regular intervals. Compared with aggressive songs, courtship chirps tend to have more pulses and shorter intervals between them.
Crickets chirp at different rates depending on their species and the temperature of their environment. Most species chirp at higher rates the higher the temperature is. The relationship between temperature and the rate of chirping is known as Dolbear's law. According to this law, counting the number of chirps produced in 14 seconds by the snowy tree cricket, common in the United States, and adding 40 will approximate the temperature in degrees Fahrenheit.
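Dolbear's law as stated above is simple enough to express directly. The sketch below is a minimal illustration (the function names are ours, and the formula applies to the snowy tree cricket):

```python
def dolbear_temp_f(chirps_in_14s):
    """Approximate air temperature in degrees Fahrenheit from snowy tree
    cricket chirps counted over 14 seconds (Dolbear's law)."""
    return chirps_in_14s + 40

def dolbear_temp_c(chirps_in_14s):
    """The same estimate converted to degrees Celsius."""
    return (dolbear_temp_f(chirps_in_14s) - 32) * 5 / 9

# Example: 35 chirps counted in 14 seconds
print(dolbear_temp_f(35))          # 75 (degrees F)
print(round(dolbear_temp_c(35)))   # 24 (degrees C)
```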
Crickets "Hear" Vibrations
Crickets know when we approach because they are sensitive to vibrations and noises. Since most predators are active during daylight, crickets chirp at night. The slightest vibration might mean an approaching threat, so the cricket goes quiet to throw the predator off its trail.
Crickets don't have ears like we do. Instead, they have a pair of tympanal organs on their forewings (tegmina), which vibrate in response to vibrating molecules (sound to humans) in the surrounding air. A special receptor called the chordotonal organ translates the vibration from the tympanal organ into a nerve impulse, which reaches the cricket's brain.
Crickets are extremely sensitive to vibration. No matter how soft or quiet you try to be, a cricket will get a warning nerve impulse. Humans hear something first, but crickets always feel it.
A cricket is always on the alert for predators. Its body color, usually brown or black, blends in with most of its environments. But when it feels vibrations, it responds to the nerve impulse by doing what it can to hide—it goes silent.
How to Sneak Up on a Cricket
If you're patient, you can sneak up on a chirping cricket. Each time you move, it will stop chirping. If you remain still, eventually it will decide it's safe and begin calling again. Keep following the sound, stopping each time it goes silent, and you'll eventually find your cricket.
|
<urn:uuid:5b2f582f-1d51-4db6-b705-8fa0a6e81bbd>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.703125,
"fasttext_score": 0.9850639700889587,
"language": "en",
"language_score": 0.9308047294616699,
"url": "https://www.thoughtco.com/chirping-crickets-quiet-when-you-move-1968336"
}
|
Scott Joplin (c. 1868 – 1917)
Joplin’s Legacy
Scott Joplin played an essential role in the development of ragtime music. His work also laid the groundwork for jazz, another distinctly American musical form. Through his performances and compositions, Joplin gave the world a unique form of music that combines classical structures and techniques with African American melodies and rhythms. He also opened the door for other black musicians and artists to succeed in a racially segregated nation.
Today, museums preserve Joplin’s memory and educate people about his contributions. His work was revived in the 1940s and more fully explored and promoted in the 1970s. A 1973 movie, The Sting, made his song “The Entertainer” a popular hit. His opera Treemonisha was finally fully produced, and in 1976 the Pulitzer Committee awarded Joplin a special Bicentennial Pulitzer Prize for his contribution to American music. Sedalia, Missouri, calls itself “The Cradle of Ragtime” and honors Joplin’s life and work with its annual “Scott Joplin Ragtime Music Festival.” Fans from around the world attend the festival to honor this great musician.
Scott Joplin was a musician and composer. He is considered the “King of Ragtime Writers.” Ragtime is music played in “ragged” or off-the-beat time. This varied rhythm developed from African American work songs, gospel tunes, and dance. Joplin wrote forty-four original piano pieces or rags, two operas, and one ragtime ballet. He also co-wrote seven rags with other composers.
Early Years
Scott Joplin was born around 1868 in northeastern Texas. He was the second of six children born to Giles and Florence Joplin. His family lived for a while on the farm of William Caves, but moved to Texarkana, Texas, in the 1870s. His father worked as a laborer, and his mother was a house cleaner and laundress. Joplin learned to play piano at an early age on the piano of his mother’s employer. He also took lessons from a German-born music teacher named Julius Weiss. Joplin’s parents were both very musical. His father played the violin, and his mother sang and played the banjo.
Joplin began performing as a musician when he was a teenager. In addition to the piano, Joplin played the violin and cornet. He also sang well. Sometime in the 1880s, Joplin left Texarkana and traveled to many places. He probably spent time in Sedalia, Missouri, attending Lincoln High School. He also went to St. Louis where he met Tom Turpin, another ragtime musician. Joplin played a variety of music, combining traditional western forms such as the waltz and march with melodies and rhythms borrowed from African American songs. In 1893 Joplin traveled to Chicago at the time of the World’s Fair. He led and played the cornet in a band that played outside the fairgrounds. There he met musician Otis Saunders, who encouraged him to write down and publish the songs he had been making up as he entertained his audiences.
Music Training in Sedalia
Scott Joplin moved to Sedalia, a busy railroad town in Missouri, in 1894. Here Joplin joined the Queen City Cornet Band and performed in local clubs. Using Sedalia as a home base, he continued to travel around the country with various musical groups. In 1896 he enrolled at the George R. Smith College to study music seriously and to develop the skill of transferring musical sounds into notes recorded on a page that other musicians could then play. Joplin quickly learned how to write down the vibrant melodies and complex rhythms he and his fellow musicians had been developing. He then published several original compositions and also started co-writing songs with Sedalia musicians Arthur Marshall and Scott Hayden.
Scott Joplin soon became a popular and respected musician in Sedalia. In 1899, a local music store owner and music publisher named John Stark printed Joplin’s song “Maple Leaf Rag.” Immediately popular, this song featured a pleasing melody and a catchy beat. It became a classic model of ragtime music and thrust Joplin into the national spotlight. Eventually, this rag and many others earned Joplin the title “King of Ragtime Writers.”
High Hopes in St. Louis
Trying to build on the success of “Maple Leaf Rag,” Scott Joplin and his bride, Belle Jones, moved to St. Louis in 1901. John Stark had already moved there, and Hayden and Marshall came, too. Joplin and his friends hoped to become successful performers and composers in this urban center. With their presence, St. Louis became a focal point for this special kind of music.
Joplin devoted most of his time and energy to composing new pieces and teaching music lessons. He wrote and published many new works, including an opera and ballet, while living in St. Louis. His ragtime compositions gained the attention of classically trained musicians and critics. Alfred Ernst, conductor of the St. Louis Choral Symphony Society, described Joplin as “an extraordinary genius.” Monroe Rosenfield, a respected music critic for the St. Louis Globe-Democrat, praised Joplin’s work highly.
Despite this respect and his popularity, Joplin was like many other African American musicians of that time period. He was praised, but not fully included in white society. He performed places where other members of his race had limited access. Though Joplin’s popular rags were published, he had trouble raising money to produce the works that he cared most deeply about—his longer and more complicated compositions.
Scott Joplin’s private life also became troubled. He suffered the loss of an infant child, his first marriage ended, and his second wife, Freddie Alexander, died shortly after they were married in 1904. By late 1907, Joplin had left St. Louis and moved to New York City. He hoped that this city would offer him new opportunities and the solid financial backing he needed to continue his work.
Ragtime in New York City
New York City offered Scott Joplin new experiences. Here he performed in vaudeville shows and wrote new songs. John Stark also moved to the city and set up his publishing business in a district called Tin Pan Alley. Joplin maintained his relationship with Stark, but also branched out with other publishers. He married again, this time to Lottie Stokes, who supported his work and efforts. Joplin still worked very hard for very little pay. He toiled for years on a major piece, a second opera titled Treemonisha. Between 1911 and 1915, he put on a series of unstaged run-throughs and partial performances of Treemonisha, but the opera failed to gain financial backing for a full production. By this time, Joplin was suffering from a disease that made it difficult for him to compose and perform as he always had. Sick, discouraged, and poor, Scott Joplin died on April 1, 1917. He is buried in Saint Michael’s Cemetery in New York City.
|
<urn:uuid:41c522ee-258f-4882-a139-fcda6e421ece>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.53125,
"fasttext_score": 0.03230029344558716,
"language": "en",
"language_score": 0.9830249547958374,
"url": "https://pianonotes.piano4u.com/index.php/2016/08/composers-corner-scott-joplin-2/"
}
|
Early history
According to ancient Chinese, Indian and Japanese manuscripts, the western coastal cities of Borneo had become trading ports by the first millennium AD. In Chinese manuscripts, gold, camphor, tortoise shells, hornbill ivory, rhinoceros horn, crane crest, beeswax, lakawood (a scented heartwood and root wood of a thick liana, Dalbergia parviflora), dragon’s blood, rattan, edible bird’s nests and various spices were described as among the most valuable items from Borneo. The Indians named Borneo Suvarnabhumi (the land of gold) and also Karpuradvipa (Camphor Island). The Javanese named Borneo Puradvipa, or Diamond Island. Archaeological findings in the Sarawak river delta reveal that the area was a thriving centre of trade between India and China from the 6th century until about 1300.
Stone pillars bearing inscriptions in the Pallava script, found in Kutai along the Mahakam River in East Kalimantan and dating to around the second half of the 4th century, constitute some of the oldest evidence of Hindu influence in Southeast Asia. By the 14th century, Borneo became a vassal state of Majapahit (in present-day Indonesia), later changing its allegiance to the Ming dynasty of China. The religion of Islam entered the island in the 10th century, following the arrival of Muslim traders who later converted many indigenous peoples in the coastal areas.
The Sultanate of Brunei declared independence from Majapahit following the death of the Majapahit emperor in the mid-14th century. During its golden age under Bolkiah, from the 15th century to the 17th century, the Bruneian Empire ruled almost the entire coastal area of Borneo (lending its name to the island due to its influence in the region) and several islands in the Philippines. During the 1450s, Shari’ful Hashem Syed Abu Bakr, an Arab born in Johor, arrived in Sulu from Malacca. In 1457, he founded the Sultanate of Sulu and titled himself “Paduka Maulana Mahasari Sharif Sultan Hashem Abu Bakr”. Following its independence from Brunei’s influence in 1578, the Sulu Sultanate began to expand its thalassocracy into parts of northern Borneo. Both of the sultanates that ruled northern Borneo traditionally engaged in trade with China by means of frequently arriving Chinese junks. Despite the thalassocracy of the sultanates, Borneo’s interior region remained free from the rule of any kingdom.
British and Dutch control
Since the fall of Malacca in 1511, Portuguese merchants traded regularly with Borneo, and especially with Brunei from 1530. Having visited Brunei’s capital, the Portuguese described the place as surrounded by a stone wall. While Borneo was seen as rich, the Portuguese made no attempt to conquer it. The Spanish visit to Brunei led to the Castilian War in 1578. The English began to trade with Sambas in southern Borneo in 1609, while the Dutch only began their trade in 1644, with Banjar and Martapura, also in southern Borneo. The Dutch tried to settle the island of Balambangan, north of Borneo, in the second half of the 18th century, but withdrew by 1797. In 1812, the sultan in southern Borneo ceded his forts to the English East India Company. The English, led by Stamford Raffles, then tried to establish an intervention in Sambas but failed. Although they managed to defeat the Sultanate the next year and declared a blockade of all ports in Borneo except Brunei, Banjarmasin and Pontianak, the project was cancelled by the British Governor-General Lord Minto in India as it was too expensive. At the beginning of British and Dutch exploration of the island, they described Borneo as full of headhunters, with the indigenous people of the interior practising cannibalism, and the waters around the island infested with pirates, especially between northeastern Borneo and the southern Philippines. The Malay and Sea Dayak pirates preyed on maritime shipping in the waters between Singapore and Hong Kong from their haven in Borneo, along with attacks by the Illanun of the Moro pirates from the southern Philippines, such as in the Battle off Mukah.
The Dutch began to intervene in the southern part of the island upon resuming contact in 1815, posting Residents to Banjarmasin, Pontianak and Sambas and Assistant-Residents to Landak and Mampawa. The Sultanate of Brunei in 1842 granted large parts of land in Sarawak to the English adventurer James Brooke, as a reward for his help in quelling a local rebellion. Brooke established the Kingdom of Sarawak and was recognised as its rajah after paying a fee to the Sultanate. He established a monarchy, and the Brooke dynasty (through his nephew and great-nephew) ruled Sarawak for 100 years; its leaders were known as the White Rajahs. Brooke also acquired the island of Labuan for Britain through the Treaty of Labuan with the Sultan of Brunei, Omar Ali Saifuddin II, on 18 December 1846. The region of northern Borneo came under the administration of the North Borneo Chartered Company following the acquisition of territory from the Sultanates of Brunei and Sulu by a German businessman and adventurer named Baron von Overbeck, before it was passed to the British Dent brothers, Alfred and Edward Dent. Further encroachment by the British reduced the territory of Brunei, which led the 26th Sultan of Brunei, Hashim Jalilul Alam Aqamaddin, to appeal to the British to stop; as a result, a Treaty of Protection was signed in 1888, rendering Brunei a British protectorate.
Before the acquisition by the British, the Americans also managed to establish a temporary presence in northwestern Borneo after acquiring a parcel of land from the Sultanate of Brunei. A company known as the American Trading Company of Borneo was formed by Joseph William Torrey, Thomas Bradley Harris and several Chinese investors, establishing a colony named “Ellena” in the Kimanis area. The colony failed and was abandoned, owing to the denial of financial backing, especially by the US government, and to disease and riots among the workers. Before Torrey left, he managed to sell the land to the German businessman Overbeck. Meanwhile, the Germans under William Frederick Schuck were awarded a parcel of land in northeastern Borneo on Sandakan Bay by the Sultanate of Sulu, where Schuck ran a business exporting large quantities of arms, opium, textiles and tobacco to Sulu before this land, too, was passed to Overbeck by the Sultanate.
Prior to the recognition of the Spanish presence in the Philippine archipelago, a protocol known as the Madrid Protocol of 1885 was signed between the governments of the United Kingdom, Germany and Spain in Madrid to cement Spanish influence and recognise Spanish sovereignty over the Sultanate of Sulu—in return for Spain’s relinquishing its claim to the Sultanate’s former possessions in northern Borneo. The British administration then established the first railway network in northern Borneo, known as the North Borneo Railway. During this time, the British sponsored a large number of Chinese workers to migrate to northern Borneo to work in European plantations and mines, and the Dutch followed suit to increase their own economic production. By 1888, North Borneo, Sarawak and Brunei in northern Borneo had become British protectorates, while the area in southern Borneo was made a Dutch protectorate in 1891. The Dutch, who had claimed the whole of Borneo, were asked by Britain to delimit the boundaries between the two colonial territories to avoid further conflicts. The British and Dutch governments had earlier signed the Anglo-Dutch Treaty of 1824 to exchange trading ports under their control in the Malay Peninsula and Sumatra and to assert spheres of influence. This resulted, indirectly, in the establishment of British- and Dutch-controlled areas in the north (Malay Peninsula) and south (Sumatra and the Riau Islands) respectively.
World War II
During World War II, Japanese forces gained control and occupied most areas of Borneo from 1941 to 1945. In the first stage of the war, the British saw the Japanese advance on Borneo as motivated by political and territorial ambitions rather than economic factors. The occupation drove many people in the coastal towns to the interior, searching for food and escaping the Japanese. The Chinese residents of Borneo, especially given the Sino-Japanese War in mainland China, mostly resisted the Japanese occupation. Following the formation of resistance movements in northern Borneo such as the Jesselton Revolt, many innocent indigenous and Chinese people were executed by the Japanese for their alleged involvement.
In Kalimantan, the Japanese also killed many Malay intellectuals, executing all the Malay sultans of West Kalimantan in the Pontianak incidents, together with Chinese people whom they suspected of being threats because of their opposition to Japan. Sultan Muhammad Ibrahim Shafi ud-din II of Sambas was executed in 1944; the Sultanate was thereafter suspended and replaced by a Japanese council. The Japanese also set up the Pusat Tenaga Rakjat (PUTERA) in the Indonesian archipelago in 1943, although it was abolished the following year when it became too nationalistic. Some Indonesian nationalists, like Sukarno and Hatta, who had returned from Dutch exile, began to co-operate with the Japanese. Shortly after his release, Sukarno became President of the Central Advisory Council, an advisory council for southern Borneo, Celebes and the Lesser Sundas, set up in February 1945.
After the fall of Singapore, the Japanese sent several thousand British and Australian prisoners of war to camps in Borneo, such as Batu Lintang camp. From the Sandakan camp site, only six of some 2,500 prisoners survived after they were forced to march in an event known as the Sandakan Death March. In addition, of the 17,488 Javanese labourers brought in by the Japanese during the occupation, only 1,500 survived, mainly owing to starvation, harsh working conditions and maltreatment. The Dayak and other indigenous peoples played a role in guerrilla warfare against the occupying forces, particularly in the Kapit Division; they temporarily revived the headhunting of Japanese toward the end of the war, with the Allied Z Special Unit providing assistance. Australia contributed significantly to the liberation of Borneo: the Australian Imperial Force was sent to Borneo to fight off the Japanese, and together with the other Allies the island was completely liberated in 1945.
Recent history
Towards the end of the war, Japan decided to grant early independence to a proposed new country of Indonesia on 17 July 1945, with an Independence Committee meeting scheduled for 19 August 1945. However, following the surrender of Japan to the Allied forces, the meeting was shelved. Sukarno and Hatta continued the plan by unilaterally declaring independence, although the Dutch tried to retake their colonial possessions in Borneo. The southern part of the island achieved its independence through the Proclamation of Indonesian Independence on 17 August 1945. The reaction was relatively muted, with little open fighting in Pontianak or in the Chinese-majority areas. While nationalist guerrillas supporting the inclusion of southern Borneo in the new Indonesian republic were active in Ketapang, and to a lesser extent in Sambas, where they rallied around the red-and-white flag that became the flag of Indonesia, most of the Chinese residents in southern Borneo expected to be liberated by Chinese Nationalist troops from mainland China and to have their districts integrated as an overseas province of China.
In May 1945, officials in Tokyo suggested that whether northern Borneo should be included in the proposed new country of Indonesia should be determined separately, based on the desires of its indigenous people and following the disposition of Malaya. Sukarno and Mohammad Yamin, meanwhile, continuously advocated a Greater Indonesian republic. Perceiving the British as trying to maintain their presence in northern Borneo and the Malay Peninsula, Sukarno, as President of the new republic, launched military infiltrations later known as the Confrontation, which lasted from 1962 until 1966. In 1961, Prime Minister Tunku Abdul Rahman of the independent Federation of Malaya proposed to unite Malaya, the British colonies of Sarawak, North Borneo and Singapore, and the Protectorate of Brunei under the proposed Federation of Malaysia. The idea was heavily opposed by the governments of both Indonesia and the Philippines, as well as by Communist sympathisers and nationalists in Borneo. In response to the growing opposition, the British deployed their armed forces to guard their colonies against Indonesian and communist revolts, aided by Australia and New Zealand.
The Philippines opposed the newly proposed federation, claiming the eastern part of North Borneo (today the Malaysian state of Sabah) as part of its territory, as a former possession of the Sultanate of Sulu. The Philippine government based its claim mostly on the Sultanate of Sulu’s cession agreement with the British North Borneo Company: since the Sultanate had by then come under the jurisdiction of the Philippine republican administration, the republic, it argued, should inherit the Sultanate’s former territories. The Philippine government also claimed that the heirs of the Sultanate had ceded all their territorial rights to the republic.
The Sultanate of Brunei at first welcomed the proposal of a new, larger federation. Meanwhile, the Brunei People’s Party, led by A.M. Azahari, wanted to reunify Brunei, Sarawak and North Borneo in a federation known as the North Borneo Federation (Malay: Kesatuan Negara Kalimantan Utara), in which the Sultan of Brunei would be head of state—though Azahari’s own intention was to abolish the Brunei monarchy, to make Brunei more democratic, and to integrate the territory and the other former British colonies in Borneo into Indonesia, with the support of the latter’s government. This led directly to the Brunei Revolt, which thwarted Azahari’s attempt and forced him to escape to Indonesia. Brunei withdrew from the proposed Federation of Malaysia over disagreements on other issues, while political leaders in Sarawak and North Borneo continued to favour inclusion in a larger federation.
In the face of continuing opposition from Indonesia and the Philippines, the Cobbold Commission was established to ascertain the feelings of the native populations of northern Borneo; it found the people greatly in favour of federation, with various stipulations. The federation was successfully achieved, with the inclusion of northern Borneo, through the Malaysia Agreement on 16 September 1963. To the present day, northern Borneo remains subject to attacks by Moro pirates, as it has been since the 18th century, and since 2000 to frequent cross-border attacks by militants such as the Abu Sayyaf group. During his administration, Philippine President Ferdinand Marcos made some attempts to destabilise the state of Sabah; his plan failed and resulted in the Jabidah massacre, and later in insurgency in the southern Philippines.
|
<urn:uuid:2cbdf317-c4f1-4dec-896a-f748454e2e7d>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.96875,
"fasttext_score": 0.04995989799499512,
"language": "en",
"language_score": 0.9670203328132629,
"url": "http://borneo-infos.com/history-2/"
}
|
We have been working on carrying out a fair test to investigate orbit times. We used a ping pong ball tied to a string to model a planet and its orbit. We changed the length of the string so the orbit was smaller or larger, and then timed how long one orbit took. We will now look closely at our results to determine whether we can show that the further the planet is from the Sun, the longer the orbit time.
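For anyone who wants to check the same pattern against real solar-system numbers, here is a small sketch (not part of the class activity; the planet values are approximate):

```python
# Approximate distance from the Sun (AU) and orbit time (Earth years).
planets = {
    "Mercury": (0.39, 0.24),
    "Venus":   (0.72, 0.62),
    "Earth":   (1.00, 1.00),
    "Mars":    (1.52, 1.88),
    "Jupiter": (5.20, 11.86),
}

for name, (distance, period) in planets.items():
    # Kepler's third law: period**2 / distance**3 is roughly constant,
    # so the further the planet, the longer its orbit time.
    print(f"{name:8s} {distance:5.2f} AU  {period:6.2f} yr  "
          f"T^2/a^3 = {period**2 / distance**3:.2f}")
```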
The Big Question
‘Is there a pattern between the size of a planet and the time it takes to travel around the Sun?’
We explored the planet fact cards to look for patterns and relationships.
We have been learning about how flowers reproduce. We dissected a lily to explore the different parts of the flower.
We worked in small groups to sort the life cycles of different animals that had become muddled up into the correct order. This helped us to explore the similarities and differences between the life cycle of different classes of animal.
We have started finding out about the life cycles of amphibians. We will be finding out next about insects so that we can then compare the life cycles of these different classes and find similarities and differences.
We have been working scientifically in Beech class. We worked in groups to measure our current height, then used the World Health Organisation growth charts to plot where our height would be. We could then make a prediction on what our height might be when we are an adult. It was interesting to compare results in our class.
Gestation Gurus
We played a game to see if we could work out the different gestation periods for different mammals, including humans. We learned that gestation is the process by which a mammal grows a baby and is the time between conception and birth. Once we had worked out the correct gestation periods for different mammals, we were challenged to spot any patterns in the results. We realised that there was a link between the size of the animal and how long it carried a baby for before giving birth.
In most cases, the larger the animal, the longer the period of gestation; conversely, the smaller the animal, the shorter the gestation time.
We had a great time celebrating Roald Dahl Day with some themed science experiments. First of all, we enjoyed making our own BFG-inspired dream jars and observing over time what happened when we mixed oil, water, food dye and Alka-Seltzer. We then made our own phizzwizards, again inspired by the BFG. What a fun afternoon we had!
|
<urn:uuid:239343ce-4f00-4914-8eb2-b23f056f74a0>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.671875,
"fasttext_score": 0.036138713359832764,
"language": "en",
"language_score": 0.9206203818321228,
"url": "https://www.stpeters.lancs.sch.uk/beech-class-3/"
}
|
Explanatory and Response Variables
Author: Katherine Williams
Identify the explanatory and response variables in a scatterplot.
Video Transcription
This tutorial covers explanatory and response variables. As a review, a variable is a characteristic of a person or a thing. And then sometimes when you have the set of variables, it's going to make sense to pick an explanatory variable and a response variable.
That's not always going to be true. Sometimes it's not going to have a clear explanatory and response variable, and that's OK.
But what an explanatory variable is, it's one that might cause an effect. So the explanatory variable is the thing that you're looking to cause something to happen. And the response variable, on the other hand, is the one that's going to reflect that effect.
And the explanatory variables, when you're making a scatter plot, are going to go in the x-axis. And the response variables are going to go in the y-axis.
Now, if you don't have a clear-cut explanatory and response variable, then it doesn't really matter which one goes on the x and y. You can choose either. You can try graphing it with both on there and choose which one gives the right picture that you're looking for. But when you have explanatory and response, then it has to go x for explanatory, y for response.
So in this first example here, it says we're comparing two variables. The first variable is the age of a young farm animal and the second variable is the weight of a young farm animal. Here, it's going to make sense that there's an explanatory and a response.
We think that age would have an effect on the weight. So the age is going to be the explanatory variable. And so then we're going to put that on the x. Whereas, the weight is going to be the response variable, so that would go on the y.
Now with example 2, it says a student's grade in English and a student's grade math. There, it's not as clear cut. We don't know whether the English grade is causing you to have a better math grade or the math grade is causing you to have a better English grade, or there's something else outside. So you can pick either x or y for the English grade, and then the math grade just goes in the other axis.
So here we're just looking at the example 1 that we just had. And we said that the age was the explanatory variable, and we want that to go on the x-axis. So this one here is the x-axis. And then the weight, we think is the response, so we think it's the y.
And one trick I use to keep the x-axis and the y-axis separate, because I forget sometimes, is the y-axis you can turn it into a capital Y. So it has the little top part that you can add onto it. So that can help you to remember the y is the one that's vertical.
Now this has been your tutorial on explanatory and response variables. The key thing to remember here is that with a scatter plot, the explanatory goes in the x-axis and the response goes on the y-axis.
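As a concrete illustration of the axis rule from the transcript, here is a minimal matplotlib sketch using made-up data for the farm-animal example (the data values are invented for demonstration):

```python
import matplotlib.pyplot as plt

# Example 1 from the transcript: age is the explanatory variable,
# weight is the response variable. Data values are hypothetical.
age_weeks = [1, 2, 4, 6, 8, 10, 12]
weight_kg = [4.1, 5.0, 7.2, 9.5, 11.8, 13.9, 16.2]

plt.scatter(age_weeks, weight_kg)
plt.xlabel("Age (weeks)")   # explanatory variable goes on the x-axis
plt.ylabel("Weight (kg)")   # response variable goes on the y-axis
plt.title("Age vs. weight of a young farm animal")
plt.show()
```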
Terms to Know
Explanatory Variable
The variable whose increase or decrease we believe helps explain a tendency to increase or decrease in some other variable.
Response Variable
The variable that tends to increase or decrease due to an increase or decrease in the explanatory variable.
|
<urn:uuid:e2b847f6-0de9-4c51-a251-60d0036487a2>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 4.1875,
"fasttext_score": 0.5247555375099182,
"language": "en",
"language_score": 0.9422240257263184,
"url": "https://www.sophia.org/tutorials/explanatory-and-response-variables--3"
}
|
Muhammad Ali
Writing Assignment on Muhammad Ali and the Draft
When most people think of the 1960s, images of civil rights activists and anti-war protesters immediately come to mind. One commonly thinks of Martin Luther King, Jr. or the Black Panthers, for example. At the same time, provocative photographs of burning draft cards and violent confrontations with the police also form a large part of America’s historical memory. The case of Muhammad Ali and conscription reflects these wider issues of war and peace and racial justice, but from a different angle that allows you to use your larger historical imagination to better understand the tensions underlying American society in that contentious decade.
Let us go back to the late 1960s, when the federal government felt obligated to prosecute a celebrity draft evader, the Nation of Islam passionately advocated for their most prized recruit, Stokely Carmichael defended a man he called “hero,” who through his refusal to serve dramatically raised the profile of the growing anti-war movement (especially for Black Americans), patriotic American Legion members urged boycotts of Ali prize fights, traditional white establishment sportswriters heaped scorn upon the young heavyweight champ, and Ali took a courageous and costly principled stand against a war that he could not in good conscience join.
Drawing on all the sources below, explain the issues surrounding Muhammad Ali’s greatest fight, his refusal to be drafted for combat during the Vietnam War. Having read chapter 25 of Foner’s Give Me Liberty, which provides a foundation for understanding social protest and antiwar sentiment during the 1960s, read the following articles from the Washington Post and New Yorker for more background about Muhammad Ali and the draft. Then consider the following sources—videos, primary documents, and newspaper and magazine articles—as you work through the assignment.
The sources below are arranged around five personas, two of which are fictional composites, that represent five different constituencies/perspectives about the controversy. While they are hardly conclusive, they should provide plenty of context for you to construct a historical argument about the incident and its larger social and political meaning. With all that in mind, here is your prompt:
Drawing on all the sources below, explain the issues surrounding Muhammad Ali’s “greatest fight,” his refusal to be drafted for combat during the Vietnam War. Consider the historical context and the various perspectives of the five personas. Why was his decision met with such hostility? How did the controversy both reflect and shape larger social struggles, both in the civil rights and antiwar movements, as well as beyond? What does Ali’s struggle tell us about American society in the 1960s? In short, why is Cassius Clay/Muhammad Ali so important?
Your paper should be a four-to-five-page typewritten (1250 words, double-spaced) analysis of the issue. A good paper will consider these questions and provide evidence from the various sources and/or your textbook to support your answer. The essay is due Friday, February 19th. Note that without the paper, you will not have completed all of the requirements for the course, and will therefore be ineligible for a passing grade.
While grading is primarily based upon your understanding and critical analysis of the sources, form will also be taken into account. In addition to typographical errors, check carefully for spelling and grammatical mistakes. Pages must be numbered. With regard to formatting, use standard one inch margins and a 12 point font. Times New Roman is the preferred typeface. And remember to cite direct quotations. As a rule they can be valuable in underscoring a point, but avoid lengthy and excessive quotations: they are boring. As for form, you can cite your work with either MLA or Chicago styles, as long as you are consistent. Finally, do not plagiarize. No credit will be given for dishonest work. For more information on Muhammad Ali check out :
Attachment : Muhammad
<urn:uuid:baf6d280-9e31-41a5-8dee-08094d5afa5c>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.53125,
"fasttext_score": 0.03608715534210205,
"language": "en",
"language_score": 0.9168657660484314,
"url": "https://thehomeworkwritings.com/muhammad-ali/"
}
|
radiative zone
The layer of a star that lies just outside the core, to which radiant energy is transferred from the core in the form of photons. In this layer, photons bounce off other particles, following fairly random paths until they enter the convection zone. Despite the high speed of photons, it can take hundreds of thousands of years for radiant energy in the Sun's radiative zone to escape and enter the convection zone.
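The "hundreds of thousands of years" figure follows from a random-walk argument: a photon taking steps of mean free path ℓ needs about (R/ℓ)² steps to cross a distance R. The sketch below is an order-of-magnitude estimate with assumed values, not a precise solar model:

```python
# Order-of-magnitude random-walk estimate of photon escape time.
c = 3.0e8      # speed of light, m/s
R = 5.0e8      # assumed thickness of the radiative zone, m
mfp = 1.0e-3   # assumed photon mean free path, m (~1 mm)

steps = (R / mfp) ** 2        # random walk needs ~(R / mfp)**2 steps
time_s = steps * mfp / c      # each step covers mfp at speed c
years = time_s / 3.15e7       # seconds per year
print(f"~{years:.0e} years")  # ~3e4 years with these assumed values
```

With a shorter mean free path (well under a millimetre in the dense inner Sun), the same estimate rises to the hundreds of thousands of years mentioned in the entry.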
|
<urn:uuid:d81fdf03-52b4-4e7a-8726-cabdea9d82d5>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.53125,
"fasttext_score": 0.999239444732666,
"language": "en",
"language_score": 0.846782386302948,
"url": "https://www.dictionary.com/browse/radiative-zone"
}
|
Temporary Camps and Camp Placement
Historian Gordon S. Maxwell defines the Roman ‘temporary camp’ as “the technical term for the defended bivouac in which bodies of Roman troops rested while on the march or while engaged in other field-operations, such as training exercises, fort-building, and the like1.”
Although these camps varied in size, shape, and purpose, they were typically constructed in the shape of a rectangle with rounded corners and were protected by a small ditch and a small earthen rampart (presumably made with the displaced earth from the ditch) lined with stakes2.
These temporary camps are categorized by Rebecca Jones into four types3. The first and most common — marching camps — were used to protect an army while away from a permanent base campaigning or conducting other military operations. These camps were constructed by the Romans at the end of a day of marching or conducting other operations in the field4. The second most commonly found type of camp is the construction camp, which appears to have housed troops building larger, more permanent fortifications. There are several proposed construction camps in Scotland, many of which are located near the Antonine Wall, but they can be difficult to distinguish from other camps. Even less common are siege camps, which were used to house troops besieging a nearby fortification (of which there is only one proposed site in Scotland, at Burnswark), and practice camps, constructed as part of training exercises (of which there are no firmly proposed sites north of Hadrian’s Wall)5.
“Un acampamento romano en tiempo de la invasión romana en España” — an artist’s interpretation of a Roman marching camp, from an 1852 history book by Florián de Ocampo, provided by Fondo Antiguo de la Biblioteca de la Universidad de Sevilla via Creative Commons.
Most of what we know about these temporary camps comes from a combination of what we can glean from the writings of ancient historians, military manuals authored during the time of the Roman Empire, and information from the sites of the camps themselves, studied via aerial survey or archaeological excavation6. One notable exception to this is a more visual and artistic source — the column of Trajan in Rome contains several artistic depictions of Roman camps under construction7.
Of these sources, two of the most detailed in their descriptions of these camps come from military manuals. One of these manuals is known as the Epitoma Rei Militaris, written by Vegetius sometime in the late 4th century AD. His point of view is interesting in that he appears to be writing a guide on how to fix the Roman army of his day, apparently as advice to an emperor of disputed identity, rather than a history of the Roman military’s practices. Vegetius seems to have been some sort of court official ("there is little evidence that he himself was a soldier"), and he does not present himself as anything more than that8.
Main entrance to the Lunt Roman Fort (reconstructed) near Coventry, England. Photo by Chris McKenna19.
The Epitoma covers a wide variety of military subjects, but of most importance to the subject of camp placement are sections XXI-XXV of Book I. Vegetius appears to have been dismayed at the state of camp construction and placement during his own time — Stelten’s translation at one point reads “[…] competence in this matter [referring to the fortification of camps] has been completely lost […]9.” He writes that the Roman camp should be placed in a location with easy access to food, water, and wood, in a location that is “conducive to health10” and where flooding would not endanger the camp. He also warns against choosing camp sites that are overlooked by nearby high points, “which, if captured by the enemy, might be an obstacle11.”
Another, even more detailed manual covering the construction and placement of Roman camps is the de Munitionibus Castrorum, likely written by a military surveyor whose name is disputed — he is sometimes called Pseudo-Hyginus12 because the work has been attributed to Hyginus Gromaticus in the past13.
The manual is probably from the 1st or 2nd century AD14, which if true is nearly contemporary with many of the Romans' Scottish campaigns15. In addition to providing in-depth layouts of a temporary camp, down to the numbers of men and physical dimensions of camp features, de Munitionibus Castrorum lays out fairly detailed ground rules for choosing the terrain on which the camp would be built. Among these rules are16:
• “Whatever the position of the camp there should be a river or spring on one side or the other.”
• “[…] the camp should not be overlooked by a mountain from which the enemy could attack or see what is going on in the camp.”
• “[…] there should be no forest nearby […], nor gulleys or valleys […]; nor should the camp be near a fast-flowing river which might flood and overwhelm the camp in a sudden storm17.”
Pseudo-Hyginus also lists generic types of terrain in order of their suitability — he states that the best terrain for a camp is a small rise overlooking a plain, “so that the area is dominated by the camp18.” Following are flat plains, then hills, then mountains, and finally anywhere else — what the author refers to as an “unavoidable camp18.”
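Pseudo-Hyginus's rules are concrete enough to state as a tiny site-selection check. The sketch below (in Python; the criteria names, scores, and example sites are my own paraphrase for illustration, not anything from the manual itself) ranks acceptable sites by his terrain ordering:

```python
# Terrain types in Pseudo-Hyginus's order of preference (lower = better).
TERRAIN_RANK = {
    "rise over a plain": 1,   # "so that the area is dominated by the camp"
    "flat plain": 2,
    "hill": 3,
    "mountain": 4,
    "unavoidable camp": 5,    # anywhere else
}

def acceptable(site):
    """Boolean ground rules paraphrased from de Munitionibus Castrorum."""
    return (site["has_water"]            # river or spring on one side
            and not site["overlooked"]   # no mountain commanding the camp
            and not site["flood_risk"])  # no fast-flowing river that might flood it

candidates = [
    {"name": "ridge site", "terrain": "hill",
     "has_water": True, "overlooked": False, "flood_risk": False},
    {"name": "knoll site", "terrain": "rise over a plain",
     "has_water": True, "overlooked": False, "flood_risk": False},
    {"name": "gorge site", "terrain": "flat plain",
     "has_water": True, "overlooked": True, "flood_risk": True},
]

ok = [s for s in candidates if acceptable(s)]
best = min(ok, key=lambda s: TERRAIN_RANK[s["terrain"]])
print(best["name"])  # -> "knoll site"
```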
Beyond these two sources, there are of course other ways to glean information about the Roman temporary camp. Other literary sources, while often very detailed, typically do not focus specifically on camps or their placement — for more information on these sources, I would recommend Chapter 2 of Catherine Gilliver's thesis The Roman Art of War (see notes below). Survey reports are also useful for studying the temporary camp — examples of these can be found on the Citations page.
1. Maxwell, Gordon S., The Romans in Scotland (Edinburgh: J. Thin, 1989, Print): 38-39.
2. Breeze, David. J., The Roman Army (New York: Bloomsbury Academic, 2016, Print): 111-112.
3. Jones, Rebecca H., Roman Camps in Scotland (Edinburgh: Society of Antiquaries of Scotland, 2011, Print): 7.
4. Goldsworthy, Adrian K., The Complete Roman Army (New York: Thames & Hudson, 2003, Print): 169-172.
5. Jones, Roman Camps in Scotland: 7-12.
6. Maxwell, The Romans in Scotland: 38-67.
7. Jones, Roman Camps in Scotland: 7.
8. Stelten, Leo F., translator, Epitoma Rei Militaris by Flavius Vegetius Renatus (New York: Peter Lang, 1990, Print): XIII-XVIII.
9. Stelten, Epitoma Rei Militaris: 45.
10. Stelten, Epitoma Rei Militaris: 45.
11. Stelten, Epitoma Rei Militaris: 45.
12. Jones, Roman Camps in Scotland: 5.
13. Gilliver, Catherine M., The Roman Art of War: Theory and Practice. A Study of the Roman Military Writers (London: University of London, University College London (United Kingdom), 1993, ProQuest, Web): 14.
14. Gilliver, The Roman Art of War: 14.
15. Maxwell, The Romans in Scotland: 26-37.
16. Gilliver, The Roman Art of War: 244-245.
17. On reading this passage in full, one might observe a stereotype that is literally ancient — Pseudo-Hyginus states that his ancestors referred to unfavorable positions like these as “mothers-in-law.”
18. Gilliver, The Roman Art of War: 244.
19. Chris McKenna (Thryduulf) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0) or CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons.
|
<urn:uuid:fb33756d-c6c8-45d5-b77f-323800edc5b3>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.9375,
"fasttext_score": 0.022670269012451172,
"language": "en",
"language_score": 0.9341287612915039,
"url": "http://anarmysfootsteps.leadr.msu.edu/historical-sources/"
}
|
Cadmium is a minor metallic element, one of the naturally occurring components in the earth’s crust and waters, and present everywhere in our environment. It was first discovered in Germany in 1817 as a by-product of the zinc refining process. Its name is derived from the Latin word cadmia and the Greek word kadmeia that are ancient names for calamine or zinc oxide.
Naturally-occurring cadmium-sulfide based pigments were used as early as 1850 because of their brilliant red, orange and yellow colors, and appeared prominently in the paintings of Vincent Van Gogh in the late 1800s. Germany was the first and only commercial producer of cadmium metal for industrial applications up until World War I. Thomas A. Edison in the United States and Waldemar Junger in Sweden developed the first nickel-cadmium batteries early in the 20th Century. However, the most significant early use of cadmium was as a sacrificial corrosion protection coating on iron and steel.
Exposure to certain forms and concentrations of cadmium is known to produce toxic effects on humans. Long-term occupational exposure to cadmium at excess concentrations can cause adverse health effects on the kidneys and lungs. Adverse human health effects have generally not been encountered under normal exposure conditions for the general population except in areas of historically high cadmium contamination. The potential risks from cadmium exposure have been extensively studied, and are now tightly controlled by occupational exposure standards, regulations for cadmium in ambient air, water and soil, and legislation covering cadmium emissions, labeling and disposal of cadmium-containing products, and impurity levels in other products such as fossil fuels, fertilizers and cement.
|
<urn:uuid:6809fb28-090b-48aa-b433-95489fcf8364>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.59375,
"fasttext_score": 0.09965938329696655,
"language": "en",
"language_score": 0.9595589637756348,
"url": "https://www.cadmium.org/?print_version"
}
|
How Do You Prove That A Function Is Continuous At Some Point?
Health insurance, taxes and many consumer applications result in models that are piecewise functions. To prove these functions are continuous at some point, such as the locations where the pieces meet, we need to apply the definition of continuity at a point.
A function f is continuous at a point x = a if each of the three conditions below is met:
i. \(f(a)\) is defined
ii. \(\lim_{x \to a} f(x)\) is defined
iii. \(\lim_{x \to a} f(x) = f(a)\)
In the problem below, we'll develop a piecewise function and then prove it is continuous at two points.
Problem A company transports a freight container according to the schedule below.
• First 200 miles is $4.50 per mile
• Next 300 miles is $3.00 per mile
• All miles over 500 is $2.50 per mile
Let C(x) denote the cost to move a freight container x miles.
a. Find a piecewise function for C(x).
For this function, there are three pieces. The first piece corresponds to the first 200 miles. The second piece corresponds to 200 to 500 miles. The third piece corresponds to miles over 500.
The function is
\(C(x) = 4.5x\) for \(0 \le x \le 200\)
\(C(x) = 900 + 3(x - 200)\) for \(200 < x \le 500\)
\(C(x) = 1800 + 2.5(x - 500)\) for \(x > 500\)
Let’s break this down a bit. In the first section, each mile costs $4.50 so x miles would cost 4.5x.
In the second piece, the first 200 miles costs 4.5(200) = 900. All miles over 200 cost 3(x-200). This gives the sum in the second piece.
In the third piece, we need $900 for the first 200 miles and 3(300) = 900 for the next 300 miles. In addition, miles over 500 cost 2.5(x-500).
b. Prove that C(x) is continuous over its domain.
Each piece is linear so we know that the individual pieces are continuous. However, are the pieces continuous at x = 200 and x = 500?
Let’s look at each one-sided limit at x = 200 and the value of the function at x = 200.
\(\lim_{x \to 200^-} C(x) = 4.5(200) = 900\), \(\lim_{x \to 200^+} C(x) = 900 + 3(200 - 200) = 900\), and \(C(200) = 900\). Since these are all equal, the two pieces must connect and the function is continuous at x = 200. At x = 500,
\(\lim_{x \to 500^-} C(x) = 900 + 3(300) = 1800\), \(\lim_{x \to 500^+} C(x) = 1800 + 2.5(500 - 500) = 1800\), and \(C(500) = 1800\), so the function is also continuous at x = 500.
This means that the function is continuous for x > 0 since each piece is continuous and the function is continuous at the edges of each piece.
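For readers who want to check the arithmetic, here is a minimal Python sketch (mine, not part of the original lesson) that implements C(x) and compares the one-sided limits and function values at the two breakpoints numerically:

```python
def C(x):
    """Cost to move a freight container x miles (piecewise, from the problem)."""
    if x <= 200:
        return 4.5 * x                  # $4.50 per mile for the first 200 miles
    elif x <= 500:
        return 900 + 3 * (x - 200)      # $3.00 per mile for miles 200 to 500
    else:
        return 1800 + 2.5 * (x - 500)   # $2.50 per mile beyond 500 miles

eps = 1e-9
for a in (200, 500):
    left, right, value = C(a - eps), C(a + eps), C(a)
    # All three agree (up to floating-point rounding), matching the proof above.
    print(a, left, right, value)
```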
|
<urn:uuid:e6968533-ec14-4451-8e38-9d3a31ef5f2b>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.703125,
"fasttext_score": 0.06854218244552612,
"language": "en",
"language_score": 0.8847015500068665,
"url": "http://math-faq.com/wp/question/how-do-you-prove-that-a-function-is-continuous-at-some-point/"
}
|
Animals of Galapagos | Fauna - Galapagos Travel Guide
Galapagos Wildlife
Due to the relative isolation of the Galápagos Islands and its distance from the surrounding mainland continents, many mammals have not found their way to Galápagos shores—at least not naturally. Only 6 terrestrial mammal species are thought of as island ‘natives,’ though substantially more marine mammals call the Galápagos Marine Reserve home.
The Galápagos Marine Reserve extends over 133,000 square kilometers, making it the second largest marine reserve in the world. It was declared a World Heritage Site in 2001, and its conservation is considered a top priority for the region, as it supports the Galápagos Islands’ susceptible ecosystem.
Visit Punta Suarez, ESPANOLA, the area of Galápagos with the highest rate of endemic species on our Galápagos Island Cruise.
Galápagos fauna are as bizarre as they are beautiful—and sometimes the animal’s adaptive features are more eye-catching than the animals themselves. Multiple species of giant tortoises lumber slowly across fields of lava. Iguanas jump ship and plunge into the ocean. Finches flirt with their conspicuously changing beaks.
Galápagos fauna are genetic curiosities, all due to a process called adaptive radiation. In contrast to simple adaptation, adaptive radiation refers to the diversification of a species’ lineage into many new forms over a short period of time, and some of the best examples are Charles Darwin’s Galápagos finches. During this process, newly formed biological lineages similarly evolve different adaptive characteristics. In the Galápagos, the forever-changing archipelago drove the further radiation of species such as finches and tortoises.
|
<urn:uuid:4b4d634f-41fa-4061-972d-e6b2d8b98353>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.625,
"fasttext_score": 0.26718640327453613,
"language": "en",
"language_score": 0.8942232728004456,
"url": "https://www.galapagosunbound.com/galapagos-wildlife"
}
|
Read. Write. Go.
Revenge in The Oresteia (Agamemnon), Oedipus, and Medea
Natasha Cribbs
Dr. Sunni Thibodeau
World Literature to 1650
14 March 2013
Revenge in The Oresteia (Agamemnon), Oedipus, and Medea
In the three tragedies of Agamemnon, Oedipus, and Medea, readers get a strong sense of revenge. It may seem tragic that so many people die, but the deaths often lead back to the actions of others and their desire for revenge. Throughout the tragedies, though, the revenge and forms of punishment change; the revenge and justice evolve from one story to the next. In the tale of Agamemnon there is a great deal of death and blood. Then in Oedipus there is more of an exile punishment. In Medea, the retribution is a personal one that she brings upon herself.
The tragedy of Agamemnon is a story that holds true to the saying "an eye for an eye." The saying normally describes actions that people take against the ones who have wronged them; it is their own form of revenge. In Agamemnon, the sense of betrayal starts during the Trojan War. "From the outset of the trilogy, we are immersed in a world already full of the misery of the Trojan War and stained by the murderous House of Atreus. In each generation, there have been acts of violence and retribution, intermingling the private and public realms, that have called forth further vendettas in a seemingly endless chain. The facts in each case are indisputable: Agamemnon has sacrificed his daughter Iphigeneia ten years earlier to advance his campaign against Troy" (The Oresteia, n.d.). It is when Agamemnon sacrifices his daughter that the tale begins to take a different route. His wife, Clytaemnestra, is hurt, since it was her daughter as well. When she finds out what he has done, she plots her revenge. The tragedy and revenge do not stop there, however. Their son Orestes feels that it is his job to exact revenge for his father, and in turn he kills his own mother for killing Agamemnon.
Each of the characters believes in what they have done; they believed their actions were correct and justified. Justified or not, people died for past actions and resentful motives. Each of them ends up with blood on their hands because of the pain they felt at the death of another and their need for revenge or personal justice.
In the next tragedy, Oedipus, the revenge is fairly different. In the story there is a prophecy about the king having a son who would in turn kill him and marry the mother. They believed that it had never come to pass; however, it is later revealed that it indeed did happen. Oedipus thought that it was Creon who killed the king. They finally realize that Oedipus is the one in the prophecy and that it had come true. Horrified at what has happened, Jocasta kills herself. In turn, the grief-stricken Oedipus gouges out his eyes. The kingdom is then left in Creon's care, and Creon exiles Oedipus for his actions. At the end of the tale there is also some explanation about Oedipus's children, who are not seen again, and the house of Oedipus is gone.
Like the tale of Agamemnon, this one involves death, and death plays a large part in the workings of revenge in these stories. Unlike in Agamemnon, however, where the characters killed others for their revenge, here there is really only suicide and then exile. Oedipus does not die for his crime against the king but instead harms himself and must live as an exile. Exile can be seen as a more humane form of punishment than death.
The last of these three tragedies is Medea. Medea is a really interesting woman; she takes her anger and pain to what some might consider a new level. "The Medea tells the story of the jealousy and revenge of a woman betrayed by her husband. She has left home and father for Jason's sake, and he, after she has borne him children, forsakes her, and betroths himself to Glauce, the daughter of Kreon, ruler of Corinth." (Bates n.d.). Medea is a woman who feels betrayed. It is likely that the saying "hell hath no fury like a woman scorned" came from a tale like hers. Because of her husband's betrayal, Medea ends up killing her own children. She doesn't stop there, either. Medea is powerful and sly, and she continues until she gets what she wants. Later in the tale she sends some gifts to Jason's wife. Of course, the gifts were not normal: they were poisoned, and the bride dies a horrible death.
The price Medea pays is that she ends up erecting a shrine to her children and holding a festival every year. She does not die for her actions or get exiled, but that does not mean she does not have to live with what she has done for the rest of her life. In the end Medea pays, but in a different way than in the other two tragedies.
Revenge can be seen in all three of the tragedies of Agamemnon, Oedipus, and Medea. The characters believe that they are right in what they do and that it is justified; whether it is or not would probably be a matter of opinion. The three tragedies show changes in the way that society deals with revenge or justice. They show the movement from killing and blood, to anguish and exile, to one's own retribution because of the deeds. The actions tend to get more humane down the line of the tragedies, slowly showing how beliefs and actions changed over time while still keeping the tales tragic and engulfing.
Works Cited
Bates, Alfred. "Medea." Medea. Theatre History, n.d. Web. 14 Mar. 2013.
“The Oresteia.” The Oresteia. The Great Books Foundation, n.d. Web. 14 Mar. 2013.
|
<urn:uuid:06487e6e-5743-4b0e-bfde-15baef4a1130>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.75,
"fasttext_score": 0.11949944496154785,
"language": "en",
"language_score": 0.9688961505889893,
"url": "https://natashacribbs.wordpress.com/2013/03/26/revenge-in-the-orestia-agamemnon-oedipus-and-medea/"
}
|
Every empire has unusual features; the Romans, for instance, were civil engineers extraordinaire, building aqueducts and roads still in use today, thousands of years later. The Mongol Empire was noted for its sheer military power, a rapid communication system based on relay stations, paper currency, diplomatic immunity and safe travel under Pax Mongolica. These features facilitated the growth, strength and flexibility of the Empire in responding to ever-changing circumstances.
The Yam
The Yam or Ortoo, the communication/postal relay system, grew out of the Mongol army’s need for fast communication. As the empire grew, it eventually incorporated some 12 million square miles, the largest contiguous land empire in world history. Genghis Khan set up a system of postal/relay stations every 20 to 30 miles. A large central building, corrals and outbuildings comprised the station. A relay rider would find lodging, hot food and rested, well-fed horses. The rider could hand his message to the next rider, or he could grab a fresh horse, food and go. By this method, messages traveled quickly across the vast acreage of the empire. At first, merchants and other travelers could use the postal stations, but they abused the system and the Empire rescinded the privilege.
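As a rough, back-of-the-envelope illustration of why the relay design was fast (every figure below is an assumption chosen for the example, not a number from the historical record), a short Python calculation:

```python
# Estimate how long a Yam message takes to cross a long stretch of the empire.
distance_mi = 1500        # assumed length of one long leg of the journey
station_gap_mi = 25       # stations every 20 to 30 miles, per the text
speed_mph = 10            # assumed sustained pace on a fresh horse
handoff_hr = 0.1          # assumed few minutes per station to swap horses

stations = distance_mi / station_gap_mi               # 60 stations
total_hr = distance_mi / speed_mph + stations * handoff_hr
print(f"{total_hr:.0f} hours, about {total_hr / 24:.1f} days")  # ~156 h, ~6.5 days
```

A single horse and rider, by contrast, would need long daily rests, stretching the same trip into weeks, hence the relay's advantage.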
Paper Currency
When Marco Polo traveled through the Mongol Empire in 1274, he was astonished to find paper currency, which was completely unknown in medieval Europe at the time. Genghis Khan established paper money before he died; this currency was fully backed by silk and precious metals. Throughout the empire, the Chinese silver ingot was the money of public account, but paper money was used in China and the eastern portions of the empire. Under Kublai Khan, paper currency became the medium of exchange for all purposes.
Diplomatic Immunity
The Mongols relied on trade and on diplomatic exchanges to strengthen the empire. To that end, Mongol officials gave diplomats a paiza, an engraved piece of gold, silver or bronze to show their status. The paiza was something like a diplomatic passport, which enabled the diplomat to travel safely throughout the empire and to receive lodging, food and transportation along the way. The Mongols sent and received diplomatic missions from all over the known world.
Safe Travel through the Empire
Marco Polo’s Travels along the Silk Road in 1274
Along with diplomats, trade caravans, artisans and ordinary travelers were able to travel safely throughout the empire. Trade was essential to the empire since the Mongols made very little themselves, and so safe conduct was guaranteed. When Karakhorum, the Mongol capital, was being built, artisans, builders and craftsmen of all types were needed, so talented people were located and moved to Mongolia. Under the Mongols, the Silk Road, a series of interconnected trade routes from East to West, operated freely, facilitating a fertile exchange of ideas and goods from China to the West and vice versa.
|
<urn:uuid:25f570eb-7c19-4d6d-9fed-f788b844fc27>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.953125,
"fasttext_score": 0.17813235521316528,
"language": "en",
"language_score": 0.9630070924758911,
"url": "https://www.historyonthenet.com/mongol-empire-special-features"
}
|
Documenting Shipwrecks
With Mylar attached to a hard "slate," this archaeologist makes notes using an ordinary pencil.
Once archaeologists locate a shipwreck, they study it in great detail. You might think of a shipwreck as a history book. When a shipwreck is carefully studied and documented, it is like writing a book. If the shipwreck is not examined properly or artifacts are removed without recording them, it is like tearing pages from the book-pages that will be lost forever.
Archaeologists go under water and map the entire shipwreck exactly as it appears. They take measurements and make detailed drawings on Mylar (waterproof paper) using an ordinary pencil. Underwater archaeologists often divide a shipwreck into small sections. Each archaeologist records the ship parts and artifacts within his or her area. These drawings, along with underwater video and photographs, are later pieced together to form a complete picture of the shipwreck, much like putting together a puzzle.
Why do archaeologists draw such detailed maps of shipwrecks? Because of limited visibility, archaeologists cannot take a photograph of the entire shipwreck. So the only way they can view an entire wreck in great detail is to map it.
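To make the "puzzle" step concrete, here is a hypothetical sketch of how measurements from each archaeologist's section might be merged into one site plan (the grid size, square indices, and example point are invented for illustration, not taken from any real survey):

```python
GRID_M = 2.0  # assumed size of each mapping square, in metres

def to_site(col, row, local_x, local_y):
    """Convert a point measured inside grid square (col, row) to site coordinates."""
    return (col * GRID_M + local_x, row * GRID_M + local_y)

# An artifact recorded 0.4 m east and 1.1 m north inside square (3, 5):
print(to_site(3, 5, 0.4, 1.1))  # -> (6.4, 11.1) on the site-wide plan
```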
Have a Listen
Shipwrecks, Archeologists and Unholy Apostles (19 mins 48 sec)
The uniquely preserved shipwrecks of Lake Superior have become a historical resource for the state, as well as a recreational magnet for sport divers. Hear the chilling tale of the sinking of the Lucerne, and listen while underwater archeologists Tamara Thomsen and Keith Meverden share their passion for this fascinating field, explain its scientific and historical significance, and solve the mystery of the Lucerne's final hours.
© 2019 - Wisconsin Sea Grant, Wisconsin Historical Society
|
<urn:uuid:aebb2f9c-df2b-457f-b435-af69f67b0f75>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.640625,
"fasttext_score": 0.7835481762886047,
"language": "en",
"language_score": 0.9656407833099365,
"url": "http://wisconsinshipwrecks.org/learn/DocumentingShipwrecks"
}
|
The Allied Seizure of Germany’s Pacific Island Colonies
At the outbreak of World War I, Germany’s empire in the southwestern Pacific Ocean consisted of the following territories: the northeastern corner of New Guinea; the Bismarck Archipelago; the western half of Samoa; the northern half of the Solomon Islands, including Bougainville; Nauru; and Micronesia, consisting of the Mariana, Caroline and Marshall Islands. Acquired as showpieces to demonstrate the might of the emergent German Empire, the Pacific colonies were of little economic importance. Moreover, their great distance from Germany made them a strategic liability. As a result, German rule was characterized by more or less benign neglect of their indigenous subjects and the lack of any defensive preparations.
When the war broke out, the British government called on Australia to occupy New Guinea and New Zealand to take Samoa. Both Dominions, which harbored territorial ambitions in the Pacific, eagerly complied. Samoa surrendered without resistance to New Zealand on 29 August 1914. New Guinea fell shortly after a brief but sharp skirmish between the Australians and a mixed force of German and indigenous defenders at the Battle of Bitapaka near Rabaul on 11 September 1914. In October, Japanese landing forces moved quickly to occupy the Mariana, Caroline and Marshall Islands. The territories remained under the military rule of Australia, New Zealand, and Japan for the duration of the war.
Impact of the War on the Pacific Islands
In the territories occupied by Australia and New Zealand, German laws and currency were eventually phased out by the military administrators. At the same time, German business operations, most notably copra production and phosphate mining, were sequestered and then taken over by Australian and New Zealand companies, which intensified the economic exploitation of the Pacific Islands. The growing demand for labor in these enterprises led to the forcible recruiting and employment of the indigenous population. Resistance to forcible recruitment was often harshly punished using methods such as punitive expeditions by the military and police and the employment of corporal punishment. Thanks largely to the incompetence of medical officers in failing to implement a quarantine, the Spanish influenza struck the Pacific Islands in November 1918. The populations of Nauru and Samoa in particular were devastated as a result. In the Japanese-administered territories, the indigenous population was subjected to a heavy-handed policy of assimilation in the form of replacing local culture with a new and wholly Japanese identity.
In addition, Pacific Islanders from Allied-controlled territories also served in the war. A small number of Samoans, Tongans, Fijians, and Papuans were recruited into British service and eventually made their way to battlefields in Europe. Resentment against harsh French colonial rule in general, exacerbated by especially heavy wartime levies of manpower for labor and military service in France, sparked a revolt among the indigenous Kanak population of New Caledonia in 1917. By the time the French authorities succeeded in extinguishing the revolt in 1918, the conflict had claimed several hundred lives.
The Post-war Settlement and the Pacific Islands
The fate of the German Pacific Islands was largely an afterthought that evoked little interest among the statesmen assembled at the Paris Peace Conference. Although the Germans had entertained some hope of the restoration of their Pacific colonies, vigorous lobbying on the part of Australia, New Zealand, and Japan led to preservation of the wartime status quo in the form of League of Nations mandates. Australia was awarded the former German New Guinea, the Bismarck Archipelago, Nauru, and the northern half of the Solomon Islands. German Samoa became a New Zealand mandate, and Japan was awarded the mandate of the former German colonies north of the Equator, namely the Mariana, Caroline, and Marshall Islands. The Pacific Islands remained in the hands of those nations until World War II.
John Jennings, United States Air Force Academy
Section Editor: Mark E. Grotelueschen
|
<urn:uuid:770cdefe-c84b-40ca-b413-c64a808b5087>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.796875,
"fasttext_score": 0.11988794803619385,
"language": "en",
"language_score": 0.9601345658302307,
"url": "https://encyclopedia.1914-1918-online.net/article/pacific_islands"
}
|
Helena Trevor
How has the dome evolved in India throughout its history and how have the Arabic religion and culture influenced it, both in style and function?
"Let architects sing of aesthetics that bring Rich clients in hordes to their knees; Just give me a home, in a great circle dome, where stresses and strains are at ease."
~R. Buckminster Fuller
Domes in Islamic Architecture:
Indian architecture has by far been influenced the most by Islam and the Muslim empire that ruled India for a time. Islamic architecture reflected not only different structural elements, but also differing religious and social needs. Domes were often seen as a symbol of power; whether religious or political, they often served as the visual focal point of a church or capitol building. The earliest domes were placed over kiblas, which were used to point in the direction of Mecca, the holy city where the Kaaba was located and to which all worshippers were supposed to journey at some point in their lives. Nowadays, they are used in mosques to symbolize heaven above earth, similarly to their use in Christian culture.
When the Arabs rose to power in the 7th/8th centuries, they were often satisfied with flat-roofed mosques and buildings, such as those in Mecca and Medina. The Dome of the Rock (shown below) was the first great domed building of Islamic architecture; the dome allowed the structure to be lighter and more flexible, and it was covered in copper to protect it from external elements (weather). While structurally beneficial, the dome was also visually appealing as a central, unique object covered in golden material. It became a very prominent feature in mosques as a result.
The Dome of the Rock
-Jerusalem, Israel
-Opened 691 AD
-115 feet tall
-Raja ibn Haywah and Yazid ibn Salam
-Islamic, Byzantine, and Umayyad architecture
Ottoman domes first emerged around the 14th and 15th centuries. Such architecture was largely influenced by Iranian architecture and the earlier Seljuk culture, and it eventually came to reflect Byzantine architecture as well. Domes covered prayer rooms and halls as well as the buildings surrounding a courtyard and - as the Muslim style dictates - were placed on a square structure, bringing about the transition problem.
The three methods for solving this problem (pictured below, in order) were the squinch, where the corners of the square room are filled in to provide a base for the dome; the pendentive, a curved triangular piece that is narrow toward the square structure but wide at the base of the dome; and the broken triangular surface, where triangular pieces of stone are cut out to form a belt at the base of the dome.
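As a side note on the pendentive, its geometry can be stated exactly in the idealized textbook case (this is a standard simplification of mine, not something from the original post): over a square bay of side a, the pendentives are cut from a sphere of radius a/√2 that circumscribes the square in plan, and the ring they leave for the dome is a circle of diameter a. A quick check in Python:

```python
import math

a = 10.0                             # assumed side of the square bay, in metres
R = a / math.sqrt(2)                 # radius of the sphere circumscribing the square
r_ring = a / 2                       # radius of the circular ring the dome sits on
rise = math.sqrt(R**2 - r_ring**2)   # height of that ring above the sphere's centre

print(f"sphere radius {R:.2f} m, dome ring diameter {2 * r_ring:.2f} m, "
      f"pendentive rise {rise:.2f} m")   # 7.07 m, 10.00 m, 5.00 m
```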
After the Ottomans took over Constantinople, they converted many churches that already used the dome into mosques, furthering the use of the dome in Islamic culture. Impressed by the dome of the Hagia Sophia, the Muslims constructed many mosques of their own using a similar foundation.
Hagia Sophia
-Istanbul, Turkey
-Ashlar, brick
-180 ft tall
-Constructed in 537 AD
Indo-Islamic Architecture:
While the dome had existed in India prior to Arab control, it became more prominent (and in a different/distinct way) over time, as the Arabs shifted from preliminary contact with India in the pre-1200s era to establishing power in the country post-1200. As a result, Persian culture also influenced Indian architecture. Islamic architectural conventions were not hard to incorporate into Indian ones once Muslim rulers gained power. The Indians had already been working with and had perfected stonework, and the two cultures were similar in location and necessity - their buildings were built to withstand the same weather patterns and support similar ways of life.
Prior to this interaction, the Indians and Muslims had focused on two different forms of architecture. India focused on “sculptural” architecture, which emphasized the exterior of the building rather than the interior, characterized by large, externally elaborate temples. The Arabs focused on “membranous” architecture, which emphasized the interior of the building, creating mosques that were simple on the outside but magnificent on the inside and that manipulated space to look small from the outside and big on the inside - by using domes and arches. While the Indians had used wood before the Middle Ages, and continued their initial beam-and-lintel structure using stone, the Middle East did not have much wood in the desert and thus began using stone much earlier; its builders had developed and used the arch and dome since the beginning. While the dome and arch were simple, they allowed for space to be used efficiently and for Indian architecture to take off. The first mosque in India, the Quwwat al-Islam Mosque (the Qutab) in Delhi, constructed c. 1200, was created using a combination of domes and arches. These preliminary attempts by Indian architects, who did not understand the concept of the “true” arch and dome (defined above), ultimately failed, as they used a method of corbelling that was unstable and would not support a large building. Once they learned how to build a true dome and arch, Islamic conventions of architecture were perfected and incorporated into Indian architecture. Mosques, schools, and palaces were built in this style.
Qutab Minar
-Dehli, India
-Commissioned in 1199 AD
-240 feet tall (tower)
-Architects unknown
-Arabian influence
While the assimilation of Islamic conventions into Indian architecture was rough and basic for many years, Indo-Islamic architecture reached its peak with the Mughal dynasty in India (16th-17th centuries). The first major building that typified the Indo-Islamic combination was the Mausoleum (tomb) of Humayun, which was constructed so that a square platform carries four identical facades covered by a dome of white marble. The dome is symbolic of heaven, and the square platform of earth - appropriate, since the tomb marks someone’s death and transition into the afterlife.
Mausoleum (tomb) of Humayun
-Delhi, India
-Mughal Architecture
-Architect: Mirak Mirza Ghiyath
-Built: 1565-1572
While Arab and Persian influence was significant, India still preferred sculptural architecture. Indians did not like the lack of adornment on the outside of mosques and other buildings. While iwans (concave openings in the exterior wall of a square building, created using arches) were popular in Arabic architecture by this point, India turned the iwans inside out - so they bulged outward - and pulled everything together with a dome in the center. As depicted below, the Taj Mahal, the peak of this practice, demonstrates how useful it proved for tombs.
The Taj Mahal
-Agra, India
-Commissioned 1632, Opened 1648
-240 feet tall
-Shah Jahan, Ustad Ahmad Lahouri, and Ustad Isa
-Iranian and Mughal architectural style
Eventually, the emperor Akbar of the Mughal empire came into power and attempted to further the construction of Indo-Islamic architecture - but with a focus on “framework” architecture and the use of posts and beams for support rather than arches and domes. He built himself a new capital as well as a tomb at Sikandra, which used a chatri (dome-like cover) supported by four columns and stacked upon others. Here, the dome was merely an ornamental, rather than fundamental, piece that decorated the tomb and gave it a focal point but did not have as much structural significance.
Mausoleum at Sikandra
-Agra, India
-Commissioned by Akbar, finished by Jahangir
-Built 1555-1613
Here is an interesting video that discusses the appearance and influence of Islamic architecture in India and touches on the history and architecture of some of the more famous buildings with domes from both cultures, such as the Taj Mahal and the Qutab!
Past (and Outside) Connections:
CASE STUDY: European Influence on Indian Architecture
Europeans invaded and colonized/established trading posts in India from the early 1600s to the mid-1900s - leaving lots of time for India's architecture to be greatly shaped by Western conventions. Among the buildings that the Europeans introduced, alongside European-style housing, were churches. Similarly to how Arabic culture was incorporated into Indian culture to create Indo-Islamic architecture, Europeans assimilated Victorian/Gothic style architecture with Indian conventions to create Indo-European architecture. European architects attempted (and often failed) to create public buildings that emulated oriental styles, often resulting in buildings created from brick with iron support and domed roofs - not the best combination aesthetically or structurally. The Palladian style of architecture, a style used to create public buildings, became quite popular around this time period and was combined with Indian architecture, but did not have a lasting influence.
In the late 19th century, architects began combining elements and conventions of Indian and Western architecture, producing *successful* buildings that displayed the aforementioned features of both cultures. Generally these combined-culture buildings were public buildings such as capitol buildings, colleges, etc., as this was around the time period when function was becoming increasingly important in architecture. This "Indo-Gothic" revival (sometimes also known as "Mughal-Gothic," as the revival touched many cultures and styles) is exemplified in various buildings.
One example is the Gateway of India, which is constructed using the Indo-Saracenic style of architecture - which called for the merging of exotic Indian (and oriental) ornamentation with skilled structural European engineering. The Gateway combines elements of Mughal architecture with Britain's Gothic style. The Mughal style of architecture consists of bulbous (onion-shaped) domes (as found on the Taj Mahal), four minarets, grandiose structures, ornamented facades, and large vaulted gateways. The Gothic style includes ribbed arches, vaults, and flying buttresses, as well as tall buildings with gargoyles and lots of decorations and patterns. The turrets (resembling minarets) surrounding the structure are evident in the Gateway, as are the pointed arches (a traditional Indian convention) in the front of the building. The building stands tall and is detailed and decorated like both Mughal and Gothic architecture, and the vaulting on the interior of the building is similar to what can be found in a Gothic cathedral.
The Gateway of India
Indo-Saracenic architectural style
Another example of the Indo-Saracenic style that emerged was the Chhatrapati Shivaji Terminus in Mumbai. Turrets and pointed arches are evident as traditional Indian architecture, as the extraordinary detailing and high ceilings can be attributed to both Mughal and Gothic architecture. The central circular/patterned window and arches are derived from gothic architecture and the central dome, derived (more likely) from Gothic architecture, acts as a focal point and aesthetically pulls the building together. This building is a train station, and reinforces the idea of buildings being made for functional/useful purposes and then being ornamented with elements from various cultures.
Chhatrapati Shivaji Terminus
(Victoria Terminus)
On the left is the facade of the front of the building and on the right is the central dome.
Modern Day Connections:
The architecture I discussed in the last section can be viewed as contemporary or past architecture, depending on how you perceive Indian history. Since India didn't gain its independence until 1947, I don't consider that architecture to be entirely "modern" - it was still under the influence of European countries and not an autonomous nation yet, lacking the opportunity to thrive on its own.
Ultimately, the dome has become significantly less important in Indian architecture, and in architecture in general, because churches and mosques (where the dome acts symbolically) are becoming less frequent, and because the dome has generally acted either as an easy way to pull together a somewhat poorly constructed building (we now make better use of things like arches) or as a decoration/focal point - which it still does. Domes are an old form of architecture - and while they may be common in mosques and churches from back in the day, they are found more on public buildings, like courts or capital buildings, in the present day - and even that's just for fun.
Modern day India is much different from ancient India or the Indo-Islamic world that was full of combined architectural techniques and placed emphasis on its structural elements. The time period separating present-day India from its Arabic past consists of brutal imperialism by the Europeans - mostly the British but also the Dutch. Problems that arose with British colonialism, such as discrimination and hierarchy, resulted in widespread political, social, and ethnic instability. Once the British left India, architecture assumed a very functional personality, as the desire for elaborate architecture in a wealthy, powerful country shifted to the urgent need for low-cost, efficient and compact housing in an overpopulated, unstable, poor one. Subsequent industrial revolutions and urban sprawl due to an influx of migrants exacerbated the situation. Urban planning continues today due to a high population density and the need to conserve space and resources ($) - greatly limiting the ability to adorn buildings and make them large. Thus, much of the architecture that previously awed so many people has been lost in ruins and in practice.
Examples of the dome and the influence of Islamic architecture still exist in many ways in Indian architecture despite restrictions. The Supreme Court in Delhi is a good example. Constructed to look like a scale, the building has a central dome at the top - not only bringing the building to a structural focal point, but also emphasizing its governmental importance within the city - with offices on one side and the library of the court along with other offices on the other side. This structure was built in the "Indo-European style".
Gamm, Niki. "The Dome - Symbol of Power." Hurriyet Daily News. N.p., 11 Jan. 2014. Web. 19 May 2016. <http://www.hurriyetdailynews.com/the-dome---symbol-of-power-.aspx?PageID=238&NID=60863&NewsCatID=438>.
The Metropolitan Museum of Art. "Ten Elements for East Window of an Architectural Ensemble from a Jain Meeting Hall." The Met. The Metropolitan Museum of Art, 1994. Web. 19 May 2016. <http://www.metmuseum.org/art/collection/search/74425>.
Mustafaa, Faris Ali, and Ahmad Sanusi Hassan. "Mosque layout design: An analytical study of mosque layouts in the early Ottoman period." ScienceDirect (2013): 445-56. Print.
Stokstad, Marilyn, and Michael W. Cothren. "Art of South and Southeast Asia after 1200." Art History. 5th ed. New York City: Pearson Education, 2014. 771-91. Print.
|
<urn:uuid:7d533d51-3dbb-44ff-b78c-5785bebc36e4>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.546875,
"fasttext_score": 0.08577769994735718,
"language": "en",
"language_score": 0.9544649720191956,
"url": "https://www.mosleyart.com/24/feed"
}
|
What Is Gas and What Is Bloating?
By Russell Havranek, M.D.; Updated August 14, 2017
Gas and bloating are the most common problems that bring people to the gastroenterologist. They affect all ages and genders. They cause embarrassment and discomfort. Gas and bloating come in many forms. Some people belch a lot. Some people feel they pass excess flatus. Some just feel big and bloated and uncomfortable. Whatever the overriding symptoms, having gas is uncomfortable at the very least.
What is Gas?
Gas is nothing more than air in the digestive tract. Bloating is the subjective feeling that the abdomen is full. Many people describe it as feeling like they have a balloon in their abdomen. When it has a visible increase in girth, we call it distention. Our digestive tract is made of the tubes that extend from our mouth to our anus. Most all gas production happens just past the stomach in the small intestine (about 18 to 22 feet long) and the large intestine, also known as the colon (about five to six feet long).
The main gases that exist in our gastrointestinal tract are carbon dioxide, oxygen, nitrogen, hydrogen and methane. There are two main reasons people have problems with gas and bloating — excess gas production and visceral hypersensitivity.
Excess Gas Production
Gas is a natural byproduct of the work done in our gastrointestinal tract as we digest food and move our stool. Everyone has gas. The amount of gas produced in the body depends upon one’s diet and other individual factors. The average amount of gas we normally produce has a wide range of somewhere between 500 milliliters and 1,500 milliliters in a day. However, some people do produce more gas than others.
Interestingly, we as humans don’t really make gas ourselves; it is mainly produced by the trillions of microorganisms (bacteria) that normally live in our bowels. In people that have more gas than normal, the gas builds up in their bowels and causes the symptoms of bloating, belching and passing of flatus. There are many reasons why people produce excess gas.
Visceral Hypersensitivity
Visceral hypersensitivity, a fancy medical term for sensitive bowels, is an interesting topic. There are many conditions that cause sensitive bowels, the most common of which is irritable bowel syndrome (IBS). We have found that most people that complain of gas and bloating actually aren’t making excess gas. What they have is visceral hypersensitivity, and their bowels are wired much more sensitively than others. They make normal amounts of gas in their bowels that the average person wouldn’t feel or notice, but because their bowels are more sensitive to things inside them they feel very gassy and bloated.
There have been several studies done on this topic. One of the most widely quoted studies measured gas excretion in patients with visceral hypersensitivity and bloating and showed that the total gas excreted was not different than in healthy controls.
Whether you have excess amounts of gas in your gastrointestinal tract or normal amounts that just bother you too much, you are suffering from gas and bloating, and there are things we can do to help.
|
<urn:uuid:c6a966cc-10cc-4369-ba51-bf707c40eb6b>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.59375,
"fasttext_score": 0.28497016429901123,
"language": "en",
"language_score": 0.9651975035667419,
"url": "https://healthfully.com/1012008-gas-bloating.html"
}
|
A Coin Honoring A Famous Native American Woman
Every school child in America grew up learning about Sacagawea.
She, of course, was a legendary Native American (Lemhi Shoshone) woman who helped Meriwether Lewis and William Clark on their exploratory expedition from North Dakota across the Rocky Mountains to the Pacific Ocean and back in 1805-1806.
Sacagawea was born in 1788 or 1789 around the Salmon River region, in current day Idaho. In 1803 or 1804 she was married to French-Canadian fur trader Toussaint Charbonneau and quickly became pregnant.
Why was Sacagawea chosen to embark upon a journey of thousands of miles across desolate and often dangerous land, carrying her infant son?
She was bilingual in two very different Native American tribal languages – Hidatsa and Shoshone – while her husband spoke French, English and Hidatsa. This translation chain was viewed as extremely valuable. Lewis and Clark knew they would need help communicating with the Shoshone tribes at the headwaters of the Missouri River.
Her work as an interpreter proved invaluable, and her presence in the group also demonstrated the peaceful nature of the mission.
In the year 2000, the United States Mint honored Sacagawea and her contributions to the early explorations of our great nation with the Sacagawea Golden U.S. Dollar coin. The coin was minted under the auspices of the United States $1 Coin Act of 1997.
The new coins were created in part to meet the need for a dollar coin suitable for vending machine use.
The Susan B. Anthony dollars were popular for vending machine use, but the U.S. Treasury's supply of these coins was dwindling by the late 1990s. The act also provided direction to resume production of the Susan B. Anthony dollars until the new coins were ready for circulation.
A design contest was used to select the final representation of Sacagawea with her infant son, with the reverse side of the coin featuring an eagle representing peace and freedom. Sculptor Glenna Goodacre's design was chosen and she was paid a $5,000 commission in the dollar coins. The 2000-P coins paid to Goodacre were struck on burnished blanks, which created a unique striking for her set.
By and large, the majority of Sacagawea coins are not rare, and circulated coins do not carry numismatic value. They are also not made of gold, despite the golden color. The coins are composed primarily of copper (88.5%), with small portions of zinc, manganese and nickel.
The coins still circulate today, but proved to be unpopular with the public and are not widely used.
There are a few key dates that are rare and have value beyond the $1 mark on the coin.
The U.S. Mint embarked upon partnerships with both Wal-Mart and General Mills to promote the use of the Sacagawea coin in commercial transactions.
Remember the days when you'd open a cereal box and get a prize? The partnership with General Mills included 10,000,000 boxes of Cheerios cereal that would contain a Lincoln cent as a prize or a new Sacagawea dollar. Some lucky Cheerios breakfasters would receive a certificate redeemable for 100 Sacagawea dollars.
Some of the Sacagawea dollars that were found in the Cheerios boxes were struck from a different set of dies. Within numismatic circles, these coins, which showed "high detail" and enhanced eagle feathers on the reverse side of the coin, became known as the "Cheerio Dollars" or "enhanced reverse die" coins. These are valuable and, depending on the grade, have sold for $5,000 to $25,000.
Do you have jars of old coins sitting around? Check out any 2000 Sacagawea dollars and search for enhanced eagle feathers. You might find something more valuable than just a dollar.
|
<urn:uuid:2e3bdf97-a5ff-4203-bdb4-c7c27bcd4eaf>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.5,
"fasttext_score": 0.09105205535888672,
"language": "en",
"language_score": 0.9646322131156921,
"url": "https://www.blanchardgold.com/market-news/a-coin-honoring-a-famous-native-american-woman/"
}
|
Did you know?
There are several species of yellowjackets. These flying insects typically have a yellow and black head/face and patterned abdomen. Many say that the pattern resembles stripes. The abdomen pattern can help an entomologist or pest professional identify specific types of yellowjackets.
Yellowjackets are social insects that live in nests or colonies. They usually nest in the ground or in cavernous areas such as eaves and attics. Yellowjackets can be found anywhere humans can be found. They feed on sweets and proteins and are commonly attracted to trash and recycling bins. Yellowjackets are most active in the late summer and early autumn when a colony is at its peak.
Yellowjackets’ stings pose significant health threats to humans. They are territorial and will sting if their nest is threatened. Yellowjackets may sting repeatedly and can cause allergic reactions.
• Wear shoes, especially in grassy areas.
• Remove garbage frequently and keep trashcans covered.
• Do not swat at a yellowjacket, as it increases the likelihood of an aggressive reaction.
• Avoid wearing sweet-smelling perfumes.
• If you find a yellowjacket nest on your home or in your property, contact a licensed pest professional.
|
<urn:uuid:8ba7e294-c6ba-4cfd-bb38-fbabd35c08b3>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.84375,
"fasttext_score": 0.0253792405128479,
"language": "en",
"language_score": 0.9332346320152283,
"url": "https://lanepestcontrol.com/ajax/yellowjackets.html"
}
|
"Maibaum" by Kristi Malakoff, paper and foam core, 2009. Photo: Kristi Malakof (Kristi Malakof/National Portrait Gallery, Smithsonian Institution)
Art and architecture critic
It is strange and entirely delightful that the silhouette still enchants us. The shadow form, which reduces the three-dimensional world to lines and contours, dates back millennia before it became a popular medium for making portraits in the late 18th century. And despite the emergence of powerful new technologies for representation, including 3-D films and virtual reality, silhouette remains a vital form even today, used by artists and photographers to simplify, clarify and often alienate us from our usual habits of looking at the world.
An exhibition at the National Portrait Gallery focuses on the silhouette in American life, its prevalence as a cheap way of producing a portrait likeness before the advent of photography, and its persistence as a visual medium in contemporary art. “Black Out: Silhouettes Then and Now” is a fascinating show that successfully uncovers the strange cultural history of the form, especially its intersections with the foremost social crisis of the age, which was slavery. It was odd that white bourgeois families reveled in the form, which rendered them as black; it was odd that some of the most powerful abolitionist images used silhouettes to represent slave ships; and that runaway slaves were depicted in newspapers by their silhouettes, as if any more visual information would overly humanize them.
Curator Asma Naeem suggests that silhouettes flourished in America because they were cheap and easily made and were distinct from the genteel European tradition of formal portraiture. Americans during the era were also self-conscious about political representation, and perhaps saw a connection between representing themselves in visual form and being represented as political actors in the emerging democratic system. Silhouettes allowed families to keep a memento of loved ones, but also to assert their presence as individuals in an age that celebrated the rise of a new political class and identity. The making of silhouettes also seemed to attract artisans who would otherwise have been marginal to the artistic and economic mainstream, including the mixed-race Moses Williams, who was born a slave and later became both free and a prolific maker of “profiles,” and Martha Ann Honeywell, a woman born without arms and only three toes, who nevertheless managed to use scissors with such dexterity that she too became a master of the form.
The exhibition mentions photography, which was introduced after 1839, only glancingly, which is a strange omission. Photography would, of course, change the game entirely when it came to personal representation. As Naeem notes in her catalogue essay, prescient figures such as Frederick Douglass, who sat for more than 160 photographs, eagerly embraced the new technology as a tool for self-fashioning. But photography didn’t simply displace silhouette, it has retrospectively altered our understanding of it. One can’t see silhouettes today but through the prism of a century and a half of photographic imagery.
Both photography and the silhouette seem to offer a direct impression of the living being, either traced from the person’s shadow, or captured on a chemically prepared plate. The first commercially published book of photographs was called “The Pencil of Nature,” which is also a perfect description of the silhouette process, which involved tracing then cutting an image of the person’s shadow. Even the verb “to take” was used in both cases, to take a silhouette and to take a photograph, as if something material from a human being was removed in the process. The magic of both media was their peculiar mix of the occult and the technological, which remains to this day part of the reason that we still find ourselves reduced to stupefaction before particularly successful photographs and silhouettes.
Installation view of "Auntie Walker's Wall Sampler for Civilians" by Kara Walker. Cut paper on wall. (Kara Walker/Mark Gulezian/National Portrait Gallery)
A few of these stunners are on view, especially a faintly rendered, life-size image of an enslaved woman named Flora made around 1796, which is one of the earliest images of a slave made in the United States. Her neck is bent forward, her head straight, and the peaks and valleys of her tightly curled hair are clearly visible on the faded paper, which was found folded up in a cellar in the home of the family that once owned her. One senses in this document, which is as evocative as any of the professionally made silhouettes nearby, not the cheapness of the form, but the urgent need to remember and transcend mortal suffering that its creation fulfilled. Unlike millions of other enslaved people, Flora did not leave without a trace, though little else is known of her.
A double silhouette of Sylvia Drake and Charity Bryant, made in the early 19th century, shows two young women in profile, facing each other, their images attached to a piece of silk, with thin braids of hair framing them, forming a heart shape. They were a lesbian couple who lived in Vermont, memorialized both as individuals and partners, a relationship confirmed in the words of Charity’s nephew, William Cullen Bryant, who said they “took each other as companions for life,” and their “union, no less sacred to them than the tie of marriage, has subsided, in uninterrupted harmony for more than forty years.”
The exhibition divides neatly into a 19th-century gallery and four installations by contemporary artists who are inspired by the form. The art star of the 19th-century space is a Frenchman, Auguste Edouart, who traveled in the United States for a decade beginning in 1839, making almost 4,000 elegant, detailed and artistically ambitious silhouettes. Many of these are portraits, often of renowned people of the age. But he also assembled silhouettes into composite pictures, sometimes capturing a whole family, or vignettes of family life (one includes a parlor image of people looking at projected lantern slides). He often places his figures on printed paper to give them social context, and imagined fantastical or exotic scenes, including one of “South Sea Islanders” engaged in combat. The last of these, perhaps a work of his own invention, strains against the inherent two-dimensionality of the form by including figures of various size, suggesting their recession into the distance of a perspectival picture.
The four contemporary artists represented in the exhibition amplify the dualistic sense of technology and the occult seen in the 19th-century work. The most stunning of the works are by Kumi Yamashita, who conjures convincing shadow images using light and gently folded pieces of origami paper, or the carefully carved edge of a chair, or letter and number forms glued on the wall. The shadows give the uncanny suggestion of a living being, while they in fact are ensorcelled from inanimate material. A room of work by Kara Walker plays with the nostalgia inherent in shadow shows, silhouettes and magic lanterns, to make real the grotesque and violent history of racism and slavery — the economic engine that produced the leisure that made it possible to revel in these entertainments.
John Quincy Adams by Auguste Edouart. Lithograph, chalk and cut paper on paper, 1841. National Portrait Gallery, Smithsonian Institution; gift of Robert L. McNeil, Jr. (National Portrait Gallery, Smithsonian Institution)
One senses in this exhibition the core of an even larger show, one that would better distinguish the American silhouette mania from the making of silhouettes in Europe at the time, and draw out connections between the older, artisanal form made with candlelight and cut paper and its close cousin, the photograph. Naeem makes some large claims for the silhouettes, not least of which is that they "attempted to reconcile" the "discomfiting polarities" of American life. It's not clear that they did, though this show more than adequately demonstrates that the form was enormously popular, that it caught up in its abundance a remembrance of people who would not otherwise have been memorialized, and that, like so many cultural habits of early America, the making and collecting of silhouettes was often wild and strange and slightly surreal.
Black Out: Silhouettes Then and Now is on view at the National Portrait Gallery through March 10. npg.si.edu.
|
<urn:uuid:026a1684-3921-454c-beeb-3efd5cddb129>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.546875,
"fasttext_score": 0.024020731449127197,
"language": "en",
"language_score": 0.9711979031562805,
"url": "https://www.washingtonpost.com/entertainment/museums/before-photography-the-silhouette-helped-leave-an-impression/2018/05/21/377e7774-592f-11e8-b656-a5f8c2a9295d_story.html?utm_term=.ce0f9f0f798c"
}
|
Konyak Tribe Essay
The Konyak Nagas

The Konyak Nagas, Indian tribes living on the northeast frontier of India, are an interesting culture to research. In the 1930s they were considered a culture still living daily as they did in ancient times, yet by 1947 the government of India had taken effective steps to bring them under its administrative control. The tribes are separated into several villages, such as the Sangtam, Chang, Kaylo Kengyu, Angami, Konyak and Wakching Nagas. The villages differed in language, political structure, and some aspects of material culture. Even though governing officials have interrupted the traditional ways of life of the Naga tribes, some things have not changed. Within the Wakching village, the houses, appearance, language, religious beliefs and interpersonal relationships have been carried down from ancient times.

The Wakching village occupied a high point on a broad and uneven ridge in the Naga Hills district. With 249 houses and a population of about 1,300 inhabitants, it was the largest village within a radius of ten miles. Its size and strategic position secured it against the attacks of hostile neighbors. The houses were usually grouped together in a compact block and enclosed with a fence, or they were scattered over the site in several areas with vegetable plots and bamboo groves around them.

Structurally, the Konyak houses were of two types: those with open fronts and high roof points, and those with closed fronts whose roofs hung low over a front porch. The men's houses were the largest in size, eighty-four feet long and thirty-six feet wide. The roof was made of thickly thatched palm leaves, and at the sides the branches nearly touched the ground. Leaf bundles, flat decorated sticks and small carvings of birds were hung from the front ends of the roof rafters, so that they formed a curtain giving the porch shade from the sun. The porch was about twenty-four feet deep, and benches were made for working upon and sitting at. Walking through the four-foot-wide entranceway, one had to step over a low bamboo barrier placed in the doorway to keep out stray animals before stepping into a porch sixteen feet wide and seven feet in length. To the right, a wooden door opened onto a corridor-like hall, which occupied the entire length of the house. Within this room was a rice-pounding table about ten feet long. One then moved into the living room, where the family lived and did most of its cooking. The family slept in bamboo bunks, usually the father with two children and the mother with two children. Baskets, fishing nets and farming tools were hung on the walls. Spears stood in the corners, and daos were stuck in the matting walls. Utensils used in the preparation of food, such as cooking pots, pounding pestles, wooden ladles and dishes, were grouped within reach of the cooking area. All items used daily were packed in woven waterproof cane baskets closed with lids. The only source of light was a small door in the wall of the living room. At the back of the house the hall widened into a utility room, which ran the entire length of the house. Here the drying of rice and taro, an Indonesian root crop, took place, as did the cutting up of animals for food preparation and the entertaining of guests. Finally, the back door led out to a veranda about fifteen by twenty feet, half of it covered with a roof and half open to the sky, where washed clothes could hang to dry.
The bathroom area was located on the veranda, sheltered from view with palm leaves, and the waste that fell to the ground was eaten by the pigs roaming among the piles.

The people of the Wakching tribe were very attractive in physical appearance and wore the most splendid of colorful ornaments. Men and women were of slender build and delicate bone structure, and they maintained their youthfulness well into middle age. They had light to medium brown skin and dark brown or black eyes. Young men and women were well groomed, their bronze skin clean and their black hair nicely combed, often with fresh flowers stuck in an earlobe or hair knot.

Men and half-grown boys seldom wore more than a tight belt and a small apron covering only their private parts, and some were still seen gossiping in the villages as late as 1962 with no aprons on at all. Men's belts were made of several coils of cane or of broad strips of bark, with long ends that hung down over the buttocks like a tail. On ceremonial occasions the men wore splendid attire, depending on their achievements in the field of headhunting. While at the time of the annual spring festival all males, from small boys to white-haired grandfathers, wore some sort of headgear, only head-takers were entitled to the more magnificent headdresses. Most common were conical hats made of red cane and yellow orchid stalks, crested with red goat's hair and topped with a few tail feathers of the great Indian hornbill. Head-takers garnished such hats with flat horns carved from buffalo horn and tassels made of human hair. Boar's tusks, monkey skulls and hornbill beaks were other favored ornaments. Both men and women wore arm rings and neck ornaments of many different shapes and materials.

The women and girls wore narrow, oblong pieces of cloth wrapped around the waist, with one corner tucked in over the left hip. These skirts were about ten inches wide and covered the areas of the body that the tribe required to be covered for decency. A woman never took off her skirt in the presence of men, not even when bathing or fishing. Unmarried girls usually wore plain white or blue skirts, but married women preferred skirts with red and white stripes. A young girl with developed breasts and pubic hair could still be seen walking around the village nude. Because women's skirts covered only their private parts, women felt no embarrassment when menstrual blood appeared on their thighs. In ceremonial celebrations a woman's attire varied depending on her social status: a girl or woman of pure chiefly blood had the right to wear red and white striped skirts decorated with embroidery, glass beads, and tassels of dyed goat's hair; women of a minor chiefly clan were entitled to similar skirts, but to no decorative tassels; and commoners wore skirts of darker color, usually blue, with no ornaments. All of these skirts were woven of cotton, and patterns varied slightly from village to village.

The Konyak Nagas' language is a Tibeto-Burman tonal language of which not even a simple word list was known. Though some
|
<urn:uuid:32fb5f53-43e3-49e1-8942-881518acc3e7>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.828125,
"fasttext_score": 0.026617586612701416,
"language": "en",
"language_score": 0.9774277806282043,
"url": "http://essaymania.com/146247/konyak-tribe"
}
|
Maps, Data, and Models
Hurricane Harvey's Effect on Soil Moisture
Symbol map of Hurricane Harvey's effect on soil moisture around Houston, Texas.
Hurricane Harvey dropped record-breaking amounts of rainfall, particularly around Houston, Texas, on August 25, 2017. Have your students analyze the legend below and this proportional symbol map of Hurricane Harvey to answer the questions that follow. Note: Soil moisture is expressed in volumetric terms, that is, the volume of water per unit volume of soil (cm³ of water per cm³ of soil). A plotting sketch after the activity questions shows how a map like this one can be built.
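To make the volumetric units concrete, here is a minimal sketch in Python. The sample volumes are invented for illustration and are not values taken from the map.

```python
# Volumetric soil moisture: the volume of water held in a sample divided
# by the total volume of the soil sample, in cm^3 of water per cm^3 of soil.
# The numbers below are hypothetical, chosen only to illustrate the units.

def volumetric_soil_moisture(water_volume_cm3: float, soil_volume_cm3: float) -> float:
    """Return soil moisture in cm^3 of water per cm^3 of soil."""
    return water_volume_cm3 / soil_volume_cm3

before_storm = volumetric_soil_moisture(22.0, 100.0)  # 0.22 cm^3/cm^3
after_storm = volumetric_soil_moisture(43.0, 100.0)   # 0.43 cm^3/cm^3
print(f"Change in soil moisture: {after_storm - before_storm:+.2f} cm^3/cm^3")
```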
[Images: "Scale for Map" legend and "Harvey Map"]
Mini Lesson
Student Activity Sheet
1. What does the size of the dot represent? The rate of change in the amount of moisture in the soil.
2. What does the color represent? The quantity of soil moisture, in cubic centimeters of water per cubic centimeter of soil (cm³/cm³).
3. What area was the most impacted by Hurricane Harvey? How do you know? Northwest of Houston, because the map shows the largest and darkest hexagons there.
4. Why do you think there was not a change in soil moisture in the city of Houston? The surface of a city is mainly impermeable, so the water is not able to soak into the soil; instead it runs off into the watershed.
5. What is one question you have when looking at this map? Answers can vary, but examples include: Why did the east side of Houston not show as drastic a soil-moisture change as the west side of the city? What path did the storm take? How much water fell on the city?
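For teachers who want to demonstrate how a proportional symbol map encodes two variables at once (symbol size for the rate of change, color for the amount of moisture), the sketch below uses Python's matplotlib. The coordinates and soil-moisture values are randomly generated stand-ins, not NASA data.

```python
# A sketch of a proportional symbol map in the style of the Harvey map:
# hexagonal markers whose size encodes the rate of change in soil moisture
# and whose color encodes the moisture quantity. All data are invented.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(42)
n = 40
lon = rng.uniform(-96.5, -94.5, n)           # hypothetical longitudes near Houston
lat = rng.uniform(29.0, 30.5, n)             # hypothetical latitudes
moisture = rng.uniform(0.10, 0.45, n)        # cm^3 of water per cm^3 of soil (color)
rate_of_change = rng.uniform(0.0, 0.25, n)   # change in soil moisture (symbol size)

fig, ax = plt.subplots(figsize=(7, 5))
points = ax.scatter(
    lon, lat,
    s=rate_of_change * 2000,                 # scale the rate of change to marker area
    c=moisture, cmap="Blues",
    marker="h",                              # hexagonal markers, as on the Harvey map
    alpha=0.7, edgecolors="black",
)
fig.colorbar(points, ax=ax, label="soil moisture (cm$^3$/cm$^3$)")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("Proportional symbol map (illustrative data)")
plt.show()
```

Students can then reason about why a large, pale symbol (a big change, but modest total moisture) looks different from a small, dark one, which mirrors questions 1 and 2 above.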
|
<urn:uuid:4c71fcbc-c777-43f2-a27d-9652c0f30a85>
|
{
"dataset": "HuggingFaceTB/dclm-edu",
"edu_int_score": 4,
"edu_score": 3.640625,
"fasttext_score": 0.9945127964019775,
"language": "en",
"language_score": 0.9402780532836914,
"url": "https://mynasadata.larc.nasa.gov/maps-and-data/hurricane-harveys-effect-soil-moisture"
}
|