proba (float64, 0.5–1) | text (stringlengths 16–174k) |
---|---|
0.942246 |
Paul Manafort, Donald Trump’s former campaign chairman, has agreed to cooperate with Robert Mueller’s inquiry into Russian interference in the 2016 election, in a move that could cause legal trouble for the president. The dramatic development in the Trump-Russia saga was announced at a court hearing in Washington DC on Friday morning, where Manafort confessed to two criminal charges as part of a plea deal. “I’m guilty,” he said. Manafort signed a 17-page plea agreement that said he would assist government prosecutors with “any and all” matters, and brief officials about “his participation in and knowledge of all criminal activities”. He also agreed to turn over documents and testify in other cases.
Full Article: Paul Manafort: Trump's ex-campaign chair agrees to cooperate with Mueller | US news | The Guardian.
|
0.999998 |
Any set of n integers forms n(n - 1)/2 sums by adding every possible pair. Your task is to find the n integers given the set of sums.
Each line of input contains n followed by n(n - 1)/2 integer numbers separated by a space, where 2 < n < 10.
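One common way to attack this kind of problem (a sketch under the stated constraints, not necessarily the intended reference solution; the function name and greedy strategy are illustrative): sort the sums, note that for the sorted answer a1 ≤ a2 ≤ ... ≤ an the two smallest sums must be a1+a2 and a1+a3, try each remaining sum as a candidate for a2+a3, and then recover the rest greedily from a multiset of the remaining sums.

```python
from collections import Counter

def recover(n, sums):
    """Try to recover n integers (n >= 3) from their n*(n-1)//2 pairwise sums.
    Returns a sorted list of the n integers, or None if no solution exists."""
    sums = sorted(sums)
    # For the sorted answer a[0] <= ... <= a[n-1]:
    #   sums[0] = a[0] + a[1]  and  sums[1] = a[0] + a[2].
    # a[1] + a[2] must be one of the remaining sums; try each distinct candidate.
    tried = set()
    for k in range(2, len(sums)):
        s12 = sums[k]                      # candidate value for a[1] + a[2]
        if s12 in tried:
            continue
        tried.add(s12)
        total = sums[0] + sums[1] + s12    # equals 2 * (a[0] + a[1] + a[2])
        if total % 2:
            continue
        a0 = total // 2 - s12
        a = [a0, sums[0] - a0, sums[1] - a0]
        pool = Counter(sums)
        for used in (sums[0], sums[1], s12):
            pool[used] -= 1
        ok = True
        while ok and len(a) < n:
            # The smallest remaining sum must be a[0] + (the next element).
            smallest = min(v for v, c in pool.items() if c > 0)
            nxt = smallest - a[0]
            for prev in a:                 # consume one sum for every known element
                if pool[prev + nxt] > 0:
                    pool[prev + nxt] -= 1
                else:
                    ok = False
                    break
            a.append(nxt)
        if ok and all(c == 0 for c in pool.values()):
            return sorted(a)
    return None

# Example: the integers 1, 2, 3, 4 give the pairwise sums 3 4 5 5 6 7.
print(recover(4, [3, 4, 5, 5, 6, 7]))      # -> [1, 2, 3, 4]
```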
|
0.998127 |
Which word in the following sentence is an adjective?
The silly girl ran around in circles.
Which of the underlined words in the following sentence is an adjective?
The baby wanted the red toy.
Zoey found shells on the sandy beach.
Last night I took a long walk around the block.
My uncle ran fast when he saw the scary clown.
Jay played a pretty song on his flute.
Franky and Lita sat on the wooden bench.
The cat was noisy when it howled.
My little brother can play ball.
The green frog jumped over the pond.
|
0.94806 |
I want YOU to give me spoons to bend!
Uri Geller (born 1946) is a professional spoon bender, an Israeli magician and mentalist who, for the last forty years, has made a career for himself posing as a psychic.
Geller was effectively discredited by, among other things, a 1973 appearance on The Tonight Show with Johnny Carson. Geller failed to bend spoons that Carson, an amateur magician in his own right, had (with the help of James Randi) chosen beforehand, and to which Geller did not have access.
He was a focus of much paranormal research during the Human Potential movement of the 1970s, including a well-known study at the Stanford Research Institute that claimed to validate his powers. Geller is notoriously litigious, and has on many occasions attempted to use this study (considered flawed at best by experts on magic and pathological science) as a cudgel in court against his opponents, particularly against Randi. Despite this, Geller typically loses in court battles.
Geller's early career has been extensively documented in the skeptical literature, particularly Martin Gardner's Science: Good, Bad, and Bogus, James Randi's Flim-Flam! and The Truth About Uri Geller, and William Poundstone's Big Secrets. In recent times, Geller has become sort of a professional hanger-on, associating with credulous celebrities (including Rabbi Shmuley Boteach and, at one point, Michael Jackson), as well as investing in soccer teams in his adopted home of the United Kingdom and running them into the ground.
Nintendo has avoided using the Pokémon character "Kadabra" in their media for some time, fearing lawsuits from Geller. Geller believes Kadabra to be a libelous mockery of his image (Kadabra's Japanese name is ユンゲラー, Yungeraa; replace ン with the visually-similar リ and it becomes Yurigeraa, which is how his name would be transliterated into Japanese), complete with "anti-Semitic" symbols (a star on the forehead, probably an occult reference, and chest lines similar to the runic insignia of the SS). The trading card game has not printed a Kadabra card since 2003, despite Kadabra being an evolutionary link between Abra and Alakazam. In order to get around this problem, recent Abra and Alakazam printings regard Kadabra as a completely optional step in the evolutionary chain. Thanks Mr. Geller, we couldn't have done it without you.
In 2009 he bought a small uninhabited island off the east coast of Scotland called The Lamb. There are various nonsensical mystical theories about this island. Firstly, it lies in a triangle formed by three sites associated with the Knights Templar: Rosslyn Chapel, the Isle of May, and the village of Temple in Midlothian. For some reason the Knights Templar are viewed as having vast mystical powers. Secondly, along with the nearby islands of Fidra and Craigleith, it forms a shape which allegedly resembles the layout of the Giza pyramids and the layout of the stars in Orion's Belt (a fringe theory holds that the pyramids match Orion's belt). Pseudo-archaeology researcher Jeff Nisbet claims that "anyone standing on the battlefield of Bannockburn, where Robert the Bruce defeated the English army in 1314, on the anniversary of the battle on June 24, would see the three stars (Alnitak, Alnilam and Mintaka) rise exactly over the three islands of Craigleith, Lamb and Fidra". Whether you can even see the islands from Bannockburn, near Stirling, is unclear, but it sounds like the worst sort of Celtic or Scottish nationalist pseudohistory.
Geller has suggested that The Lamb was visited 3000 years ago by Princess Scota, the sister of the pharaoh Tutankhamen, who supposedly fled Egypt and stayed there for a while (despite the island being tiny, rocky, and not far from the Scottish mainland), doubtless leaving behind lots of treasure. He announced plans to excavate the island to find the treasure, in a speech at the Scottish Seabird Centre in North Berwick (which is incidentally a must for fans of puffins). In 2010 he announced he was visiting to do some dowsing. Since then, he has fallen quiet, which suggests no treasure was found, or that he has - like many other older humans - retreated to a life of introversion and solitude, perhaps no longer interested in continuing to be "Fake". It seems his prior lifestyle was quite profitable.
The nearby island of Fidra is considered to be one of the inspirations of Robert Louis Stevenson's novel Treasure Island. It seems Geller has taken it all rather literally, and may soon be found roving the island in a dishevelled state muttering about cheese.
↑ The envelope was later, according to James Randi in a November 2007 blog entry, revealed to contain the numbers "911", a swipe against psychic claims of being able to predict the future.
↑ See the Wikipedia article on Human Potential Movement.
↑ An account of this.
↑ See the Wikipedia article on The Lamb (island).
↑ See the Wikipedia article on Orion correlation theory.
↑ See the Wikipedia article on Fidra.
|
0.985117 |
Mehmet Güner Kaplan (born 18 July 1971 in Gaziantep, Turkey) is a member of the Swedish parliament (Riksdag) aligned with the Swedish Green Party (Miljöpartiet de Gröna) and former spokesperson for the Muslim Council of Sweden (2005–2006) and Young Muslims of Sweden (2000–2002). He was also a founding member of the Muslim peace movement Swedish Muslims for Peace and Justice, as well as a strong supporter of the Swedish peace movement.
When is Mehmet Kaplan's birthday?
Mehmet Kaplan was born on the 18th of July 1971, which was a Sunday. Mehmet Kaplan will be turning 48 in only 89 days from today.
How old is Mehmet Kaplan?
Mehmet Kaplan is 47 years old. To be more precise, the current age as of right now is 17443 days, 18 hours, 9 minutes and 33 seconds.
What is Mehmet Kaplan's zodiac sign?
Mehmet Kaplan's zodiac sign is Cancer.
Is Mehmet Kaplan still alive?
Yes, Mehmet Kaplan is still alive.
|
0.971123 |
For the San Diego Chargers linebacker, see Bobby Lane.
Robert Lawrence Layne (December 19, 1926 – December 1, 1986) was an American football quarterback who played for 15 seasons in the National Football League. He played for the Chicago Bears in 1948, the New York Bulldogs in 1949, the Detroit Lions from 1950–1958, and the Pittsburgh Steelers from 1958–1962.
Layne was selected third overall in the 1948 NFL draft by the Pittsburgh Steelers, who traded his rights to the Chicago Bears. He played college football at the University of Texas. Layne was inducted into the Pro Football Hall of Fame in 1967 and the College Football Hall of Fame in 1968. His number, 22, has been retired by the University of Texas Longhorns and Detroit Lions.
6 "Curse of Bobby Layne"
Layne was born in Santa Anna, Texas; his family moved to Fort Worth when he was very young, and he attended elementary and junior high school there. His mother died when he was only eight years old, and Layne moved in with his uncle and aunt, Mr. and Mrs. Wade Hampton. He attended Highland Park High School in University Park, where he was a teammate of fellow future hall of famer Doak Walker, the Heisman Trophy winner in 1948 for the SMU Mustangs and a pro teammate with the Detroit Lions.
In his senior year, Layne was named to the all-state football team, played in the Oil Bowl All-Star game, and led Highland Park to the state playoffs.
One of the most successful quarterbacks ever to play for Texas, Layne was selected to four straight All-Southwest Conference teams from 1944–47, and was a consensus All-American in his senior year. World War II caused a shortage of players, and rules were changed to allow freshmen to play on the varsity, thereby allowing Layne a four-year career.
Freshman play was sporadically allowed by various conferences during wartime, but would not be allowed universally until the rules were permanently changed in 1972. In his freshman season, Layne became one of the very few players of that era to start his first game. He missed his second game due to an injury and was replaced by future North Texas transfer Zeke Martin, but Layne played the rest of the season and led the Longhorns to within one point of the Southwest Conference Championship when they lost to TCU 7–6 on a missed extra point.
Prior to and during his sophomore year, he spent eight months in the Merchant Marines, serving with his friend Doak Walker. He missed the first six games of the season, and was replaced by Jack Halfpenny. The last game he missed was the team's only loss, to Rice, by one point. Texas went 10–1, won the Southwest Conference, and despite playing only half a season, Layne again made the all-conference team.
In the Cotton Bowl Classic following that season, Texas beat Missouri 40–27, and Layne played perhaps the best game of his career. He set several NCAA and Cotton Bowl records that have lasted into the 21st century. In that game, he completed 11 of 12 passes and accounted for every one of the team's 40 points, scoring four touchdowns, kicking four extra points, and throwing for two other scores; he was named one of the game's outstanding players.
In 1946, the Longhorns were ranked number one in the preseason for the first time, but after beating number 20 Arkansas, they were upset by number 16 Rice and later by unranked TCU. They went 8–2, finished third in the conference, ranked number 15 nationally, and missed out on any bowl games. Layne led the Southwest Conference in total offense (1420 yards), total passing (1115 yards), and punting average (42 yards). Despite the unexpected finish, Layne was named All-Conference again and finished eighth in Heisman Trophy balloting to Glenn Davis of Army.
In 1947, Blair Cherry replaced Dana X. Bible as head coach at Texas and he decided to install the T-formation offense. Cherry, Layne, and their wives spent several weeks in Wisconsin studying the new offense at the training camps of the Chicago Bears and Chicago Cardinals of the National Football League. The change was a success, as Layne led the Southwest Conference in passing yards, made the All-Conference and All-American teams, and finished sixth in Heisman Trophy voting to John Lujack of Notre Dame. The Longhorns, after beating number-19 North Carolina, started the season ranked number 3. They then beat number-15 Oklahoma, but as happened in 1945, Texas was again denied an undefeated season by a missed extra point. After coming back once against Walker's number-8 SMU, Texas again found itself behind late in the game.
Layne engineered a fourth-quarter touchdown drive that would have tied the game, but kicker Frank Guess pushed the extra point wide and the Longhorns lost 14–13. They fell to eighth, and finished behind SMU in the Southwest Conference, but gained an invitation to the Sugar Bowl, where Layne and the Longhorns beat number-six Alabama. As a result of his 10-24, 183 yard performance, Layne won the inaugural Miller-Digby award presented to the game's most valuable player. The Longhorns finished ranked fifth, the best finish in Layne's career. Layne finished his Texas career with a school-record 3,145 passing yards on 210 completions and 400 attempts and 28 wins.
Layne was one of the first inductees into the Cotton Bowl Hall of Fame and made the Cotton Bowl's All-Decade team (1937–1949) for the 1940s. Later, both of Layne's sons, Rob and Alan, played college football. Robert L. Layne, Jr., was a kicker for Texas, playing on the 1969 National Championship team, and Alan played tight end for Texas Christian in 1973.
Layne was one of the best pitchers to ever play at Texas. He made the All-Southwest Conference team all four years he played, and played on teams that won all three Conference Championships available to them (none was named in 1944 due to World War II). He won his first career start, in 1944, when he was managed by his future football coach Blair Cherry, versus Southwestern, 14-1, in a complete-game, 15-strikeout performance. Similar to football, he missed the 1945 season because he was in the Merchant Marines, but returned to play three more seasons. In 1946, he threw the school's first and second no-hitters and posted a 12-4 record. In 1947, he went 12-1 and led Texas to a third-place finish in the first NCAA baseball Tournament.
In 1948, he went 9-0 and again helped Texas win the Southwest Conference, but though they qualified for it, Texas decided not to attend the 1948 NCAA tournament because the players felt they had too many obligations with family and jobs.
Texas went 60-10 overall, and 41-2 in the SWC during Layne's final three years in Austin. When his career was over, Layne had a perfect 28-0 conference record and set several school and conference records during his time on the team, including a few that still stand today. Between baseball and football, he was All-Conference an astounding eight times and won four conference championships.
In 1948, after earning his degree in physical education, Layne played a season of minor league ball for the Lubbock Hubbers baseball team of the Class C West Texas–New Mexico League. He went 6-5 with a 7.29 ERA, and had bids from the New York Giants, the Boston Red Sox, and the St. Louis Cardinals to join their staffs, but he preferred to go to the National Football League, where he could play immediately rather than grind out several years in the minor-league system.
Drafted into the National Football League by the Pittsburgh Steelers, Layne was the third overall selection in the 1948 NFL Draft and was the second overall selection in the 1948 All-America Football Conference (AAFC) draft by the Baltimore Colts. Layne did not want to play for the Steelers, the last team in the NFL to use the single-wing formation, so his rights were quickly traded to the Chicago Bears.
He was offered $77,000 to play for the Colts, but George Halas "sweet talked" him into signing with the Bears. He promised a slow rise to fame in the "big leagues" with a no-trade understanding.
After one season with the Bears, during which Layne was the third-string quarterback behind both Sid Luckman and Johnny Lujack, Layne refused to return and tried to engineer his own trade to the Green Bay Packers. Halas, preoccupied with fending off a challenge from the AAFC, traded Layne to the New York Bulldogs for their first-round pick in the 1950 draft and $50,000 cash. The cash was to be paid in four installments.
With Layne at quarterback, the Bulldogs won only one game and lost 11, but Layne played well and developed quickly. Layne said that one season with the soon-to-be-defunct New York Bulldogs was worth five seasons with any other NFL team.
In 1950, he was traded to the Detroit Lions for wide receiver Bob Mann, and the Lions agreed to make the final three payments to Halas (Halas later remarked that the Lions should have continued the yearly payments indefinitely to him in view of Layne's performance). For the next five years, Layne was reunited with his great friend and Highland Park High School teammate Doak Walker, and together they helped make Detroit into a champion.
In 1952, Layne led the Lions to their first NFL Championship in 17 years, and then did so again in 1953 for back-to-back league titles. They fell short of a three-peat in 1954 when they lost 56–10 to the Cleveland Browns in the NFL championship game, a loss which Layne explained by saying, "I slept too much last night."
In 1955, the team finished last in their conference and Walker surprisingly retired at the top of his game. As Walker had been the team's kicker, Layne took over the kicking duties in 1956 and 1957, and in 1956 led the league in field goal accuracy. In 1956, the Lions finished second in the conference, missing the championship game by only one point. In 1957, the season of the Lions' most recent NFL championship, Layne broke his leg in three places in a pileup during the 11th game of the 12-game season. His replacement, Tobin Rote, finished the season and led the Lions to victory in the championship game in Detroit, a 59-14 rout of the Cleveland Browns.
After the second game of the 1958 season, Pittsburgh Steelers coach Buddy Parker, formerly in Detroit, arranged a trade on October 6 that brought Layne to the Steelers. During his eight seasons in Detroit, the Lions won three NFL championships and Layne played in four Pro Bowls, made first team All-Pro twice, and at various times led the league in over a dozen single-season statistical categories.
Following the trade, Layne played five seasons with the Pittsburgh Steelers. Though he made the Pro Bowl two more times, he never made it back to the playoffs, and the team's best finish was second in the conference in 1962. During his last year in the NFL, he published his autobiography Always on Sunday. Later he stated that the biggest disappointment in his football career was having never won a championship for the Pittsburgh Steelers and specifically, Art Rooney.
By the time Layne retired before the 1963 season, he owned the NFL records for passing attempts (3,700), completions (1,814), touchdowns (196), yards (26,768), and interceptions (243). He left the game as one of the last players to play without a facemask and was credited with creating the two-minute drill. Doak Walker said of him, "Layne never lost a game...time just ran out on him."
Following his retirement as a player, Layne served as the quarterback coach for the Pittsburgh Steelers from 1963–65 and the St. Louis Cardinals in 1965. He was a scout for the Dallas Cowboys from 1966–67. He later unsuccessfully sought the head coaching job at Texas Tech, his last professional involvement with the sport.
For his on-the-field exploits, Layne was inducted into a vast assortment of halls of fame. These included the Texas Sports Hall of Fame in 1960, the Longhorn Hall of Honor in 1963, the Pro Football Hall of Fame in 1967, the state halls of fame in Michigan and Pennsylvania, and the Texas High School Sports Hall of Fame in 1973.
In 2006, he was a finalist on the initial ballot for pre-1947 inductees to the College Baseball Hall of Fame. He was a finalist again the following year.
In a special issue in 1995, Sports Illustrated called Layne "The Toughest Quarterback Who Ever Lived." In 1999, he was ranked number 52 on the Sporting News' list of Football's 100 Greatest Players. After retirement, Layne spent 24 years as a businessman back in Texas in Lubbock, working with his old college coach, Blair Cherry. His business ventures included farms, bowling alleys, real estate, oil, and the stock market.
In his younger days, he, often accompanied by Alex Karras, was well known for his late-night bar-hopping and heavy drinking and it was said of him, "He would drink six days a week and play football on Sunday"; but his heavy drinking may have contributed to his death. Layne is reported to have stated: "If I'd known I was gonna live this long, I'd have taken a lot better care of myself." That line was later used by baseball legend Mickey Mantle, a Dallas neighbor and friend of Layne's, who also died in part due to decades of alcohol abuse. Layne suffered from cancer during his last years.
In November 1986, he traveled to Michigan to present the Hall of Fame ring and plaque to his old friend and teammate Doak Walker, but was hospitalized with intestinal bleeding in Pontiac after a reunion dinner with his former Detroit teammates. He returned to Lubbock on November 12, but three days later was hospitalized again. He died in cardiac arrest on December 1 in Lubbock, and was buried there. Doak Walker and three other members of the Pro Football Hall of Fame were among the pallbearers.
"My only request", he once said, "is that I draw my last dollar and my last breath at precisely the same instant."
In 1958, the Lions traded Layne to the Pittsburgh Steelers. Layne responded to the trade by supposedly saying that the Lions would "not win for 50 years". This story has been called a hoax, particularly because the quote was never published at the time.
Coincidentally, in the 2009 NFL Draft, right after the curse supposedly expired, the Detroit Lions drafted University of Georgia quarterback Matthew Stafford first overall. Stafford was an alumnus of Layne's former school Highland Park High School and also lived in a house on the same street as Layne's. In the 2011 season, Stafford's first full injury-free season, he led the Lions to their first playoff berth since 1999, but lost to fellow Texan Drew Brees and the New Orleans Saints. In the decades since the curse, the Lions also endured multiple playoff droughts lasting more than six years; beginning with the year of the trade, they missed the playoffs for 12 consecutive seasons (1958–1969), with later droughts in 1971–1981, 1984–1990, and 2000–2010.
|
0.999999 |
how do you write 'Japan' in Japanese?
The Japanese language uses 'kanji' (characters), most of which represent ideas. The kanji for 'Japan' are 日本.
The first one, 日, means 'sun' while the second one, 本, means 'origin' (and also 'book', but not in this case).
In Japanese, 日本 therefore means 'origin of sun', which explains why we refer to Japan as the land of the rising sun.
日本 is pronounced 'nihon' or sometimes 'nippon', which is the old pronunciation.
|
0.950761 |
The number of rounds of golf played declined 4.8% last year while the total population continued to increase. So, what is happening? Is golf in the USA dying a slow death?
Looking at the state by state breakdown doesn't produce any answers either. TN down 12%, OR up 5%. WI down 12%. WA up 8.7%.
If it is, it has to be due to SLOW PLAY!
And yet rounds played in NJ were down 10.3% last year.
Boomers are the anchor of the industry. Long term there's a problem. Until they are all in diapers and wheel chairs, no.
Also the economy is healthier. The main reason my round count is down in the last 2 years is I'm busier with work. Good problem.
Year vs previous year comparisons are so obviously pointless - in helping to answer some tired trope - in an activity so beholden to the weather.
Give us 2008 to 2018 and then maybe a conversation can be had.
Do you have that data?
The intro to the article itself says it was weather. Upper Midwest had one of the worst seasons for rain and bad weather ever - ruined a lot of golf course years last summer. 4.8% decline with weather as the primary identified factor just doesn't seem to be a call to bring out your golf dead.
1 year is a small sample size.
I definitely believe this is a large part of it. For a good number of years, I played year round in TN. The last 3-4 years have seen much colder temps and lots of rain. The last couple of years the courses have not dried out well until May.
For my input - where I live, we received 2.5 times the average annual rainfall. Each year for the 5 years before - as well as this year to date - we have fallen behind on the amount of rain received.
If I remember correctly, there were no hurricanes in Washington state, or Oregon, but we had a major hurricane affect the entire east coast, plus multiple tropical storms not categorized as hurricanes, but bringing torrential amounts of rain.
I won't say that golf in America is doing handsprings, but data does show that the number of course closings has greatly dropped, most likely due to the fact that we are reaching a balancing point, where the drop in golfers is becoming matched to the drop in golf courses.
The growth of new courses has also slowed, again, I believe, as a result of the realignment of facilities vs players.
Final statement: if you ain't doing something to make golf "better" - you aren't helping the situation at all.
The responsibility golfers have to keep the game growing is greater than in other sports. And I mean recreational golf - the tours will always seem to go on, but - start a kid out in golf today, and see what happens. If every golfer in the US did that this year, we could double the number of players. Doesn't seem all that hard to me.
The baby boomer generation hitting their peak earning years and enjoying unprecedented prosperity as family units moved from a single earner model to a dual income model fueled the extraordinary growth of the game starting in the early 90s. Subsequent generations with the exception of Millennials have been incapable of filling that void in large part due to their generational populations simply being significantly smaller.
Millennials have the population to drive growth, but have not shown as much of an interest in the game so far. It is yet to be seen if they'll take to the game en masse as their disposable incomes increase. Seems that Millennials are far from traditionalists, so it will be interesting to see how it all evolves, particularly the private club model.
Why do you feel responsible for growing the game? If fewer people want to play golf, why is that a problem?
This is true and I can't find much historical data. I did find the December 2017 data from the OP website.
2017 was down 2.7% compared to 2016.
" The net is that the industry has given back the gains from the prior two years, which were up 1.8% and 0.6%, respectively – resulting in a relatively stable number of rounds played over the last several years."
Population makeup and societal changes always have and always will drive industries' success and failure. In Korea, Japan, India (pick your country), golf is really taking off, but even there, it will ebb and flow with population changes and spending habits.
There's golf, then there's the business of golf.
The golf business has to contract until the next large pig in the python introduces more golfers, and that will be affected by screen time and attention span.
And not by speed of play. Eighteen holes takes time: driving to and fro, checking in, playing, the 19th hole. If you don't have 6 hours, do something else.
Come to Florida and ask me if golf is dying after having the privilege of playing a 6 hour round for $80 with stacked tee times all day in 8 min intervals on EVERY COURSE.
Baseball is dying, too. Both sports aren't going anywhere though!
not dying, just falling victim to the accessibility of everything else. I don't think there are as many people that "specialize" with their hobbies anymore, they just do so many more things in addition to playing golf.
Most of the country experienced well above average precipitation in 2018. Some parts of the country, like the region where I live, shattered records for amount of precipitation in a calendar year. Calendar year 2017 was also well above average for precipitation.
So far, 2019 hasn't improved much in that regard!
That's an interesting thought. What hobbies other than smartphone and computer games have an increasing number of participants?
I blame the rain. I have a friend in eastern PA who for years has played every single Saturday and Sunday plus at least 2 weeknights during the season without fail so long as it isn't raining and his home course is open. He told me that in 2018 he played the least number of rounds he's played in 15 years simply due to it either raining on those days or the course being flooded/closed/severely restricted due to heavy rains or flooding. He said unlike most years, it was so wet that the course never really dried out all summer, it played soft/wet with the rough growing vigorously the whole time, whereas usually in July/August it gets real firm and dried out.
Facebook is a verb, apparently.
I hope it's dying. Courses around me are too crowded. Need to thin out the herd.
I can only speak for my area and golf is not dying, but it is bleeding out. In the past 2 years, there have been 5 courses that have closed (that were newsworthy because of housing developments that surrounded them). I primarily play on a military installation and if it were not for the over-the-hill gang, the course would most likely be closing also. Very rarely do we see any younger players in any meaningful numbers on the course or even at the driving range. Now on the other hand, the Top Golf location seemed to be doing fine, but after talking to one of the guys that work there, they have seen a steady decline in participation also. In addition, our weather is not all that conducive for year round play, so that adds to the declining factor. Also take into consideration the costs and time involved and you have a recipe for non participation. I know that being on a fixed income has reduced my playing, so it should stand to reason that the younger guys with families also fall prey to this obstacle.
In Los Angeles, the courses are jam-packed with fivesomes back to back from before the sun's up to twilight. This was last Saturday.
Your map covers Dec. through Feb. Too much of a spot weather effect to make decisions on multi-year national trends.
In the St. Louis area, frost on the ground continues until 11 AM, then it gets really cold by 3 PM. I just think a lot of people are working on income tax so they can play in mid-March when things warm up. Also, 20°F at dawn means no thaw all day long.
Over in Asia, golf boom in South Korea reveals that more than half of golfers are simulator players only.
USA: Golf is continuing to right-size. Still more green-grass courses closing than opening.
|
0.985803 |
The Internet is a vast network of networks that spans the entire globe. Data is transferred from computer to computer, and from network to network, using packet-switching technology and a suite of Internet protocols called TCP/IP, after its two most important protocols.
Although only popularised in the 1990s, the events that were to lead ultimately to the creation of the Internet started back in the late 1950s. When the Soviet Union launched the first earth-orbiting satellite (Sputnik) in 1957, the United States awoke to the fact that they were being overtaken in the space race. One of the results of this realisation was the establishment of the Advanced Research Projects Agency (ARPA) in 1958. In 1969, ARPA set up a research project called ARPANET (Advanced Research Projects Agency Network) to create a secure, de-centralised network capable of functioning when parts of the network infrastructure were destroyed or disabled (for example, in the event of a nuclear war).
The idea behind packet-switching was to break messages down into small blocks of data called packets that could be sent across a network independently of one another, and if necessary via different routes. The message would be reassembled by the receiver once all of the packets had arrived safely. If a packet was lost or damaged, it could be re-transmitted, avoiding the need to re-transmit the entire message from the beginning. The protocols used to transfer data across the network had to be robust and flexible enough to be able to deal with lost or damaged packets and adapt to the sudden loss of network links by finding a new route between the two communicating end points. The protocols also had to work over a number of different underlying network technologies, and on different operating systems and hardware platforms.
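As a toy illustration of that idea (a sketch only, not how TCP/IP itself is implemented; the helper names `to_packets` and `reassemble` are invented for this example), the snippet below splits a message into numbered packets, delivers them out of order, and reassembles them by sequence number. A real protocol would add headers, checksums, acknowledgements and retransmission of lost packets.

```python
import random

def to_packets(message: bytes, size: int = 8):
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the original message from packets received in any order."""
    return b"".join(payload for _, payload in sorted(packets))

msg = b"Packets may arrive out of order and still be reassembled."
packets = to_packets(msg)
random.shuffle(packets)          # simulate packets taking different routes
assert reassemble(packets) == msg
```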
The first standard networking protocol developed for ARPANET was the Network Control Protocol (NCP), which was deployed in December 1970 and successfully used by a number of ARPANET sites to communicate with each other in October of the following year. By the end of 1971 there were fifteen sites using NCP. They consisted mainly of universities and scientific research centres, and included Carnegie Mellon University, Harvard University, MIT, the RAND Corporation, Stanford University, and UCLA.
By July of 1975, ARPANET was an operational network, and the period from 1973 to 1982 saw the development and refinement of the TCP/IP protocol suite, and its implementation on a range of operating systems. The developing Internet technology attracted the attention of the US military, and in 1978 it was decided that the TCP/IP protocols would be adopted for military communications. ARPANET became the world's first TCP/IP-based wide area network in January 1983, when all ARPANET hosts were switched from NCP to the new Internet protocols.
During the 1970s and 1980s, the evolving network was used primarily by academics, scientists and the US government for research and communications, but all that changed in 1992, when the US Department of Defense withdrew funding from the ARPANET project, having essentially achieved their objectives. In 1985, the US National Science Foundation (NSF) had commissioned NSFNET, a 56 kilobits per second university network backbone, which was upgraded to T1 bandwidth (1.544 Megabits per second) the following year due to high demand. 1989 saw NSFNET linked with the commercial MCI Mail network, and other electronic mail services, including Compuserve, were quick to follow suit. 1989 also saw the emergence of three commercial Internet service providers (ISPs) - UUNET, PSINET and CERFNET.
Other networks, notably Usenet and BITNET at first offered gateways into the Internet, and later merged with it. Soon, more commercial and educational networks, such as Telenet, Tymnet and JANET were interconnected with the Internet. The rapid growth of the network was facilitated by the availability of commercial routers from companies such as Cisco Systems, a sharp increase in the number of Ethernet-based local area networks, and the popularity of the Berkeley Software Distribution (BSD) of the UNIX operating system, which included the TCP/IP protocols.
In 1989 Tim Berners-Lee, a computer scientist working at the CERN laboratories in Switzerland, developed the hypertext-based information system that was to become the World Wide Web. The Web was at first used, like the Internet itself, only by academics and scientists. When the military closed down the ARPANET in 1992, however, a number of commercial organisations offered Internet access to the general public for the first time. 1993 saw another milestone as the National Centre for Supercomputing Applications (NCSA), based at the University of Illinois, released the Mosaic Web browser. The significance of this development was that Mosaic was the first Web browser to offer a user-friendly, graphical user interface that could display graphic images as well as text. By 1994, there was growing public interest in the Internet, and by 1996 use of the term "Internet" had become commonplace.
During the course of the 1990s, most of the remaining public computer networks were linked to the Internet, and became part of it by definition. The size of the Internet is estimated to have approximately doubled each year during this decade, with the most dramatic growth occurring during 1996-1997. Many factors encouraged such growth, among them the non-proprietary and open nature of the Internet protocols which facilitated the interoperability of both hardware and software from different vendors, the lack of any significant centralised control, and the fact that no one organisation actually owned the Internet. The internet today is a globally distributed network of interconnected networks, consisting of high-capacity backbone networks, regional networks, commercial networks and local area networks, as well as the millions of home computers, mobile phones and other personal computing devices connected to the internet via a service provider's network.
Below is a graphic representation of the Internet circa 2003, showing the links between Internet routers. If nothing else, it serves to illustrate the sheer complexity of this vast network of networks.
According to recent statistics, the Internet had 1.463 billion users worldwide as of June 30th, 2008. This represents a thousand-fold increase in the last fifteen years, largely attributable to the widespread availability of low-cost computers and Internet access, the almost universal adoption of computer and networking technology by commerce, industry and mainstream education, and the phenomenal growth in commercial Internet services and new Internet technologies.
All research councils, universities and FE colleges in the UK are connected to the UK's government-funded national research and education network (NREN). The network is called JANET (Joint Academic NETwork), and is linked to other NRENs in Europe and around the World via GEANT, Europe's main research and education network.
The South West England Regional Network (SWERN) connects the Universities of Plymouth, Exeter, Bristol, Bath, Gloucestershire and the West of England and provides connections for many other HE institutions and FE colleges in the region.
The growth of public interest in and use of the Internet has been given further impetus since the mid 1990s thanks to increasingly powerful computers, user-friendly desktop operating systems, a rapid and continuing increase in connection bandwidth, and the availability of a vast range of online services. At the same time, the cost of both computer hardware and broadband Internet connections has fallen dramatically. In addition to the many Internet cafés, the Internet can be accessed from public libraries, community centres and other publicly accessible institutions free of charge, which means that even those with limited means can gain access to information services, provided they have a modicum of computer literacy.
The Internet has changed the way we communicate. Electronic mail, although predating the Internet, is now available to both businesses and private individuals, and allows us to send text information and file attachments to anyone, anywhere in the world, providing they have an Internet-enabled computer, PDA or mobile phone. Social interaction has been given a new dimension thanks to the advent of Internet Relay Chat (IRC), social networking Web sites such as Facebook and MySpace, and multi-player online gaming. Business people can talk to colleagues, customers and suppliers anywhere in the world using video conferencing. Even private long-distance and international telephone calls can now be conducted using Voice Over IP (VOIP) technologies such as Skype, at a fraction of the cost of using conventional landlines, or even free of charge if both parties have an Internet connection, a computer, and a suitable headset or Internet phone.
Advances in mobile phone and wireless technology mean that even those on the move can now access the many services available via the World Wide Web if they have a 3G mobile phone, or a wireless-enabled PDA or laptop computer. Railway and bus stations, airports and ports, and many other public places (like McDonald's!) now provide wireless access points (sometimes referred to as "hot spots"). Interestingly, there are more mobile phones with access to the Internet than there are computers, although due to the far higher cost involved, these facilities are still not widely used.
The Internet, and the many Web-based services available today, has changed the way we live and work, probably for ever. More of us are now able to work from home, or at least work far more flexibly, thanks to secure broadband Internet connections. We can shop online, bank online, and even renew our motor insurance, road tax and the TV license online. We can receive live Internet TV and radio broadcasts, download music and video, catch up with the news and sport, get a weather report, book a holiday, or even track down long lost friends, all online. Where will the Internet be in ten years time? I suspect it would be very foolish to make any predictions.
|
0.999999 |
Biological pathways are important for understanding biological mechanisms. Thus, finding important pathways that underlie biological problems helps researchers to focus on the most relevant sets of genes. Pathways resemble networks with complicated structures, but most of the existing pathway enrichment tools ignore topological information embedded within pathways, which limits their applicability.
A systematic and extensible pathway enrichment method in which nodes are weighted by network centrality is proposed. We demonstrate how choice of pathway structure and centrality measurement, as well as the presence of key genes, affects pathway significance. We emphasize two improvements of our method over current methods. First, allowing for the diversity of genes' characteristics and the difficulty of covering gene importance from all aspects, we set centrality as an optional parameter in the model. Second, nodes rather than genes form the basic unit of pathways, such that one node can be composed of several genes and one gene may reside in different nodes. By comparing our methodology to the original enrichment method using both simulation data and real-world data, we demonstrate the efficacy of our method in finding new pathways from a biological perspective.
Our method can benefit the systematic analysis of biological pathways and help to extract more meaningful information from gene expression data. The algorithm has been implemented as an R package CePa, and also a web-based version of CePa is provided.
As omics and high-throughput technologies continue to develop, researchers can increasingly understand biological phenomena at the systems level; that is, can elucidate the complicated interactions between genes and molecules responsible for biological functions. Microarray technology has been widely used to measure gene expression profiles and has produced huge amounts of data for biological analysis. However, traditional single gene analysis tells us little about the cooperative roles of genes in real biological systems. New challenges for microarray data analysis are to find specific biological functions affected by a group of related genes. Biological pathways are sets of genes or molecules that act together by chemical reactions, molecule modifications or signalling transduction to carry out such functions. Since pathways are essentially integrated circuits that actualize specified biological processes, perturbation of pathways may be harmful to regular biological systems. Thus, finding biologically important pathways can assist researchers in identifying sets of genes responsible for essential functions. Currently, a number of tools are available to identify which pathways are significantly influenced based on the transcription level change of member genes [4, 5]. In other words, they identify pathways where differentially expressed genes are enriched.
Since a pathway can be denoted as a set of genes, pathway enrichment belongs to a more general category of methods termed gene set enrichment. Two main categories of enrichment methodologies exist: over-representation analysis (ORA) and gene set analysis (GSA). The former only focuses on the number of differential genes in the pathway, while the latter incorporates the entire gene expression from microarray datasets. In fact, ORA is a special case of GSA, utilizing a binary transformation of gene expression values. In standard ORA, the correlations between genes within the pathway and those that are differentially expressed are evaluated by Fisher's exact test or chi-square test, in the form of a 2 × 2 contingency table. The most popular ORA online tool in current use is DAVID, which supports a variety of species and gene identifiers. For researchers familiar with the R statistical environment, the GOstats package is a highly recommended ORA analysis tool. GSA methods are implemented via either a univariate or a multivariate procedure. In univariate analysis, gene level statistics are initially calculated from fold changes or statistical tests (e.g., t-test). These statistics are then combined into a pathway level statistic by summation or averaging. GSEA is a widely used univariate tool that utilizes a weighted Kolmogorov-Smirnov test to measure the degree of differential expression of a gene set by calculating a running sum from the top of a ranked gene list. Multivariate analysis considers the correlations between genes in the pathway and calculates the pathway level statistic directly from the expression value matrix using Hotelling's T2 test or MANOVA models. Besides these standard models, extended models of GSA exist. For example, GSCA (Gene Set Co-Expression Analysis) aims to identify gene sets whose members have different co-expression structures between phenotypes; ROAST uses a Monte-Carlo simulation for multivariate regression which is applicable to diverse experimental designs; GGEA (Gene Graph Enrichment Analysis) evaluates gene sets as Petri networks constructed from an a priori established gene regulatory network. Further studies have focused on the methodology issues of gene set enrichment analysis, such as evaluating the power of different statistical models [6, 16], generating null distributions of gene set scores [17, 18], and overlapping of gene sets [19–21]. The approach of gene set enrichment analysis is also applicable to a broad range of systems-biology-related fields, including functional network module analysis and microRNA target prediction [23, 24].
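To make the standard ORA computation concrete, here is a minimal sketch (the counts below are invented for illustration, not taken from any study) that builds the 2 × 2 contingency table of pathway membership versus differential expression and tests it with Fisher's exact test via SciPy:

```python
from scipy.stats import fisher_exact

# Hypothetical counts for one pathway against a background of assayed genes.
n_background      = 20000   # genes measured on the array
n_diff            = 800     # differentially expressed genes overall
n_pathway         = 120     # genes annotated to the pathway
n_diff_in_pathway = 18      # differentially expressed genes in the pathway

# Rows: in pathway / not in pathway; columns: differential / not differential.
table = [
    [n_diff_in_pathway, n_pathway - n_diff_in_pathway],
    [n_diff - n_diff_in_pathway,
     n_background - n_pathway - n_diff + n_diff_in_pathway],
]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```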
Current enrichment methods are limited for pathway analysis because they treat all genes in a pathway as identical. Rather than comprising a list of genes, a pathway identifies how member genes interact with each other. Clearly, perturbation of a key gene will have a more considerable effect on the pathway than perturbation of an insignificant gene. Since a pathway is represented as a network with nodes and edges, its topology is essential for evaluating the importance of the pathway. To date, few pathway enrichment studies have incorporated any topological information. Gao et al. designed a pathway score in which the values of all connected gene pairs are summed, where the value of a gene pair is obtained by multiplying the absolute normalized expression values of the paired genes. Hung et al. defined a value for each gene based on the closest correlated neighbor genes, and used this value as the weight of the Kolmogorov-Smirnov test in the GSEA procedure for each pathway. Drăghici et al. introduced a topology term into the scoring function, reflecting that the importance of genes is enhanced if they in turn influence important downstream genes. Thomas et al. assigned larger weights to upstream and downstream pathway genes, and to genes having high connectivity, and then integrated them into the maxmean statistic. Currently available methods determine the importance of genes in the pathway by a single measure. However, because of the complexity of biological pathways and the varying characteristics of genes, such single-measure quantitation cannot fully capture the properties of different genes in their biological environment. Thus, a model that comprehensively integrates both enrichment and topology information is urgently required.
Here, we propose a general, systematic and extensible enrichment methodology by which to find significant pathways using topology information. Two improvements of our method over current methods are apparent. First, given the diversity of genes’ characteristics and the difficulties of covering gene importance from all angles, we do not assume a fixed measurement for each gene but allow the user to specify the method by which network nodes will be weighted, as an optional parameter in the model. This feature enables researchers to assess gene importance from a perspective relevant to their particular biological problem. In our model, the importance of genes in pathways is assessed by network centralities. In graph theory, centrality provides a means of ranking nodes based on network structure. Different centrality measurements assign importance to nodes from different aspects. Degree centrality quantifies the number of neighbours to which a node directly connects, while betweenness defines the number of information streams passing through a given node. Generally speaking, large centrality values are assigned to central nodes in the network. Nodes representing metabolites, proteins or genes with high centralities are essential for maintaining biological networks in steady state [30, 31]. Moreover, the relevance of a particular centrality measurement may vary according to the biological role of the pathway [32, 33]. Choice of centrality measurement depends on the types of genes considered important in the pathway. Second, nodes rather than genes are taken as the basic units of pathways in the model. In general, the regular biological functions in significant pathways are usually altered where abnormal pathway states arise from abnormal internal node states. We note that pathway nodes may represent not only single genes, but also complexes and protein families. For a complex comprising more than one gene, if one member gene has been altered, the function of the whole complex is disrupted. On the other hand, a single gene may reside in multiple complexes; if this gene loses its function, all of its complexes will be influenced. Therefore a mapping procedure from genes to pathway nodes is applied in our model. The pathway nodes further include non-gene nodes such as microRNAs and compounds, which also contribute to the topology of the pathway. Hence, all types of nodes are retained in our pathway analysis.
In this article, the original pathway enrichment method is extended by assigning network centralities as node weights, and nodes are mapped from differentially expressed genes in pathways. The model is flexible in that it can readily accommodate available gene set enrichment methods and various topological measurements. Through a simulation study, we demonstrate how pathway significance depends on network structure and choice of centrality measurement. In the analysis of a liver cancer data set, our model identified relevant biological processes that were bypassed by existing methods. We also demonstrate how key genes affect the significance of pathways directly underlying biological processes.
Because ORA methodology is easily implemented and rapidly executed, it is favored over GSA in applications. Therefore, we focus mainly on the centrality-based extension of ORA, while the extension of GSA will be discussed briefly at the end of this article.
Since a pathway is represented as a network, the basic unit of the network (the node) is not always a single gene. In real biological pathways, the nodes can also represent complexes or protein families. Moreover, the product of a particular gene may be incorporated into different complexes to serve different functions. Such diverse roles of gene products are ignored by traditional ORA methods, possibly leading to erroneous interpretations. Abnormal node states are expected to contribute to the abnormal states of pathways. As previously mentioned, the function of a multi-gene complex is affected by alteration of any one gene in the complex, while alteration of a multi-complex gene influences all of the complexes in which the gene resides. Merely counting genes in pathways cannot reflect these different types of roles played by different genes. In a real-world pathway catalogue, a node typically comprises two or more genes, and some genes are located in multiple complexes or families. Among pathways in the NCI-Nature catalogue of the Pathway Interaction Database (PID), 58.6% of nodes comprise more than one gene while 47.2% of genes reside in multiple nodes (Figure 1A, 1B). Compounds and microRNAs can also form pathway nodes. Although the changing quantity of these entities is not captured by typical microarray experiments, they may contribute significantly to pathway regulation. Therefore, these types of nodes cannot be neglected in topological pathway analysis. For the above reasons, the number of genes involved in a biological pathway does not correspond to the number of nodes in the pathway. Figure 1C shows how node count varies with gene count in pathways extracted from PID. Therefore, in our analysis we map genes to the pathway nodes and assume the node as the basic pathway unit. In this way, if any member of a complex or family is differentially expressed, the node representing the complex or family is differentially affected. We consider that nodes representing protein coding genes, compounds and microRNAs are all legitimate regulators of pathways.
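A minimal sketch of this gene-to-node mapping (the node composition and gene names below are invented for illustration): a node is flagged as differentially affected as soon as any of its member genes is differentially expressed, and a single gene can flag several nodes.

```python
# Hypothetical node -> member-gene annotation for one pathway.
node_members = {
    "complex_A":  {"GENE1", "GENE2", "GENE3"},   # multi-gene complex
    "family_B":   {"GENE4", "GENE5"},            # protein family
    "GENE2_node": {"GENE2"},                     # GENE2 also acts on its own
    "miR_X":      set(),                         # microRNA node, no array signal
}

diff_genes = {"GENE2", "GENE7"}                  # differentially expressed genes

# A node is differentially affected if any member gene is differential.
diff_nodes = {node for node, members in node_members.items()
              if members & diff_genes}
print(diff_nodes)   # {'complex_A', 'GENE2_node'} -- GENE2 flags both nodes
```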
Meta-analysis of the pathway catalogue. A) Distribution of the number of member genes in each node; B) Distribution of the number of nodes in which a single gene resides; C) Relationship between node count and gene count in biological pathways. The pathways are derived from the Pathway Interaction Database, NCI-Nature catalogue. In figures A and B, the Y-axis is log-scaled.
s = Σ_{i=1}^{n} w_i d_i, where s is the pathway score, w_i is the weight of the ith node (reflecting the importance of the node), n is the number of nodes in the pathway, and d_i indicates whether the ith node is differentially affected. The pathway score thus aggregates two components: the node weights and the number of differential nodes. If a node has a larger weight, i.e. is more important, it contributes more strongly to whether the pathway is deemed significant; a larger number of differential nodes also increases the pathway score. Consequently, a pathway may be significant because it contains a few highly important differential nodes, while a pathway containing many unimportant differential nodes may remain insignificant. In Equation 1, the definition of w is general and the weight can be assigned any value the researcher considers appropriate. Note that when w_i = 1 for all i, s is simply the number of differential nodes in the pathway. We refer to this condition as the equal weight condition in the following sections.
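A minimal R sketch of this scoring scheme (the vectors below are invented purely for illustration) makes the role of the two components explicit:

```r
# Node-weighted pathway score s = sum_i w_i * d_i (Equation 1).
# w: node weights (e.g. centralities); d: 1 if the node is differentially
# affected, 0 otherwise.
pathway_score <- function(w, d) sum(w * d)

w <- c(3, 1, 7, 2, 5)   # hypothetical node weights
d <- c(1, 0, 1, 0, 0)   # hypothetical differential indicator
pathway_score(w, d)     # 10

# With w_i = 1 for all i (the "equal weight condition") the score reduces
# to the number of differential nodes:
pathway_score(rep(1, length(d)), d)   # 2
```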
The most important information in pathways comprises the complicated interactions between genes that govern the transmission of biological signals through networks. Since pathways are represented as networks, it is natural to define the weight w from topological information. Existing methods that use topological information assign gene importance through a single, fixed measurement. However, because genes play different roles in biological pathways, it is difficult to design one measurement that covers the entire spectrum of a gene’s function. Instead of designing a single measurement, we compute several topological measurements that assess the importance of genes from different aspects. Since different measurements relate to different biological functions, the best practice is to try each of them in the search for significant pathways.
Here, we identify central nodes in pathways using network centrality. Recall from the Background section that centrality in graph theory is a means of ranking nodes according to network structure. Two frequently used centralities, degree and shortest path betweenness (or, more concisely, betweenness), are selected as candidate measurements. Since pathways are directed networks, degree centrality is split into in-degree and out-degree. In biological networks, in-degree refers to the number of upstream genes acting directly on a given gene, while out-degree refers to the number of downstream genes directly acted upon by that gene. As previously mentioned, betweenness assesses the amount of information streaming through a given node in the network. These two centralities are broadly used in biological network analysis [31, 35].
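In R, these centralities can be obtained directly from the igraph package, which the authors also use in their implementation; the toy directed graph below is purely illustrative:

```r
library(igraph)

# Illustrative directed pathway graph (edge list invented for this sketch).
g <- graph_from_literal(A -+ B, B -+ C, A -+ C, C -+ D, B -+ D)

degree(g, mode = "in")            # upstream nodes acting directly on each node
degree(g, mode = "out")           # downstream nodes each node acts on directly
betweenness(g, directed = TRUE)   # information streams passing through each node
```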
To measure the importance of nodes in the network from different aspects, we define an additional centrality: largest reach. The largest reach centrality is based on the shortest path between two nodes and is affected by all the other nodes in the network. The largest reach centrality determines how far a node can send or receive information within the network. It is defined as the largest length of the shortest paths to all the other nodes in the network. Since information is always transmitted sequentially in biological pathways, the largest reach centrality can reflect whether nodes stay in the upstream or downstream part of the pathway. In a directed network, the largest reach is denoted as in-largest reach and out-largest reach.
Other centralities, besides those described above, can also be considered. For instance, the closeness centrality computes the time required to spread information from one node to all other nodes. The eccentricity centrality determines whether a node resides in the center of the network and whether the distribution of nodes around the central node is symmetric. The stress centrality measures the extent to which a node can hold network communications. The eigenvector centrality measures the importance of a node based on its connections to other high-scoring nodes in the network (which contribute more to the node score than low-scoring nodes). Centralities closely related to the eigenvector are Katz’s Status Index and PageRank. For more details on this subject, readers may refer to [32, 33, 36].
A simulated gene list and a simulated pathway are generated for the simulation study. In the pathway, we assume that every node corresponds to a single gene. The contingency table for ORA is listed in Table 1. The p-value of the pathway (1.36 × 10^−5 by one-sided Fisher’s exact test) is constant and independent of the pathway structure.
The simulated microarray contains 10,000 genes, of which 1,000 are differentially expressed. The simulated pathway contains 200 genes, of which 40 are differential.
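The fixed ORA p-value quoted above can be checked with a few lines of R; the contingency table follows directly from the counts given, and the exact value should come out close to the 1.36 × 10^−5 reported in the text:

```r
# ORA contingency table: 10,000 genes (1,000 differential); pathway of
# 200 genes containing 40 differential genes.
tab <- matrix(c(40,  160,    # in pathway:  differential / not differential
                960, 8840),  # outside pathway: differential / not differential
              nrow = 2, byrow = TRUE,
              dimnames = list(c("in pathway", "not in pathway"),
                              c("differential", "not differential")))

fisher.test(tab, alternative = "greater")$p.value

# Equivalently, the hypergeometric tail used by traditional ORA:
phyper(40 - 1, 1000, 9000, 200, lower.tail = FALSE)
# Both should give roughly 1.4e-5, in line with the value quoted above.
```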
The pathway structure is generated as a random network. Two representative random network models, the Erdős-Rényi model (ER) and the Barabási-Albert model (BA), are selected. These are basic random network models in graph theory, but their network structures differ. We generate ER random networks as follows: 1) each pair of nodes has the same probability (1/n) of being connected, where n is the number of nodes in the pathway; 2) each connection is assigned a direction with equal probability (p = 0.5). The BA random network is generated as follows: 1) the probability that a node makes a new connection is proportional to its degree; 2) each connection is assigned a direction with equal probability (p = 0.5). In the ER model, node degree follows a binomial distribution, while in the BA model it follows a power-law distribution: the majority of nodes have few neighbors while a small minority holds most of the connections in the network. Examples of ER and BA random networks can be found in Additional file 1.
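A possible igraph sketch of the two generators is shown below; `randomize_directions` is a helper introduced here (not part of the paper's code) to give every edge a random orientation, as described above:

```r
library(igraph)

n <- 200  # pathway size used in the simulation

# Helper: give every undirected edge a random direction (p = 0.5 each way).
randomize_directions <- function(g) {
  el <- as_edgelist(g, names = FALSE)
  flip <- runif(nrow(el)) < 0.5
  el[flip, ] <- el[flip, 2:1]
  graph_from_edgelist(el, directed = TRUE)
}

# Erdos-Renyi structure: each pair of nodes connected with probability 1/n.
g_er <- randomize_directions(sample_gnp(n, p = 1 / n))

# Barabasi-Albert structure: preferential attachment (power-law degrees).
g_ba <- randomize_directions(sample_pa(n, power = 1, m = 1, directed = FALSE))
```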
The pathway structure was generated 1000 times, and 40 differential nodes were randomly selected from each simulated network. For each simulated network, we calculate the significance of the pathway. Values of the in-degree, out-degree, betweenness, in-largest reach and out-largest reach centralities, as well as the equal weight condition, are compared between our method and traditional ORA. Note that, since every node corresponds to a single gene, the equal weight condition approximates the hypergeometric distribution on which traditional ORA is based.
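One way to organize this simulation in R is sketched below; the helper names are ours, `randomize_directions` is the helper defined in the previous sketch, and the inner permutation step anticipates the null-distribution procedure described in the Methods:

```r
# One iteration: 1) draw a random pathway structure, 2) pick 40 differential
# nodes at random, 3) score the pathway with a chosen centrality as weight,
# 4) estimate its p-value against scores from re-drawn differential gene lists.
simulate_once <- function(make_graph, centrality, n_nodes = 200, n_diff = 40,
                          p_diff = 0.1, n_perm = 1000) {
  g <- make_graph(n_nodes)
  w <- centrality(g)
  d <- integer(n_nodes); d[sample(n_nodes, n_diff)] <- 1
  s_real <- sum(w * d)
  s_null <- replicate(n_perm, sum(w * rbinom(n_nodes, 1, p_diff)))
  mean(s_null >= s_real)   # p-value of this simulated pathway
}

# Example: p-values over 1000 ER-structured pathways weighted by in-degree.
pvals <- replicate(1000, simulate_once(
  make_graph = function(n) randomize_directions(sample_gnp(n, 1 / n)),
  centrality = function(g) degree(g, mode = "in")
))
mean(pvals <= 0.01)   # proportion of significant pathways, cf. Table 2
```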
Since the pathway score is computed from a list of differential nodes, we summarize the distribution of the differential nodes’ centralities in each simulation by four values: the maximum, the 75th percentile, the median and the minimum. From these four values, the effect of the differential nodes’ centralities on the final pathway score can be estimated. Figure 2 illustrates the p-values and the centrality distributions of differential nodes in each simulation under the different centrality measurements. The proportions of pathways with p-values ≤ 0.01 are listed in Table 2. Clearly, the significance of the pathway can be lost when centrality is used as a weighting factor, and the level of pathway significance depends on the network structure and the type of centrality measure. For example, for an ER-generated network structure in which nodes are weighted by in-degree, the pathway is significant in 57.4% of the 1000 simulations.
P-values and centrality distributions of pathways with different random network structures under different centrality measurements. Pathway topologies are generated from (A) the Erdős-Rényi model and (B) the Barabási-Albert model. Comparisons are made between the in-degree, out-degree, betweenness, in-largest reach and out-largest reach centralities, as well as the equal weight condition. Each plot shows the distribution of the differential nodes’ centralities in each simulation, summarized by the maximum, the 75th percentile, the median and the minimum. All data are ordered by p-value on the X-axis. Points in the figure are randomly shifted by small intervals for ease of visualization.
When degree (in or out) is used as the weight, the ER model yields a larger proportion of significant pathways than the BA model. In the BA model, a small minority of important nodes (as measured by degree) dominates the pathway; hence, if differential nodes are picked at random from a BA network, the probability of selecting the nodes that yield large pathway scores is low. The majority of trials therefore generate insignificant pathways.
We observe that the maximum largest reach values (in and out) from the ER and BA networks are similar (around 10; see Figure 2), but the medians and 75th percentiles of largest reach in the BA-generated networks exceed those of the ER-generated networks, implying that the distribution of largest reach in the BA model is right-shifted relative to that of the ER model (histograms of the largest reach in both models can be found in Additional file 2). As a result, when largest reach is used as the weight, the BA model produces a higher proportion of significant pathways than the ER model. This is due to the presence of central hub nodes in the BA model, which ensure robust information transmission and are thus more likely to score high largest reach values.
From the simulation study, we observe that even when the number of differential nodes in a pathway is significant by Fisher’s exact test (or by its approximation, the equal weight condition), the pathway will not be significant if these nodes occupy less important positions in the pathway. The level of significance is affected by both the centrality measurement and the network structure. If researchers consider nodes with large degree to be more important, then traditional ORA, which ignores the network topology, would yield many false positives. In the current simulation study, the proportion of significant pathways under ORA is expected to be 100%; but when the pathway structure is generated by the ER model and assessed by degree centrality, only 57.4% of the 1000 simulations yield a significant pathway. This means that, from this perspective, 42.6% of the ORA calls would be false positives.
We next assess the influence of key nodes on the evaluation of pathway significance. For the same simulated gene list and pathway as were used in the simulation study, the number of differential nodes in the pathway is varied from 1 to 100. The pathway structures are generated from the BA model without directions, and degree is used as the centrality measure. Differential nodes are added to the pathway in two orders: 1) from largest to smallest degree, and 2) from smallest to largest degree.
In the BA model, the small number of nodes holding most of the connections are the most central nodes, and thus they contribute most to the significance of the pathway. The pathway would be altered if these nodes were differentially affected. As illustrated in Figure 3, when high-degree differential nodes are selected first, the pathway is highly significant (p-value < 0.01) once the number of differential nodes reaches 5 or more. By comparison, a pathway with 5 differential nodes is far from significant under traditional ORA (p-value ≈ 1); applying ORA, the minimum number of differential nodes required to achieve a p-value < 0.01 is 31. Conversely, if the differential nodes in the pathway are largely of very low degree, many more of them are required to make the pathway significant: as shown in Figure 3, at least 90 small-degree differential nodes must be selected to bring the p-value of the pathway below 0.01. In conclusion, the number of differential nodes alone cannot fully reflect the significance of a pathway. We reiterate that, without highlighting these key nodes, researchers are likely to make erroneous interpretations of biological pathways.
Comparison of p-values influenced by key nodes. Differential nodes, weighted by degree, are selected in two ways: from high to low degree and from low to high degree. Traditional ORA is also shown for comparison.
We tested our method on a real microarray dataset [GEO: GSE22058]. The microarray experiment measures mRNA expression changes in liver cancer tissue and adjacent non-tumour tissue. Following gene ID matching and merging of duplicated genes, 18,503 genes were obtained. The top 2000 most differentially expressed genes (determined by t-test) comprised our differential gene list. The NCI-Nature pathway catalogue from the Pathway Interaction Database (PID) was used because it is manually curated and reviewed, and is the catalogue recommended by the PID database. The in-degree, out-degree, betweenness, in-largest reach and out-largest reach centrality measurements were applied and compared. In addition, we applied the dataset to the equal weight condition and to traditional ORA, because the equal weight condition maps genes to nodes, while traditional ORA focuses solely on gene numbers. P-values for pathways are calculated from 1000 simulations and the false discovery rate (FDR) is controlled by the Benjamini-Hochberg (BH) procedure. The FDR cutoff is set to 0.05.
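The gene-selection and multiple-testing steps can be expressed in a few lines of R; object names such as `expr`, `group` and `pathway_pvals` are placeholders, not objects from the paper:

```r
# `expr` is a genes x samples expression matrix; `group` labels tumour vs
# adjacent non-tumour samples.
t_pvals <- apply(expr, 1, function(x) t.test(x ~ group)$p.value)
diff_genes <- names(sort(t_pvals))[1:2000]    # top 2000 differential genes

# Benjamini-Hochberg adjustment of the pathway-level simulation p-values:
fdr <- p.adjust(pathway_pvals, method = "BH")
significant_pathways <- names(fdr)[fdr < 0.05]
```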
Figure 4 illustrates heatmaps of the FDRs of pathways obtained under the different centrality measurements. A complete list of p-values and FDRs is tabulated in Additional file 3 and Additional file 4. Among the 11 pathways for which our method agrees with traditional ORA using at least one centrality, the PLK pathway, MET pathway and MAPK pathway are directly related to liver cancers [41, 42]. The MAPK pathway is significant when nodes are weighted by in-largest reach (p-value = 0.001, FDR = 0.025), consistent with expected biological phenomena. The differential nodes are mainly located in the downstream part of the pathway, i.e. transcription factors (e.g. FOS) and cell cycle related factors (e.g. CDK5 and CDK5R1), while few of the upstream genes are included in our differential gene list. As the MAPK pathway is essentially a cascade of sequential interactions, weighting its nodes by out-largest reach renders it insignificant, whereas weighting by in-largest reach, which gives larger weight to the downstream nodes, marks the pathway as significant (Figure 5). In other words, if the pathway is rendered significant by in-largest reach weighting, we can infer that the downstream nodes are differentially affected.
Heatmap of FDRs in pathways. A) Pathways evaluated as significant by both traditional ORA and our method for at least one centrality measure; B) Pathways for which our method disagrees with traditional ORA. In each heatmap, columns are sorted by FDRs calculated from traditional ORA and rows are sorted through hierarchical clustering. Green and red denote insignificant and highly significant, respectively.
Summary of the MAPK-TRK pathway generated under in-largest reach centrality. A) Distribution of the in-largest reach centrality of differential nodes in the simulated pathway. The distribution of the differential nodes’ centralities in each simulation is summarized by the maximum, the 75th percentile, the median and the minimum; B) Distribution of the in-largest reach centrality of all nodes in the real pathway; C) Histogram of simulated scores for the pathway; D) Graph view of the pathway, in which the size of a node is proportional to its centrality value and nodes in red represent differential nodes. In figures A and B, dots are randomly shifted by small intervals for ease of visualization. In figures A and C, the real pathway score is marked with a red line.
Among the 8 pathways evaluated as insignificant by traditional ORA but significant by the centrality-based methods, four have been previously linked to liver cancers [42, 44, 45]. The AP-1 pathway is assessed as insignificant by traditional ORA because, of the 70 genes involved in the pathway, only 15 are differential. However, after mapping genes to the pathway nodes, we obtain 55 differential nodes among 114 pathway nodes. Because two key genes, FOS and JUN [46, 47], combine with a host of other genes to form activated complexes in the pathway, the mapping procedure increases the number of positions that these two genes occupy in the network. Therefore the AP-1 pathway becomes more significant under the equal weight condition than under traditional ORA. As another example, vascular endothelial growth factor (VEGFA) is a principal component of the VEGFR1 and VEGFR2 signaling pathway. At the cell membrane, the receptor complex containing VEGFA receives large quantities of extracellular information and relays it to intracellular proteins. VEGFA requires VEGFR2 to form an activated complex; hence the representative node possesses high values of both in-degree and out-degree, and the degree-weighted pathway is rendered significant (p-value = 0.002, FDR = 0.034 for in-degree; p-value = 0.007, FDR = 0.104 for out-degree). On the other hand, VEGFA itself is not differentially expressed, but its partner gene VEGFR2 is. Consequently, the abnormal state of one member gene results in a dysfunctional complex. This type of circumstance, which cannot be inferred by traditional ORA, emphasizes why nodes, rather than genes, should form the basic units in pathway analysis.
Pathway analysis can help researchers to understand biological aberrations at a systems level. The functionality of biological pathways depends upon complex gene interactions; therefore, pathway enrichment tools should highlight genes that play important roles in the pathway from a topological point of view. Here we proposed a systematic and extensible methodology that finds significant pathways using network centrality to weight the nodes. We demonstrated that the level of pathway significance depends on the pathway structure and on the choice of centrality measure. The method performed favorably when applied to real-world data.
Centrality can identify the central nodes in a pathway, and different centralities assign gene importance from different aspects. The use of centralities in biological networks can aid in explaining biological phenomena. In this work, we demonstrated the advantages of using multiple centrality measurements to obtain a complete view of the system. Pathway nodes, rather than genes, should form the basic units in pathway analysis, since many genes must aggregate as complexes in order to function; the focus on pathway nodes accommodates the fact that genes can be members of complexes or families, and may reside in several complexes at once. Finally, it should be noted that a high-quality, non-redundant pathway structure dataset is required. Projects like BioPAX, which aspire to the integration and exchange of biological pathway data, will greatly assist future pathway analysis.
Our method can reveal new findings that relate to, and can aid the understanding of, current biological problems. We consider that our method will become a valuable tool in the systematic analysis of biological pathways, and will help to extract more meaningful information from gene expression data.
To implement the method, a list of differential genes and a list of background genes, both using a common gene identifier (e.g. gene symbol or RefSeq ID), are required. A list of pathways and their topology, and a means of mapping genes to pathway nodes, are also required. In this study, 223 NCI-Nature pathways from PID (released September 9th, 2011) are included. The pathway data are parsed from the XML file provided on the PID FTP site. The Perl code for parsing can be obtained from the author’s website (http://mcube.nju.edu.cn/jwang/lab/soft/cepa/). The general workflow of the method is illustrated in Figure 6.
Workflow of the centrality-based pathway enrichment analysis. A typical figure on the left illustrates the corresponding step on the right side. The essential steps are: 1) Obtain a differentially expressed gene list. This list can be compiled using a variety of methods and sources; 2) Map genes to nodes; 3) Select several centrality measurements and calculate their values; 4) Weighting nodes by centrality, calculate the pathway-level score; 5) In simulations, repeat steps 1 to 4 for a user-specified number of cycles (1000 cycles were used in the current study) and generate a null distribution of pathway-level scores; 6) Calculate p-values and display the results summary.
PID provides mappings from UniProt ID to node ID. In this study, the gene symbol is selected as the primary identifier. The mapping from gene symbol to HGNC ID (obtained via the online “custom downloads” tool of the HGNC database) and the mapping from HGNC ID to UniProt ID (using idmapping.dat.gz on the UniProt FTP site) are first extracted. The final mapping from gene symbol to node ID is generated by merging these three mappings.
Two commonly used centralities, degree and shortest path betweenness, are selected as initial candidate measurements. Degree centrality quantifies the number of neighboring nodes to which the node of interest is directly connected, while betweenness centrality measures the amount of information streaming through a given node.
To measure the importance of nodes in the network from multiple aspects, we defined an additional centrality: largest reach. This centrality is based on the shortest path between two nodes and the value of the centrality is affected by all other nodes in the network. The largest reach centrality measures how far a node can send or receive information. It is defined as the largest length of the shortest path from node v to all other nodes in the network (see Equations 3 and 4 where d(w, v) refers to the shortest path length between nodes v and w). In a directed network, this measure is denoted as in-largest reach or out-largest reach.
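A compact R/igraph sketch of this centrality is given below; how unreachable node pairs are treated is a choice made here for illustration (they are simply ignored), since Equations 3 and 4 are not reproduced in this text:

```r
library(igraph)

# Largest reach from the matrix of shortest path lengths.
# mode = "out": how far a node can send information;
# mode = "in":  how far away information can originate and still reach the node.
largest_reach <- function(g, mode = c("out", "in")) {
  mode <- match.arg(mode)
  d <- distances(g, mode = mode)   # shortest path lengths (Inf if unreachable)
  diag(d) <- NA
  apply(d, 1, function(x) {
    x <- x[is.finite(x)]
    if (length(x) == 0) 0 else max(x)
  })
}
```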
Users of our system can replace the provided centrality measures with centrality measurements of their own interest. It is recommended that the choice of centrality be guided by biological plausibility and relevance.
where s is the score of the pathway, w_i is the weight of the ith node and reflects the importance of the node, n is the number of nodes in the pathway, and d_i indicates whether the ith node is differentially affected or not.
In our model, we weight the nodes by network centrality. Because a network centrality value can be zero, an additional term is added to the weight. In Equation 7, α is a small positive number that ensures all weights are positive; it is chosen to exert only a marginal effect upon the weight. The default value of α is 1/100 of the minimum non-zero weight.
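In R this adjustment is a one-liner; the sketch below assumes `w` already holds the raw centrality values:

```r
# Weight adjustment of Equation 7: add a small alpha so that zero-centrality
# nodes still receive a positive weight. Default: 1/100 of the minimum
# non-zero weight.
adjust_weight <- function(w) {
  alpha <- min(w[w > 0]) / 100
  w + alpha
}
```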
where p_diff is the probability that a gene is differentially expressed. It is calculated as the proportion of differentially expressed genes on the microarray.
$$P(S \ge s) = \sum_{k=0}^{n} \binom{n}{k}\, p_{\text{diff}}^{\,k}\, (1 - p_{\text{diff}})^{\,n-k}\; P_w\!\left(\sum_{i=1}^{k} w_i \ge s\right)$$
The binomial term of Equation 12 is the probability of obtaining k differential genes out of n genes, and the second term is the probability that the sum of the k differential genes’ weights is equal to or larger than s. The final probability P(S ≥ s) is the summation over all values of k.
Since genes are independent, provided that P_w(w) is known, the distribution of the summation of w can be calculated. For instance, given a pathway with an ER random network structure in which nodes are weighted by degree, w follows a binomial distribution and thus P(Σ_i w_i) also follows a binomial distribution.
In applications, the theoretical distribution is difficult to calculate because the weight distribution is not easily determined and nodes are no longer independent after the mapping procedure. A non-parametric null distribution of s can instead be generated through simulation. For every gene in a pathway, we guess whether it is differentially expressed: similar to tossing a coin, we assume that each gene has probability p_diff (calculated by Equation 10) of being differentially expressed. Each simulation yields a list of simulated differentially expressed genes in the pathway, which is then mapped to the pathway nodes. The pathway structure is unchanged and the simulated pathway score is re-calculated from Equations 5 and 6. The significance is calculated as the proportion of simulated scores that exceed the real score (Equation 13).
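The whole procedure fits in a short R function; the sketch below reuses the node-mapping idea from the earlier example and treats all object names as placeholders:

```r
# Non-parametric null distribution for a real pathway.
# node2gene:     pathway nodes -> member genes (as in the mapping sketch above)
# w:             node weights (adjusted centralities)
# pathway_genes: all genes in the pathway
# p_diff:        genome-wide proportion of differential genes (Equation 10)
pathway_pvalue <- function(node2gene, w, pathway_genes, diff_genes,
                           p_diff, n_sim = 1000) {
  score <- function(diff) {
    d <- vapply(node2gene, function(g) any(g %in% diff), logical(1))
    sum(w * d)
  }
  s_real <- score(diff_genes)
  s_null <- replicate(n_sim, {
    sim_diff <- pathway_genes[runif(length(pathway_genes)) < p_diff]
    score(sim_diff)
  })
  mean(s_null >= s_real)   # Equation 13
}
```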
The centrality-based ORA enrichment method yielded plausible, biologically relevant results in the simulation study and in the real-world data analysis. However, an oft-mentioned drawback of ORA is that an arbitrary cutoff must be imposed to obtain the differential gene list, with the following consequences: 1) the resulting pathway or network may be sensitive to the cutoff; in the centrality-based extension of ORA, this effect can be critical when a high-scoring node falls marginally to one side of the imposed cutoff; 2) in some circumstances, there are too few differential genes to apply ORA; 3) the binary transformation of expression data leads to loss of information. To address these issues, researchers have developed the GSA framework, which utilizes all gene expression values. Like traditional ORA, however, GSA assumes that genes in pathways occupy unvarying positions in the topological structure. We propose that our centrality-based enrichment methodology can be similarly extended to GSA. In this section, we suggest, but do not implement, a conceptual methodological extension to the GSA method.
where w is the weight vector and the transformation function f acts upon the product of w and d. Equation 15 incorporates centrality weight into the original node-level statistic. To prevent w from overpowering d (or vice versa) when both vectors contain continuous variables, we propose that w and d should be normalized. The null distribution of the pathway score could then be generated by permuting the gene expression matrix.
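Since the authors describe this extension only conceptually, the following R fragment is just one way the idea could be written down; the min-max rescaling and the particular choice of f are assumptions made for illustration, not part of the paper:

```r
# Conceptual sketch only: combine a node-level statistic d (e.g. a per-node
# expression statistic) with a centrality weight w, both rescaled to [0, 1]
# so that neither vector dominates the other.
rescale01 <- function(x) (x - min(x)) / (max(x) - min(x))

gsa_pathway_score <- function(w, d, f = function(x) sum(abs(x))) {
  f(rescale01(w) * rescale01(d))
}

# Significance would then be assessed by recomputing the score after
# permuting the sample labels of the expression matrix.
```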
The method proposed in this article has been implemented in an R package named CePa, which is available on CRAN (http://www.r-project.org/). In the CePa package, four pathway catalogues from PID, namely NCI-Nature, BioCarta, Reactome and KEGG, have been integrated. Centrality calculation and network visualization are handled by the igraph package. A web-based version of CePa is available for researchers who are not familiar with R programming (http://mcube.nju.edu.cn/cgi-bin/cepa/main.pl), in which Cytoscape Web is used for network visualization.
This work was supported by grants from the National Natural Science Foundation of China (30890044, 31071232 and 31170751), International Bureau of the Federal Ministry of Education and Research (CHN08/031) and Jiangsu Province Innovation Fund for PhD Candidates (CX10B_014Z).
ZG developed the algorithm, performed the analysis, implemented the software and wrote the manuscript. JL and KC implemented the online version of the software. JZ and JW conceived the study, and helped to draft the manuscript. All authors have read and approved the final manuscript.
|
0.945213 |
Romanticism and Revolution. This political cartoon by James Gillray (1757-1815) illustrates the difference between opposing political views of the French Revolution by contrasting a dignified British freedom with the events of the Reign of Terror, or the rule of fear masquerading as liberty.
The French Revolution is widely recognized as one of the most influential events of late eighteenth- and early nineteenth-century Europe, with far reaching consequences in political, cultural, social, and literary arenas. Although scholars such as Jeremy Popkin point to more concrete political issues as grounds for the upheaval, supporters of the Revolution rallied around more abstract concepts of freedom and equality, such as resistance to the King’s totalitarian authority as well as the economic and legal privileges given to the nobility and clergy. It is in this resistance to monarchy, religion, and social difference that Enlightenment ideals of equality, citizenship, and human rights were manifested. Shannon Heath. Romantic Politics. A printing press in William Hone's best-selling Political House that Jack Built (1819) shows the faith of reformers in language's power to produce political change.
Romanticism emerged amidst political tumult, as evidenced by the French Revolution (1789) and First Reform Act (1832) that conventionally bookend this literary-historical period. The Culture of Rebellion in the Romantic Era. Eugene Delacroix, Liberty Leading the People, 1830.
One of Delacroix’s best known works, the painting depicts a bare-breasted Liberty leading Parisians of mixed social and economic backgrounds into battle. The Romantic era is typically noted for its intense political, social, and cultural upheavals. Victor Marie Hugo: Master of the Romantic Era. Romanticism. If the Enlightenment was a movement which started among a tiny elite and slowly spread to make its influence felt throughout society, Romanticism was more widespread both in its origins and influence.
No other intellectual/artistic movement has had comparable variety, reach, and staying power since the end of the Middle Ages. Beginning in Germany and England in the 1770s, by the 1820s it had swept through Europe, conquering at last even its most stubborn foe, the French. It traveled quickly to the Western Hemisphere, and in its musical form has triumphed around the globe, so that from London to Boston to Mexico City to Tokyo to Vladivostok to Oslo, the most popular orchestral music in the world is that of the romantic era. Famous people of the Romantic Period. The Romantic period or Romantic era lasted from the end of the Eighteenth Century towards the mid 19th Century.
Romanticism was a movement which highlighted the importance of: the individual emotions, feelings and expressions of artists. It rejected rigid forms and structures. Instead it placed great stress on the individual, unique experience of an artist/writer. Romanticism gave great value to nature, and to an artist's experience within nature. This was in stark contrast to the rapid industrialisation of society in the Nineteenth Century. Romanticism was considered idealistic – a belief in greater ideals than materialism and rationalism and in the potential beauty of nature and mystical experience. Romanticism was influenced by the ideals of the French and American revolutions, which sought to free man from a rigid autocratic society.
British Romanticism. The Romantic period was largely a reaction against the ideology of the Enlightenment period that dominated much of European philosophy, politics, and art from the mid-17th century until the close of the 18th century.
Whereas Enlightenment thinkers value logic, reason, and rationality, Romantics value emotion, passion, and individuality. Chris Baldick provides the following description: “Rejecting the ordered rationality of the Enlightenment as mechanical, impersonal, and artificial, the Romantics turned to the emotional directness of personal experience and to the boundlessness of individual imagination and aspiration” (222-3).
By the late 18th century in France and Germany, literary taste began to turn from classical and neoclassical conventions.
The generation of revolution and wars, of stress and upheaval had produced doubts on the security of the age of reason. Doubts and pessimism now challenged the hope and optimism of the 18th century. Men felt a deepened concern for the metaphysical problems of existence, death, and eternity. It was in this setting that Romanticism was born. Origins Romanticism was a literary movement that swept through virtually every country of Europe, the United States, and Latin America that lasted from about 1750 to 1870. The Romantic Style The term romantic first appeared in 18th-century English and originally meant "romancelike"-that is, resembling the fanciful character of medieval romances.
Romanticism stresses self-expression and individual uniqueness, and does not lend itself to precise definition.
|
0.994548 |
Context: Even in cases where two people share the same gene, they can produce widely differing amounts of the protein the gene codes for. This can lead to differences in physical characteristics, and it can also mean the difference between sickness and health. Segments of DNA called regulatory elements are one factor controlling how much of a particular protein the body produces. While researchers today can use algorithms to pick out genes from sequences of DNA, they have previously been unable to accurately distinguish regulatory elements from other non-coding DNA, let alone match those elements with the genes that they regulate. Researchers at the University of Pennsylvania, led by Vivian Cheung, have found a way to do just that.
Methods and Results: Using white blood cells from 94 people, the researchers identified more than 3,500 genes whose expression was similar among relatives but varied widely among people who were unrelated. These patterns of expression were then correlated with patterns of known genetic markers across the genome. Hundreds of genes’ expression was linked to particular genetic markers – far more than the number predicted by chance. About four-fifths of these markers were located more than 5,000 base pairs from the genes that they regulated; many were even on other chromosomes. Researchers found that some “hot spot” regions apparently influence the expression of more than 30 genes. In addition, many genes seem to be regulated by more than one region.
Why it matters: Researchers can finally study the genetic differences governing gene expression. The hot spots, which Cheung’s team calls “master regulators,” will help to tease out some of the mysteries that surround gene expression. More immediately, the techniques may allow researchers to use variation within genes and within regulatory elements to understand and treat disease. For years, geneticists have scoured the human genome for genes that contribute to complex traits, like susceptibility to depression or heart disease. Finding factors that control the genes is just as important but much more difficult. Now scientists should be better equipped to find the genetic variations that make a difference in matters of life and death.
Source: Morley, M. et al. (2004) Genetic analysis of genome-wide variation in human gene expression. Nature 430:743-7.
Context: Good health requires more than the right genes; those genes must also be able to switch on and off at the right time. In research involving animals or cell cultures, figuring out a gene’s function is much easier when scientists can turn it on at will. Led by Richard Mulligan, a group of researchers at Harvard Medical School and Children’s Hospital in Boston have crafted genes that come with an easily controlled on/off switch – a powerful research tool that has the potential to offer a new kind of gene therapy.
Methods and Results: The switch consists of a ribozyme, an enzyme made up of RNA. Laising Yen, a postdoc in Mulligan’s lab, and colleagues inserted a ribozyme sequence into a gene that coded for an easily detectable protein. Cells with the altered gene made long stretches of messenger RNA; part of the RNA made the ribozyme, while the rest carried instructions for making the protein. The researchers tinkered with different ribozymes, eventually creating ones that were able to chop up the RNA before the protein it coded for could be made. In the cell cultures and living mice containing the ribozyme sequence, protein production dropped to nearly undetectable levels. What’s more, the researchers were able to deactivate the ribozyme using certain drugs – essentially turning on the inserted gene by turning off the off switch. Such treatments succeeded in restoring gene expression by up to 50 percent.
Why it matters: The researchers imagine creating genetic therapies in which the onset of a physiological condition would activate the genes necessary to manage it. Genetically engineered cells might be able to secrete insulin in accordance with glucose levels, freeing diabetics from constant blood monitoring and insulin injection. For the moment, however, such dreams are far from reality. Closer at hand and still very exciting are discovery techniques that would allow researchers to monitor the effects produced by several genes in a single animal, or to analyze how a gene adjusts to an organism’s aging or to different stages of a disease.
Source: Yen, L. et al. (2004) Exogenous control of mammalian gene expression through modulation of RNA self-cleavage. Nature 431:471-6.
Context: Finding ways to get drugs to the right part of the body is a constant challenge for drugmakers. The intestines would seem easier to treat than other areas, as drugs taken orally should eventually arrive there. But a number of promising drugs for the treatment of colitis, an intensely uncomfortable inflammation of the large intestine, become waylaid in the mucus of the small intestine and never reach their target. Now, a group of researchers led by Lothar Steidler from Ghent University in Belgium has genetically modified bacteria to secrete such a drug as they travel through the gut.
Methods and Results: The researchers engineered Lactococcus lactis so that it would produce trefoil factors, shamrock-shaped proteins that hasten healing and protect the gut from injury. The modified bacteria proved more effective than the purified protein alone at preventing and treating colitis in mice. Outside the body, the bacteria do not survive.
Why it matters: The use of genetically modified (GM) organisms as drug delivery devices is moving toward the mainstream. Another GM bacterium produced by these researchers, one that secretes the anti-inflammatory drug interleukin-10, is being tested in European clinical trials as a treatment for inflammatory bowel disease. Other GM bacteria, to be delivered to the nose and vaginal tract, are being studied to prevent infectious disease. Still another may deliver a cancer vaccine. In the 1980s and ’90s, recombinant DNA technology ushered in an era of new protein drugs; despite substantial regulatory and technical obstacles, bacteria may prove an effective way to deliver them.
Source: Vandenbroucke, K. et al. (2004) Active delivery of trefoil factors by genetically modified Lactococcus lactis prevents and heals acute colitis in mice. Gastroenterology 127:502-513.
Context: Strokes kill neurons by depriving them of oxygen. Without oxygen, neurons have difficulty producing the molecule ATP, their source of energy. This prevents them from performing housekeeping chores, including the important task of pulling glutamate, a message-transmitting chemical, back into the neuron after its message has been received; glutamate keeps sending signals to neighboring neurons, resulting in a deadly influx of calcium ions. However, drugs designed to curb stroke damage by blocking glutamate’s effects have shown disappointing results in clinical trials. New research, led by Zhigang Xiong at the Legacy Clinical Research and Technology Center in Portland, OR, shows another strategy that seems more promising.
Methods and Results: To make ATP without oxygen, cells use an inefficient method that produces lactic acid and protons as by-products. Neurons using this method become more acidic; they also become more susceptible to damage, but it wasn’t clear why. Xiong and his colleagues speculated that acid-sensing ion channels (ASICs) might move calcium into the cell, thereby accelerating neuronal damage. After showing that strokelike conditions activated ASICs, and that ASICs allowed calcium into the neuron, they studied mice lacking the gene for ASIC1a, which is highly expressed in the brain. When subjected to simulated strokes, mice without the gene fared better than mice with it, even when treated with memantine, a drug that blocks the actions of glutamate. The researchers also discovered that small molecules that block ASICs can protect against stroke injury. In rats treated with one such molecule before simulated strokes, the rate of neuronal death was less than half that among untreated rats.
Why it matters: Drugs that block ASICs will likely face many of the same challenges as those that block glutamate: they must be administered quickly after a stroke and could have unintended effects on brain function. Nonetheless, small molecules have already shown the capacity to prevent the type of brain damage caused by this newly described mechanism. Thus, these results offer hope against a devastating cause of disability and the third-leading cause of death in the United States.
Source: Xiong, Z. G. et al. (2004) Neuroprotection in ischemia: blocking calcium-permeable acid-sensing ion channels. Cell 118: 687-698.
|
0.998906 |
How to receive email notification for successful submissions?
When someone completes the form and submits the form, how am I notified of this form?
You will receive an email notification when a submission has been completed. To set it up, you can follow this guide: http://www.jotform.com/help/25-Setting-Up-Email-Notifications.
To properly set it up, you may also want to check this guide: http://www.jotform.com/help/208-How-to-setup-email-alerts-to-prevent-email-bouncing-related-issues. This should help to prevent your email notifications for bouncing related issues.
|
0.999921 |
What does a wrought iron worker in Grand Rapids, MI do?
An ornamental wrought iron worker fabricates bulk iron into the types of structures that a particular home calls for, then installs them onsite. Some of these pieces are mass produced, others custom crafted. Many ornamental wrought iron workers also have the design skills to fashion unique decorative components. Wrought iron workers must be trained in safety to create elements such as railings and balconies that both satisfy a home’s aesthetic needs and meet Grand Rapids, MI building codes. Here are some related professionals and vendors to complement the work of wrought iron workers: Fencing & Gates, Decks, Patios & Outdoor Enclosures, Cladding & Exteriors.
Find a wrought iron worker on Houzz. Narrow your search in the Professionals section of the website to Grand Rapids, MI wrought iron work. You can also look through Grand Rapids, MI photos to find examples of ironwork that you like, then contact the ironworker who fabricated them.
|
0.952159 |
What are the alternatives to alternative finance?
I’m sure you’ve heard of alternative finance being hailed as the saviour of small businesses. If you believe some of the stories in the press, the banks are not lending to small businesses at all, and alternative finance options such as crowdfunding or peer-to-peer lending are, in today’s market, the only ways to get any form of finance for a business.
Alternative finance can be a brilliant option for some businesses, as the range of possible commercial finance options has increased. However, they are not necessarily right for each and every business. And the traditional banks are still lending to businesses, even if the majority are only focussed on specific businesses and industry sectors. What this means for potential borrowers, is that there are more options than ever, but it has become increasingly harder to identify which avenue is right for each particular borrower.
And that is where we come in as commercial finance brokers. We understand that your expertise is in running your business, and therefore having to shop around for finance becomes a tedious process. As brokers with access to a wide range of finance providers, from the traditional high street banks, to niche and specialist lenders, and even crowdfunders and peer-to-peer lenders, we can focus on the right source of finance for your business, so you can focus on running and developing your business.
|
0.965747 |
Where does the phrase “He is risen. He is risen, indeed. Alleluia!” come from?
|
0.921617 |
For the grunge rock band, see Ileum (band). For other uses, see Ilium (disambiguation).
The ileum /ˈɪliəm/ is the final section of the small intestine in most higher vertebrates, including mammals, reptiles, and birds. In fish, the divisions of the small intestine are not as clear and the terms posterior intestine or distal intestine may be used instead of ileum.
The cecal fossa. The ileum and cecum are drawn backward and upward.
The ileum follows the duodenum and jejunum and is separated from the cecum by the ileocecal valve (ICV). In humans, the ileum is about 2–4 m long, and the pH is usually between 7 and 8 (neutral or slightly basic).
Ileum is derived from the Greek word eilein, meaning "to twist up tightly".
The ileum is the third and final part of the small intestine. It follows the jejunum and ends at the ileocecal junction, where the terminal ileum communicates with the cecum of the large intestine through the ileocecal valve. The ileum, along with the jejunum, is suspended inside the mesentery, a peritoneal formation that carries the blood vessels supplying them (the superior mesenteric artery and vein), lymphatic vessels and nerve fibers.
The ileum has more fat inside the mesentery than the jejunum.
The diameter of its lumen is smaller and has thinner walls than the jejunum.
Its circular folds are smaller and absent in the terminal part of the ileum.
While lymphoid tissue is present along the length of the intestinal tract, only the ileum has abundant Peyer's patches, unencapsulated lymphoid nodules that contain large numbers of lymphocytes and other cells of the immune system.
A single layer of tall cells that line the lumen of the organ. The epithelium that forms the innermost part of the mucosa has five distinct types of cells that serve different purposes, these are: enterocytes with microvilli, which digest and absorb nutrients; goblet cells, which secrete mucin, a substance that lubricates the wall of the organ; Paneth cells, most common in the terminal part of the ileum, are only found at the bottom of the intestinal glands and release antimicrobial substances such as alpha defensins and lysozyme; microfold cells, which take up and transport antigens from the lumen to lymphatic cells of the lamina propria; and enteroendocrine cells, which secrete hormones.
A thin layer of smooth muscle called muscularis mucosae.
A submucosa formed by dense irregular connective tissue that carries the larger blood vessels and a nervous component called submucosal plexus, which is part of the enteric nervous system.
An external muscular layer formed by two layers of smooth muscle arranged in circular bundles in the inner layer and in longitudinal bundles in the outer layer. Between the two layers is the myenteric plexus, formed by nervous tissue and also a part of the enteric nervous system.
General structure of the gut wall. Brunner's glands are not found in the ileum, but are a distinctive feature of the duodenum.
Goblet cells in the wall of an ileum vili. At its sides, enterocytes are visible over a core of lamina propria.
Cross section of ileum with a Peyer's patch circled.
The small intestine develops from the midgut of the primitive gut tube. By the fifth week of embryological life, the ileum begins to grow longer at a very fast rate, forming a U-shaped fold called the primary intestinal loop. The proximal half of this loop will form the ileum. The loop grows so fast in length that it outgrows the abdomen and protrudes through the umbilicus. By week 10, the loop retracts back into the abdomen. Between weeks six and ten the small intestine rotates anticlockwise, as viewed from the front of the embryo. It rotates a further 180 degrees after it has moved back into the abdomen. This process creates the twisted shape of the large intestine.
In the fetus the ileum is connected to the navel by the vitelline duct. In roughly 2−4% of humans, this duct fails to close during fetal development, leaving a remnant called Meckel's diverticulum.
The function of the ileum is mainly to absorb vitamin B12 and bile salts and whatever products of digestion were not absorbed by the jejunum. The wall itself is made up of folds, each of which has many tiny finger-like projections known as villi on its surface. In turn, the epithelial cells that line these villi possess even larger numbers of microvilli. Therefore, the ileum has an extremely large surface area both for the adsorption (attachment) of enzyme molecules and for the absorption of products of digestion. The DNES (diffuse neuroendocrine system) cells of the ileum secrete various hormones (gastrin, secretin, cholecystokinin) into the blood. Cells in the lining of the ileum secrete the protease and carbohydrase enzymes responsible for the final stages of protein and carbohydrate digestion into the lumen of the intestine. These enzymes are present in the cytoplasm of the epithelial cells.
The villi contain large numbers of capillaries that take the amino acids and glucose produced by digestion to the hepatic portal vein and the liver. Lacteals are small lymph vessels, and are present in villi. They absorb fatty acid and glycerol, the products of fat digestion. Layers of circular and longitudinal smooth muscle enable the chyme (partly digested food and water) to be pushed along the ileum by waves of muscle contractions called peristalsis. The remaining chyme is passed to the colon.
In veterinary anatomy, the ileum is distinguished from the jejunum by being that portion of the jejunoileum that is connected to the caecum by the ileocecal fold.
The ileum is the short terminal part of the small intestine and its connection to the large intestine. It is suspended by the caudal part of the mesentery (mesoileum) and is attached, in addition, to the cecum by the ileocecal fold. The ileum terminates at the cecocolic junction of the large intestine, forming the ileal orifice. In the dog the ileal orifice is located at the level of the first or second lumbar vertebra, in the ox at the level of the fourth lumbar vertebra, and in the sheep and goat at the level of the caudal point of the costal arch. By active muscular contraction of the ileum, and closure of the ileal opening as a result of engorgement, the ileum prevents the backflow of ingesta and the equalization of pressure between the jejunum and the base of the cecum. Disturbance of this sensitive balance is not uncommon and is one of the causes of colic in horses. During any intestinal surgery, for instance during appendectomy, the distal 2 feet of the ileum should be checked for the presence of Meckel's diverticulum.
^ Guillaume, Jean; Praxis Publishing; Sadasivam Kaushik; Pierre Bergot; Robert Metailler (2001). Nutrition and Feeding of Fish and Crustaceans. Springer. p. 31. ISBN 1-85233-241-7. ISBN 9781852332419. Retrieved 2009-01-09.
^ a b Moore KL, Dalley AF, Agur AM (2013). Clinically Oriented Anatomy, 7th ed. Lippincott Williams & Wilkins. pp. 241–246. ISBN 978-1-4511-8447-1.
^ a b c Ross M, Pawlina W (2011). Histology: A Text and Atlas. Sixth edition. Lippincott Williams & Wilkins. ISBN 978-0-7817-7200-6.
^ Santaolalla R, Fukata M, Abreu MT (2011). "Innate immunity in the small intestine". Current Opinion in Gastroenterology. 27 (12): 125–131. doi:10.1097/MOG.0b013e3283438dea. PMC 3502877. PMID 21248635.
^ Sagar J.; Kumar V.; Shah D. K. (2006). "Meckel's diverticulum: A systematic review". Journal of the Royal Society of Medicine. 99 (10): 501–505. doi:10.1258/jrsm.99.10.501. PMC 1592061. PMID 17021300.
^ Cuvelier, C.; Demetter, P.; Mielants, H.; Veys, EM.; De Vos M, . (Jan 2001). "Interpretation of ileal biopsies: morphological features in normal and diseased mucosa". Histopathology. 38 (1): 1–12. doi:10.1046/j.1365-2559.2001.01070.x. PMID 11135039.
Wikimedia Commons has media related to Ileum.
Look up ileum in Wiktionary, the free dictionary.
Anatomy photo:37:11-0101 at the SUNY Downstate Medical Center – "Abdominal Cavity: The Jejunum and the Ileum"
|
0.999197 |
Search Results: 1 - 10 of 217106 matches for " Sara Maldonado-Martín "
Abstract: Our objective was to investigate the influence of pedaling technique on gross efficiency (GE) at various exercise intensities in twelve elite cyclists ( ·VO2max=75.7 ± 6.2 mL·kg-1·min-1). Each cyclist completed a ·VO2max assessment, skinfold measurements, and an incremental test to determine their lactate threshold (LT) and onset of blood lactate accumulation (OBLA) values. The GE was determined during a three-phase incremental exercise test (below LT, at LT, and at OBLA). We did not find a significant relationship between pedaling technique and GE just below the LT. However, at the LT, there was a significant correlation between GE and mean torque and evenness of torque distribution (r=0.65 and r=0.66, respectively; p < 0.05). At OBLA, as the cadence frequency increased, the GE declined (r=-0.81, p < 0.05). These results suggest that exercise intensity plays an important role in the relationship between pedaling technique and GE.
Abstract: The aims of this pilot study are, on the one hand, to evaluate the upper body aerobic characteristics of junior surfers competing in the European branch of the Association of Surfing Professionals (ASP) and, on the other, to assess the relationship between the junior surfers' upper body aerobic characteristics and their ranking position. Ten surfers competing in the European junior branch of the ASP took part in the study. The maximal oxygen uptake (VO2MAX), the maximum power output (WMAX), the maximum lactate concentration [La]MAX, the maximum heart rate (HRMAX) and the power output at the intensities where the lactate threshold and the onset of blood lactate accumulation occur (WLT and WOBLA) were determined during an incremental maximal test on a swim bench ergometer. No significant relationship was observed between ranking position and the parameters at maximal intensity (VO2PEAK, WMAX, HRMAX and [La]MAX). The WLT (W · kg-1) and the WOBLA (W · kg-1) were significantly related to ranking position (r= -0.69, p= 0.02; r= -0.72, p= 0.01, respectively). Resumen (translated from Spanish): The aims of this pilot study are, on the one hand, to determine the aerobic characteristics of junior surfers competing in the European branch of the Association of Surfing Professionals (ASP) and, on the other, to analyse the relationship of these characteristics with ranking position. Ten surfers took part in the study. A maximal incremental test was performed on an ergometer. Maximal oxygen uptake (VO2MAX), maximum power output (WMAX), maximum heart rate (HRMAX), maximum blood lactate production ([La]MAX), and the power output at the lactate threshold (WLT) and at the onset of blood lactate accumulation (WOBLA) were determined. No significant relationship was observed between ranking position and the parameters at maximal intensity (VO2MAX, WMAX, HRMAX and [La]MAX). WLT (W · kg-1) and WOBLA (W · kg-1) showed a significant relationship with ranking position (r= -0.69, p= 0.02; r= -0.72, p= 0.01, respectively).
Abstract: The main question about the moral treatment of non-human animals is no longer merely a matter of applied ethics taken up by some intellectuals, but a social demand that urges resolution. In response to this demand, Martha Nussbaum offers, in contrast to the proposals of Peter Singer, Richard M. Hare and Tom Regan, a new perspective from which to approach the speciesist problem: the capabilities approach. In this article we set out the main ideas of Nussbaum's initiative and, by way of discussion, question some aspects of her capabilities approach.
Abstract: In this paper, we analyse the distributional patterns of adult helminth parasites of freshwater fishes with respect to the main hydrological basins of Mexico. We use the taxonomic distinctness and the variation in taxonomic distinctness to explore patterns of parasite diversity and how these patterns change between zoogeographical regions. We address questions about the factors that determine the variation of observed diversity of helminths between basins. We also investigate patterns of richness, taxonomic distinctness and distance decay of similarity amongst basins. Our analyses suggest that the evolution of the fauna of helminth parasites in Mexico is mostly dominated by independent host colonization events and that intra-host speciation could be a minor factor explaining the origin of this diversity. This paper points out a clear separation between the helminth faunas of the northern (nearctic) and southern (neotropical) components in Mexican continental waters, suggesting the availability of two distinct taxonomic pools of parasites in Mexican drainage basins. The data identify Mexican drainage basins as units inhabited by freshwater fishes hosting a mixture of neotropical and nearctic species; in addition, the data confirm the distinction between neotropical and nearctic basins/helminth faunas. The neotropical basins of Mexico host a richer and more diversified helminth fauna, including more families, genera and species, compared to the less rich and less diverse helminth fauna of the nearctic basins. The present analysis confirms distance decay as one of the important factors contributing to the patterns of diversity observed. The hypothesis that helminth diversity could be explained by the ichthyological diversity of the basin received no support from the present analysis.
Abstract: In order to draw patterns in helminth parasite composition and species richness in Mexican freshwater fishes we analyse a presence-absence matrix representing every species of adult helminth parasites of freshwater fishes from 23 Mexican hydrological basins. We examine the distributional patterns of the helminth parasites with regard to the main hydrological basins of the country, and in doing so we identify areas of high diversity and point out the biotic similarities and differences among drainage basins. Our dataset allows us to evaluate the relationships among drainage basins in terms of helminth diversity. This paper shows that the helminth fauna of freshwater fishes of Mexico can characterise hydrological basins the same way as fish families do, and that the basins of south-eastern Mexico are home to a rich, predominantly Neotropical, helminth fauna whereas the basins of the Mexican Highland Plateau and the Nearctic area of Mexico harbour a less diverse Nearctic fauna, following the same pattern of distribution of their fish host families. The composition of the helminth fauna of each particular basin depends on the structure of the fish community rather than on the limnological characteristics and geographical position of the basin itself. This work shows distance decay of similarity and a clear linkage between host and parasite distributions.
Beans are rich in dietary fiber and polyphenols; however, growing conditions may affect the occurrence of these components. The effect of irrigation and rain fed conditions on dietary fiber, indigestible fraction, polyphenols and antioxidant capacity of Black 8025 and Pinto Durango bean cultivars grown in Mexico has been determined. Total dietary fiber decreased in beans grown under rain fed conditions compared to those grown under irrigation. The water regimes had an effect on the total indigestible fraction for Black 8025 bean. The extractable polyphenols were affected by the water regimes, while the antioxidant capacity of extractable and non-extractable polyphenols was dependent on the bean variety. Cooking the beans altered the extractable and non-extractable polyphenols and the antioxidant capacity. The antioxidant properties and, to some extent, the digestibility of non-digestible carbohydrates of the beans were also affected by the water regimes. This information could be taken into account in dry bean breeding programs to improve the nutritional quality of beans.
|
0.999999 |
Is AI (artificial intelligence) fundamentally lying?
No, it is fundamentally exact and precise, following with reliable exactitude the logic and the data it has been instructed and coded to follow.
Lying has absolutely no meaning in this context.
|
0.938681 |
"Neha is sleeping on the floor."
Could ज़मीन पर नेहा सो रही है be correct?
That would mean "On the floor, Neha is sleeping".
|
0.996531 |
To the Hindu, this idea has been an active force in defining the 'Eternal Dharma.' It has been for Hinduism what the infinite Divine Self of Advaita is to existence, remaining forever unchanged and self-luminous, central and pervasive, in spite of all the chaos and flux around it.
Hinduism rests on the spiritual bedrock of the Vedas, hence Veda Dharma, and their mystic issue, the Upanishads, as well as the teachings of many great Hindu gurus through the ages.
Thus, Hindu image worship is a form of iconolatry, in which the symbols are venerated as putative sigils of divinity, as opposed to idolatry, a charge often levied (erroneously) at Hindus.
Hinduism is the western term for the religious beliefs and practices of the vast majority of the people of India.
Hinduism is a synthesis of the religion brought into India by the Aryans (c.1500 B.C.) and indigenous religion.
The first phase of Hinduism was early Brahmanism, the religion of the priests or Brahmans who performed the Vedic sacrifice, through the power of which proper relation with the gods and the cosmos is established.
In general, Hindu views are broad and range from monism, dualism, pantheism, panentheism, alternatively called monistic theism by some scholars, and strict monotheism, but are not polytheistic as outsiders perceive the religion to be.
Hinduism has often been confused to be polytheistic as many of Hinduism's adherents are monists, and view multiple manifestations of the one God or source of being.
The post- Vedic Hindu scriptures form the latter category, the most notable of which are the Mahabharata and the Ramayana, major epics considered scripture by most followers of Sanatana Dharma, their stories arguably familiar to the vast majority of Hindus living in the Indian subcontinent, if not abroad.
The presence of different schools and sects within Hinduism should not be viewed as a schism.
It is the Smarta view that dominates the view of Hinduism in the West as Smarta belief includes Advaita belief and the first Hindu saint, who significantly brought Hinduism to the west was Swami Vivekananda, an adherent of Advaita.
The newest and least numerous denominations are comprised of Balinese Hindus, who make up a sect of Hinduism that once flourished on the nearby island of Java until late 16th century, when a vast majority of its adherents converted to Islam.
Hinduism (सनातन धर्म; commonly called Sanātana Dharma, roughly Perennial Faith by Hindus) is the oldest major world religion still practised today and first among Dharma faiths.
Each of its four sects shares rituals, beliefs, traditions and gods with one another, but each sect has a different philosophy on how to achieve life's ultimate goal (moksa, liberation) and on their views of the Gods.
Some sects of Hinduism believe in a monotheistic ideal of Vishnu (often as Krishna), Siva, or Devi; this view does not exclude other gods, as they are understood to be aspects of the chosen ideal (e.g., to many devotees of Krishna, Shiva is seen as having sprung from Krishna's creative force).
The backward classes figuring in the State list opposed moves to include Kapus and its sub-castes like Balija, Telaga and Ontari in the national as well as State lists at a day-long public hearing held at the Sankshema Bhavan here on Tuesday by the National Commission on Backward Classes.
The proceedings were marked by chaotic scenes with shouting and counter-shouting by the representatives of Kapus and its sub-sects on one side and the entire bloc of listed backward classes on the other.
Be that as it may, the ranking from 39 to 55 also held a mirror to the backwardness from which the Kapus were suffering, they argued, pleading for their inclusion in the list.
Hinduism (सनातन धर्म; also known as Sanātana Dharma, and Vaidika-Dharma) is a worldwide religious tradition that is based on the bed-rock of the Vedas.
The post- Vedic Hindu scriptures form the latter category, the most notable of which are the Mahabharata and the Ramayana, major epics considered scripture by most followers of Sanatana Dharma, their stories arguably familiar to the vast majority of Hindus.
Hindus stress meditative insight, an intuition beyond the mind and body, a trait that is often associated with the ascetic god Shiva.
Americans' exposure to expressions of Hinduism largely is limited to travelogues of India, Bollywood song-and-dance movies and the Fox TV cartoon antics of Apu Nahasapeemapetilon, the Indian Kwik-E-Mart clerk on The Simpsons.
Hinduism, followed by 930 million people worldwide, 98% in India, actually is a 19th-century term for a spectrum of ancient teachings, just as Christianity covers denominations as varied as Catholics, Baptists and Jehovah's Witnesses.
As Christians are unified by the centrality of Christ, so Hindus, divided among thousands of sects and sub-sects, are unified by "one, all-pervasive supreme God, though he or she may be worshiped in many forms," says Suhag Shukla.
Five of the seven states listed by the U.S. State Department as supporting terrorism are Muslim, as are a majority of foreign organizations listed as engaged in terrorism.
A Hindu distinguishes the religion of the churches from the religion of Jesus Christ.
The Hindu is not satisfied merely to accept Christ in theory, but he strives hard to live the life, which Jesus lived, to lead a life of renunciation, of self-control and of love to all.
Hindu society is founded on, and governed by, the laws made by these three great sages.
The story of the birth of Rama and his brothers, their education and marriages, the exile of Sri Rama, the carrying off and recovery of Sita, his wife, the destruction of Ravana, the Rakshasa King of Lanka, and the reign of Sri Rama, are described in detail in Ramayana.
The more you know of India and Hinduism, the more you will honour and love it and the more thankful to the Lord you will be that you were born in India as a Hindu.
The preceptor for a whole society should be able to act as a perennial source of inspiration to the people, embodying the highest and the noblest national values and ethos.
It is the one supreme symbol held in universal reverence by all sects and castes, and all creeds and faiths of the Hindu people.
It is in fact the greatest unifying symbol of the entire Hindu world.
KANCHEEPURAM : Enlisting of all sub-sects in Naidu community under most backward class and declaring holiday for `Ugadi' in Tamil Nadu are some of the demands of the Federation of All Naidu Associations.
A free registration and exchange of horoscopes of brides and bridegrooms would be encouraged at the conference venue, he said.
Swami Radha is a German-born seeker who took initiation from Swami Sivananda of Rishikesh in 1956 and went on to found her own hatha-yoga school in Canada.
Hindu Astrology All India Astrological Services, by Mr.
Mother Anasuya Devi of Jillellamudi village South India was a renowned, if controversial, figure who shocked Hindu traditionalists by teaching that the world is real and not an illusion or maya as is commonly held.
OM, the most sacred syllable and quintessential symbol of Hinduism, represents the first manifestation of the unmanifest Brahman.
After including Yoga followers, Hinduism has around 1.05 billion followers worldwide.
Caste still plays a significant role in Hindu society; however, post Independence, caste is losing favour in India and caste-based discrimination has been illegitimised.
In Hinduism, Apa 'Water' is one of the Vasus in most Puranic lists, though he does not appear among them in earlier lists.
You can find it there under the keyword Apa (http://en.wikipedia.org/wiki/Apa). The list of previous authors is available here: version history (http://en.wikipedia.org/w/index.php?title=Apa&action=history).
Membership - Chronology - Religions, branches, traditions, denominations, sects, cults.
|
0.99989 |
Implementing hardware and software, we can make our homes calmer and more comfortable. A smart home application allows managing the devices in the house from a mobile phone. Check the best examples and see how different they can be.
The system consists of separate monitoring sensors and control devices, which the user places at their own discretion around the premises. According to their needs, the user configures the scenario of events and selects the required number of sensors and devices.
A smart home is a set of solutions for automating everyday activities that will save you from routine. It ranges from household appliances (from vacuum cleaners to devices controlled from a smartphone) to systems that control everything that happens in the apartment.
In fact, this App Development for the Smart Home story shows how such an app can improve the quality of life. Comfort consists of small things, and a smart house will take care of them. No more alarming thoughts: it is enough to send a message from your smartphone to the smart socket, and it will switch off the device it powers.
What are the main features of this app?
To simplify possibly complex variations, the application can include a set of situational models. This means that a main trigger is connected to one or several devices according to a certain rule. An excellent example: when it starts to rain, the system retracts the canopy. Or, if you want to have a movie night, you can activate several actions at the same time by pressing one button: reduce the room lighting, lower the shutters on the windows, and turn on the TV screen.
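As a rough illustration of such a situational model, the sketch below maps a scene name to a list of device actions and runs them all on a single trigger. It is only a hedged sketch: the scene names, the printed "device actions" and the SceneDemo class are hypothetical placeholders, not part of any particular smart home SDK.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SceneDemo {
    // Each scene (situational model) maps to the list of device actions it should run.
    static final Map<String, List<Runnable>> SCENES = new HashMap<>();

    static {
        SCENES.put("rain_detected",
                List.of(() -> System.out.println("Retracting canopy")));
        SCENES.put("movie_night", List.of(
                () -> System.out.println("Dimming lights to 20%"),
                () -> System.out.println("Lowering shutters"),
                () -> System.out.println("Turning on TV")));
    }

    // Fire every action registered for the given trigger.
    static void trigger(String event) {
        SCENES.getOrDefault(event, List.of()).forEach(Runnable::run);
    }

    public static void main(String[] args) {
        trigger("movie_night"); // one "button press" runs all three actions
    }
}

In a real application the Runnable actions would call the vendor's device API instead of printing, but the rule structure stays the same.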
Built-in AI systems allow you to manage devices with the help of voice commands. You have certainly already asked Siri about something interesting. Since this technology is constantly evolving, the analysis of different voice and verbal combinations is becoming deeper, and the commands are performed more accurately.
If you are going to leave the house for a while on a journey, then worries about break-ins can be quite reasonable. Using applications for a smart home, you can remotely monitor the situation in the house and simulate presence. For example, when it gets dark, lights can turn on in the house periodically, suggesting that the owners are at home.
Even in older age, many people wish to remain independent. This is a normal, healthy desire and it needs to be supported. Using the smart home application, you can monitor the situation with the help of motion sensors. And if a person falls, for example, you will quickly find out and can call an ambulance.
What is the value of the app and how do you develop it?
Low price and support for the most budget sensors in the market can reduce the cost of a typical system. Compatibility with different standards and communication protocols makes it possible to choose between sensors and devices from well-known manufacturers and their more budget counterparts.
As a rule, people plan to implement this system during renovation, at the design stage of the engineering networks. They begin to search the Internet for "smart homes" and land on the websites of companies that offer expensive solutions and present exorbitant bills, promising that everything will be done thoroughly, correctly and reliably.
Users understand that they cannot afford it, and they look towards simpler solutions. You must think about the simplicity of your app so that people can trust you and are not afraid of a complex system. You need to explain to the user that connecting and using the system is quick and easy. That will encourage them to adopt it.
It is understandable that people want to live not only comfortably but also with taste. So you have to explain to people that they can place speakers everywhere and also download the program to their phone. As a result, they can play different tracks in different rooms and set the same or different volume everywhere, directly from their smartphone or computer.
|
0.957989 |
The civil war, commonly known as the War of the Reform, that engulfed Mexico between 1858 and 1861 brought to light the underlying conflicts that had been present in Mexican society since independence. The conservative faction launched the Plan of Tacubaya and, with the support of the military and the clergy, dissolved congress and arrested Juárez. Juárez escaped and established a "government in exile" in Querétaro (the liberals later moved their capital to Veracruz). The initial military advantage was held by the conservatives, who were better armed and had plentiful supplies, but by 1860 the situation was reversed. The final battle took place just before Christmas 1860. The victorious liberal army entered Mexico City on January 1, 1861.
In March 1861, Juárez won the presidential election, but the war left the treasury depleted. Trade was stagnant, and foreign creditors were demanding full repayment of Mexican debts. Juárez proceeded to declare a moratorium on all foreign debt repayments. In October 1861, Spain, Britain, and France decided to launch a joint occupation of the Mexican Gulf coast to force repayment. In December troops from the three nations landed at Veracruz and began deliberations. Because the representatives of the three nations could not agree on the means to enforce the collection of the debt, Britain and Spain recalled their armies. Spurred by dreams of reestablishing an empire in the New World, the French remained and, with the support of Mexican conservatives, embarked on an occupation of Mexico.
In Puebla, the French troops encountered strong resistance led by one of Juárez's trusted men, General Ignacio Zaragoza, who defeated the foreigners on May 5, 1862 (May 5 is celebrated today as one of Mexico's two national holidays). The following May, Puebla was surrounded once again by French troops, who laid siege to the city for two months until it surrendered. The fall of Puebla meant easy access to Mexico City, and Juárez decided to evacuate the capital after receiving approval from congress.
The French encountered no resistance to their occupation of Mexico City. In June 1863, a provisional government was chosen, and in October a delegation of Mexican conservatives invited Ferdinand Maximilian Joseph von Habsburg of Austria to accept the Mexican crown, all according to the plans of French emperor Napoleon III. Maximilian was a well-intentioned monarch who accepted the crown believing that this act responded to the desire of a majority of Mexicans. Before departing for Mexico, Maximilian signed an agreement with Napoleon III, under which Maximilian assumed the debts incurred for the upkeep of the French army in Mexico. On June 12, 1864, the Emperor Maximilian I and his Belgian wife, Marie Charlotte Amélie Léopoldine, now called Empress Carlota, arrived in Mexico City. The republican government under Juárez retreated to the far north.
Maximilian, schooled in the European liberal tradition, was a strong supporter of Mexican nationalism. He soon found resistance from all quarters of the political spectrum, however. The conservatives expected the emperor to act against the Reform Laws, but Maximilian refused to revoke them. Mexican liberals appealed for military assistance from the United States on the basis of the French violation of the 1823 Monroe Doctrine, but the United States was involved in its own civil war. The end of the Civil War in the United States in 1865, however, prompted a more assertive foreign policy toward Mexico and released manpower and arms that were directed to help Juárez in his fight against the French. In Europe, France was increasingly threatened by a belligerent Prussia. By November 1866, Napoleon III began recalling his troops stationed in Mexico. Conservative forces switched sides and began supporting the Mexican liberals. United republican forces resumed their campaign on February 19, 1867, and on May 15, Maximilian surrendered. He was tried and, on Juárez's orders, was executed on June 19.
|
0.992963 |
It’s tempting to view IT optimization primarily as a mechanism for reducing IT service delivery costs, or for cutting the costs associated with IT capital projects. But IT doesn’t operate by the same rules as other parts of your business. That’s because you can leverage IT to reduce costs throughout the enterprise. In fact, a CEO recently told me that he was willing to spend more on IT if it would help him achieve significant cost reductions in other parts of the business. For him, the point was decreasing overall costs. He didn’t care where the savings came from.
The key is striking the right balance between IT capabilities and costs to maximize business value. Think about it like this: What’s the ratio of value-delivered to IT cost? Value here should be defined as a combination of increasing revenues, decreasing (overall) costs, reducing business risk and building new business capabilities. It may be helpful to consider the answer in terms of IT investment and spending in four categories: growth, innovation, maintenance and productivity. The percentage of your IT dollars allocated to each of these categories may vary based on the economy or other external factors (e.g., competitive positioning), but maintaining an appropriately balanced IT investment portfolio is key to long-term business success.
How does IT support your company’s value proposition? Are you spending the majority of your IT dollars accordingly?
In this challenging economy, am I allocating enough IT funds to build the capabilities we’ll need during the coming rebound?
Are all current IT capital projects being effectively managed and resourced to control overall costs and minimize delivery risks?
Are there opportunities to reduce IT operating costs in a manner which doesn’t impact overall business performance?
Whether your goal is to reduce total dollars spent (SG&A and IT), or to maximize “bang for the buck,” a balanced approach to IT optimization can be a great means of getting there.
|
0.999992 |
What is the difference between GPS and A-GPS (Assisted GPS)?
GPS uses the network of satellites to get your location, while A-GPS (Assisted GPS) uses the network of satellites along with information from the cell towers of your mobile operator to pinpoint your location. This added dimension makes A-GPS faster and more accurate.
|
0.993539 |
I understand that the main taste of dashi is umami (glutamic acid and inosinic acid). Aside from umami, is it possible to taste the difference? Would it be possible to teach someone to tell the source by tasting? Also, will the different sources cause different types of allergies?
For this question, let's suppose the dashi had been used as the broth for a plain Udon - no other ingredients, just the Udon and the dashi.
Sure, it's really not that difficult if you've actually tasted several kinds of dashi-jiru. It's more a matter of experience. There is decidedly a flavor to each category of dashi; it's not just "umami" or you would be able to get away with just throwing in a bunch of MSG into a bowl of water. But the flavor is mostly from aroma, like with other types of soup stock, since you haven't added any of the basic "tastes" other than the kombu and occasionally residual salt from the dried fish at the point the dashi is made.
At home, I often make a vegetarian one with kelp and porcini, sometimes with added cabbage; this the closest I've been able to come to the katsuo-dashi taste without actually using fish ingredients. It's a variation of a more standard Japanese kombu-dashi that's made with kombu and sometimes dried shiitake, but works for a broader range of dishes than the shiitake or minimalist kombu-dashi.
Katsuo-dashi: Made with dried, cured skipjack tuna (aka bonito), shaved into flakes. It has a slightly bitter taste. It is best made with the addition of dried kombu. Tastes richer with either thicker shavings of katsuo-bushi or just-before-cooking shaving, when possible.
Niboshi-dashi: Made with small dried whole fish (sometimes with heads removed, sometimes not, depending on the type of fish). Generally has a more pungent, richer flavor. There are several potential kinds of fish used and there was once substantial regional variation in this category, so the taste can vary quite a bit depending on the exact fish variety and how the fish was cured.
Kombu-dashi: Made just with kombu, usually for adding a little bit of aroma and complexity to simple dishes, but rarely for soups, except for nabemono, where you'd typically have additional seafood or meats in the hot-pot. (I usually lump kombu-shiitake or kombu-porcini dashi in this category, but for no particular reason).
Tori-dashi or gara-dashi: Made with chicken or other fowl bones. Except for the fact that mirepoix isn't usually in the Japanese equivalent of this, so you won't have the celery base, it's similar to a typical Western-style chicken broth.
Ton-kotsu-dashi, made with pork bones. Generally pretty hearty and full of fat.
Dashi is the analog to a soup stock, so it generally does not have added salt. Accordingly, you won't serve a straight dashi with an udon; you'd turn it into a broth, generally called "kakejiru". This will contain added salt, shoyu, and usually some combination of sugar, sake and/or mirin.
Before you put salt in it, you'll mostly "smell" a difference. After that, you can certainly at least distinguish between katsuo and kombu dashi, and with a little practice, you can distinguish between niboshi-dashi and katsuo-dashi. For me, the difference between katsuo-dashi and the instant "hon-dashi" is usually not subtle; the hon-dashi is a bit aggressive and harsh to my taste, but not everyone feels that way. (For what it's worth, I'm as close as practical to being a vegetarian for someone who regularly dines out in Japan). Tonkotsu-dashi and tori-dashi are somewhat rarely used in home cooking (though there are some exceptions in west Japan and Okinawa), but the taste difference is pretty obvious.
I actually tend to prefer niboshi-dashi over katsuo-dashi, but it depends on the type of fish. There's a dried sardine from Korea that I find a little harder to stomach when used in dashi.
I can't speak to allergies, but I suppose it's possible that different types of fish may cause some people to react differently. I suppose you'd have to ask an allergist.
Not the answer you're looking for? Browse other questions tagged japanese-cuisine dashi or ask your own question.
Where can I buy Glico Curry online in the United States?
|
0.944156 |
Not to be confused with House music.
For the British band, see The House Band.
A house band is a group of musicians, often centrally organized by a band leader, who regularly play at an establishment. It is widely used to refer both to the bands who work on entertainment programs on television or radio, and to bands which are the regular performers at a nightclub, especially jazz and R&B clubs. The term can also refer to a group that plays sessions for a specific recording studio. House bands on television shows usually play only cover songs instead of originals, and they play during times that commercials would be seen by the home viewing audience. Therefore, only those present in the studio during the show's taping see their full performances.
House bands emerged with jazz music in Chicago during the 1920s. The practice of using regular backing musicians during studio sessions became customary as a means for record companies to save money and add convenience at a time when the music industry had seen increased studio costs and musical specialization. With the advent of television in the 1950s, bands from the swing era of jazz typically performed on variety show programs as house bands, starting a television institution that survives to the present. One of the best-remembered, and longest-running, house bands was the NBC Orchestra of The Tonight Show Starring Johnny Carson and his predecessors. Late-night television offered security and survival for the big band, led by trumpeter Doc Severinsen, while the trends in popular music continually changed around them. Late Night with David Letterman, which began in 1982, featured Paul Shaffer and The World's Most Dangerous Band, who, unlike previous house bands, incorporated contemporary rhythm and blues and rock music. The band continued that blend with Letterman when he left for CBS to start Late Show in 1993. House bands remain a late-night talk show fixture, with many of them also serving as unsuspecting straight men for the host's jokes, musically introducing guests, playing in and out of commercials, composing original pieces of music for sketches, and backing up musical guests. The Roots became the first hip hop house band on late-night television when they joined Late Night with Jimmy Fallon in 2009.
Record labels have often employed a core group of musicians to serve as a house band or house orchestra, specifically for recording sessions. These groups can come to be regarded as an important component of a label's distinctive "sound". This use of house bands, first popularized in the 1920s, was revived during the 1960s, most notably at Motown and at Stax Records. Some of these house bands, such as Booker T. & the M.G.'s (Stax), had parallel careers as main artists in their own right.
Note: Individuals listed may not have performed in some or any of the groups listed.
^ Lanford, Jill J. (August 29, 1985). "House Bands: Music's Unsung Heroes". Spartanburg Herald-Journal. p. D1. Retrieved January 15, 2013.
^ a b c Shipton, Alyn (8 July 2003). "House Band". In John Shepherd. Continuum Encyclopedia of Popular Music of the World Part 1 Performance and Production. Continuum International Publishing Group. p. 31. ISBN 9780826463227. Retrieved 14 January 2013.
^ Shuker, Roy (2012). Understanding Popular Music Culture (4th ed.). London: Routledge. p. 55. ISBN 978-0415517133.
^ a b Decker, Todd (2011). Music Makes Me: Fred Astaire and Jazz (1st ed.). Berkeley: University of California Press. pp. 163–4. ISBN 978-0520268883.
^ Dodd, Katie (January–February 2010). "Music of the Night" (PDF). M Music & Musicians. M Music Media, LLC. 1 (1): 28–32. ISSN 2156-2377. Retrieved January 15, 2013.
^ Deggans, Eric (March 1, 2009). "Revolutionizing late-night television". The Post and Courier via Tampa Bay Times. p. 2A. Retrieved January 15, 2013.
^ "Philadelphia International Records Page". soulwalking.co.uk. Retrieved 7 October 2011.
^ "Credits". Roland Chambers. allmusic. Retrieved 7 October 2011.
^ a b c d e f "Great White Way Orchestra (Musical group)". Victor Library. Retrieved 7 October 2011.
^ a b c "Metropolitan Orchestra". National Jukebox. Library of Congress. Retrieved 7 October 2011.
^ "Victor Discography: Metropolitan Orchestra (Musical group)". Victor Library. Retrieved 7 October 2011.
^ "Victor Military Band (Musical group)". Victor Library. Retrieved 7 October 2011.
^ "Victor Military Band Discography". Victor Military Band. discogs. Retrieved 7 October 2011.
^ "Victor Orchestra Personnel". The Mainspring Press Record Collectors' Blog. Mainspring Press. Archived from the original on 10 August 2011. Retrieved 7 October 2011.
|
0.999982 |
TCP/IP (Transmission Control Protocol/Internet Protocol) is built into the UNIX operating system. It is used by the Internet, making it the de facto standard for transmitting data over networks.
TCP - ensures the verification of the correct delivery of data from client to server. Data can be lost in the intermediate network. TCP adds support to detect errors or lost data and to trigger retransmission until the data is correctly and completely received.
TCP is typically used by application software that require guaranteed delivery. It is a sliding window protocol that provides handling for both timeouts and retransmissions.
TCP creates a full-duplex virtual connection between two endpoints. Each endpoint is defined by an IP address and a TCP port number and is implemented as a finite state machine.
The byte stream is transferred in segments. The window size determines the number of bytes of data that can be sent before an acknowledgment from the receiver is necessary.
IP - is responsible for transferring packets of data from one node to another. IP forwards each packet based on a four byte destination address (the IP number). The Internet authorities assign ranges of numbers to different organizations. The organizations assign groups of their numbers to departments. IP operates on gateway machines that move data from department to organization to region and then around the world.
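To make the endpoint idea concrete, here is a minimal sketch of a TCP client in Java: it opens a connection defined by a host name and port, writes a few bytes, and leaves loss detection and retransmission to TCP. The host example.com and port 80 are placeholders, and the class name is made up for this sketch.

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TcpClientSketch {
    public static void main(String[] args) throws Exception {
        // Each endpoint is an IP address plus a port number; connecting creates the byte stream.
        try (Socket socket = new Socket("example.com", 80)) {
            OutputStream out = socket.getOutputStream();
            // TCP, not the application, detects lost segments and retransmits
            // until the bytes arrive completely and in order.
            out.write("hello\r\n".getBytes(StandardCharsets.UTF_8));
            out.flush();
            System.out.println("Connected from local port " + socket.getLocalPort());
        } // closing the socket tears the connection down
    }
}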
HTTP stands for HyperText Transfer Protocol. Tim Berners-Lee developed this web protocol, which is seen at the beginning of web addresses. By using the protocol, you can connect to web servers without the server knowing who you are. The request to the site does not reveal if you have visited the site previously. Therefore, the protocol is basically stateless, unlike FTP, which is interactive.
The only documentation for early versions of the HTTP/1.0 protocol consisted of a discussion draft in HTML form. This documentation is available for historical reasons only, since it has been replaced by the Internet Drafts, Informational RFC, and now Standard track documents, and does not reflect current practice among WWW applications. The purpose for the protocol is so you can retrieve information, promptly and with minimal hassle.
Connection is the establishment of a connection by the client to the server. Request is the act of sending a request message, by the client, to the server. Response is the sending, by the server, of a response to the client. Finally, close is the ending of the connection by either or both parties.
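That connection, request, response, close cycle can be sketched with Java's built-in HTTP client (Java 11 or later). The URL is a placeholder and the sketch is illustrative only, not part of the original article.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpGetSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();                     // connection handling
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://example.com/")).GET().build();           // request
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString()); // response
        System.out.println(response.statusCode() + ", " + response.body().length() + " bytes");
    } // the client closes or reuses the underlying connection automatically
}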
What is FTP: FTP stands for "file transfer protocol." FTP is the Internet's tool for transmitting files between certain computers. Basically it is a way of sending and receiving files over the internet. FTP enables you to gain access to files that are stored on a hard drive on someone else's computer which is connected to the internet.
FTP is just one of many ways to share information and data over the internet. With FTP the main objective is to have individual/direct control over a certain file. You are able to find out where the file comes from and where the file must go. The easiest way to use FTP is to use a program that is specifically designed to use the File Transfer Protocol.
The two different types of files are text and binary and, as you might have guessed, there is an FTP transfer mode for each of these file types. Text files consist of letters and numbers that make up text, whereas binary files are those composed of sounds, pictures, and programs. You can only transfer these files if they are in the proper format or they will not work.
- Country code top-level domains : Used by a country. It is two letters long, for example "ca" for Canada.
Internet domain names are registered with any of several registrars. To find out if a domain name is taken, one popular domain name registrar is Go Daddy (www.godaddy.com).
The web protocol (http://) is a convention or standard that controls or enables the connection, communication, and data transfer between two computing endpoints. This web protocol was developed by Tim Berners-Lee and is seen at the beginning of web addresses.
The domain is a location on the internet (often referred to as web addresses). Domain names are hostnames that provide more easily memorable names to stand in for numeric IP addresses. They allow for any service to move to a different location in the topology of the Internet (or another internet), which would then have a different IP address.
A file name is a special kind of string used to uniquely identify a file stored on the file system of a computer. The file is the particular page or document that you are seeking.
In order to ensure that products that are developed can work with other products, a series of networking standards have been developed to address how a device connects to the network as well as how devices communicate. The most widely used standard with wired networks is Ethernet. Some of the wireless standards that currently exist, classified by their range, are: short range, connecting network elements within a small perimeter (Bluetooth, wireless USB, UWB, WiHD, ZigBee); medium range, connecting computers to a LAN (Wi-Fi); and long range, developed to provide internet access to a large area for fixed and mobile users (WiMax, cellular standards).
|
0.999945 |
Dummies At Sea is creating vlogs of our adventures at sea!
Hi there! My name's Alicia. My husband, Patrick, and I have recently decided to make a major life change. Here's a little bit of our story: We both had regular jobs - he was a mechanic for 10 years and I've worked in retail for the last 8 years. In October of 2015 we opened our own trucking company, and although we started to turn a profit in only month 4 of being open, we found that all of the stress that the business entailed wasn't worth it. So, in October of 2016 we said goodbye to the trucking company and closed it down. We decided that we wanted to downsize and simplify our lives. Our house now is 3 bedrooms, 2 bathrooms, 1500 sq. ft. - too big for us. So for a while we looked into getting a small cabin that was around 600 sq. ft., but we felt that plan wasn't right for us either. We have always wanted to travel, and have always liked the idea of being able to travel, but we felt constrained by time and financial demands usually associated with a traditional lifestyle. Lately we've been researching boating and living aboard and feel that this is the right path for us. Living aboard will allow us to comfortably downsize since we can only take the things that we really need. It will also be a way of life that is going to allow us to travel since our vehicle will be our home. By selling the house, our cars, and nearly all our possessions, we have effectively reduced our bills almost 90%. Overall, we are just so ready to take the plunge and dive into this adventurous and exciting new lifestyle, and we want YOU to come along with us on this new journey. Boating is very new to both Patrick and myself, and even though we've been doing extensive research and taking classes over the last few months, most of our experience is as a passenger on small lake craft. We are constantly learning new things every day and we know we still have so much more to learn. So, consider being a part of our Patreon family and experience this new lifestyle with us as we live aboard "The Polar Express".
Don't want to make a per video or monthly donation but still want to support us?
For as little as $2 per month you are helping us to continue to make fun and informative videos.
You are a part of our Patreon family now! To express our gratitude for your contribution, you will be a part of the 'Group Shout Out' in our videos.
Thank you! For all of our patrons who pledge $5 or more per video, we will feature your questions in every Q&A episode.
In addition, you'll also be a part of the 'Group Shout Out'!
By pledging $10 or more per video, every year at Christmas time, you will receive a personalized Christmas card from us along with a souvenir from one of our destinations. In addition, you will also receive all of the previous 'rewards' listed above.
Thank you so much! By making this pledge we will give you a personal shout out in all of our videos. Additionally, you will also receive the other 'rewards' listed above.
Wow!! You are truly keeping the 'Polar Express' afloat! We feel like you should get to enjoy being on it as much as we do, so once a year you will have a chance to cruise with us and live aboard the 'Polar Express'! Of course, in addition, you will also receive all of the 'rewards' listed above.
We are grateful to every single patron for being so generous! To the patrons who help us reach our $500 goal - you are welcome to send us a wallet size photo of yourself which we will put in our handmade patron Christmas tree collage. This reward is available to all patrons regardless of size of donation.
|
0.999773 |
Imagine you are on vacation or walk in a park, having a lovely time with your significant other. As you are walking or sitting on a bank you meet a man with a beautiful dog.
You ask if you can pet the dog, which is very friendly, and the man says yes and explains that it is a very expensive purebred dog. As you are chatting, the man realizes he forgot his wallet at the convenience store around the corner and asks if you would mind watching the dog for a few minutes while he goes to pick it up. You agree.
As he walks around the corner you are approached by an impeccably dressed woman who asks if she can pet your dog. You allow her to pet the dog, and she comments on how beautiful it is and that she has always wanted a dog such as this.
She then offers you $500 for the dog, but you tell her the dog does not belong to you; therefore you can't sell it to her. She hands you her business card and asks you to pass it on to the gentleman who owns the dog to see if he is interested in selling the dog.
The man returns without having found his wallet. He looks very troubled and explains that he needed the money in his wallet to pay off a gambling debt – or some other hard-luck tale. He asks you if you want to buy the dog for $200. You decide to take advantage of the situation – shame on you – and offer the man $200 for the dog, figuring you can turn around and call the woman, sell the dog for $500 and make an instant $300 profit.
The man agrees to sell you the dog for $200 since he is desperate. He leaves and you call the woman to give her the good news, only to discover that the number on her card does not work. Now you are the proud owner of a dog you didn't really want – and a considerably lighter wallet.
Be aware when talking with strangers anywhere, especially if they start telling you a hard-luck story about needing money and losing a wallet, or something similar.
Realize that these professional scam artists often work in pairs – like the man and woman in the scenario above – and you will never be able to out-scam a professional scam artist. There is no such thing as an honest quick buck. Just enjoy your vacation and keep hold of your hard-earned money by being honest and aware.
I have just met a scammer by the name of Benjamin Ganley. I have put in a deposit and by the time I was to pick up the dog, he twisted the story. So for everyone's sake, beware. I hope his name is posted so that whoever deals with this person gets his punishment!
|
0.999997 |
The Matrix: Path of Neo is the third video game based on the Matrix series and the second developed by Shiny Entertainment. Players control the character Neo, participating in scenes from the films. It was released on November 8, 2005 in North America. In Shiny Entertainment's first licensed Matrix game, Enter The Matrix, only sideline characters were playable. It did not feature the series' main protagonist Neo, and due to its nature as an extension of the films' storyline, had few recreations of scenes in the film trilogy. David Perry, president of Shiny Entertainment Inc, has stated that Path of Neo is "basically the game that gamers wanted first time around... The Neo Game!".
Your PC will need a graphics card that's as powerful as a GeForce 7600 GS/Radeon X1550 and it should be paired with either an Athlon XP 1600+ or a Pentium 4 1.8GHz CPU to match The Matrix: Path OF Neo recommended system specs. This PC setup will deliver 60 Frames Per Second on High graphics settings at 1080p monitor resolution. Make sure your GPU can run DirectX 9 or The Matrix: Path OF Neo won't run. To summarise, The Matrix: Path OF Neo needs around a 13 year old PC to play at recommended settings.
|
0.999997 |
Here you will get a Java program to find the largest number in an array using recursion.
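The program itself did not survive in this snippet, so below is a minimal sketch of one recursive approach that produces the output shown next; the class and method names are illustrative rather than taken from the original post.

public class LargestNumber {
    // Recursively compare the element at index i with the largest of the remaining elements.
    static int findLargest(int[] arr, int i) {
        if (i == arr.length - 1) {
            return arr[i];                          // base case: last element
        }
        int largestOfRest = findLargest(arr, i + 1);
        return Math.max(arr[i], largestOfRest);     // recursive case
    }

    public static void main(String[] args) {
        int[] arr = {5, 12, 10, 6, 15};
        System.out.print("Given Array: ");
        for (int n : arr) {
            System.out.print(n + " ");
        }
        System.out.println("\nLargest Number is " + findLargest(arr, 0));
    }
}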
Output: Given Array: 5 12 10 6 15; Largest Number is 15. Comment below if you have any queries regarding the above program.
Here you will get a program for heap sort in Java. Heap sort is a sorting algorithm that uses the heap data structure. Its best, worst and average time complexity is O(n log n). How does the heap sort algorithm work? First we build a max heap from the given set of elements. In a max heap each parent node is greater than or equal to its left and right children. Then the first node is swapped with the last node and the size of the heap is reduced by 1; the heap property is restored and the process repeats until the array is sorted.
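A compact sketch of that procedure in Java is shown below; the class name and the sample array are illustrative, not taken from the original post.

import java.util.Arrays;

public class HeapSort {
    public static void sort(int[] arr) {
        int n = arr.length;
        // Build a max heap: every parent is greater than or equal to its children.
        for (int i = n / 2 - 1; i >= 0; i--) {
            heapify(arr, n, i);
        }
        // Repeatedly move the root (largest element) to the end and shrink the heap by one.
        for (int end = n - 1; end > 0; end--) {
            int tmp = arr[0]; arr[0] = arr[end]; arr[end] = tmp;
            heapify(arr, end, 0);
        }
    }

    // Sift the element at index i down until the subtree rooted at i is a max heap again.
    private static void heapify(int[] arr, int heapSize, int i) {
        int largest = i;
        int left = 2 * i + 1;
        int right = 2 * i + 2;
        if (left < heapSize && arr[left] > arr[largest]) largest = left;
        if (right < heapSize && arr[right] > arr[largest]) largest = right;
        if (largest != i) {
            int tmp = arr[i]; arr[i] = arr[largest]; arr[largest] = tmp;
            heapify(arr, heapSize, largest);
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 12, 10, 6, 15};
        sort(data);
        System.out.println(Arrays.toString(data)); // [5, 6, 10, 12, 15]
    }
}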
|
0.999985 |
A natural birth is one that occurs without the help of drugs such as an epidural. Some women with low risk pregnancies opt for this natural way of giving birth in order to avoid the possible risks that drugs can pose.
Many women choose this form of delivery to avoid the risks that medications can cause in the mother and the baby. Other women opt for a natural birth because they want to experience giving birth in a more natural way, more in contact with the baby and letting nature taking its course.
What does a natural birth consist of?
Going through labour and delivery without the help of medications, including analgesics such as epidurals.
Using few or no medical interventions such as episiotomies or foetal monitoring.
The woman leads the birth itself, moving and choosing the position that is most comfortable, with the doctor and midwife or partner assisting her.
Some women choose to give birth outside the hospital environment, e.g. in water or at home.
Before a natural birth it is necessary to prepare a birth plan and prepare the environment in which the woman will give birth, as well as the team that will support the birth. The pregnant woman can count on the help of doctors, nurses and doulas that help during the whole birth. It is also essential that the mother be informed about all types of pain management techniques; to go without pain medication, she will need measures such as breathing techniques to reduce the pains of childbirth.
Aftercare will be similar to that of a mother who gives birth in a conventional delivery, and will vary depending on the course of delivery, how the mother is and whether she has had many complications or interventions.
|
0.999997 |
The Metazoic family Castoridae is basically a continuation of the family we have today. The beaver is one of the largest rodents we have today. The modern basic body form is round and humpbacked, with a small head and eyes and a short muzzle. Its most noticeable features are the short, flat tail and large incisors. During the Metazoic, the family branches out, and takes on several forms. Not only the familiar animal I just described, but there are also forms that resemble rats, squirrels and even guinea pigs. Most species have either short tails or no tails. The longest tails belong to Trogonomys, which are tree-dwelling, rat-like creatures. In most species the tails are naked and scaly; in some they are covered in fine fur. The feet are naked and webbed in all but Trogonomys. The head is still short and the incisors are long, sharp and powerful. The ears are either very small, or absent in most species. The fur is slick and oily, the claws are long and sharp. The legs are generally short, or in some cases have turned to flippers. The body has become elongate in some forms, with the hind feet being fused with the tail to form a large flipper in the back, which is paddled in an up and down motion for swimming. The ears and nostrils are capable of closing while underwater and the eyes have a special nictitating membrane that closes over the eyes while they are underwater.
The largest member of the family is in the genus Phocapotamus. This animal closely resembles a hippopotamus with flippers instead of legs. The head of this animal is very large and blunt. They cannot walk very well on land. They live alone or in couples. There is no tail in this animal, and they are covered with very short, slick fur. The smallest member of this family is in the genus Castorella. These are small replicas of modern beavers. Like modern beavers, the species in this family also build nests, or dams. Trogonomys lives in trees and builds a nest among the branches to house its young. The nest is usually constructed of twigs and leaves. Other species cut up saplings, as they do today, and build their dams in running water. Most species live singly or in couples. Though in the genus Caprymnus, they tend to live in the largest groups of all Castorids. Caprymnus somewhat resembles a large guinea pig, or the capybara.
Contrary to popular belief, no species of this family eats fish. They are all strictly vegetarians. Their diet consists of aquatic plants, flowers, leaves, grass, and fungi. Though a large species like Caprymnus is safer in large groups, they do tend to fall prey to some predators. Foxes, barofelids and mongooses are their major enemies. They can defend themselves by swimming, or if they are cornered, they can use their sharp claws and teeth for defense.
|
0.946431 |
Published 04/25/2019 12:26:29 pm at 04/25/2019 12:26:29 pm in Williamsburg Brooklyn Area Code.
williamsburg brooklyn area code brooklyn wikipedia the downtown brooklyn skyline the manhattan bridge far left and the brooklyn bridge near left are seen across the east river from lower manhattan at.
williamsburg brooklyn area code, williamsburg charter high school thewcsorgindexphp , the new zip code in williamsburg brooklyn zip data maps blog zip data maps blog logo,brooklyn zip codes postal codes lookup database , brooklyn flea market williamsburg all you need to know full view, mysterious tiny apartment door appears in williamsburg brooklyn architecture, internet providers in brooklyn compare internet providers , guide to williamsburg brooklyn restaurants bars shops clubs domino park skyline, brooklyn zip code guide by neighborhood row of houses greenpoint brooklyn new york, best rooftop bars in brooklyn places to drink with a view this output, best hotels in brooklyn hotels from night kayak .
|
0.999985 |
What is a Lot in Securities Trading?
In the financial markets, a lot represents the standardized number of units of a financial instrument as set out by an exchange or similar regulatory body. The number of units is determined by the lot size. In the stock market, most stocks trade in a lot size of 100 shares, although some higher priced stocks may trade in lots of 10 shares. Each market has its own lot size.
When investors and traders purchase and sell financial instruments in the capital markets, they do so with lots. A lot is a fixed quantity of units and depends on the financial security traded.
For stocks, the typical lot size is 100 shares. This is known as a round lot. A round lot can also refer to a number of shares that can evenly be divided by 100, such as 300, 1,200, and 15,500 shares.
Customers can still place orders in odd lots, which is an order less than 100 shares. An order for 35 shares is an odd lot, while an order for 535 shares has five round lots and one odd lot for 35 shares.
Similar to stocks, the round lot for exchange-traded securities, such as an exchange-traded fund (ETF), is 100 shares.
A lot is the standardized number of units in which a financial instrument trades.
Shares trade in 100 share units, called round lots, but can also be traded in odd lots.
Bonds can be sold in lots of $10,000 or higher, although face values may be as low as $1,000 which individual investors can purchase.
A trader can buy or sell as many futures as they like, although the underlying amount that contract controls is fixed based on the contract size.
One option represents 100 shares of the underlying stock.
Forex is traded in micro, mini, and standard lots.
The bond market is dominated by institutional investors who buy debt from bond issuers in large sums. The standard trading unit or lot for a US government bond is $1 million. The municipal bond market has a smaller lot per trade at $100,000. Other bonds may trade in increments of $10,000.
That doesn't mean a trader or investor needs to buy bonds in that quantity. Bonds typically have a face value of $1,000 to $10,000 (some are even lower). An investor can buy as many bonds as they like, yet it still may be an odd lot.
In terms of options, a lot represents the number of contracts contained in one derivative security. One equity option contract represents 100 underlying shares of a company’s stock. In other words, the lot for one options contract is 100 shares.
For example, an options trader purchased one Bank of America (BAC) call option last month. The option has a strike price of $24.50 and expires this month. If the options holder exercises his call option today when the underlying stock, BAC, is trading at $26.15, he can purchase 100 shares of BAC at the strike price of $24.50. One option contract gives him the right to purchase the lot of 100 shares at the agreed strike price.
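To make the arithmetic concrete, using only the figures quoted above: exercising that single contract means buying 100 × $24.50 = $2,450 worth of stock, which is worth 100 × $26.15 = $2,615 at the current market price, so the exercise captures $2,615 - $2,450 = $165 of intrinsic value, before accounting for the premium originally paid for the option.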
With such standardization, investors always know exactly how many units they are buying with each contract and can easily assess what price per unit they are paying. Without such standardization, valuing and trading options would be needlessly cumbersome and time-consuming.
The smallest options trade a trader can make is for one contract, and that represents 100 shares. Therefore, it is not possible to trade options for a smaller amount than 100 shares unless the underlying security trades in a smaller lot (extremely rare).
When it comes to the futures market, lots are known as contract sizes. The underlying asset of one futures contract could be an equity, a bond, interest rates, a commodity, an index, a currency, and so on. Therefore, the contract size varies depending on the type of contract that is traded. For example, one futures contract for corn, soybeans, wheat, or oats has a lot size of 5,000 bushels of the commodity. The lot unit for one Canadian dollar futures contract is 100,000 CAD, one British pound contract is 62,500 GBP, one Japanese yen contract is 12,500,000 JPY, and one Euro futures contract is 125,000 EUR.
Unlike stocks, bonds, and ETFs in which odd lots can be purchased, the standard contract sizes for options and futures are fixed and non-negotiable. However, derivatives traders purchasing and selling forward contracts can customize the contract or lot size of these contracts, since forwards are non-standardized contracts that are created by the parties involved.
Standardized lots are set by the exchange and allow for greater liquidity in the financial markets. With increased liquidity comes reduced spreads, creating an efficient process for all participants involved.
When trading currencies, there are micro, mini, and standard lots. A micro lot is 1,000 of the base currency, a mini lot is 10,000, and a standard lot is 100,000. While it is possible to exchange currencies at a bank or currency exchange in amounts less than 1,000, when trading through a forex broker typically the smallest trade size is 1,000 unless expressed stated otherwise.
In the options and futures markets, trading in lots isn't as much of a concern since you can trade any number of contracts desired. Each stock option will represent 100 shares, and each futures contract controls the contract size of the underlying asset.
In forex, a person can trade a minimum of 1,000 of the base currency, in any increment of 1,000. For example, they could trade 1,451,000. That is 14 standard lots, five mini lots, and one micro lot.
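A quick sketch of that lot arithmetic in Java (the class name and the integer-division approach are just illustrative):

public class LotBreakdown {
    public static void main(String[] args) {
        int units = 1_451_000;                  // total base-currency units traded
        int standard = units / 100_000;         // 14 standard lots
        int mini = (units % 100_000) / 10_000;  // 5 mini lots
        int micro = (units % 10_000) / 1_000;   // 1 micro lot
        System.out.println(standard + " standard, " + mini + " mini, " + micro + " micro");
    }
}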
In a stock trade, a person can trade in odd lots of less than 100 shares, but odd lot orders less than 100 shares won't be shown on the bid or ask unless the odd lots total more than a round lot.
Assume that a stock has a bid of $50.10 and an offer of $50.35. These are the bid and offer because there are at least 100 shares being bid and offered at those levels. If a trader were to place an order for 50 shares at $50.20, the bid would still stay at $50.10 and the 50 share order at $50.20 wouldn't be visible on the level II to most traders. The reason is that the order is not for a round lot. Round lots change the price while odd lots do not.
Assume another trader decides to also place a 70 share order at $50.20. There are now more than 100 shares being bid at $50.20, so the bid will increase to $50.20.
Smart beta investing combines the benefits of passive investing and the advantages of active investing strategies.
Novice or introductory traders can use micro-lots, a contract for 1000 units of a base currency, to minimize trade size and reduce potential losses.
|
0.988916 |
Our health depends on the accurate transmission of genetic information. Multiple mutations, due to errors in DNA replication, DNA repair, and chromosome segregation, cause cancer. The need for multiple mutations selects for genetic instability, mutations that themselves increase mutation rates and thus contribute to the resistance of cancer to therapy. Studying the events that lead to genetic instability and the genes that mutate to cause instability is difficult in patients or animal models. We have made a diploid yeast model that allows us to select for mutations that improve cell proliferation by inactivating specific genes, thus leading to the evolution of genetic instability. The human homologs of genes that often mutate to cause instability in yeast will be candidate targets for mutations that cause genetic instability in human cancer and we will collaborate with a human cancer geneticist to follow these leads. The proposed work has three parts: 1) To examine the evolution of genetic instability in diploid yeast cells. Cells will be mutagenized and pools of mutant clones will be selected for stepwise inactivation of growth suppressor genes and activation of growth promoting genes. Experiments will find the mutations that cause genetic instability in 100 independently evolved examples of genetic instability, including selections for the activation of growth-promoting genes as well as the inactivation of growth suppressing genes. Preliminary results reveal mutations in genes that have not previously been implicated in genetic instability in yeast but have been implicated in human cancer. 2) For a selected subset of genes, chosen for their relevance to human cancer, more detailed experiments will examine the mechanism of instability by characterizing the mutations that unstable strains produce and the mechanism of instability. Two initial examples will focus on the role of Holliday junction resolvases in stimulating mitotic recombination and determining whether different types of mutation that accelerate progress through G1 are equally likely to cause genetic instability. 3) Tumors are metabolically different from normal tissue and their cells are often starved. Preliminary experiments show that sudden glucose starvation leads to a rapid arrest of the yeast cell cycle and experiments will investigate how starvation arrests the yeast cell cycle and ask if this arrest or mutations that perturb the arrest lead to genetic instability.
All human cancers are genetically unstable, meaning that tumor cells accumulate mutations faster than other cells in our body, and explaining why tumor cells can mutate to become resistant to cancer therapies. We have genetically engineered the baker's yeast, Saccharomyces cerevisiae, to make it a model for how genetic instability arises during the selection for mutations that allow cells to grow and divide faster. The results of our work will improve the accuracy of cancer diagnosis and identify targets for drugs that could be used to reduce genetic instability, thus improving the efficacy of existing cancer therapies.
|
0.999752 |
Do you trust inXile to pause your combat for you, or would you rather do it yourself? That's what the Torment: Tides of Numenera developers would like to know before they lock the game's scrapping down, and they're hitting up Kickstarter backers for their opinions on the matter. The team posed the question - turn-based or real-time with pause? - as part of a typically lengthy development update, giving backers plenty of information on both approaches before they vote with their keyboarding fingers. In turn, I'm going to ask you the same question: would you prefer Torment to feature turn-based combat like Fallout 1/2, or real-time with pause like Baldur's Gate? Your opinions won't change anything, but hey: opinions are fun.
In inXile's opinion, the advantages of the real-time with pause approach are that "combat is resolved more quickly, even with a large number of combatants" and that "it is more flexible: the player can pause a lot or a little depending on whether they're looking for a fast pace or a slow one".
Turn-based, meanwhile, will allow for "more thoughtful" combat, and combat "truer to tabletop RPGs". Among other things it also "allows greater depth of choice: you have time to explore all your options, so we can include more options, and more complicated options, without overwhelming the player." The game's creators are leaning more towards turn-based, which I am too - I found Baldur's Gate's/Planescape's fighting a little pernickety, all told. AI is another issue: I much prefer having total control over my party members, rather than having to constantly manage their feeble attempts to assert their own minds.
The whole post is an interesting read if you've been following the game, or you like delving into the nitty-gritty of RPG systems. If you've backed Torment on Kickstarter, you can vote for your choice in the backers-only forum.
|
0.998697 |
The goal is to create a scannable year at a glance, where you can see the key activities and events for your year. This will help you better anticipate and plan. The most important step is to identify three outcomes for the year. A simple way to do this is to ask yourself: if the year were over, what are three results you want under your belt?
To complete the template, first, list your personal events that you can think of. This can include recurring items, such as bills or taxes or birthdays. Next, list any work activities and events that you can think of. You can think of this as a map of results for your year. You can simply list any key outcomes that you want for certain months. Think of it as a rough sketch unless you have hard dates set for things. This helps you visualize your time for the year.
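For anyone who prefers to keep this digitally, here is a small illustrative Java sketch (the events and outcomes shown are placeholders, not part of the template itself) that collects personal and work items per month alongside three outcomes for the year and prints a simple year at a glance.

```java
import java.time.Month;
import java.util.*;

public class YearAtAGlance {
    public static void main(String[] args) {
        // Three outcomes you want under your belt by year's end (placeholders).
        List<String> outcomes = List.of("Ship project X", "Run a 10k", "Build a savings buffer");

        // One bucket of events per month.
        Map<Month, List<String>> months = new EnumMap<>(Month.class);
        for (Month m : Month.values()) months.put(m, new ArrayList<>());

        months.get(Month.APRIL).add("Taxes due");              // recurring personal item
        months.get(Month.JUNE).add("Family birthdays");        // personal event
        months.get(Month.SEPTEMBER).add("Work: launch beta");  // key work outcome mapped to a month

        System.out.println("Outcomes for the year: " + outcomes);
        for (Month m : Month.values()) {
            System.out.printf("%-10s %s%n", m, months.get(m));
        }
    }
}
```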
|
0.999976 |
Job Descriptions
1. Able to serve customers with a positive attitude.
2. Maintain workplace in a clean manner.
3. Make drinks with high integrity for customers.
4. Work closely with other staff members.
|
0.991924 |
We have heard people talking about the benefits of marketing automation. They believe that marketing automation systems help marketers achieve desirable results. It is by far the most effective, reliable, and fastest way to improve marketing results and accountability. Investing in marketing automation is the first step toward improving marketing results. But a number of marketers fail to get it right after this step. If you can't take advantage of or benefit from the marketing automation system you invested in, all of your money and effort has been spent in vain.
To get the most out of your marketing automation system, follow the points listed below.
1. Avoid complex systems - Pick a simple and easy system that you will actually use. Marketers often face difficulties working with complex automation systems and end up not using it.
2. Use it ASAP - You should be able to use it right away without spending too much time on staff training and all. A simple system would allow you to do this, whereas if you pick a complex system, you first have to spend time to train staff and overcome the complexities.
3. Set realistic targets - Remember that to achieve a desired result it needs time. Don't expect your revenue to shoot up right from the point you have installed the automation system.
4. Measure your campaign effectiveness - A marketing automation system allows you to launch and automatically measure cross campaign effectiveness. Your task is to adjust marketing efforts and implement tactics to generate quick reports.
5. Use your Automation system to its utmost capabilities - Your marketing automation systems are capable enough to handle complex and multi-step campaigns. They can efficiently monitor & nurture prospects which may seem difficult to do manually.
6. Feed your system with quality content - Marketers often think that the automation system will create content. No, it won't. Either create it or get it outsourced. You need to feed it to your marketing automation system.
7. Sales & Marketing teams to work in collaboration - Ask your marketing team to be in regular touch with the sales team. It is to ensure that your marketing programs are generating quality leads. You marketing automation system works as a bridge between the two departments.
|
0.976195 |
I'm interested in a woodworking career, can you give me advice?
Making beautiful things out of wood is the easy part. YouTube is obviously a wealth of information about tools, techniques, tips. There are co-working spaces, workshops, and community colleges with woodworking programs in every major American city. And, for those who are more committed, there are places like Anderson Ranch, William Ng Woodworking School, Marc Adams, and dozens of others all over the country. For someone who wants the support of academia, there are undergrad/grad programs with a furniture emphasis at schools like University of Wisconsin at Madison, SDSU, RIT, RISD, SCAD, CCA, etc. And, you can find cabinet/woodworking shops out there willing to pay you peanuts to sweep and take lunch orders while you slowly absorb some skills... if you can afford paying those dues.
But! Like I said... making cool stuff is the easy part. Selling it is the hard part.
There are a bunch of different ways to get paid doing "woodworking" -whether that means building cabinets, restaurant interiors, small items sold on Etsy, or art commissions paid for by grants/fellowships in an academic setting. Something I didn't appreciate when I started is that you have to find people who are going to give you money for the pieces you make. Supporting a hobby is easy... you can sell pieces to friends/family at cost, or even give things away... and, that keeps things fun, and that's what I recommend to most of the folks that email me who are considering a mid-career change. I say -keep it as a hobby and find some other way to make money that doesn't crush your soul. I can't tell you how many people have left stable jobs, spent a considerable amount of their savings setting up a workshop, made a solid effort at generating business, and then called me in desperation after a few years to ask, "What am I doing wrong?? How do YOU sell your work? What's the secret?"
Getting paid enough from your craft to afford rent, live in a decent neighborhood, support a family, shop at Whole Foods, and have enough left over for savings or going on vacation is difficult. It's especially hard when you are trying to maintain a high standard of quality and are picky about what jobs you take. There are proven business models for a cabinet shop, but none really for heirloom quality "studio furniture". I feel like setting out to make a "good" living building your own un-compromised designs is as naive/unlikely as the young person stepping off the bus in Hollywood to "make it big." Can it happen? Yes. But, it's the exception, and comes with certain sacrifices and concessions.
Most "successful" woodworkers I know have a parallel career (in addition to making the impressive pieces on their websites). A guy I know who has an exquisite piece in the Smithsonian owns a cabinetry business where he manages a crew from 7am until 3pm, and when his guys go home, he can then work on his more artful stuff. Other friends take on custom corporate work. Some teach or work part-time as "handy-men" for $30-$50/hr. In recent years, I've seen plenty of makers attempt to subsidize their passion by being "content creators" on social media. Getting to a place where you can say no to certain jobs, or curtail other stable income activities takes many years -of making contacts, earning the business of repeat customers, and building a reputation.
"Success" is a moving target and that's why it's hard to give advice. My path is totally individual. I can't tell you how to get to this point... and what I want, or what my definition of success is will continue to change and evolve. There's no recipe or formula. I'm sure there are approaches or ideas that I would insist don't work that someone else (more driven or ambitious or foolish enough) could make work. As far as I know, success is realized from a mixture of innate talent, good fortune (meeting the right person at the right time, getting press, etc.) and hard work.
You only have control over one of those factors.
So, just do your best work, and have a vision for what you want.
When apprentices tell me they just want to do what I'm doing, I tell them "don't." It's the same advice that I was given by my woodworking mentors when I first started: "Don't do it! Find something else... sell insurance or learn to code or whatever... don't try to make a living at this... it's too hard!" And, if I can convince you (with a ton of compelling evidence) not to do it... well, then you weren't meant to do it. Only someone with a ridiculous self-confidence and stubbornness will have the gall to ignore that advice... and it'll be that stubbornness that will be required when it gets tough and the consistent weekly corporate paycheck looks really nice.
I strongly recommend reading the book Boss Life, by Paul Downs. I wish I'd read it when I first started. It starkly lays out the real economics of running a woodworking/craft-focused business, month-to-month for a year. Obviously, Paul's business or approach won't apply to everyone, but the book reveals many of the questions that entrepreneurs need to be asking themselves.
I do feel profoundly grateful that I've been able to sustain this career as a full-time designer/maker since 2002, and I credit many understanding, generous, and repeat customers as well as supportive family and friends for that. I didn't start with a trust fund or inheritance, but for most of my professional life I've had a supportive partner/spouse and that makes a huge difference. If you are fortunate enough to have a trust fund, an inheritance, or some other means to float your woodworking enterprise, I suppose you can disregard much of the above. But, you know the old joke: "How do you make $1,000,000 crafting furniture?"
Can i come visit the studio?
If you are considering commissioning a custom piece, please give us a ring, or send an email with some details and we'd be happy to schedule an appointment.
If you are considering moving your business to Downtown Stockton and would like to ask about our experience, please get in touch via phone or email and we can set up a time to chat/visit.
If you are just hoping to see the studio, chat about woodworking, etc. please know that we are incredibly grateful and humbled by your interest. But, we are extremely busy and already feel like there isn't enough time spent with family, friends, or the outdoors. So, at the moment, we're not able to host drop-ins or casual visits. There are various events throughout the year when we open the studio to the public. Please follow our Instagram page for news about upcoming events. We'd love to see you there. Thank you for your understanding.
|
0.999657 |
Marshall, Gillian Elizabeth (2008) The electrophysiological and molecular effects of chronic beta-adrenoceptor antagonist therapy on human atrium. PhD thesis, University of Glasgow.
The chronic treatment of patients with a β-adrenoceptor antagonist is associated with prolongation of the atrial cell action potential duration (APD), potentially contributing to the ability of these drugs to prevent atrial fibrillation (AF). The mechanisms underlying this APD prolongation are not fully understood but may involve pharmacological remodelling of atrial K+ currents and underlying ion channel subunits. This project aimed to test the hypothesis that various characteristics of human atrial K+ currents, including voltage, time and rate dependency, differ between patients treated and not treated with a β-blocker as a result of altered expression of ion channel pore-forming and accessory subunits.
Human atrial myocytes were isolated enzymatically from right atrial appendage tissue obtained from consenting patients, in sinus rhythm, undergoing cardiac surgery. Using whole cell patch clamping, K+ currents were recorded at physiological temperature. Treatment of patients with β-blockers for a minimum of 4 weeks duration was associated with a significant, 34% reduction in the transient outward K+ current (ITO) density but no change in the sustained outward current (IKSUS). There was a reduction in the Ba2+-sensitive, inwardly rectifying K+ current (IK1) but only at -120 mV and the physiological significance of this is unclear. The reduction in ITO density was not secondary to changes in the voltage dependency of the current, as determined by Boltzmann curve fits. There was no difference in the time dependent inactivation or re-activation of ITO between cells from non β-blocked and β-blocked patients, indicating these current characteristics were not contributing to β-blocker induced APD prolongation. The density of ITO decreased significantly with increasing stimulation rate in cells from both patient groups but remained significantly reduced in β-blocked patients at all rates studied.
To determine a possible mechanism underlying the reduction in ITO density, the expression of Kv4.3 mRNA, the pore-forming subunit responsible for this current, was compared in right atrial appendage tissue from non β-blocked and β-blocked patients using real-time RT-PCR. mRNA levels were normalised to the expression of both 28S, a marker of total RNA, and the housekeeping gene GAPDH. The levels of mRNA for the accessory subunits KChIP2, KChAP, Kvβ1 and 2 and Frequenin, which modify Kv4.3 expression and function, were also measured. No change was found in the relative mRNA levels of any of these ion channel subunits in association with chronic β-blockade. mRNA for the pore-forming subunits Kir 2.1 and 2.2 and Kv1.5 which are responsible for IK1 and IKSUS respectively, in addition to mRNA for the pore-forming subunits underlying the L-type calcium current and sodium-calcium exchanger were also measured. Again, no significant changes in expression were found in association with chronic β-blockade.
The possibility of ion channel remodelling at a translational level was investigated by measuring Kv4.3 protein levels using Western blotting with a monoclonal anti-Kv4.3 antibody. Kv4.3 protein levels were normalised to GAPDH which was used as a loading control. Chronic β-blockade did not change the ratio of the level of Kv4.3 protein relative to GAPDH.
In conclusion, chronic treatment of patients with a β-blocker is associated with a reduction in atrial ITO density which may contribute to the APD prolongation reported in cells from these patients. However, this cannot be explained by changes in the expression of Kv4.3 or by changes in the expression of its regulatory accessory subunit genes.
|
0.930371 |
A husband divorced his wife after discovering intimate photos of her with another man on Google Maps. The man was planning a route ahead of a drive when he spotted an image of his wife on a bench stroking the hair of another man with his head in her lap.
The image dates back to 2013, but the man angrily confronted his wife with the evidence of her past infidelity, and the couple, whose names have not been revealed, later divorced after the woman admitted to having had an affair. Ironically, she was photographed with her lover on a bench by the city's Puente de los Suspiros de Barranco (Bridge of Sighs of the Ravine).
The man recently shared the photographs on Facebook where they made a big impression on users. One social media user, San Pateste, said: "What a small world it is... It would have been enough if she said to her husband that she did not love him anymore." The photograph is one of a long line of bizarre images taken for the Google Maps and Google Street View online resources.
|
0.99615 |
As an immigrant from Colombia, naturalized citizen of the United States, long-time New Yorker, and educator, I find president Trump’s authoritarian nationalism, anti-immigrant stance, and crude conception of what it means to be an American deeply disturbing.
The history of U.S. immigration is filled with searing tensions that sway between tolerance and intolerance. It’s the American paradox of simultaneously welcoming and rejecting the proverbial “stranger.” It’s the saga of the outsider struggling to become an insider. And it’s the lived narrative of diverse immigrant pathways in building new identities in a nation conflicted by imposed whiteness and racial and ethnic differences.
Keeping with the notion of difference, I provide a counternarrative to the Trumpian storyline by invoking my personal journey as an immigrant New Yorker. And by outlining the contours of my personal pathway, I underscore how – during times of societal and individual adversity – the personal is political.
During the 1950s my parents migrated to Queens, New York, with five children in tow. We arrived in New York City during the “Golden Age” of American capitalism. Yet, all that glitters is not gold. By custom and design, neighborhoods and workplaces were racially and ethnically segregated. In this charged environment, economic survival and learning the cultural ropes was not for the faint of heart.
My father was well educated. He spoke and wrote impeccable English. Nonetheless, his only option was to work as a factory laborer. Because of my family’s dire economic circumstances, I began to work part-time at the age of twelve. As an adolescent, I shined shoes, delivered newspapers, worked as a caddy, and did grunt factory labor. These work experiences mirror the working-class reality of today’s immigrant youth. Contributing to the family’s economic survival is the quintessential immigrant experience.
My Americanization emerged while growing up in New York City public housing. As a “Spanish project kid” I experienced the “hidden injuries” of poverty, marginalization, and institutional discrimination. Yet growing up in a community of African-Americans and Puerto Ricans, I learned important life lessons from what Dr. Martin Luther King insightfully termed “The Beloved Community.” I absorbed from neighbors and friends that – in the face of persistent adversity – racial and ethnic differences are a bountiful source of fortitude and resiliency. During my formative project years, the sense of social and economic justice that informs my notion of self, my identity as a New Yorker, and the core values defining my professional and activist work began to take root.
Paraphrasing the insight: “Folks make history, but not of their own making,” provides an entry point for explaining my evolving political consciousness as a Latino New Yorker. I am, as all us are, influenced by the sweeping arc of historical forces. The War in Viet Nam, the Black Civil Rights Movement, the Woman’s Movement, and working-class Latino social struggles indelibly marked my development as a politically engaged New Yorker and new American.
Throughout my immigrant journey, as I struggled to become a useful member of my adopted country, I was fortunate enough to earn a Ph.D. from Columbia University. As the adage states: “The past is prologue.” In this respect, my graduate training, as an urban and regional planner was profoundly influenced by my experiences as a working-class immigrant and as a politically aware first-generation New Yorker. And as an activist urban planner and critical academic my professional practices focus on the ravages of neo-liberal economic restructuring, gentrification, income inequality and immigrant socio-economic marginalization. This is my agenda and my commitment as a Latino academic and as a New Yorker.
From my perch, life is a multi-layered process. It is enriched by critically reflecting on one’s experiences and engaging in the humane struggle for the general good. This is my pathway. This is how I contribute in “Making America Great” for all. This is my immigrant story.
Sandy, thank you for the kind and supportive comments. To be honest, although the personal is political, I have never crafted an autobiographical piece before. Hence, this is tierra incognita for me.
Excellent path of a Colombian, working-class man, and the best response to the white racist, authoritarian, and plutocratic government. Beautiful growth process of a New Yorker academic and activist.
Interesting personal history. We must keep the resistance flourishing, be progressive, and infuse new blood into American democracy.
|
0.999985 |
Are there any guidelines on math formulas formatting? Unfortunately SO doesn't support LaTeX, but should I probably wrap expressions in italics, or backticks for code, or place each formula into a separate paragraph?
Some posts advise to use Google Spreadsheets, and visually that is almost as good as LaTeX on math.stackexchange.com, but unfortunately that would be incredibly tedious.
I don't see a problem with that answer; the formulas seem easy enough to follow. In some cases (lengthy or complicated formulas perhaps) you might wish to use code block formatting to make them easier to follow, but for simple formulas this usually shouldn't be necessary.
Resist the urge to use inline code formatting just for the heck of it. It works great for making actual code easier to read (less confusion between, say, O and 0) but otherwise just makes the text look busy.
If I understand what you're trying to do correctly, it seems you want your curve to match the blue linear function better until it crosses it. I would suggest adding a portion of the linear function to your curve. y = m·x and y = bˣ would yield y = bˣ + a·m·x where a is a value between 0 and 1, ie y = 3ˣ + ⅓ · 2·x if b was 3 and m was 2. Then you'd be adding ⅓ the value of 2·x to the curve and effectively pushing the curve up toward the line generated by the function y = 2·x.
should give you about what you want.
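For what it's worth, here is a small numerical Java sketch of that blending idea. It assumes the curve in the answer is the exponential y = bˣ (the superscript appears to have been flattened in the original formatting), and it simply uses the example values of b, m and a from above.

```java
public class BlendCurve {
    // Blend a fraction 'a' of the line y = m*x into the curve y = b^x.
    static double blended(double b, double m, double a, double x) {
        return Math.pow(b, x) + a * m * x;
    }

    public static void main(String[] args) {
        double b = 3, m = 2, a = 1.0 / 3.0;  // values from the example above
        for (double x = 0; x <= 2; x += 0.5) {
            System.out.printf("x=%.1f  curve=%.3f  blended=%.3f  line=%.3f%n",
                    x, Math.pow(b, x), blended(b, m, a, x), m * x);
        }
    }
}
```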
Often, one can alternatively use unicode characters directly to achieve much the same effect. That gives a nicer view in the plaintext editor.
I mostly use Vim digraphs for typing the Unicode versions.
Whether it's actually worth applying these prettifications to an existing post depends, of course, on the content quality and on how much readability suffers from the unformatted math.
In the case you found, an emphatic YES!
Whenever you find usage (in descriptive text) of ^ to mean exponentiation in a tag where it doesn't (i.e. in c, c++, c#, java it means bitwise-XOR), replace it with a real superscript using <sup></sup> as leftroundabout mentioned.
If you find ^ used in a code block where exponentiation is intended, don't edit because that's a huge change to the meaning. Instead, leave a comment, and also downvote (if an answer) or vote to close as duplicate of the language-appropriate question about the meaning of ^ (if a question).
Do be aware, however, that MATLAB and some forms of BASIC do use ^ for exponentiation. Probably some others too.
|
0.928546 |
The high-energy word for today is Bustle, with three different ways of using it.
As an intransitive verb, Bustle means to move about, briskly, busily, and/or carefree.
As a noun, Bustle means an activity, often noisy and energetic. However, Bustle also refers to a framework that supports the back of a woman's dress or a skirt. Make sure you know the difference between a noisy crowd and woman's apparel when using this word.
|
0.999976 |
I like to be watched and to meet new people. So I bet this place is pretty good for me lol.
"My dream is to have vacation in New York. I would be glad if you help me with it."
Target: Big Apple is my dream!
|
0.935717 |
What is a sublease agreement?
A subleasing agreement is a legal contract, between two or more parties known as a sublessor and sublessee, that includes all the details of the arrangement, such as the length of the sublease and the costs involved.
The term of this sublease begins on [DATE] and ends on [DATE], unless otherwise extended via a written instrument signed by the parties hereto. The rental fee for the premises is [DOLLAR AMOUNT] per month, and this fee must be paid by sublessee in advance on the first day of each month. The rental fee must be paid via check sent to [ADDRESS]. At the end of the term, the sublessee will vacate the premises.
All charges for utilities (including but not limited to electric, heat, and water) in relation to the premises, which are to be paid by the sublessor under its lease agreement, shall be paid by the sublessee for the term of this sublease agreement.
Upon conclusion or expiration of the term, sublessee shall surrender and deliver to the sublessor the premises, including everything that was contained therein prior to sublessee’s occupancy, in the same condition as they were at the beginning of the term, excepting reasonable wear and tear. The sublessee is and will remain solely liable for any loss or damage to the premises, or anything contained therein, occurring during the term of this Agreement.
Sublessee shall pay to sublessor a deposit in the amount of [DOLLAR AMOUNT] to cover any loss or damage or any expense that sublessor may have in restoring the premises or anything contained therein to the condition they were at the beginning of the term. Only if the premises and everything contained therein is returned to the sublessor at the end of the term of this sublease agreement in the same condition as they were prior to the sublessee’s occupancy of the premises, will sublessor be obligated to refund such deposit.
PandaTip: The above clause is important clause no. 1. The sublessee needs to be held liable for any damages to the property or furniture, etc.
Upon the sublessee taking possession of the premises, the sublessor will provide the sublessee with an inventory form, to be signed by the sublessee acknowledging the contents within the premises.
This sublease agreement incorporates the original lease agreement between the sublessor and the sublessor’s lessor, a copy of which has been provided to the sublessee, and is attached hereto and incorporated herein by this reference. The sublessee agrees to assume all of the obligations and responsibilities of the sublessor under such original lease for the duration of this sublease agreement.
PandaTip: Important clause no. 2 is above. Remember that you are still responsible for your lease. Therefore, you will need to make sure that the sublessee signs up to all the same obligations. The above clause is a way to do this.
In the event of any legal action concerning this sublease, the prevailing party shall be entitled to its reasonable attorney’s fees and court costs.
This lease agreement constitutes the entire agreement between the parties, and no additions, deletions or modifications may be made to this agreement without the written consent of the parties.
If the sublessee is under 18 years of age, then his or her legal guardian or parent hereby guarantees and agrees to perform all of the terms, covenants and conditions of this sublease by affixing his or her signature in the space provided below.
This sublease shall be binding upon both parties following approval by the landlord as provided in this sublease agreement below.
PandaTip: Important clause no. 3 is above. Unless your lease agreement gives you carte blanche to sublease, there is a third party to this agreement; namely, the landlord.
By their respective signatures below, the parties hereby bind themselves to this sublease agreement upon the landlord’s signature set forth below.
I hereby give my consent to subletting of the premises as set out in this sublease agreement.
(1) Sublessor’s original lease agreement.
|
0.860206 |
So 60 - 40 = 20 more men need to be hired.
Q16) There are 3 runners: Aman, Badal and Carol. Aman beats Badal by 20m and Carol by 34m. Badal beats Carol by 21m. Find the length of the race.
Q17) I wanted to buy 2 dozen bananas but I am 30 Rupees short. So I bought 20 bananas and 2 rupees is left with me. Price of 1 banana?
Q18) 100 kg grapes contains 98% water. After few days, due to evaporation some water evaporates and it contains 94% of water. Find weight of grapes.
The 2 kg of solid matter is unchanged; after evaporation it makes up 6% of the weight, so the new weight of the grapes is 2 / 0.06 = 100/3 ≈ 33.33 kg.
Q19) If 8 coins are tossed, what is the probability that no two heads appear consecutively.
|
0.972988 |
1 Division of Biogerontology, Andrus Gerontology Center, and Department of Biological Sciences, University of Southern California, Los Angeles, CA 90089–0191, USA.
2 Max Planck Institute for Demographic Research, Rostock, 18057 Germany.
The protein kinase Akt/protein kinase B (PKB) is implicated in insulin signaling in mammals and functions in a pathway that regulates longevity and stress resistance in Caenorhabditis elegans. We screened for long-lived mutants in nondividing yeast Saccharomyces cerevisiae and identified mutations in adenylate cyclase and SCH9, which is homologous to Akt/PKB, that increase resistance to oxidants and extend life-span by up to threefold. Stress-resistance transcription factors Msn2/Msn4 and protein kinase Rim15 were required for this life-span extension. These results indicate that longevity is associated with increased investment in maintenance and show that highly conserved genes play similar roles in life-span regulation in S. cerevisiae and higher eukaryotes.
Mutations that extend life-span in C. elegans, Drosophila melanogaster, and mice are associated with increased resistance to oxidative stress (1,2). However, the mechanisms that regulate aging in these multicellular organisms are poorly understood. As in higher eukaryotes, the unicellular yeast Saccharomyces cerevisiae undergoes an age-dependent increase in cell dysfunction and mortality rates (3, 4). Aging in yeast is associated with an enlargement of the cell and a slowing in the budding rate, and is commonly measured by counting the number of buds generated by a single mother cell (replicative life-span) (5, 6). The replicative life-span of yeast is regulated by the Sir2 protein, which mediates chromatin silencing in a nicotinamide adenine dinucleotide–dependent manner (6, 7). However, yeast can also age chronologically as a population of nondividing cells (2, 4, 6). Saccharomyces cerevisiae grown in complete glucose medium [synthetic complete (SC) medium] stop dividing after 24 to 48 hours and survive for 5 to 7 days while maintaining high metabolic rates (2, 8, 9), a situation more akin to their experience in nature where they are likely to survive as nondividing populations exposed to scarce nutrients. For these reasons, and to avoid extended growth and entry into the hypometabolic stationary phase induced by incubation in the nutrient-richer yeast extract/peptone/dextrose (YPD) medium (10), our studies were performed exclusively in SC medium. The survival of nondividing yeast is shortened by null mutations in either or both superoxide dismutases (SODs) (2, 11, 12) and is modestly extended by overexpressing the antiapoptotic protein Bcl-2 (8).
To understand the molecular mechanism that regulates yeast longevity, we transposon-mutagenized yeast cells and isolated long-lived mutants (13). Because of the association between stress resistance and longevity in higher eukaryotes, we screened for mutants that survived both a 1-hour heat stress at 52°C and a 9-day treatment with the superoxide-generating agent paraquat (1 mM). From 2 billion cells screened, we isolated 4000 thermotolerant colonies and 40 paraquat-resistant colonies carrying transposons. From the 4040 stress-resistant mutants, we isolated nine that were able to survive to day 9, when most of the wild-type cells are dead. The only two long-lived mutants isolated independently in both the paraquat and heat shock selections, designated Tn3-5 and Tn3-24, were also the longest lived (Fig. 1A), suggesting that resistance to multiple stresses is associated with increased longevity. Allele rescue of the mutants revealed that transposons had integrated in the promoter region of the Sch9 protein kinase gene (sch9::mTn) (Tn3-5) (33 base pairs upstream of the start codon) and in the NH2-terminal regulatory region of adenylate cyclase (cyr1::mTn) (Tn3-24) (between codon 208 and 209). The mean life-spans of sch9::mTn and cyr1::mTn were extended by 30 and 90%, respectively. Transformation of Tn3-5 cells with wild-type SCH9, and of Tn3-24 cells with CYR1, abolished the survival extension, strongly suggesting that the decreased expression or activity of Sch9 and Cyr1 extends survival (not shown).
Mutations in CYR1 and in SCH9 increase chronological life-span of S. cerevisiae. Survival of (A) the wild type (DBY746), and transposon-mutagenized cyr1::mTn (Tn3-24) and sch9::mTn (Tn3-5); (B) the wild type and sch9Δ; (C) sch9Δ transformed with vector alone, wild-type SCH9, or with a mutated sch9 encoding for catalytically inactive proteins (Sch9K441A, Sch9D556R). Cell viability was measured every 2 days starting at day 3 (14). Experiments were repeated between three and seven times with two or more samples per experiment with similar results. The average of all experiments is shown. The mean life-span increase in cyr1::mTn (90%), sch9::mTn (30%), and sch9Δ (300%) is significant [P < 0.05, analysis of variance (ANOVA)].
To investigate further the role of SCH9 in chronological survival, we deleted the SCH9 gene (14). The sch9Δ mutants grew slowly, but survived three times longer than wild-type cells (Fig. 1B). To determine whether the protein kinase activity of Sch9 accelerates mortality in nondividing yeast, we transformed mutants with either wild-type SCH9 or with forms of SCH9 bearing kinase-inactivating mutations: sch9K441A and sch9D556R (15). Transformation of sch9Δ with wild-type SCH9 reversed the life-span extension, whereas transformation with the genes encoding for the inactive Sch9K441A or Sch9D556R kinases did not (Fig. 1C).
Both Sch9 and Cyr1 function in pathways that mediate glucose-dependent signaling, stimulate growth and glycolysis, and decrease stress resistance, glycogen accumulation, and gluconeogenesis (16). The COOH-terminal region of Sch9 is highly homologous to the AGC family of serine/threonine kinases, which includes Akt/PKB, whereas the NH2-terminal region contains a C2 phospholipid and calcium-binding motif. The 327–amino acid serine/threonine kinase domain of yeast Sch9 is, respectively, 47 and 45% identical to that of C. elegans AKT-2 and AKT-1, which function downstream of the insulin-receptor homolog DAF-2 in a longevity/diapause regulatory pathway (14, 17, 18). In this domain conserved from yeast to mammals, Sch9 is also 49% identical to human AKT-1/AKT-2/PKB, which are implicated in biological functions including insulin signaling, the translocation of glucose transporter, apoptosis, and cellular proliferation (19).
The CYR1 gene encodes for adenylate cyclase, which stimulates cyclic adenosine monophosphate (cAMP)–dependent protein kinase (PKA) activity required for cell cycle progression and growth. The catalytic subunits of PKA are also 35 to 42% identical to C. elegans and human AKT-1/AKT-2, although PKA belongs to a different family of serine/threonine kinase. The inactivation of the Ras/cAMP/PKA pathway in S. cerevisiae increases resistance to thermal stress, in part, by activating transcription factors Msn2 and Msn4, which induce the expression of genes encoding for several heat shock proteins, catalase (CTT1), and the DNA damage inducible gene DDR2 (14, 16). MnSOD also appears to be regulated in a similar manner (20). To determine whether MSN2/MSN4 mediate survival extension, we deleted both genes in the cyr1::mTn mutants. The absence of both transcription factors abolished the life-span extension conferred by cyr1::mTn, but did not affect the survival of wild-type cells (Fig. 2A). By contrast, the deletion of MSN2/MSN4 did not reverse the survival extension in sch9Δ cells (Fig. 2B).
Transcription factors Msn2, Msn4, and protein kinase Rim15 are required for the chronological life-span extension of cyr1::mTn and sch9Δ mutants. (A) Survival of the wild type and cyr1::mTn mutants lacking either the stress-resistance genes MSN2/MSN4 or RIM15. (B) Survival of the wild type and sch9Δ mutants lacking either MSN2/MSN4 or RIM15. Experiments were repeated between three and seven times with two or more samples per experiment with similar results. The average of all experiments is shown.
The protein kinase Rim15 regulates genes containing a PDS (postdiauxic shift) element T(T/A)AG3AT involved in the induction of thermotolerance and starvation resistance by a Msn2/Msn4-independent mechanism (21). To test the role of Rim15 in survival, we generated sch9Δ rim15Δ mutants. The life-span of the double mutant was decreased compared to sch9Δ (Fig. 2B). The deletion of RIM15 also abolished the life-span extension in cyr1::mTn cells (Fig. 2A). However, it is difficult to establish whether Rim15 mediates the survival extension in these mutants, because rim15 single mutants are short-lived (Fig. 2A).
To test whether the long-lived strains were stress-resistant, we exposed the mutants to hydrogen peroxide, menadione, or heat. All mutants were resistant to a 1-hour heat shock treatment at 55°C (Fig. 3A). Similarly, 3- to 5-day-old mutants were resistant to a 30-min treatment with 100 mM hydrogen peroxide (Fig. 3B) or with the superoxide/H2O2-generating agent menadione (20 μM) (Fig. 3C).
Heat-shock and oxidative stress resistance are increased in long-lived mutants. (A) Serial dilutions (1:1 to 1:1000, left to right) of cells removed from day 1 postdiauxic phase cultures were spotted onto YPD plates and incubated at 30°C (control) or 55°C (heat-shocked) for 1 hour. Pictures were taken after a 4-day incubation at 30°C. The experiment was performed twice with two or more samples per experiment with similar results. Cells removed from days 3 or 5 in the postdiauxic phase were (B) diluted to an OD600 (optical density at 600 nm) of 1 in expired medium and incubated with hydrogen peroxide (100 mM) for 30 min or (C) diluted to an OD600 of 0.1 in potassium phosphate buffer and treated with 20 μM of the superoxide/H2O2-generating agent menadione for 60 min. Viability was measured by plating cells onto YPD plates after the treatment. The experiments were performed twice with similar results. The average of the two experiments is shown.
In yeast sod2Δ mutants, superoxide specifically inactivates aconitase and other [4Fe-4S] cluster enzymes and causes the loss of mitochondrial function and cell death (11, 12). To investigate further the role of superoxide toxicity in aging, we monitored the activity and reactivation of mitochondrial aconitase, which can also serve as an indirect measure of superoxide concentration (22). In agreement with the pattern of resistance to superoxide toxicity (Fig. 3C), aconitase specific activity decreased by 50% in wild-type cells, and by 30% in cyr1::mTn mutants, but did not decrease in sch9::mTn and sch9Δ mutants at day 7 compared to day 3 (14). The percent reactivation of aconitase was lowest in the long-lived sch9Δ mutants and highest in wild-type cells (Fig. 4A) and correlated with death rates (Fig. 4B), suggesting that cyr1 and sch9 mutants increase survival, in part, by preventing superoxide toxicity. However, the overexpression of both SOD1 and SOD2 only increases survival by 30% (9), indicating that additional systems, regulated by Msn2, Msn4, and Rim15, are responsible for the major portion of chronological life-span extension in cyr1::mTn and sch9Δ mutants.
Mutations in cyr1 and sch9 delay the reversible inactivation of the superoxide-sensitive enzyme aconitase in the mitochondria. (A) Mitochondrial aconitase percent reactivation after treatment of whole-cell extracts of yeast removed from cultures at day 5 through 7 with agents (iron and Na2S) able to reactivate superoxide-inactivated [4Fe-4S] clusters. (B) Death rate reported as the fraction of cells that lose viability in the 24-hour period following the indicated day.
There are many phenotypic similarities between long-lived mutants in S. cerevisiae, C. elegans, Drosophila, and mice (1, 2). Caenorhabditis elegans age-1 and daf-2 mutations extend the life-span in adult organisms by 65 to 100%, by decreasing AKT-1/AKT2 signaling and activating transcription factor DAF-16 (14, 18, 23). These changes are associated with the induction of superoxide dismutase (MnSOD), catalase, and the heat shock proteins HSP70 and HSP90 (14, 17). A role for oxidants in the aging of C. elegans was confirmed by the extended survival of wild-type worms treated with small synthetic SOD/catalase mimetics (24). Thus, the yeast Gpr1/Cyr1/PKA/Msn2/4-Sch9/Rim15 and the C. elegans DAF-2/AGE-1/AKT/DAF16 pathways play similar roles in regulating longevity and stress resistance (14). Analogously, a Drosophila line with a mutation in the heterotrimeric guanosine triphosphate–binding protein (G protein)–coupled receptor homolog MTH gene displays a 35% increase in life-span and is resistant to starvation and paraquat toxicity (25). Furthermore, in flies, aconitase undergoes age-dependent oxidation and inactivation (26), and the overexpression of SOD1 increases survival by up to 40% (27, 28). A mutation in a signal-transduction gene also increases resistance to stress and lengthens survival in mammals. A knockout mutation in the signal transduction p66SHC gene increases resistance to paraquat and hydrogen peroxide and extends survival by 30% in mice (29).
We propose that yeast Sch9 and PKA and worm AKT-1/AKT-2 evolved from common ancestors that regulated metabolism, stress resistance, and longevity in order to overcome periods of starvation. Analogous mechanisms triggered by low nutrients may be responsible for the extended longevity of dietary restricted rodents (3). The phenotypic similarities of long-lived mutants ranging from yeast to mice (1, 2), and the role of the conserved yeast Sch9 and PKA and mammalian Akt/PKB in glucose metabolism, raise the possibility that the fundamental mechanism of aging may be conserved from yeast to humans.
, Nature 408, 239 (2000).
, Neurobiol. Aging 20, 479 (1999).
C. E. Finch, Longevity, Senescence, and the Genome (University Press, Chicago, 1990).
, Science 280, 855 (1998).
, J. Bacteriol. 171, 37 (1989).
, Annu. Rev. Microbiol. 52, 533 (1998).
, Science 289, 2126 (2000).
, J. Cell Biol. 137, 1581 (1997).
L.-L. Liou, P. Fabrizio, V. N. Moy, J. W. Vaupel, J. S. Valentine, E. Butler Gralla, and V. D. Longo, unpublished results; L.-L. Liou, thesis, University of California Los Angeles, Los Angeles (1999).
, Mol. Microbiol. 19, 1159 (1996).
, J. Biol. Chem. 271, 12275 (1996).
, Arch. Biochem. Biophys. 365, 131 (1999).
] using the yeast insertion library provided by M. Snyder.
, EMBO J. 18, 5953 (1999).
, Exp. Cell Res. 253, 210 (1999).
, Mol. Microbiol. 23, 303 (1997).
, EMBO J. 19, 2569 (2000).
, J. Biol. Chem. 267, 8757 (1992).
, Science 249, 908 (1990).
, Science 282, 943 (1998).
, Proc. Natl. Acad. Sci. U.S.A. 94, 11168 (1997).
, Mol. Cell. Biol. 19, 216 (1999).
Supported by NIH grants AG 08761-10 (J. W. Vaupel, V.D.L.) and AG09793 (T. H. McNeill), by an American Federation of Aging Research grant (V.D.L.), and by a John Douglas French Alzheimer's Foundation grant (V.D.L.). We thank J. Martin and G. Fenimore for their generous donations, C. Finch and E. Gralla for careful reading of the manuscript, D. Thiele, J. Hirsh, M. Carlson, S. Garrett, A. Mitchell, and J. Field for providing yeast plasmids, J. Vaupel for generously allowing the use of Max Planck Institute facilities and instruments, and M. Wei for performing data analysis.
|
0.999998 |
In the given context, the word ebbing means that the momentum of the movement is giving way; it is falling to pieces.
1. The flowing back of the tide as the water returns to the sea (opposed to flood, flow).
2. A flowing backward or away; decline or decay: the ebb of a once great nation.
3. A point of decline: His fortunes were at a low ebb.
From the above, we derive the meaning of ebbing: a gradual decline (in size or strength or power or number).
Ebb tide: the ebbing tide: They sailed on the ebb tide.
At a low ebb: In a poor or depressed state: She was at a low ebb after the operation.
On the ebb: Ebbing or getting less: His power is on the ebb.
|
0.999936 |
Extract - Laudon, K.C. and Guercio Traver, C.
E-commerce business strategies - Laudon, K.C. and Guercio Traver, C.
|
0.983378 |
"Israel George Lash (1810 - 1878) was a Congressional Representative from North Carolina; born in Bethania, North Carolina, August 18, 1810. He attended the common schools and the local academy in his native city; engaged in mercantile pursuits and subsequently became a cigar manufacturer; also engaged in banking in Salem, North Carolina; delegate to the State constitutional convention in 1868; upon the readmission of the State of North Carolina to representation was elected as a Republican to the Fortieth Congress; reelected to the Forty-first Congress and served from July 20, 1868, to March 4, 1871; was not a candidate for renomination in 1870; again engaged in banking in Salem (now Winston-Salem) N.C., until his death there on April 1, 1878; interment in the Moravian Cemetery, Bethania, N.C."
What is significant about this man to me is that he is connected to the Lash family of Brookberry Farm, off Meadowlark Drive in Winston-Salem, NC.
"US Congressman. Banker and cigar maker Lash was elected as a U.S. Representative from North Carolina from 1868 to 1871."
The following photo was taken in 1865.
"Salem branch: There were three individuals initially appointed in 1815 to act as agents for the Bank of Cape Fear in Salem. Charles F. Bagge, cashier (July 1815+), in some records also referred to as president, even though this was an agency operation until 1847; Emanuel Schober (July 1815), John Christian Blum, agent (July 1815 -1827); Friedrich Heinrich (Henry) Schumann, agent & cashier (1828 -1847), a physician who also was involved with the Salem Manufacturing Company and its cotton mill in Salem; Israel George Lash, cashier (1847 - 1866)."
"As early as 1815 the Bank of Cape Fear, Wilmington, N.C., appointed agents in Salem. Two years before the founding of Winston, the formal business of banking was launched in Salem with the establishment of a branqh of the same Bank. Israel G. Loesch, or Lash, was the first banker. The bank was housed in a brick building located at what is now the southwest corner of Bank and Main streets. This branch bank seems to have prospered until it went down in the general financial crash of the Civil War. In 1866, Lash opened a bank of his own, the First National Bank of Salem, using the same building which had sheltered the branch of the Bank of Cape Fear. Following the death of Israel Lash in 1879, the bank closed its doors and the banking center of the community moved into the new village of Winston.
"The Wachovia Bank & Trust Company dates back to the establishment of the Wachovia National Bank in June, 1879. This institution had as its president Wyatt F. Bowman, E. Belo as vice-president, W. A. Lemly (formerly associated with Israel Lash in Salem) as cashier, and James A. Gray as assistant cashier. Lemly was president of this flourishing institution from 1882 to 1906 and James A. Gray from the latter date to 1911. The bank started business with a capital of $100,000 and in about two months it was increased to $150,000. In 1888 the bank was moved from its original build- ing on Main Street to the corner of Main and Third streets, where it occupied a three-story building on the present site of the Main office of the Wachovia Bank and Trust Company.
"In 1893, the Wachovia Loan and Trust Company was organized by F. H. Fries and his nephew, H. F. Shaifner. Its first home was in a modest one-story wooden building on the east side of Main Street between Second and Third in Winston. The directors were James A. Gray, J. E. Gihner, C. H. Fogle, J. C. Buxton, J. H. Millis, T. L. Vaughn and R. J. Reynolds. Two of these directors, Messrs. Gray and Buxton, were closely identified with the Wachovia National Bank! Gray was elected a vice-president of the Trust Company at the beginning but was not active until later."
Another tidbit about Lash and Gray: Debbie McCann found an 1878 deed in which Israel G. Lash sold a plot of land to James A. Gray.
|
0.945184 |
The Java Class Constructor: Because we've made the field variables private, we need another way to assign values to them. One way to do this is with something called a constructor .... Change the package of class C so that it differs from the package of A and B, and declare the constructor of A without any access modifier (i.e., package level). If there is a restriction that your constructor has to be public, then you may have to determine from the stack trace who the caller of your constructor is (and whether C is in it or not).
The next class, Employee, is a contrived example of a class in which the extensible class's constructor calls an overridable method (setSalaryRange()). Employee.java package dustin.examples.overridable; /** * Simple employee class that is intended to be a parent of a specific type of * employee class.
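As a rough sketch of the two points above (the class and field names are invented for illustration): a public constructor is the usual way to let callers initialize private fields, while a constructor declared with no access modifier is package-private and can only be called from classes in the same package.

```java
public class Person {
    private String name;  // private fields: not assignable directly from outside the class
    private int age;

    // Public constructor: the controlled way for callers to set the private fields.
    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    // No access modifier: package-private, callable only from the same package.
    Person() {
        this("unknown", 0);
    }

    public String getName() { return name; }
    public int getAge() { return age; }
}
```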
|
0.89946 |
Telegraphy is the long-distance transmission of written messages without physical transport of letters. It is a compound term formed from the Greek words tele (τηλε) = far and graphein (γραφειν) = write. Radiotelegraphy or wireless telegraphy transmits messages using radio. Telegraphy includes recent forms of data transmission such as fax, email, telephone networks and computer networks in general.
A telegraph is a device for transmitting and receiving messages over long distances, i.e., for telegraphy. The word telegraph alone now generally refers to an electrical telegraph. Wireless telegraphy is also known as "CW", for continuous wave (a carrier modulated by on-off keying), as opposed to the earlier radio technique of using a spark gap.
A telegraph message sent by an electrical telegraph operator (or telegrapher) using Morse code, or a printing telegraph operator using plain text was known as a telegram or cablegram, often shortened to a cable or a wire message. Later, a telegram sent by a Telex network, a switched network of teleprinters similar to a telephone network, was known as a Telex message.
Before long distance telephone services were readily available or affordable, telegram services were very popular and the only way to convey information speedily over very long distances. Telegrams were often used to confirm business dealings and were commonly used to create binding legal documents for business dealings.
A wire picture or wire photo was a newspaper picture that was sent from a remote location by a facsimile telegraph. The teleostereograph machine, a forerunner to the modern electronic fax, was developed by AT&T's Bell Labs in the 1920s; however, the first commercial use of image facsimile telegraph devices dates back to the 1800s.
The first telegraphs came in the form of optical telegraphs, including the use of smoke signals, beacons or reflected light, which have existed since ancient times. A semaphore network invented by Claude Chappe operated in France from 1792 through 1846. It helped Napoleon enough to be widely imitated in Europe and the U.S. The Prussian system was put into effect in the 1830s. The last commercial semaphore link ceased operation in Sweden in 1880.
Semaphores were able to convey information more precisely than smoke signals and beacons, and consumed no fuel. Messages could be sent at much greater speed than post riders and could serve entire regions. However, like beacons, smoke and reflected light signals they were highly dependent on good weather and daylight to work (practical electrical lighting was not available until about 1880). They required operators and towers every 30 km (20 mi), and could only accommodate about two words per minute. This was useful to governments, but too expensive for most commercial uses other than commodity price information. Electric telegraphs were to reduce the cost of sending a message thirtyfold compared to semaphores, and could be utilized non-stop, 24 hours per day, independent of the weather or daylight.
Elevated locations where optical telegraphs were placed for maximum visibility were renamed to Telegraph Hill, such as Telegraph Hill, San Francisco, and Telegraph Hill in the PNC Bank Arts Center in New Jersey.
One very early experiment in electrical telegraphy was an electrochemical telegraph created by the German physician, anatomist and inventor Samuel Thomas von Sömmering in 1809, based on an earlier, less robust design of 1804 by Catalan polymath and scientist Francisco Salvá i Campillo. Both their designs employed multiple wires (up to 35) in order to visually represent most Latin letters and numerals. Thus, messages could be conveyed electrically up to a few kilometers (in von Sömmering's design), with each of the telegraph receiver's wires immersed in a separate glass tube of acid. As an electrical current was applied by the sender representing each digit of a message, it would at the recipient's end electrolyse the acid in its corresponding tube, releasing a stream of hydrogen bubbles next to its associated letter or numeral. The telegraph receiver's operator would visually observe the bubbles and could then record the transmitted message, albeit at a very low baud rate.
One of the earliest electromagnetic telegraph designs was created by Baron Schilling in 1832.
Carl Friedrich Gauss and Wilhelm Weber built, and first used for regular communication, an electromagnetic telegraph in 1833 in Göttingen, connecting Göttingen Observatory and the Institute of Physics and covering a distance of about 1 km. The setup consisted of a coil which could be moved up and down over the end of two magnetic steel bars. The resulting induction current was transmitted through two wires to the receiver, consisting of a galvanometer. The direction of the current could be reversed by commuting the two wires in a special switch. Therefore, Gauß and Weber chose to encode the alphabet in a binary code, using positive and negative current as the two states.
A replica commissioned by Weber for the 1873 World Fair, based on his original designs, is on display in the collection of historical instruments in the Department of Physics at the University of Göttingen. There are two versions of the first message sent by Gauß and Weber: the more official one is based on a note in Gauss's own handwriting stating that "Wissen vor meinen – Sein vor scheinen" ("knowing before opining, being before seeming") was the first message sent over the electromagnetic telegraph. The more anecdotal version told in Göttingen observatory is that the first message was sent to notify Weber that the observatory's servant was on the way to the institute of physics, and just read "Michelmann kömmt" ("Michelmann is on his way"), possibly as a test of who would arrive first.
The first commercial electrical telegraph was constructed by Sir William Fothergill Cooke and Sir Charles Wheatstone and entered use on the Great Western Railway in Britain. It ran from Paddington station to West Drayton and came into operation on 9 July 1839. It was patented in the United Kingdom in 1837. In 1843 the Scottish inventor Alexander Bain invented a device that could be considered the first facsimile machine. He called his invention a "recording telegraph". Bain's telegraph was able to transmit images by electrical wires. In 1855 an Italian abbot, Giovanni Caselli, also created an electric telegraph that could transmit images. Caselli called his invention the "Pantelegraph". The Pantelegraph was successfully tested and approved for a telegraph line between Paris and Lyon.
An electrical telegraph was independently developed and patented in the United States in 1837 by Samuel F. B. Morse. His assistant, Alfred Vail, developed the Morse code signaling alphabet with Morse. America's first telegram was sent by Morse on 6 January 1838, across two miles (3 km) of wire at Speedwell Ironworks near Morristown, New Jersey . The message read "A patient waiter is no loser." On 24 May 1844, he sent the message "What hath God wrought" (quoting Numbers 23:23) from the Old Supreme Court Chamber in the Capitol in Washington to the old Mt. Clare Depot in Baltimore . This message was chosen by Annie Ellsworth of Lafayette, Indiana, the daughter of Patent Commissioner Henry Leavitt Ellsworth. The Morse/Vail telegraph was quickly deployed in the following two decades; the overland telegraph connected the west coast of the continent to the east coast by 24 October 1861, bringing an end to the Pony Express.
The famous telegram sent by Samuel F. B. Morse from the Capitol in Washington to Alfred Vail in Baltimore in 1844: "What hath God wrought"
The first commercially successful transatlantic telegraph cable was successfully completed on 18 July 1866. Earlier transatlantic submarine cables installations were attempted in 1857, 1858 and 1865. The 1857 cable only operated intermittently for a few days or weeks before it failed. The study of underwater telegraph cables accelerated interest in mathematical analysis of very long transmission lines. The telegraph lines from Britain to India were connected in 1870 (those several companies combined to form the Eastern Telegraph Company in 1872).
Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin. This brought news reportage from the rest of the world.
Further advancements in telegraph technology occurred in the early 1870s, when Thomas Edison devised a full duplex two-way telegraph and then doubled its capacity with the invention of quadruplex telegraphy in 1874. Edison filed for a U.S. patent on the duplex telegraph on 1 September 1874 and received it on 9 August 1892.
Nikola Tesla and other scientists and inventors showed the usefulness of wireless telegraphy, radiotelegraphy, or radio, beginning in the 1890s. Alexander Stepanovich Popov demonstrated his wireless radio receiver, which was also used as a lightning detector, to the public on 7 May 1895. Before a group of reporters on a stormy August evening in 1895, he proudly demonstrated his wireless receiver. It was attached to a long 30-foot pole that he held aloft to maximize the signal. When asked by one of the reporters if it was a good idea to hold this metal rod in the middle of a storm, he replied that all was well. After being struck (and nearly killed) by a bolt of lightning, he proudly announced to the world that his invention also served as a 'lightning detector'.
Albert Turpain sent and received his first radio signal, using Morse code, in France over a distance of up to 25 meters in 1895.
Guglielmo Marconi sent and received his first radio signal in Italy over a distance of up to 6 kilometres in 1896. On 13 May 1897, Marconi, assisted by George Kemp, a Cardiff Post Office engineer, transmitted the first wireless signals over water to Lavernock (near Penarth in Wales) from Flat Holm. Having failed to interest the Italian government, the twenty-two-year-old inventor brought his telegraphy system to Britain and met William Preece, a Welshman, who was a major figure in the field and Chief Engineer of the General Post Office. A pair of masts was erected, one at Lavernock Point and one on Flat Holm. The receiving mast at Lavernock Point was a pole topped with a cylindrical cap of zinc connected to a detector with insulated copper wire. At Flat Holm the sending equipment included a Ruhmkorff coil with an eight-cell battery. The first trial on 11 and 12 May failed, but on the 13th the mast at Lavernock was extended and the signals, in Morse code, were received clearly. The message sent was "ARE YOU READY"; the Morse slip signed by Marconi and Kemp is now in the National Museum of Wales.
In 1898 Popov accomplished successful experiments of wireless communication between a naval base and a battleship.
In 1900 the crew of the Russian coast defense ship General-Admiral Graf Apraksin, as well as stranded Finnish fishermen, were saved in the Gulf of Finland thanks to an exchange of distress telegrams between two radio stations, located at Hogland island and inside a Russian naval base in Kotka. Both wireless telegraphy stations were built under Popov's instructions.
In 1901, Marconi radiotelegraphed the letter "S" across the Atlantic Ocean from his station in Poldhu, Cornwall to St. John's, Newfoundland .
A continuing goal in telegraphy has been to reduce the cost per message by reducing hand-work, or increasing the sending rate. There were many experiments with moving pointers, and various electrical encodings. However, most systems were too complicated and unreliable. A successful expedient to increase the sending rate was the development of telegraphese.
Other research focused on the multiplexing of telegraph connections. By passing several simultaneous connections through an existing copper wire, capacity could be upgraded without the laying of new cable, a process which remained very costly. Several technologies were developed like Frequency-division multiplexing. Long submarine communications cables became possible in segments with vacuum tube amplifiers between them.
With the invention of the teletypewriter, telegraphic encoding became fully automated. Early teletypewriters used the ITA-1 Baudot code, a five-bit code. This yielded only thirty-two codes, so it was over-defined into two "shifts," "letters" and "figures". An explicit, unshared shift code prefaced each set of letters and figures.
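As a toy illustration of that shift mechanism, the sketch below inserts an explicit shift marker whenever the character set changes. The marker strings and the "digits are figures, everything else is letters" rule are simplifications invented for the example, not the actual ITA-1/ITA-2 code tables.

# Toy illustration of the two-shift idea described above: a 5-bit alphabet is
# too small to hold letters and figures at once, so explicit shift characters
# select which set the following codes belong to.

LTRS, FIGS = "<LTRS>", "<FIGS>"

def encode_with_shifts(text):
    out, mode = [], None
    for ch in text.upper():
        wanted = FIGS if ch.isdigit() else LTRS
        if wanted != mode:          # emit a shift code only when the set changes
            out.append(wanted)
            mode = wanted
        out.append(ch)
    return out

print(encode_with_shifts("RYRY 1234 TEST"))
# ['<LTRS>', 'R', 'Y', 'R', 'Y', ' ', '<FIGS>', '1', '2', '3', '4', '<LTRS>', ' ', 'T', 'E', 'S', 'T']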
The airline industry remains one of the last users of teletype and in a few situations still sends messages over the SITA or AFTN networks. For example, the British Airways operations computer system (FICO) still used teletype to communicate with other airline computer systems. The same goes for PARS (Programmed Airline Reservation System) and IPARS, which used a similar shifted six-bit Teletype code because it requires only eight bits per character, saving bandwidth and money. A teletype message is often much smaller than the equivalent EDIFACT or XML message. In recent years, as airlines have had access to improved bandwidth in remote locations, IATA standard XML is replacing Teletype as well as EDI.
The first electrical telegraph developed a standard signaling system for telecommunications. The "mark" state was defined as the powered state of the wire. In this way, it was immediately apparent when the line itself failed. The moving pointer telegraphs started the pointer's motion with a "start bit" that pulled the line to the unpowered "space" state. In early Telex machines, the start bit triggered a wheeled commutator run by a motor with a precise speed (later, digital electronics). The commutator distributed the bits from the line to a series of relays that would "capture" the bits. A "stop bit" was then sent at the powered "mark state" to assure that the commutator would have time to stop, and be ready for the next character. The stop bit triggered the printing mechanism. Stop bits initially lasted 1.42 baud times (later extended to two as signaling rates increased), in order to give the mechanism time to finish and stop vibrating. Hence an ITA-2 Murray code symbol took 1 start, 5 data, and 1.42 stop (total 7.42) baud times to transmit.
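To make that timing concrete, here is a small back-of-the-envelope calculation of what a 7.42-unit character implies at the 45.45 baud Telex rate quoted below. The six-characters-per-word convention behind the words-per-minute figure is an assumption of the example, not something stated in the text.

# Rough throughput arithmetic for one ITA-2 character, using the figures above.
# Assumption: a "word" is counted as 6 characters, the usual convention behind
# words-per-minute ratings.

BAUD_RATE = 45.45                 # signalling intervals per second
START, DATA, STOP = 1, 5, 1.42    # early stop-bit length; later lengthened to 2

units_per_char = START + DATA + STOP            # 7.42 baud times
seconds_per_char = units_per_char / BAUD_RATE   # ~0.163 s
chars_per_second = BAUD_RATE / units_per_char   # ~6.1
words_per_minute = chars_per_second * 60 / 6    # ~61

print(f"{seconds_per_char * 1000:.0f} ms per character, "
      f"{chars_per_second:.1f} chars/s, about {words_per_minute:.0f} wpm")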
By 1935, message routing was the last great barrier to full automation. Large telegraphy providers began to develop systems that used telephone-like rotary dialing to connect teletypes. These machines were called "Telex". Telex machines first performed rotary-telephone-style pulse dialing for circuit switching, and then sent data by Baudot code. This "type A" Telex routing functionally automated message routing.
The first wide-coverage Telex network was implemented in Germany during the 1930s as a network used to communicate within the government.
At the rate of 45.45 (±0.5%) baud — considered speedy at the time — up to 25 telex channels could share a single long-distance telephone channel by using voice frequency telegraphy multiplexing, making telex the least expensive method of reliable long-distance communication.
Canada-wide automatic teleprinter exchange service was introduced by the CPR Telegraph Company and CN Telegraph in July 1957 (the two companies, operated by rival Canadian National Railway and Canadian Pacific Railway would join to form CNCP Telecommunications in 1967). This service supplemented the existing international Telex service that was put in place in November 1956. Canadian Telex customers could connect with nineteen European countries in addition to eighteen Latin American, African, and trans-Pacific countries. The major exchanges were located in Montreal (01), Toronto (02), Winnipeg (03).
In 1958, Western Union Telegraph Company started to build a Telex network in the United States. This Telex network started as a satellite exchange located in New York City and expanded to a nationwide network. Western Union chose Siemens & Halske AG, now Siemens AG, and ITT to supply the exchange equipment, provisioned the exchange trunks via the Western Union national microwave system and leased the exchange to customer site facilities from the local telephone company. Teleprinter equipment was originally provided by Siemens & Halske AG and later by Teletype Corporation. Initial direct International Telex service was offered by Western Union, via W.U. International, in the summer of 1960 with limited service to London and Paris.
In 1962, the major exchanges were located in New York City (1), Chicago (2), San Francisco (3), Kansas City (4) and Atlanta (5). The Telex network expanded by adding the final parent exchanges cities of Los Angeles (6), Dallas (7), Philadelphia (8) and Boston (9) starting in 1966.
The Telex numbering plan, usually a six-digit number in the United States, was based on the major exchange where the customer's Telex machine terminated. For example, all Telex customers that terminated in the New York City exchange were assigned a Telex number that started with a first digit "1". Further, all Chicago based customers had Telex numbers that started with a first digit of "2". This numbering plan was maintained by Western Union as the Telex exchanges proliferated to smaller cities in the United States. The Western Union Telex network was built on three levels of exchanges. The highest level was made up of the nine exchange cities previously mentioned. Each of these cities had the dual capability of terminating both Telex customer lines and setting up trunk connections to multiple distant Telex exchanges. The second level of exchanges, located in large cities such as Buffalo, Cleveland, Miami, Newark, Pittsburgh and Seattle, were similar to the highest level of exchanges in capability of terminating Telex customer lines and setting up trunk connections. However, these second level exchanges had a smaller customer line capacity and only had trunk circuits to regional cities. The third level of exchanges, located in small to medium sized cities, could terminate Telex customer lines and had a single trunk group running to its parent exchange.
Loop signaling was offered in two different configurations for Western Union Telex in the United States. The first option, sometimes called local or loop service, provided a 60 milliampere loop circuit from the exchange to the customer teleprinter. The second option, sometimes called long distance or polar, was used when a 60 milliampere connection could not be achieved, and provided a ground-return polar circuit using 35 milliamperes on separate send and receive wires. By the 1970s, and under pressure from the Bell operating companies wanting to modernize their cable plant and lower the adjacent-circuit noise that these Telex circuits sometimes caused, Western Union migrated customers to a third option called F1F2. This F1F2 option replaced the DC voltage of the local and long distance options with modems at the exchange and subscriber ends of the Telex circuit.
In 1970, Cuba and Pakistan were still running 45.5 baud type A Telex. Telex is still widely used in some developing countries' bureaucracies, probably because of its reliability and low cost. The UN asserted at one time that more political entities were reliably available by Telex than by any other single method.
Around 1960[?], some nations began to use the "figures" Baudot codes to perform "Type B" Telex routing.
Telex grew around the world very rapidly. Long before automatic telephony was available, most countries, even in central Africa and Asia, had at least a few high-frequency (shortwave) Telex links. Often these radio links were first established by government postal and telegraph services (PTTs). The most common radio standard, CCITT R.44 had error-corrected retransmitting time-division multiplexing of radio channels. Most impoverished PTTs operated their Telex-on-radio (TOR) channels non-stop, to get the maximum value from them.
The cost of TOR equipment has continued to fall. Although initially specialised equipment was required, many amateur radio operators now operate TOR (also known as RTTY) with special software and inexpensive hardware to adapt computer sound cards to short-wave radios.
Modern "cablegrams" or "telegrams" actually operate over dedicated Telex networks, using TOR whenever required.
Telex messages are routed by addressing them to a Telex address, e.g. "14910 ERIC S", where 14910 is the subscriber number, ERIC is an abbreviation for the subscriber's name (in this case Telefonaktiebolaget L.M. Ericsson in Sweden) and S is the country code. Solutions also exist for the automatic routing of messages to different Telex terminals within a subscriber organization, by using different terminal identities, e.g. "+T148".
A major advantage of Telex is that the receipt of the message by the recipient could be confirmed with a high degree of certainty by the "answerback". At the beginning of the message, the sender would transmit a WRU (Who aRe yoU) code, and the recipient machine would automatically initiate a response which was usually encoded in a rotating drum with pegs, much like a music box. The position of the pegs sent an unambiguous identifying code to the sender, so the sender could verify connection to the correct recipient. The WRU code would also be sent at the end of the message, so a correct response would confirm that the connection had remained unbroken during the message transmission. This gave Telex a major advantage over less verifiable forms of communications such as telephone and fax.
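A minimal sketch of that answerback exchange might look like the following. The WRU marker, the class names and the automatic reply are illustrative placeholders (the "14910 ERIC S" address is simply reused from the example above); real machines encoded the answerback mechanically on a pegged drum.

# Minimal sketch of the answerback verification described above.

WRU = "<WRU>"   # stand-in for the "who are you" control code

class TelexMachine:
    def __init__(self, answerback):
        self.answerback = answerback          # fixed identifying string

    def receive(self, code):
        # The receiving machine replies automatically when it sees WRU.
        return self.answerback if code == WRU else ""

def send_with_verification(expected_id, recipient, text):
    if recipient.receive(WRU) != expected_id:          # check before sending
        return "wrong party - aborting"
    delivered = text                                   # ... transmit the message ...
    if recipient.receive(WRU) != expected_id:          # check again afterwards
        return "connection lost during transmission"
    return f"delivered and confirmed: {delivered}"

machine = TelexMachine("14910 ERIC S")
print(send_with_verification("14910 ERIC S", machine, "ORDER CONFIRMED"))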
One use of Telex circuits, in use until the widescale adoption of X.400 and Internet email, was to facilitate a message handling system, allowing local email systems to exchange messages with other email and Telex systems via a central routing operation, or switch. One of the largest such switches was operated by Royal Dutch Shell as recently as 1994, permitting the exchange of messages between a number of IBM Officevision, Digital Equipment Corporation All-In-One and Microsoft Mail systems. In addition to permitting email to be sent to Telex addresses, formal coding conventions adopted in the composition of Telex messages enabled automatic routing of Telexes to email recipients.
The Teletypewriter eXchange (TWX) was developed by the Bell System in the United States and originally ran at 45.45 baud or 60 words per minute, using five level Baudot code. Bell later developed a second generation of TWX called "four row" that ran at 110 baud, using eight level ASCII code. The Bell System offered both "3-row" Baudot and "4-row" ASCII TWX service up to the late 1970s.
TWX used the public switched telephone network. In addition to having separate Area Codes (510, 610, 710 and 810) for the TWX service, the TWX lines were also set up with a special Class of Service to prevent connections to and from POTS to TWX and vice versa.
The code/speed conversion between "3-row" Baudot and "4-row" ASCII TWX service was accomplished using a special Bell "10A/B board" via a live operator. A TWX customer would place a call to the 10A/B board operator for Baudot - ASCII calls, ASCII - Baudot calls and also TWX Conference calls. The code / speed conversion was done by a Western Electric unit that provided this capability. There were multiple code / speed conversion units at each operator position.
Western Union purchased the TWX system from AT&T in January 1969. The TWX system and the special area codes (510, 610, 710 and 810) continued right up to 1981 when Western Union completed the conversion to the Western Union Telex II system. Any remaining "3-row" Baudot customers were converted to Western Union Telex service during the period 1979 to 1981.
The modem for this service was the Bell 101 dataset, which is the direct ancestor of the Bell 103 modem that launched computer time-sharing. The 101 was revolutionary, because it ran on ordinary telephone subscriber lines, allowing the Bell System to run TWX along with POTS on a single public switched telephone network.
Bell's original consent agreement limited it to international dial telephony. The Western Union Telegraph Company had given up its international telegraphic operation in a 1939 bid to monopolize U.S. telegraphy by taking over ITT's PTT business. The result was a de-emphasis on Telex in the U.S. and a "cat's cradle" of international Telex and telegraphy companies. The Federal Communications Commission referred to these companies as "International Record Carriers" (IRCs).
Western Union Telegraph Company developed a subsidiary named Western Union Cable System. This company later was renamed as Western Union International (WUI) when it was spun-off by Western Union as an independent company. WUI was purchased by MCI Communications in 1983 and operated as a subsidiary of MCI International.
ITT's "World Communications" division (later known as ITT World Communications) was amalgamated from many smaller companies: "Federal Telegraph", "All American Cables and Radio", "Globe Wireless", and the common carrier division of Mackay Marine. ITT World Communications was purchased by Western Union in 1987.
RCA Communications (later known as RCA Global Communications) had specialized in global radiotelegraphic connections. In 1986 it was purchased by MCI International.
Before World War I, the Tropical Radiotelegraph Company (later known as Tropical Radio Telecommunications, or TRT) put radio telegraphs on ships for its owner, the United Fruit Company , to enable them to deliver bananas to the best-paying markets. Communications expanded to UFC's plantations, and were eventually provided to local governments. TRT eventually became the national carrier for many small Central American nations.
The French Telegraph Cable Company (later known as FTC Communications, or just FTCC), which was owned by French investors, had always been in the U.S. It laid undersea cable from the U.S. to France. It was formed by Monsieur Puyer-Quartier. International telegrams routed via FTCC were routed using the telegraphic routing ID "PQ", which are the initials of the founder of the company.
Firestone Rubber developed its own IRC, the "Trans-Liberia Radiotelegraph Company". It operated shortwave from Akron, Ohio to the rubber plantations in Liberia . TL is still based in Akron.
Around 1965, DARPA commissioned a study of decentralized switching systems. Some of the ideas developed in this study provided inspiration for the development of the ARPANET packet switching research network, which later grew to become the public Internet.
As the PSTN became a digital network, T-carrier "synchronous" networks became commonplace in the U.S. A T1 line has a "frame" of 193 bits that repeats 8000 times per second. The first bit, called the "sync" bit, alternates between 1 and 0 to identify the start of the frames. The rest of the frame provides 8 bits for each of 24 separate voice or data channels. Customarily, a T-1 link is sent over a balanced twisted pair, isolated with transformers to prevent current flow. Europeans adopted a similar system (E-1) of 32 channels (with one channel for frame synchronisation).
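The quoted figures can be checked with a few lines of arithmetic; the sketch below simply multiplies out the frame structure described above.

# Back-of-the-envelope check of the T1 figures quoted above.

FRAME_BITS = 193            # 1 sync bit + 24 channels x 8 bits
FRAMES_PER_SECOND = 8000
CHANNELS, BITS_PER_SAMPLE = 24, 8

payload_bits = CHANNELS * BITS_PER_SAMPLE               # 192 bits per frame
line_rate = FRAME_BITS * FRAMES_PER_SECOND              # 1,544,000 bit/s
per_channel_rate = BITS_PER_SAMPLE * FRAMES_PER_SECOND  # 64,000 bit/s

print(f"payload {payload_bits} + 1 sync = {FRAME_BITS} bits per frame")
print(f"line rate {line_rate / 1e6:.3f} Mbit/s, {per_channel_rate // 1000} kbit/s per channel")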
Later, SONET and SDH were adapted to combine carrier channels into groups that could be sent over optic fiber. The capacity of an optic fiber is often extended with wavelength division multiplexing, rather than rerigging new fibre. Rigging several fibres in the same structures as the first fibre is usually easy and inexpensive, and many fibre installations include unused spare "dark fibre", "dark wavelengths", and unused parts of the SONET frame, so-called "virtual channels."
In 2002, the Internet was used by Kevin Warwick at the University of Reading to communicate neural signals, in purely electronic form, telegraphically between the nervous systems of two humans, potentially opening up a new form of communication combining the Internet and telegraphy.
The fastest well-defined communication channel used for telegraphy is the SONET standard OC-768, which sends about 40 gigabits per second.
The theoretical maximum capacity of an optic fiber is more than 10^12 bits (one terabit, or one trillion bits) per second. In 2006, no existing encoding system approached this theoretical limit, even with wavelength division multiplexing.
Since the Internet operates over any digital transmission medium, further evolution of telegraphic technology will be effectively concealed from users.
As of 2007, the Internet carried the majority of telegraphic messages in the form of e-mail .
E-mail was first invented for Multics in the late 1960s. At first, e-mail was possible only between different accounts on the same computer (typically a mainframe). UUCP allowed different computers to be connected to allow e-mails to be relayed from computer to computer. With the growth of the Internet, e-mail began to be possible between any two computers with access to the Internet.
Various private networks like UUNET (founded 1987), the Well (1985), and GEnie (1985) had e-mail from the 1970s, but subscriptions were quite expensive for an individual, US$25 to US$50 per month, just for e-mail. Internet use was then largely limited to government, academia and other government contractors until the net was opened to commercial use in the 1980s.
By the early 1990s, modems made e-mail a viable alternative to Telex systems in a business environment. But individual e-mail accounts were not widely available until local Internet service providers were in place, although demand grew rapidly, as e-mail was seen as the Internet's killer app. The broad user base created by the demand for e-mail smoothed the way for the rapid acceptance of the World Wide Web in the mid-1990s.
On Monday, 12 July 1999, a final telegram was sent from the National Liberty Ship Memorial, the SS Jeremiah O'Brien, in San Francisco Bay to President Bill Clinton in the White House. Officials of Globe Wireless reported that "The message was 95 words, and it took six or eight minutes to copy it." They then transmitted the message to the White House via e-mail. That event was also used to mark the final commercial U.S. ship-to-shore telegraph message transmitted from North America by Globe Wireless, a company founded in 1911. Sent from its wireless station at Half Moon Bay, California , the sign-off message was a repeat of Samuel F. B. Morse's message 155 years earlier, "What hath God wrought?"
Eircom, Ireland's largest telecommunication company and former PTT, formally discontinued Telex services on 30 July 2002. Western Union announced the discontinuation of all of its telegram services effective from 31 January 2006. Only 20,000 telegrams were sent in 2005, compared with 20 million in 1929. According to Western Union, which still offers money transfer services, its last telegram was sent Friday, 27 January 2006. The company stated that this was its "final transition from a communications company to a financial services company." Telegram service in the United States and Canada is still available, operated by iTelegram and Globegram. Some companies, like Swedish TeliaSonera, still deliver telegrams as nostalgic novelty items, rather than a primary means of communication.
In the Netherlands, the telegram service was sold by KPN to Unitel Telegram Services in 2001. On 9 February 2007, according to the online edition of the Telegraaf newspaper, the Netherlands national telecommunications company KPN pulled the plug on the last Telex machine in the Netherlands after having operated a Telex network since 1933. As their Telex service had only 200 remaining customers, it was decided that it was no longer worthwhile to continue to offer Telex within the Netherlands. It is, however, still possible to send Telex messages to foreign customers through the Internet. In Belgium, traditional telex operations ceased 28 February 2007. The Belgacom Telex services were replaced by RealTelex, an internet-based Telex alternative.
In Japan, NTT provides a telegram (denpou) service that is today used mainly for special occasions such as weddings, funerals, graduations, etc. Local offices offer telegrams printed on special decorated paper and envelopes.
In New Zealand , while general public use telegrams have been discontinued, a modern variant has arisen for businesses, mainly utilities and the like, to send urgent confidential messages to their customers, leveraging off the perception that these are important messages. New Zealand Post describes the service as "a cost effective debt collection tool designed to help you to recover overdue money from your customers. New Zealand Post Telegrams are delivered by a courier in a Telegram branded envelope on Telegram branded paper. This has proven to be an effective method to spur customers into immediate action".
In the United Kingdom , the international telegram service formerly provided by British Telecom has been spun off as an independent company which promotes the use of telegrams as a retro greeting card or invitation.
In Australia, Australia Post's TELeGRAM service "combines new age demands with old world charm to offer you a quick, convenient way to send a message that matters." Messages can be submitted online or by telephone, and can be printed on a range of template designs. The printed telegrams are dispatched using Express Post Mail Service or the Ordinary Mail Service. Orders received before 15:00 are dispatched on the same day. The cost of the service, being AUD4.50 for Ordinary and AUD8.50 for Express Post Mail Services in comparison with AUD0.55 for an Australia-wide postage fee, makes this service too expensive for day-to-day communication.
In Mexico , the telegram is still used as a low-cost communication service for people who cannot afford or do not have the computer skills required to send an e-mail.
In Nepal , the Telex service has been discontinued as of January 1, 2009. Nepal Telecom states the reason for its decision due to "availability of advanced technology in data communication."
In Bahrain, Batelco still offers telegram services. They are thought to be more formal than an email or a fax, but less so than a letter. So should a death or anything of importance occur, telegrams would be sent.
In Switzerland, UTS took over telegram services from the national PTTs. Telegrams could still be sent to and from most countries, also to those which are mentioned in this article.
Prior to the electrical telegraph, all but very small amounts of information could be moved only a few miles per hour, as fast as a human or animal could travel. The telegraph freed communication from the constraints of geography. It isolated the message (information) from the physical movement of objects or the process.
Telegraphy allowed organizations to actively control physical processes at a distance (for example: railroad signaling and switching of rolling stock), multiplying the effectiveness and functions of communication. "... Once space was, in the phrase of the day, annihilated, once everyone was in the same place for the purposes of trade, time as a new region of experience, uncertainty, speculation, and exploration was opened up to the forces of commerce."
Worldwide telegraphy changed the gathering of information for news reporting. Since the same messages and information would now travel far and wide, the telegraph demanded a language "stripped of the local, the regional; and colloquial". Media language had to be standardized, which led to the gradual disappearance of different forms of speech and styles of journalism and storytelling. It is believed that objective journalism finds its roots in the communicative strictures of the telegraph.
The word "Telegraph" still appears in the names of numerous periodicals in various countries, a remnant of the long period when Telegraphy was a major means for newspapers to obtain news information (see Telegraph ).
Entores Ltd v Miles Far East Corporation is a landmark English Court of Appeal decision in contract law on the moment of acceptance of a contract over telex.
Roswell, New Mexico was named after Annie Ellsworth's future husband, Roswell Smith.
Briggs, Asa and Burke, Peter: "A Social History of the Media: From Gutenberg to the Internet," p110. Polity, Cambridge, 2005.
Briggs, Asa and Burke, Peter: "A Social History of the Media: From Gutenberg to the Internet," p117. Polity, Cambridge, 2005.
Wark, McKenzie (1997) "The Virtual Republic," Allen & Unwin, St. Leonards.
Anglo-American Telegraph Company, Ltd. Records, 1866 – 1947 Archives Center, National Museum of American History, Smithsonian Institution.
|
0.731817 |
LaTeX (pronounced /ˈleɪtɛk/ or /ˈleɪtɛx/) is a document markup language and document preparation system for the TeX typesetting program. Within the typesetting system, its name is styled as the LaTeX logo, with the "A" raised and the "E" lowered.
LaTeX is most widely used by mathematicians, scientists, engineers, philosophers, scholars in academia and the commercial world, and other professionals (see "What are TeX, LaTeX and friends?", http://www.ctan.org/what_is_tex.html). As a primary or intermediate format (e.g. translating DocBook and other XML-based formats to PDF), LaTeX is used because of the high quality of typesetting achievable by TeX. The typesetting system offers programmable desktop publishing features and extensive facilities for automating most aspects of typesetting and desktop publishing, including numbering and cross-referencing, tables and figures, page layout and bibliographies.
LaTeX is intended to provide a high-level language that accesses the power of TeX. LaTeX essentially comprises a collection of TeX macros and a program to process LaTeX documents. Because the TeX formatting commands are very low-level, it is usually much simpler for end-users to use LaTeX.
LaTeX was originally written in the early 1980s by Leslie Lamport.
The term LaTeX refers only to the language in which documents are written, not to the text editor itself. In order to create a document in LaTeX, a .tex file must be created using some form of text editor. While many text editors work, many people prefer to use one of several editors designed specifically for working with LaTeX.
Distributed under the terms of the LaTeX Project Public License (LPPL), LaTeX is free software.
LaTeX is based on the idea that authors should be able to focus on the meaning of what they are writing without being distracted by the visual presentation of the information. In preparing a LaTeX document, the author specifies the logical structure using familiar concepts such as "chapter", "section", "table", "figure", etc., and lets the LaTeX system worry about the presentation of these structures. It therefore encourages the separation of layout from content while still allowing manual typesetting adjustments where needed. This is similar to the mechanism by which many word processors allow styles to be defined globally for an entire document or the CSS mechanism used by HTML.
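A minimal sketch of what this separation looks like in practice is shown below; the title, section names and table contents are placeholder text invented for the example, not taken from any particular document.

\documentclass{article}

\title{A Minimal Example}   % placeholder metadata
\author{An Author}

\begin{document}
\maketitle

\section{Introduction}      % logical structure; LaTeX chooses the presentation
Some body text, with an automatic cross-reference to Table~\ref{tab:example}.

\begin{table}[h]
  \centering
  \begin{tabular}{ll}
    Item & Value \\
    A    & 1     \\
  \end{tabular}
  \caption{An example table.}
  \label{tab:example}
\end{table}

\end{document}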
LaTeX can be arbitrarily extended by using the underlying macro language to develop custom formats. Such macros are often collected into "packages," which are available to address special formatting issues such as complicated mathematical content or graphics. For example, the eqnarray environment is deprecated by the amsmath package (see Lars Madsen, "Avoid eqnarray!?", http://www.tug.org/pracjourn/2006-4/madsen/), which provides the typographically better align environment for the same purpose.
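For instance, a two-line derivation that might once have been written with eqnarray is nowadays typically set with the align environment; the equations below are generic placeholders chosen only to show the markup.

% in the preamble
\usepackage{amsmath}

% in the document body
\begin{align}
  (a+b)^2 &= a^2 + 2ab + b^2 \\
  (a-b)^2 &= a^2 - 2ab + b^2
\end{align}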
LaTeX is usually pronounced /ˈleɪtɛk/ or /ˈlɑːtɛk/ in English (that is, not with the /ks/ pronunciation English speakers normally associate with "X", but with a /k/). The last character in the name comes from a capital Χ (chi), as the name of TeX derives from the Greek τέχνη (skill, art, technique); for this reason, TeX's creator Donald Knuth promotes a /tɛx/ pronunciation (that is, with a voiceless velar fricative as in Modern Greek, or the last sound of the German word "Bach", similar to the Spanish "j" sound). Lamport, on the other hand, has said he does not favor or discourage any pronunciation for LaTeX.
As a macro package, LaTeX provides a set of macros for TeX to interpret. There are many other macro packages for TeX, including Plain TeX, GNU Texinfo, AMSTeX, and ConTeXt.
When TeX "compiles" a document, the processing loop (from the user's point of view) goes like this: Macros > TeX > Driver > Output. Different implementations of each of these steps are typically available in TeX distributions. Traditional TeX will output a DVI file, which is usually converted to a PostScript file. More recently, Hàn Thế Thành and others have written a new implementation of TeX called pdfTeX, which also outputs to PDF and takes advantages of features available in that format. The XeTeX engine developed by Jonathan Kew merges modern font technologies and Unicode with TeX.
The default font for LaTeX is Knuth's Computer Modern, which gives default documents created with LaTeX the same distinctive look as those created with plain TeX.
There are numerous commercial implementations of the entire TeX system. System vendors may add extra features like additional typefaces and telephone support. LyX is a free visual document processor that uses LaTeX for a back-end. TeXmacs is a free, WYSIWYG editor with similar functionalities as LaTeX but a different typesetting engine. Other WYSIWYG editors that produce LaTeX include Scientific Word on MS Windows.
A number of TeX distributions are available, including TeX Live (multiplatform), teTeX (deprecated in favour of TeX Live, Unix), fpTeX (deprecated), MiKTeX (Windows), MacTeX, gwTeX (Mac OS X), OzTeX (Mac OS Classic), AmigaTeX (no longer available) and PasTeX (AmigaOS) available on the Aminet repository.
|
0.997464 |
Why would Magic Leap, a company preparing to launch its first augmented reality headset this year, need a developer for iPhone and iPad apps? It's not as crazy as it sounds.
The company has published a job listing for a senior iOS application developer to "help create the future of Mixed Reality computing."
On the surface, it's an odd request, particularly since the company's Lumin OS is derived from Linux and the Android Open Source Project.
However, developer documentation on Magic Leap's Creator Portal brings some clarity to how the role is relevant to the forthcoming Magic Leap One.
In the "Best Practices in Game Design" article (registration required), the company describes how multi-user experiences can involve smartphones, tablets, desktop computers, game consoles, and even televisions.
"As the 'Internet of Things' extends to more and more devices, spatial computing can become more a part of everyone's lives," the company states on its website.
The article continues to describe how multi-user experiences can involve players taking on varying roles in their pursuit of the game's objective. For example, players wearing the Magic Leap One could participate in an augmented reality gaming experience, while friends might serve as a "dungeon master" via a laptop or a mobile device to direct the action or story.
Multiplayer experiences appear to be the next big thing for augmented reality. Unity has made strides to facilitate cross-platform experiences with its ARInterface. Likewise, a number of AR cloud companies and platforms have emerged to enable developers to build multi-player apps.
It's a refreshing approach in a world where gamers on PlayStation and Xbox platforms are rarely able to enjoy multi-player games across platforms and iOS and Android users often find themselves siloed among their own kind.
|
0.961613 |
You ask where I've been this afternoon? I started craving pasta as I was posting them earlier today so I decided to go make some. This will be my dinner tonight with a nice salad and a loaf of crusty Italian bread. Keep in mind, you can buy frozen ravioli and make this recipe, but I felt adventurous today and made my own. Once you realize how easy it is to make your own ravioli, you will keep on making them. This one's a keeper my friends!
Place the flour in a mound on a smooth work area and make a well in the center. Pour the 2 beaten eggs and the two egg yolks in the middle of the well. Then slowly bring the flour into your eggs until every thing is mixed well. Next knead the dough by hand, and add more flour if you need in order to get a smooth consistency. Cut the dough in half and roll out each half very thin. Continue to roll out as many sheets as you can get from your dough.
Lay out the first piece of pasta dough on the table and place 1/4-ounce mounds of stuffing 2 inches apart. Using a pastry brush, brush egg white around each bit of stuffing, making the dough damp but not wet. Take the second piece of dough and lay on top of stuffing. Press around each ravioli being careful not to squeeze the stuffing out. Using a round ravioli cutter with jagged edges, cut each ravioli round and lay on a sheet of parchment paper until ready to boil. Gently place ravioli in boiling water and cook for 8 to 10 minutes.
In a large sauté pan, add the butter and melt. Add the garlic and shallots and sauté until golden brown. Add lobster meat, breadcrumbs, and chives and sauté 2 to 3 minutes. Remove from heat and cool for 30 minutes or until room temperature. Follow directions above to fill and cook your ravioli.
Melt the butter in a large saucepan over medium heat. Add the shallots and garlic and sauté for three minutes until soft. Add the wine or chicken broth and heavy cream. Cook for 12 minutes over medium-low heat. Add the salt and parsley and stir. Add the cooked ravioli to the sauce and let cook together for about 1 minute before serving.
|
0.99408 |
Trading and Investment Ideas for Indian Equities: Zee Entertainment : Reason for Backwardation ?
Zee Entertainment : Reason for Backwardation ?
Zee Entertainment is trading in Normal backwardation for Jan, 2014.
It implies a backwardation of almost Rs 6.00. Why?
This backwardation in the stock led me to think about its possible reasons.
The reason for this discounting was the expectation of bonus Redeemable Preference Shares (RPS) (here). The company had already declared the RPS in May, and the market is expecting a go-ahead from the court on 20th Dec, 2013, the date on which its case for the issue of RPS is scheduled in the Mumbai High Court.
It seems logical that, if the court accepts the issue (in any case it will accept; it is only a matter of time), the company could very well fix a record date in Jan, 2014 and distribute the RPS. But the Mumbai High Court has listed the matter (case no. CSPL/695/2013) as "Listed for final hearing". Looking at past orders in other such cases in different courts, including Mumbai, I feel that the court may hold the final hearing but give a new date for the final order (I have to admit, I am not a lawyer and have no legal background; I am saying this based on my experience of analyzing mergers, acquisitions and takeovers of different companies, which also involve regulatory processes). The new date could be after a few days or weeks, which may change the calculation of the ex-date and hence the discounting.
I also have a different view on the adjustment of this corporate action on RPS by the NSE in its derivatives segment (here). It is perceived that the face value of the RPS is Rs 21 (Rs 1 x 21 RPS), which is less than 10% of the stock price of Rs 241 (22 May, 2013). But I beg to differ on whether these RPS can be considered as a dividend and the benefit passed on in the derivatives segment, because though the face value of the RPS would be Rs 21 (Rs 1 x 21 RPS), its market value may be different because of differing expectations of the yield and duration of the RPS.
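As a rough illustration of that point, the sketch below prices an RPS as the present value of its coupons plus redemption. The Rs 21 face value is from the post, while the coupon rate, market yield and tenor are purely hypothetical numbers chosen for the example.

# Illustrative only: present value of a redeemable preference share (RPS).
# Face value of Rs 21 (21 shares of Re 1 each) is from the post; the coupon
# rate, market yield and tenor below are hypothetical assumptions.

def rps_value(face, coupon_rate, market_yield, years):
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_yield) ** t for t in range(1, years + 1))
    pv_redemption = face / (1 + market_yield) ** years
    return pv_coupons + pv_redemption

print(round(rps_value(face=21.0, coupon_rate=0.06, market_yield=0.09, years=5), 2))
# ~18.5: if the market demands a higher yield than the coupon pays, the RPS is
# worth less than its Rs 21 face value, so the futures discount need not be Rs 21 either.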
The discount in Zee Entertainment is because of this perception, and I have discussed the pros and cons of such a perception.
PS: I intend to take a position in the derivatives segment of this stock.
|
0.992931 |
A recent study has reported that the abundance of several proteins found in the blood could accurately predict lifestyle factors that affect biological aging.
Aging is influenced by various factors such as genetics and lifestyle factors. In a new study published in Scientific Reports, a research team has generated a mathematical model that could predict the chronological age of an individual based on the level of 77 proteins found in the blood plasma. The study showed that factors such as high body mass index (BMI), smoking, and soda consumption affect the abundance of these plasma proteins and increase the predicted age of a person. The researchers have previously identified the plasma proteins in a study to discover biological markers for cancer and cardiovascular disease. Some of the proteins are associated with inflammation and immune function.
The study measured the concentration of the proteins in the blood plasma of a cohort of 976 individuals aged 14 to 94 years. The participants, an equal number of males and females living in Sweden, were also surveyed about their lifestyle habits. The scientists developed a mathematical model to predict the chronological age of the participants based on their plasma protein profiles.
The model correctly predicted the chronological age of the participants to within 5 years. Lifestyle factors such as high BMI, smoking, and soda consumption increased the predicted age by up to 6 years. Among the participants, soda intake was correlated with higher consumption of foods such as pizza, fries, sweets, and white bread. Participants who consumed fatty fish at least 3 times a week, drank 3 to 6 cups of coffee per day or performed moderate to vigorous daily exercise were predicted to be 4 to 6 years younger than their actual chronological age.
Although the plasma proteins measured in the study may not function directly in the aging process, the abundance of these proteins reflects the biological changes that occur during aging. The plasma protein profiles could be used to help individuals adapt lifestyle changes that could reduce their risk of developing diseases and increase their life expectancy.
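The study's actual model and protein panel are not reproduced here; purely to illustrate the general approach of regressing chronological age on a panel of plasma-protein levels, a sketch with synthetic data might look like this.

# Illustrative sketch only: synthetic data standing in for the 77 plasma
# proteins and 976 participants mentioned above; not the study's model.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n_people, n_proteins = 976, 77

X = rng.normal(size=(n_people, n_proteins))                          # fake protein levels
weights = rng.normal(size=n_proteins)
age = 50 + 2 * (X @ weights) + rng.normal(scale=5, size=n_people)    # fake ages

X_train, X_test, y_train, y_test = train_test_split(X, age, random_state=0)
model = LassoCV(cv=5, random_state=0).fit(X_train, y_train)
error = mean_absolute_error(y_test, model.predict(X_test))
print(f"mean absolute error on held-out people: {error:.1f} years")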
|
0.999707 |
How do you prevent a sewer line back-up?
Functional sewer lines are a cooperative effort between homeowners, and the towns or cities they live in. Municipalities are responsible for installation, maintenance and repairs on the main sanitary sewage line. Homeowners are responsible for maintaining the service line, that part of the pipe that extends from their property into the main line.
Sewer lines back up for two reasons. Either the pipes are not wide enough to handle the volume of water flowing through the line, or an obstruction of some kind is interfering with the line’s capacity to handle the volume of water flowing through it.
The former situation sometimes arises after a particularly heavy rain. It will lead to multiple backups throughout the area, and residents may see water overflowing through their floor drains.
The latter situation can be caused by a number of different things. One of the most common is a tree root. Tree roots won’t grow into a sewer line that’s intact, but they will seek out the moisture they can siphon off from a cracked line. Root intrusion is a particularly common cause of service line obstruction in houses that were built before 1980. Snaking and cleaning lines regularly as a routine maintenance precaution will minimize the root intrusion problems.
Structural defects can also cause sewer line blockages. These defects include things like the collapse of pipes, the corrosion of pipes and offset joints, or cracks and sags in the lines. If these types of defects occur, resulting flooding can be serious and will frequently require an overhaul and reconstruction of existing sewer lines or service lines. Vandalism can also cause sewage backups when people throw bricks, rocks or other items into drains or manholes.
• Grease, dairy items and food scraps: Warm grease congeals as it cools. Although it may pass through vertical pipes, it will become a solid mass and obstruct any horizontal trap or pipe. This is why all establishments that cook and serve food are required by law to have grease traps.
• Garbage including hair, cigarette butts, cat litter, aquarium gravel and other household refuse.
• Paper products such as disposable diapers, feminine hygiene products, paper towels and sanitary wipes. Paper expands when it gets wet. Since it’s constituted from cellulose, paper can take a long time to break down.
Commercial drain cleaners are not effective solutions in the long run for clogged drains. They can damage pipes and harm the environment. The easiest and most inexpensive way to ensure your pipes and sewer service lines are draining the way they should be is to make sure that nothing goes down them that shouldn’t be going down them and to snake them once a year.
|
0.974409 |
Image recognition systems require large image data sets for the training process. The annotation of such data sets through users requires a lot of time and effort, and thereby presents the bottleneck in the development of recognition systems. In order to simplify the creation of image recognition systems it is necessary to develop interaction concepts for optimizing the usability of labeling systems. Semi-automatic approaches are capable of solving the labeling task by clustering the image data unsupervised and presenting this ordered set to a user for manual labeling. A labeling interface based on self-organizing maps (SOM) was developed and its usability was investigated in an extensive user study with 24 participants. The evaluation showed that SOM-based visualizations are suitable for speeding up the labeling process and simplifying the task for users. Based on the results of the user study, further concepts were developed to improve the usability.
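As a rough sketch of the underlying idea (not the system evaluated in the study), a small self-organizing map can be trained on image feature vectors so that similar images fall on the same map cell and can be labeled as a group. The example below uses the third-party MiniSom package and random placeholder features instead of real image descriptors.

# Rough sketch of SOM-based grouping for labeling; random vectors stand in
# for real image descriptors, and MiniSom is a third-party package.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(42)
features = rng.random((500, 64))     # e.g. 500 images, 64-dimensional descriptors

som = MiniSom(8, 8, input_len=64, sigma=1.0, learning_rate=0.5, random_seed=42)
som.train_random(features, num_iteration=1000)

# Group image indices by their best-matching map cell; a user can then assign
# one label per cell (cluster) instead of labeling images one by one.
cells = {}
for idx, vec in enumerate(features):
    cells.setdefault(som.winner(vec), []).append(idx)

biggest_cell, members = max(cells.items(), key=lambda kv: len(kv[1]))
print(f"map cell {biggest_cell} holds {len(members)} images to label together")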
|
0.962574 |
Although the film was largely silent, it featured a large number of Vitaphone shorts at the beginning; people liked it! Studios also assumed that an audience would not sit through a film any longer than a short film. She was 20 years old and already had 16 years of screen appearances behind her. Nobody could say The Jazz Singer was not a huge success, but Hollywood did not feel satisfied with the big accomplishment that The Jazz Singer had made; they wanted to be better and more dictatorial, so they kept developing their movies in this spirit.
After some time away from the limelight, Leigh regrouped to star in A Streetcar Named Desire in 1949 and won her second Best Actress Oscar for the most highly lauded performance of her career. She studied history and philosophy, and it was here that she met and later married Ludlow Ogden Smith; Katharine's interest in acting developed in college through participating in plays. It deserved the name "evolution". The most popular were horror and comedy films. It was amazing news for Hollywood, which encouraged them to produce even more "dirty" movies; but in contrast, many parents, educators, civic organizations and religious groups pushed continuously for censorship, a body to examine the movies. However, it wasn't long before Leigh's struggles with bipolar disorder began to show, and after suffering a miscarriage in 1944 she began to abuse alcohol and received shock therapy in an attempt to regain control. So there was nothing more that Hays could consider about this after publishing the list; it was only his own thought.
|
0.539935 |
"FOX 9" redirects here. For the Boise, Idaho station also known as Fox 9, see KNIN-TV. For the Yuma, Arizona/El Centro, California station also known as Fox 9, see KECY-TV.
For other stations formerly branded as UPN 9, see UPN 9.
Not to be confused with KMPS (AM).
KMSP-TV, virtual and VHF digital channel 9, is a Fox owned-and-operated television station licensed to Minneapolis, Minnesota, United States. KMSP-TV is owned by the Fox Television Stations subsidiary of Fox Corporation, as part of a television duopoly with WFTC, the Minneapolis–Saint Paul area's MyNetworkTV owned-and-operated station. The two outlets share studios on Viking Drive in Eden Prairie, and a transmission tower in Shoreview.
KMSP-TV is also carried in Canada on Shaw Cable's Thunder Bay, Ontario system and on Bell MTS Fibe TV in the province of Manitoba.
The Family Broadcasting Corporation in Minneapolis, owner of radio station KEYD (1440 AM, now KYCR), filed an application with the FCC for a construction permit for a new commercial television station to be operated on Channel 9 on November 24, 1953. WLOL and WDGY (now KTLK) also expressed interest, but withdrew their applications in 1954, assuring that the new station would go to KEYD and its owner, Family Broadcasting. KEYD-TV began broadcasting on January 9, 1955 and was affiliated with the DuMont Television Network. During this time, Harry Reasoner, a graduate of Minneapolis West High School and the University of Minnesota, was hired as the station's first news anchor and news director. However, DuMont shut down in late 1955, leaving the station as an independent outlet; on June 3, 1956, the KEYD stations were sold to United Television, whose principals at the time included several stockholders of Pittsburgh station WENS, for $1.5 million. The new owners immediately sold off KEYD radio, refocused KEYD-TV's programming on films and sports, and shut down the news department; Reasoner was hired by CBS News a few months later. Reasoner became a host for CBS's 60 Minutes when it launched in 1968.
Channel 9 changed its call letters to KMGM-TV on May 23, 1956. At the time, the station was in negotiations with Metro-Goldwyn-Mayer to acquire the Twin Cities television rights to the company's films, along with selling a 25 percent stake in KMGM-TV to the studio. Negotiations broke down later that month over the cost of the films; additionally, Loew's, MGM's parent company at the time, filed a petition with the FCC against the call sign change, claiming that the use of KMGM was unauthorized and a violation of MGM's trademark. The FCC ruled against Loew's that October, saying that its call sign assignment policies were limited to preventing confusion between stations in a given area. The agreements to lease MGM's pre-1949 films and sell 25 percent of the station to Loew's were both completed that November; KMGM was the third station, after future sister station KTTV in Los Angeles and KTVR in Denver, to enter into such an arrangement.
National Telefilm Associates, which later purchased WNTA-TV in the New York City area, purchased the 75 percent of United Television not owned by MGM for $650,000 in November 1957, joining it to the NTA Film Network until it ended in 1961. After taking control, NTA expanded KMGM-TV's hours of operation as part of an overhaul of channel 9's schedule that also included the addition of newscasts. A few months later, on February 10, 1958, NTA bought MGM's stake for $130,000 and announced that it would change channel 9's calls to KMSP-TV; the call sign change took effect that March over the objections of KSTP-TV (channel 5). National Theatres, a theater chain whose broadcast holdings already included WDAF AM-TV in Kansas City, began the process of acquiring NTA in November 1958; in April 1959, it purchased 88 percent of the company. 20th Century-Fox, the former parent company of National Theatres, bought KMSP-TV for $4.1 million on November 9, 1959, retaining the United Television corporate name. The KMSP call letters were featured on prop television cameras in the May 29, 1963 episode of the CBS sitcom The Many Loves of Dobie Gillis, produced by 20th Century Fox Television; the show was loosely set in the Twin Cities area. The episode was titled "The Call of the, Like, Wild".
During its early years until 1972, the station's studios and offices were located in a lower level of the Foshay Tower in downtown Minneapolis; the transmitter was located on top of the tower, the tallest structure in the area until 1971, along with WCCO-TV (channel 4) and WTCN-TV (channel 11, now KARE).
KMSP-TV took over the ABC affiliation from WTCN-TV on April 16, 1961. Throughout its years with ABC, KMSP was notorious for having a sub-standard news department with large staff turnover. Ratings were dismal with KMSP obtaining only one-third of the viewing audience of each of their two competitors, CBS affiliate WCCO-TV and NBC affiliate KSTP-TV. The station's transmitter was moved in 1971 to a new tower constructed by KMSP in Shoreview, while the studios and offices relocated in 1972 to Edina on York Avenue South, across from Southdale Shopping Center.
In the late 1970s, ABC steadily rose to first place in the network ratings. Accordingly, the network sought to upgrade its slate of affiliates, which were made up of some stations that either had poor signals or poorly performing local programming. In December 1977, ABC warned KMSP that it would yank its affiliation unless improvements were made and fast. In early 1978, to cash in on ABC's improved ratings, KMSP re-branded itself "ABC9" (approximately 20 years before the use of a network's name in a station's on-air branding became commonplace among U.S. affiliates), and retooled its newscast. Despite the changes, KMSP's news department remained a distant third behind WCCO-TV and KSTP-TV.
On August 29, 1978, ABC announced that KSTP-TV would become the network's new Twin Cities affiliate the following spring. The signing of channel 5 made nationwide news, as it had been an NBC affiliate for three decades. KSTP-TV looked forward to affiliating with the top network, as third-place NBC had been in a long ratings slump. In retaliation for losing ABC, KMSP-TV immediately removed all ABC branding and regularly preempted network programming. Channel 9 then attempted to affiliate with NBC, thinking The Tonight Show would be a good lead-out from their 10 p.m. newscast, despite low prime time ratings. However, NBC, miffed at losing one of its strongest affiliates, and not wanting to pick up ABC's rejects, turned down KMSP's offer almost immediately and signed an affiliation agreement with independent station WTCN-TV. As a result of being rejected by both ABC and NBC, KMSP-TV prepared to become an independent station. Although it now faced having to buy an additional 19 hours of programming per day, it also would not have to invest nearly as much into its news department. Most of the on-air and off-air staffers resigned, not wanting to work for a down-scaled independent operation.
The affiliation switch occurred on March 5, 1979, and KMSP debuted its new independent schedule featuring cartoons, syndicated shows and even the locally based American Wrestling Association, with much of the station's programming having been acquired from WTCN-TV. To emphasize that the station's programming decisions would be influenced by viewers instead of a network, KMSP rebranded itself as "Receptive Channel 9", and an antenna was shown atop the station's logo in station identifications. The station became quite aggressive in acquiring programming, obtaining broadcast rights to several state high school sports championships from the MSHSL, the NHL's Minnesota North Stars and the Minnesota Twins baseball team.
KMSP's transition into an independent station turned out to be a blessing in disguise; the station was far more successful than it had ever been as an ABC affiliate. It became a regional superstation, available on nearly every cable system in Minnesota as well as large portions of North Dakota, South Dakota, Iowa and Wisconsin. Over time, it became one of the most successful and profitable independent stations in the country.
KMSP went through another ownership change on June 9, 1981, when 20th Century-Fox spun off United Television as an independent company owned by Fox shareholders; the transaction was approved alongside the $700 million sale of 20th Century-Fox to Marvin Davis. Chris-Craft Industries, which in 1977 had acquired an interest in 20th Century-Fox that by 1981 comprised 22 percent of Fox's stock, received a 19 percent stake in United Television; later in June, it filed with the FCC for control of United, as it now owned 32 percent of its stock. Two years later, Chris-Craft, through its BHC subsidiary, increased its stake in United Television to 50.1 percent and gained majority control of the company.
KMSP-TV remained an independent station through 1986, when it became one of the original charter affiliates of the newly launched Fox network. This suited channel 9, as it wanted the prestige of being a network affiliate without being tied down to a network-dominated program schedule; at the time, Fox only programmed a nightly talk show and, starting in 1987, two nights of prime time programming; the network would start its full-week programming schedule in 1993. For its first few years with Fox, the station served as the de facto Fox affiliate for nearly all of Minnesota and South Dakota.
However, the station did not remain a Fox affiliate for long. By 1988, KMSP was one of several Fox affiliates nationwide that were disappointed with the network's weak programming offerings, particularly on Saturday nights, which were bogging down KMSP's otherwise successful independent lineup. That January, channel 9 dropped Fox's Saturday night lineup; the move did not sit well with Fox, and in July 1988 the network announced that it would not renew its affiliations with KMSP and Chris-Craft sister station KPTV in Portland, Oregon. Fox then signed an agreement with KITN (channel 29, now WFTC) to become its new Twin Cities affiliate, and KMSP reverted to being an independent station full-time. In 1992, the station relocated to its current studio facilities on Viking Drive in Eden Prairie. Along with the other United Television stations, KMSP carried programming from the Prime Time Entertainment Network from 1993 to 1995.
By the early 1990s, Fox had exploded in popularity; it had begun carrying strong shows that were starting to rival the program offerings of the "Big Three" networks, and had just picked up the broadcast rights to the NFL's National Football Conference. In response, in October 1993, Chris-Craft/United Television partnered with Paramount Pictures (which was acquired by Viacom in 1994) to form the United Paramount Network (UPN), and the two companies designated independent stations they owned in several large and mid-sized U.S. cities as charter stations of the new network.
UPN launched on January 16, 1995 (with the two-hour premiere of Star Trek: Voyager), with channel 9 becoming a UPN owned-and-operated station due to Chris-Craft/United's ownership stake in the network—making it the second network-owned station in the Twin Cities (alongside CBS-owned WCCO-TV). Over time, KMSP became one of UPN's most successful affiliates in terms of viewership. In addition, the station was still enjoying success with local sports programming featuring the Minnesota Twins, as well as the MSHSL championships. KMSP was stripped of its status as a UPN owned-and-operated station in 2000, after Viacom exercised a contractual clause to buy out Chris-Craft's stake in the network, although the station remained with UPN as an affiliate for another two years. Around this time, Viacom bought CBS.
News Corporation, through its Fox Television Stations subsidiary, agreed to purchase Chris-Craft Industries and its stations, including KMSP-TV, for $5.35 billion in August 2000 (this brought KMSP, along with San Antonio's KMOL-TV and Salt Lake City's KTVX, back under common ownership with 20th Century Fox); the deal followed a bidding war with Viacom. The sale was completed on July 31, 2001. While Fox pledged to retain the Chris-Craft stations' UPN affiliations through at least the 2000–01 season, and Chris-Craft agreed to an 18-month renewal for its UPN affiliates in January 2001, an affiliation swap was expected once KMSP's affiliation agreement with UPN ran out in 2002, given Fox's presumed preference to have its programming on a station that it already owned. Additionally, KMSP's signal was much stronger than that of WFTC; it was a VHF station that had been on the air much longer than UHF outlet WFTC. Most importantly, Fox had been aggressively expanding local news programming on its stations, and KMSP had an established news department, whereas WFTC's news department did not begin operations until April 2001. The move was made easier when, in July 2001, Fox agreed to trade KTVX and KMOL (now WOAI-TV) to Clear Channel Communications in exchange for WFTC, a transaction completed that October.
The affiliation switch, officially announced in May 2002, occurred on September 8, 2002 (accompanied by a "Make the Switch" ad campaign that was seen on both stations), as Fox programming returned to KMSP-TV after a 14-year absence, while WFTC took the UPN affiliation; KMSP was the only former Chris-Craft station acquired and kept by Fox that did not retain its UPN affiliation. The station began carrying Fox's entire programming schedule at that time, including the Fox Box children's block (which later returned to WFTC as 4KidsTV, until the block was discontinued by Fox in December 2008 due to a dispute with 4Kids Entertainment). The affiliation swap coincided with the start of the 2002 NFL season; KMSP effectively became the "home" station for the NFL's Minnesota Vikings as a result of Fox holding the broadcast rights to the National Football Conference (from 1994 to 2001, most Vikings games were aired on WFTC). Finally, in 2014, with the launch of Xploration Station, which replaced the Weekend Marketplace block that WFTC had carried, KMSP-TV began clearing the entire Fox network schedule for good.
Since Fox has affiliates in most media markets and the Federal Communications Commission's syndication exclusivity regulations normally require cable systems to only carry a given network's local affiliate, and Fox prefers only an area's affiliate be carried as opposed to a distant station for ratings tabulation purposes, KMSP was eventually removed from most cable providers outside the Twin Cities. By this time, these areas had enough stations to provide local Fox affiliates. KMSP thus effectively lost the "regional superstation" status it had held for almost a quarter-century, dating back to when it was an independent station. Due to the advent of digital television, many stations in smaller markets previously served by KMSP began operating UPN-affiliated digital subchannels towards the end of the network's run to replace that network's programming in those markets, which in turn became MyNetworkTV or CW affiliates.
The digital signals of KMSP and WFTC each contain three subchannels. Through the use of virtual channels, WFTC's subchannels are associated with channel 9.
In November 2009, KMSP began broadcasting a standard definition simulcast of WFTC on its second subchannel (virtual channel 29.2), with WFTC adding a standard definition simulcast of KMSP on its second subchannel (virtual channel 9.2) in turn. This ensures that both stations' programming remains available even where one of the two transmitted signals cannot actually be received.
On June 19, 2014, KMSP-TV announced that, effective June 24, 2014, it would broadcast its 9.1 virtual channel via RF channel 29 (with RF channel 9 mapping to PSIP 9.9) to take advantage of its broader coverage area and allow viewers with UHF-only antennas to receive the station in high definition. The Minneapolis–St. Paul market is unique in that all three television duopolies in the market, which besides KMSP/WFTC include Twin Cities Public Television's KTCA/KTCI and Hubbard Broadcasting's KSTP and KSTC, have merged their various signals onto the same VHF PSIP channel slots for easier viewer reference (with all but KMSP-TV transmitting on UHF). KMSP and WFTC unified all of their over-the-air channels as virtual subchannels of KMSP; as a result, WFTC's virtual channels were remapped under channel 9.
KMSP-TV originally broadcast its digital signal on UHF channel 26, which was remapped as virtual channel 9 on digital television receivers through the use of PSIP. The station shut down its analog signal, over VHF channel 9, on June 12, 2009, the official date in which full-power television stations in the United States transitioned from analog to digital broadcasts under federal mandate. The station's digital signal relocated from its pre-transition UHF channel 26 to VHF channel 9 for post-transition operations.
KMSP presently broadcasts 59½ hours of locally produced newscasts each week (10 hours each weekday, four hours on Saturdays and 5½ hours on Sundays); this is the highest newscast output among Minneapolis' broadcast television stations.
Harry Reasoner was the station's first news director and news anchor when KMSP signed on (as KEYD-TV) in 1955. Despite the station's focus on live coverage of news and sports, as well as awards from the University of Minnesota Journalism School and the Northwest Radio–TV News Association, KEYD's newscasts were generally in fourth place in the ratings. After channel 9's ownership changed in 1956, the news operation was closed down. News programming returned to the station after NTA bought KMGM-TV in 1957.
The station, which had long been a distant third to WCCO-TV and KSTP-TV in the Twin Cities news ratings, began an aggressive campaign in 1973 to gain ground against its competition. After a nationwide search, management hired Ben Boyett and Phil Bremen to anchor a newscast with a new set and format, known as newsnine. The new format did not draw many new viewers, and the station's low news budget, ill-conceived promotion and frequent technical glitches did not help matters. One botched campaign for a news series on venereal disease, in the spring of 1974, resulted in lawsuits from two young women who claimed that their likenesses were used in promos without their permission, thus damaging their reputations. By the fall of 1975, Boyett and Bremen were gone, replaced by respected veteran newsman Don Harrison and the station's first female anchor, Cathie Mann. These changes did little to take channel 9 out of third place, and despite ABC becoming the #1 network by 1977, KMSP's newscasts still struggled.
After KMSP lost the ABC affiliation in 1979, the station's news operation was relaunched with a prime time newscast, which was paired with the syndicated Independent Network News in the early 1980s. The newscast's budget and ratings would increase by the end of the 1980s; after KMSP rejoined Fox in 2002, the station's prime time newscast, aided by Fox's prime time lineup, frequently outrated the newscasts on KSTP-TV. Following Fox's acquisition of WFTC in 2001, that station's existing news operation was moved to the KMSP studios; after Fox canceled channel 29's newscast in 2006, some of WFTC's news staff joined KMSP.
On May 11, 2009, KMSP became the second station in the Twin Cities (behind KARE-TV) to broadcast local newscasts in high-definition.
On June 16, 2006, during one of the station's newscasts, KMSP broadcast a "video news release" about convertibles produced by General Motors. The narrator, Medialink publicist Andrew Schmertz, was introduced as reporter André Schmertz. On March 24, 2011, the Federal Communications Commission levied a $4,000 fine against KMSP for airing the video news release without disclosing the corporate source of the segment to its viewers, following complaints filed by the Free Press and The Center for Media and Democracy in 2006 and 2007.
The KMSP TV Tower is located in Shoreview, Minnesota. KMSP owns the tower, which stands 1,466 feet (447 m) tall, but shares it with sister station WFTC and the Twin Cities Public Television stations, KTCA and KTCI. Several FM stations are also on the tower: KQRS-FM ("92 KQRS"), KXXR ("93X"), KTCZ ("Cities 97"), KTIS-FM, KSJN, KFXN-FM ("The Fan"), KDWB, KEEY ("K102"), KMNB ("Buz'n @ 102.9"), and KZJK ("104.1 Jack FM").
In addition to the main transmitter in Shoreview, KMSP's signal is relayed to outlying parts of Minnesota through a network of translators.
FCC Listing of All Low Power, Full Power, and Translators, both Analog and Digital.
Historical reference to KEYD-TV and AM, Pavek Museum of Broadcasting.
^ a b "History of KMSP-TV". KMSP-TV. August 11, 2015. Retrieved March 13, 2019.
^ "Minneapolis Dropout" (PDF). Broadcasting–Telecasting. April 26, 1954. p. 55. Retrieved June 10, 2016.
^ "Initial Rulings Favor Two Vhf Grants" (PDF). Broadcasting–Telecasting. May 24, 1954. p. 134. Retrieved June 10, 2016.
^ a b c d e f g h i j k "Twin Cities Television Milestones". Pavek Museum of Broadcasting. Retrieved July 20, 2016.
^ a b "Harry Reasoner Found". St. Louis Park Historical Society. March 2007. Archived from the original on November 21, 2008. Retrieved June 10, 2016.
^ a b "Brisk buying surge swaps four stations, $7.7 million" (PDF). Broadcasting–Telecasting. April 9, 1956. pp. 35–6. Retrieved June 10, 2016.
^ a b "FCC Okays $1.5 Split Sale Of Twin Cities' KEYD-AM-TV" (PDF). Broadcasting–Telecasting. May 28, 1956. p. 79. Retrieved June 10, 2016.
^ "Sy Weintrab, Others to Buy KEYD, Minn". The Billboard. April 14, 1956. p. 5. Retrieved June 10, 2016.
^ a b c Daniel, Douglaas K. (2009). Harry Reasoner: A Life in the News. University of Texas Press. pp. 54–8. ISBN 0292782365. Retrieved June 10, 2016.
^ "MGM May Get 25% of Minneapolis TV" (PDF). Broadcasting–Telecasting. September 10, 1956. p. 91. Retrieved June 10, 2016.
^ "Meredith Stations Buy M-G-M Films". The Billboard. September 29, 1956. p. 8. Retrieved June 11, 2016.
^ "Loew's Hits KMGM-TV Call" (PDF). Broadcasting–Telecasting. September 17, 1956. p. 9. Retrieved June 11, 2016.
^ "Loew's Protest Thrown Out" (PDF). Broadcasting–Telecasting. October 22, 1956. p. 93. Retrieved June 11, 2016.
^ "Loew's Closes Deal For Share in KMGM-TV" (PDF). Broadcasting–Telecasting. November 5, 1956. p. 9. Retrieved June 11, 2016.
^ "KMGM-TV Sold To Natl. Telefilm" (PDF). Broadcasting–Telecasting. August 26, 1957. pp. 79–80. Retrieved June 11, 2016.
^ "NTA Gets FCC Okay On Buy Of KMGM-TV" (PDF). Broadcasting. November 25, 1957. pp. 80–1. Retrieved June 11, 2016.
^ "NTA Announces Appointment Of Swartz to Manage KMGM-TV" (PDF). Broadcasting. December 2, 1957. p. 64. Retrieved June 11, 2016.
^ a b "Don Swartz Named KMGM Gen. Mgr". The Billboard. December 2, 1957. p. 12. Retrieved July 22, 2016.
^ "NTA Becomes Owner Of KMGM-TV After 25% Purchase From Loew's" (PDF). Broadcasting. February 10, 1958. pp. 78–9. Retrieved June 11, 2016.
^ "KMGM-TV Changes To KMSP (TV)" (PDF). Broadcasting. March 31, 1958. p. 86. Retrieved June 11, 2016.
^ "Natl. Theatres Starts NTA Buy" (PDF). Broadcasting. November 17, 1958. p. 72. Retrieved July 20, 2016.
^ "Media reports" (PDF). Broadcasting. May 11, 1959. p. 60. Retrieved July 20, 2016.
^ "The Move Hedge: $4.1 million Fox deal closed for KMSP-TV" (PDF). Broadcasting. May 11, 1959. p. 72. Retrieved July 20, 2016.
^ "KMSP-TV Twin Cities joins ABC-TV, replaces WTCN" (PDF). Broadcasting. January 30, 1961. p. 9. Retrieved July 20, 2016.
^ a b c d e f g h i j k l m n Lonto, Jeff R. (2006). "Your Newsnine Station: The saga of KMSP-TV Minneapolis - St. Paul in the 1970s". Studio Z-7 Publishing. Retrieved July 20, 2016.
^ "ABC-TV bags largest game yet in affiliation hunt: KSTP-TV" (PDF). Broadcasting. September 4, 1978. pp. 19–20. Retrieved July 20, 2016.
^ "In Brief" (PDF). Broadcasting. October 2, 1978. p. 30. Retrieved July 20, 2016.
^ Scott, Vernon (June 9, 1981). "Denver oilman Marvin Davis has bought 20th Century-Fox to..." United Press International. Retrieved July 20, 2016.
^ a b "BHC Communications, Inc. Companies History". Company Histories. Funding Universe. 1997. Retrieved July 20, 2009.
^ "Bottom Line" (PDF). Broadcasting. June 21, 1981. p. 54. Retrieved July 21, 2016.
^ "Fox network begins to take shape" (PDF). Broadcasting. August 4, 1986. pp. 44–5. Retrieved July 21, 2016.
^ "How affiliates feel about the Fox network: No problems that programming can't cure" (PDF). Broadcasting. January 4, 1988. p. 90. Retrieved July 21, 2016.
^ "In Brief" (PDF). Broadcasting. July 25, 1988. p. 113. Retrieved July 21, 2016.
^ Susan, King (January 23, 1994). "Space, 2258, in the Year 1994". Los Angeles Times. p. 4. Retrieved June 25, 2009.
^ "Paramount, Chris-Craft forming fifth TV network". United Press International. October 26, 1993. Retrieved July 21, 2016.
^ Carter, Bill (March 21, 2000). "Viacom Buys Chris-Craft's Stake in UPN For $5 Million". The New York Times. Retrieved July 22, 2016.
^ Goldsmith, Jill (April 4, 2000). "Weblet soap ends: Viacom's got UPN". Variety. Retrieved July 22, 2016.
^ Hofmeister, Sallie (August 12, 2000). "News Corp. to Buy Chris-Craft Parent for $5.5 Billion, Outbidding Viacom". Los Angeles Times. Retrieved July 22, 2016.
^ Chipman, Kim (August 14, 2000). "News Corp. to buy Chris-Craft". Deseret News. Bloomberg News. Retrieved July 22, 2016.
^ Rathbun, Elizabeth A. (August 20, 2000). "How the FCC counts Fox". Broadcasting & Cable. Retrieved July 22, 2016.
^ Goldsmith, Jill (July 31, 2001). "Chris-Craft deal closed". Variety. Retrieved July 22, 2016.
^ Schlosser, Joe (August 27, 2000). "There's still a UPN—for now". Broadcasting & Cable. Retrieved July 22, 2016.
^ McClellan, Steve (January 21, 2001). "Chris-Craft stations re-up with UPN". Broadcasting & Cable. Retrieved July 22, 2016.
^ a b Kamenick, Amy (October 2, 2001). "News Corp. acquisition of Fox 29 approved". Minneapolis / St. Paul Business Journal. Retrieved July 22, 2016.
^ "Clear Channel to land KMOL-TV in a trade". San Antonio Business Journal. July 27, 2001. Retrieved July 22, 2016.
^ a b Kamenick, Amy (May 23, 2002). "Channels 9 and 29 swap affiliations". Minneapolis / St. Paul Business Journal. Retrieved July 22, 2016.
^ Gunderson, Troy (September 6, 2002). "Calling all surfers: Fox, UPN changing channels". Brainerd Dispatch. Archived from the original on December 14, 2013. Retrieved June 22, 2012.
^ "RESCAN: How to get FOX 9 over-the-air on UHF". Retrieved June 19, 2014.
^ "DTV Tentative Channel Designations for the First and the Second Rounds" (PDF). Archived from the original (PDF) on August 29, 2013. Retrieved March 24, 2012.
^ Rybak, Deborah Caulfield (June 1, 2006). "WFTC drops newscast at 10; KMSP adds it". Star Tribune. Archived from the original on June 15, 2006. Retrieved July 22, 2016.
^ FCC Levies Fines On KMSP, WMGM, TVNewsCheck, March 25, 2011.
|
0.937137 |
The Great March to war?
In ancient times, most pagan cultures attributed their origins to a variety of fanciful mythologies. They largely believed that their reality was guided by a fuzzy combination of fate and the will of the gods. Therefore, whatever their lot in life, it was fated to be so. Whatever their despotic leader/king/queen/ruler dictated was the will of the gods. Whatever calamities befell them, they understood these to be issues well beyond their control. However, as time passed, man's ingenuity in creating things (weapons of warfare, commerce systems, currency, politics, philosophies, etc.) instilled in him a fearlessness that persuaded him to shed his ancient ways and become the master of his own destiny (see Gen. 11:1-9).
Through the progressive eras, man began to make huge strides in the arenas of economics, astronomy, mechanical technology, medicine, sciences, and general knowledge. Concurrent with all this was a widening divide between man and God. Human reasoning began to replace the divine. Naturalism began to replace the supernatural. The Age of Reason brought about the rise of humanism, which promised mankind the answers to all our questions given two variables: time and science. Humanism promised humanity that whatever we could dream, we could achieve through our own ingenuity and technology. And for a while, it seemed possible. However, for all of the grand treatises and theories about human origins and greatness, humanism refused to address the two critical errors in its logic: mankind's corrupted nature and this fallen world.
By the time humanity reached the early 20th century, the generally accepted academic understanding of our genesis as a species had been reduced to nothing more than a series of random, evolutionary processes guided by an indifferent and impersonal energy force. Accordingly, if man's existence here is purely accidental, then life itself serves no other purpose than survival. Because of mankind's unwillingness to repent and turn to the one, true Creator, he increasingly became self-absorbed and blinded from the truth (Romans 1:16-31). An example of how bad information spirals quickly out of control is how Darwinian Evolution influenced Social Darwinism, which spawned Eugenics, which supported racism by creating racial hierarchies and helped "scientifically" validate agencies like the Ku Klux Klan, Planned Parenthood, and the Third Reich.
By the early 20th-century, the ideological/theological void had begun producing the very worst humanity had to offer. The 19th-century German philosopher Friedrich Nietzsche once stated that if you gaze into the abyss, the abyss gazes also into you. According to David McCandless's account of 20th-Century Deaths, 450,000,000 deaths can be attributed solely to the increased violence of mankind. This does not even take into account the hundreds of millions of deaths caused by things like abortion, man-caused famines and diseases, and all the unaccounted-for deaths and murders. What this means is that the 20th-century became the most violent century in all of human history. Therefore, if there were any truth to what Nietzsche said, then the 20th-century treated the abyss like a reflecting pond.
The Magna Carta (1215, Runnymede, England) was the first modern effort to free mankind from the unchecked tyranny of monarchial rule. While its original aims were limited to the landowners in medieval England, it became the first "constitution" of sorts that Europe ever had. Since Europe was still the center of world power at that time (and had been since the days of Alexander the Great), the idea of personal rights began to be transported throughout the world by way of European colonialism. Most notably, this idea took root in the British colonies of the New World, which, as we know, would later become the United States of America.
The overarching theme of these documents was the limiting of government's power by acknowledging that man's rights and freedoms were both inherent and God-given. In essence, these writings became the epitome of perfection regarding human philosophy. These documents completed what was begun centuries earlier in 1215. For the next two and a half centuries, the United States became that shining city on a hill. We became a beacon that oppressed people from around the world could flee to, whether to escape oppression or simply to seek out a better life from whatever darkened corner of the world they came from.
The Bretton Woods Agreement created a new international monetary system that lasted from the mid-1940s to the early 1970s. The agreement pegged the value of other nations' currencies to the U.S. dollar, which, in turn, was pegged to the price of gold, fixed at $35 an ounce. With the collapse of the Bretton Woods Agreement, countries could choose other ways to set the value of their currencies, including letting market forces decide.
Cold War/NATO: With both the West (US, Western Europe, Australia, Canada, etc.) and the East (Russia, China) having fought and won against the Third Reich and the Japanese Imperialists, the war's end brought the dividing of the spoils. With clear and irreparable divisions on governance and economics, the fragile wartime alliance between the West (capitalist democracies) and the East (communists) ended. This created the need for Western Europe to form the North Atlantic Treaty Organization (NATO) to protect itself from further Eastern encroachment. This geopolitical tension created a "Cold War" between East and West, which lasted from the 1948-49 Berlin Blockade to the collapse of the Soviet Union in 1991.
The Nuclear Arms/Space Race: Beginning with the highly secretive Manhattan Project, the world's race toward a weapon of mass destruction finally ended with the Americans' use of two atomic bombs on Hiroshima and Nagasaki in the summer of 1945. For four brief years, the US enjoyed atomic supremacy, but this was soon dashed by the Soviet Union's first successful test of its own atomic weapon in 1949. Not to be outdone, both the US and USSR began a several-decades initiative to create even more devastating weapons. Related to this was the race to space, in which the Russians initially took the lead. However, the US was the first (and only nation as of yet) to put a man on the moon.
The rebirth of Israel: From 70AD until 1948, the nation of Israel existed only in the history books. But due to the defeat of the Ottoman Turks (1918), and the Nazi Holocaust (1933-1945), there was both opportunity (the land) and support (sympathy/determination) for the Jewish people to return to their ancient homeland. Despite intense opposition from within his own administration, US President Harry Truman made the US the first nation to recognize the newly formed state on May 14th, 1948. Since then, the Middle East has been a hotbed of antisemitism, violence, and war in the Muslims' repeated attempts (1948, 1956, 1967, 1973, etc.) at destroying the world's only Jewish nation.
It has been 243 years since 1776 and we have fallen very far from what our forefathers could have ever imagined. Not only have we become amoral (and immoral), but also exceedingly ignorant and lazy. After a century of cultural Marxism dumbing down the masses, the average person on the street today probably could not tell you what the Bill of Rights is, or where to find Washington D.C. on a map. Most people alive today seem to think that the way things are now is the way they have always been. What is even more disturbing is that most people think things will always be this way. This line of reasoning is a form of normalcy bias, by which people judge tomorrow by the persistent presence of the present. Most people today fail to realize that this experiment in human liberty (i.e., the United States) was only ever the exception to the rule, and not the norm. There are now multiple generations who either don't know, or don't care, about our exceptionalism, and thus have begun handing over their freedoms for the promise of free handouts and security.
With a growing number of Democrats and Millennials these days militantly demanding we turn to socialism to solve our current problems, we are at the tipping point for this nation's survival. I say tipping point, because these aspiring communists only used to make up the outer fringes of society. Now, they include most universities (faculty and students), a growing number of elected officials, and even giants within the tech industry. These groups are now demanding socialism be instituted everywhere regardless of its economic and political ramifications.
Since the 1960s, our nation has embarked upon a descent into godless immorality with frightening rapidity. This has all but assured our nation's demise. This is not my own subjective reasoning, but historical fact that has played out repeatedly. Things such as embracing infanticide or deviant lifestyles have never boded well for a nation. As a nation, the wicked have now begun openly targeting children in the name of abortion, gender confusion, transgender story times, unisex bathrooms, and homosexual adoptions. As was the case with Belshazzar's party, by the time the writing is on the wall, it is already too late.
America's first "handwriting on the wall" moment, was on April 19th, 1995. This was the terrorist bombing of the Alfred P. Murrah federal building in Oklahoma City. Out of this, came the foundations for the Patriot Act.
America's second "handwriting" moment, came on September 11, 2001. It showed the world America's soft underbelly by hitting us where we were vulnerable. It had the subsequent effect of making transportation virtually unbearable ever since, with the massive increases in security.
America's third warning came in 2007 with the subprime mortgage crisis. Since 1992, the government had been subsidizing "affordable" housing and adjustable-rate mortgage (ARM) loans for people who could not afford them. As early as 2003, this had begun spiraling out of control, creating a false housing bubble. We are still suffering the repercussions of this event.
America's fourth warning came in 2008-2016, with the election of Barack Hussein Obama. As President, he did more in eight years to destabilize our nation (morally, economically, and geopolitically) than all the previous presidents put together. It is still very likely that he and/or members of his administration will be prosecuted for high crimes and felonies in their actions against Donald Trump's campaign and 2016 election.
We have undeservedly earned a respite of sorts (if that is what you can call it) with the election of Donald J. Trump. Nevertheless, I believe Trump's election has more to do with setting up Israel for a post-American world than anything else, because President Trump is willing to do what no other president would regarding Israel. Yet, he is only one man. He cannot save America from all that ails her. Our nation has become so deeply divided and corrupt that introducing wickedness into law always seems to sail through the halls of Congress by leaps and bounds. However, if you attempt to repeal said evil, this always seems to become an impossibility of the greatest order. How can one man fight that?
For two and a half years, the Democrats, the media, the deep state, and the nihilistic leftists of academia, pop culture, Silicon Valley, etc. have waged a relentless collusion/obstruction witch-hunt against President Trump, his family, and anyone ever associated with him. While several former members of Trump's campaign suffered legal ramifications for their own issues, these were completely unrelated to the campaign, the Russians, or anything else connected to Trump. The witch-hunt itself has turned into a giant nothing burger, but the war waged against Trump's presidency has created considerable mistrust in the presidential election process. It has also done irreparable damage to the Department of Justice and the FBI/CIA. On a positive note, it finally exposed who the true masters of the news media really are.
With the end of the American dream comes the inevitable rise of some new form of global government to fill the vacuum. That outcome is as assured as the rising of the sun in the east. What has prevented this new global order from rising up thus far has been the world's only superpower...a constitutional Republic. A Republic whose very existence prevents her from subordinating herself to any other political entity. Some might counter with the idea that the US has more laws than any other nation on the planet. To which I add that we were never designed to be this way. We could answer President John Adams's aforementioned statement with a quote from the Roman historian Tacitus, who said that the more corrupt the state, the more numerous the laws. The United States' Charters of Freedom were all constructed with the sole purpose of limiting government overreach, and to give power to the people; something an increasingly ignorant and decadent culture has been willing to exploit and piecemeal away for a little peace and security.
In Isaiah 46:9-10, the prophet Isaiah records that God declares the end from the beginning...and it is not that God is making us collapse, but that He can see the future and already knew from the foundation of the world the who, what, when, where, and why of how this last kingdom (the Beast) would come about. Several centuries after Isaiah, the Hebrew prophet Daniel was the first to be given a detailed outline of the future Gentile powers. He received this prophecy in the most unusual of ways, by having to interpret it from a gentile king's dream (King Nebuchadnezzar).
From the days of Alexander the Great (circa 300BC) until 2019, world power (economic, military, and political) has largely remained centered in the West. Historically, we know the Babylonians (under Nebuchadnezzar's grandson) were conquered by the Medo-Persians. The Greeks conquered the Persians. The Romans conquered the Greeks. The Roman Empire split in two after four hundred years of Empire-rule. From Rome (the Western leg), geopolitical power moved north through the succeeding European barbarian hordes. However, the barbarians were only able to conquer the Romans because they were already in a state of widespread decline (morally, economically, and militarily) and had become vulnerable on too many fronts. Sound familiar?
In the absence of Pax Romana, the European barbarians (Goths, Ostrogoths, Visigoths, Vandals, Angles, Danes, Saxons, Franks, Britons, etc.) largely began to settle down and build new civilizations over the formerly held Roman territory. Within a few hundred years of Rome's collapse (circa 430AD), Europe became the prize to either conquer (i.e., Huns, Mongols, Moslems, etc.) or to trade with (Silk Road, Mediterranean, Middle East). Nevertheless, from 800 to 1945 AD, world power was largely concentrated in Europe through the rise and fall of the European nation-states.
The post-modern/Christian era of the late 20th-century gave way to the reintroduction of pagan religions and eastern mysticism. The 21st-century has now become a hybridized version of the worst of both the ancient ways and the age of human secularism. This hybridization is further aided by a dangerous and unprecedented technology race, which has promised everything from genetic engineering, unlimited knowledge, artificial intelligence, and even immortality. Many hope in the rise of the new gods, where technology and the supernatural are combined to infuse humanity with that one, final evolutionary push toward immortality. These technological advancements, along with humanity's departure from moral and ethical reasoning, wholly threaten to upend the current world order.
In our present state, two great powers, the nationalists and the globalists, are tearing the current world order apart. Aside from my aforementioned notion that divine justice is coming back to the shores of the United States, Daniel's interpretation of the multi-metallic statue does not seem to support our independent existence in our current form. The ten toes of iron and clay appear to be the next phase of great, gentile powers. We know these are ten kings/kingdoms because it is later validated by the vision given to John on the island of Patmos some 600 years after Daniel. Instead of toes, these ten kings/kingdoms appear as crowns on the Beast's horns (Rev. 13:1-4, 17:9-14). They will rule briefly with The Beast (the Antichrist) for the last week of years. While it is unclear exactly how all this will unfold between the Rapture of the Church, the gap in time, the opening of the Seal Judgments, and the confirmation of the covenant (Dan. 9:27), we can know a few things with relative certainty.
I am not a prophet, nor do I claim to know the how and when all of this is supposed to transpire- I just know that it will. The USS America has struck the iceberg of God's divine justice and we are listing more noticeably to one side. We are currently over $22T in debt. We have half the nation bent on suiciding itself with socialism. Crime is becoming more pervasive and violent. People have become increasingly cynical and nihilistic. Corruption is appearing in every institution from the CIA to the Boy Scouts. The immoral majority has become a tsunami-like force that threatens (literally) to destroy anyone who stands in their way. The geopolitical threats around the world are metastasizing in size, number, and severity. The deep state Trump is currently battling is ready and willing to go to open war with the sitting president in order to keep their power in the dark. Things are not boding well for us as a nation.
Once upon a time in the west, there was a beautiful dream, an American dream, where a man could finally be free to choose his own destiny. A dream where a person could go from rags to riches, with nothing more than an idea and the will to see it through. America, with its spacious skies and its amber waves of grain. With purple mountain majesties, above the fruited plains. God's hand of protection had long been over this once-humble nation, but as she prospered, she forgot where those blessings came from.
In one of his first comments as Palestinian Authority Prime Minister, Mohammed Shtayyeh said other countries, chief among them Russia, would support Palestinian rejection of the Trump administration's "deal of the century" for Israeli-Palestinian peace. It wasn't for nothing that Shtayyeh highlighted Russia.
In recent years, Russia has expressed an interest in getting involved in the Israeli-Palestinian conflict, due to its own regional and global interests of restoring its status as a superpower. Hence Russian Foreign Minister Sergey Lavrov recently refloated the idea of hosting talks between Israel and the Palestinians in Moscow. Over the years, Moscow has on multiple occasions proposed advancing a peace agreement via a Moscow summit, but Israel has preferred to let the United States spearhead the process. The current Russian interest in the conflict is a reflection of Moscow's ambitions to establish a presence in the Middle East as a mediator, within the prism of a zero-sum game against the Americans, and amid the view that U.S. clout on the Arab street is waning. Russia, from its perspective, assumes this activity is only beneficial: The cost, in any practical or abstract sense, is insignificant, and the expected returns of restoring Russia to prominence in the Arab and Muslim world are self-evident to the Kremlin.
Russia also illustrates its desire to be a mediator on the global stage by saying and doing certain things to paint itself as a critical cog in any peace process. This always occurs at the same time as, or immediately after, the Americans unveil their own initiatives. In 2017, for example, as talks were progressing over moving the U.S. Embassy to Jerusalem, Russia announced its desire, as a member of the Quartet, to advance a peace deal. At the time, Russia issued a surprise declaration that it recognizes west Jerusalem as the official capital of Israel, regardless of the establishment of a Palestinian capital in east Jerusalem. The Kremlin also supported direct talks between Israel and the Palestinians and expressed its interest in facilitating an agreement. The declaration emphasized Russia as a key player due to its membership in the Quartet and its permanent seat in the United Nations Security Council. Lavrov also denounced the "deal of the century" and stressed Russia's commitment to a peace deal based on U.N. resolutions and the Arab peace initiative.
In recent months, as the American "deal of the century" has gained more exposure, Moscow has intensified its efforts to advance intra-Palestinian reconciliation, including repeatedly inviting Palestinian factions for talks in Moscow. Hamas is shunned to varying degrees by the U.S. and European Union, and Russia wants to signal it can hold dialogue with the PA and the regime in Gaza.
Last month, for instance, Hamas representatives flew to Moscow and were given the opportunity to present an alternative solution to the conflict. Among the principles put forth were the rejection of the "deal of the century" and opposition to any form of normalization with Israel. Although these principles are irrelevant to negotiations with Israel, voicing them in Russia strengthens the narrative that Moscow is grabbing the reins as the only mediator capable of communicating with all the Palestinian factions, especially when they are all united in opposing the "deal of the century."
The United States and Israel must assume from past experience that as the "deal of the century" approaches its deadline, two processes will take place. First, we can assume that ignoring Russia will provoke a Russian attempt to enlist an Arab and international lobby against the U.S. proposal. Additionally, Russia will advance its own alternatives for resolving the conflict, which the Palestinians can view as a basis for negotiations, such as the summit in Moscow, the Quartet path or any other platform that will include Russia as a member of the international "club."
This situation makes it increasingly likely for a scenario to unfold in which Israel, not the Palestinians, is painted as the rejectionist side, particularly in light of the fact that a plan such as the one presented by Hamas in Moscow is completely unfeasible from Israel's perspective. In all likelihood, if it were possible to establish an international framework for a peace accord that would include Russia, such as the Quartet, it would probably rise to the forefront; among other things, Russia has expanded its leverage with the Arab world and it very much wants to partake in the prestigious "club" of nations.
First, in 2014, a powerful Islamic terrorist group called ISIS (Pres. Obama called it ISIL or the Levant) captured several cities and towns in Iraq.
The original Levant was a geographical area near the Mediterranean Sea in the Middle East that included Cyprus, Lebanon, Syria, Jordan, part of Saudi Arabia, part of Turkey, part of Egypt, part of Iraq and all of Israel.
Other Islamic terrorist groups quickly joined ISIS, and the group soon started calling itself a Muslim Caliphate (a Muslim kingdom).
God promised to bless those that bless Israel and to curse those that curse Israel.
He also promised that Israel will not be defeated at the end of the age.
In mid-March 2019, the ISIS Caliphate came to an end, and Israel still stands.
God's Word is being fulfilled.
Second, the Bible says the nations around Israel will claim the mountains of Israel, slander Israel and make God angry at the end of the age (Ezek. 36:1-7).
God added that He will be on Israel's side; Israel will possess the mountains and grow crops on them.
Israel captured the Golan Heights in 1967, annexed them in 1981, and is growing crops on some of them.
On Mar. 21, 2019, the U.S. recognized the Golan Heights as sovereign Israeli territory, and God's Word is being fulfilled.
Third, a Syrian official said, "The Syrian people remain committed to the liberation of the Golan Heights by all means at its disposal."
The Word of God says Syria and Israel will get into a war at the end of the age; and Damascus, Syria, will cease to exist in one night (Isa. 17).
Syria would be wise to consider what has happened to ISIS and to read the above about the nations around Israel claiming the mountains and making God angry.
Syria won't liberate the mountains of Israel because God has already liberated them and permanently returned them to His people Israel (Ezek. 36).
Fourth, on Mar. 22, 2019, Russia, Iran and Syria condemned U.S. recognition of Israeli sovereignty over the Golan Heights, and Turkey quickly did the same.
A large deposit of oil and natural gas has been discovered in the Golan Heights, and some think these nations will try to take it.
It is clear that these nations will be part of a group that will attack Israel from the north in the latter years and latter days to take a spoil and a prey. This will anger God, and they will perish on the mountains of Israel (Ezek. 38:12, 15, 18-23).
Fifth, on Mar. 24, 2019, it was reported that the U.S. and Israel are anticipating a breakout of violence along Israel's border with Syria and Lebanon at any moment due to U.S. recognition of Israeli sovereignty over the Golan Heights.
As a result, the U.S. has sent reinforcements to six U.S. bases in Iraq and Syria.
Sixth, on Mar. 21, 2019, U.S. Sec. of State Mike Pompeo was in Israel when Pres. Trump recognized the Golan Heights as sovereign Israeli territory.
It was the day before the Jewish holiday called Purim, and he was interviewed by Chris Mitchell of the Christian Broadcasting Network (CBN).
Mr. Mitchell said, "Jews worldwide and here in Jerusalem are talking about the fact that Esther 2,500 years ago saved the Jewish people with God's help from Haman" (a Persian or Iranian official that wanted to destroy the Jews).
Mr. Mitchell added, "And now, 2,500 years later, there's a new Haman here in the Middle East that wants to eradicate the Jewish people just like Haman did: the state of Iran."
Then, Mr. Mitchell asked, "Could it be that President Trump right now has been sort of raised for such a time as this, just like Queen Esther, to help save the Jewish people from the Iranian menace?"
Mr. Pompeo replied, "As a Christian, I certainly believe that's possible."
Israel will survive because God promised to protect her, and it is not too far-fetched to believe that God raised up Pres. Trump for Israel's sake.
Will the Great March of Return lead to a Gaza war?
Prime Minister Benjamin Netanyahu's trip to the Gaza border to see tanks and soldiers Thursday almost seemed reassuring in a week that began with a direct rocket hit on a home in the center of the country.
The sight of that demolished house and the miraculous survival of the seven family members inside served as a reminder to the Israeli public that they were not immune from the long and deadly arm of the slow brewing conflict in Gaza.
With visions of possible death randomly raining from the skies, Netanyahu's statement of a possible extensive military campaign in the Gaza Strip almost made it seem as if the government and the IDF finally planned to take steps to end the threat from the Hamas-ruled enclave.
New Right Party head and Education Minister Naftali Bennett called for the IDF "to open the gates of hell" against Hamas.
United Nations Ambassador Danny Danon warned this week during a Security Council debate, "If the terror from Gaza continues, the Hamas leadership will feel the strength of the IDF and be buried in the tunnels of Gaza."
But if Israel could easily bury Hamas, it would have done so already. In the last decade, the IDF has fought three wars against Hamas: in 2009, in 2012 and in 2014. In each conflict, it could have claimed victory, with Gaza bearing the brunt of the casualty counts and the destroyed homes.
But when the dust of war settled it was clear that far from vanquishing Hamas, its military might had only grown, from a terror organization that could barely hit Sderot 18 years ago to one that can fire deadly missiles beyond Tel Aviv.
Short of carpet bombing Gaza, or reoccupying it, another military campaign is more likely to continue the pattern of ratcheting up more demolitions and death, but is unlikely to unseat or disarm Hamas.
Netanyahu, therefore, has been slow to seek a military solution. As a result, in the last year, Hamas and Israel have been entwined in a slow dance that comes ever closer to the precipice. The country has repeatedly seemed to be on the edge of war, with Gaza rockets flying like sudden sun showers that dissipate almost as quickly as they fall.
The slow, on-again, off-again drum beat of war has been helped by the Great March of Return, which began last March 30 as a six-week event and has yet to end.
Hamas was able to siphon off the frustration of the deteriorating humanitarian situation in Gaza (made worse by Palestinian Authority sanctions) into weekly protests against Israel.
The low-level violence, including infiltration attempts, explosive devices, burning tires, stone throwing and Molotov cocktails, was enough to warrant a response but not enough to trigger a war.
The incendiary devices launched from Gaza against Israel which burned thousands of hectares of fields and forests prompted IDF retaliatory strikes but did not warrant a full-scale conflict.
IDF response to the protesters, which has included live fire and tear gas, has led to more than 270 deaths and close to 30,000 injuries, according to the Gaza Health Ministry.
In the diplomatic arena, the Palestinian Authority was able to score points by painting the protests as peaceful, and highlighting the disproportionate nature of the violence.
The success culminated this month with the United Nations Human Rights Council creating a list of Israelis it holds to be culpable of war crimes along the Gaza border, with an eye to handing the list to the International Criminal Court in The Hague.
In the field, however, the march has not changed the deteriorating situation in Gaza for the Palestinians.
Rather, as time has gone on, the ongoing protests and riots have begun to play the role of a match in a dry field. At a time of high tension - such as now, when Hamas and Israel appear on the brink of a new conflict - violence along the border, including this weekend, could provide the spark that pushes the IDF and Hamas into a full-blown war.
"Next, they propose doing away with the Electoral College, since it sunk Hillary Clinton in 2016...without the Electoral College, a Democrat would have to campaign only in California and New York. The rest of the country? Who cares? Next, the Democrats, on a party-line vote, actually passed a law in the House nationalizing elections and weakening electoral safeguards, such as voter ID laws and state statutes that require keeping voter rolls up to date. The communist-sounding For the People Act (HR 1) is a breathtaking attack on election integrity." Knight points out that this law imposes mandatory automatic voter registration, same-day voter registration, no-fault absentee balloting and early voting, as well as politicizing the Federal Elections Commission, forces taxpayer subsidization of political campaigns, and regulates political speech.
When I first became a Christian, I didn't fully understand what I had done. I had made a commitment to Christ at a Bible study on my high school campus, but I didn't know what was ahead of me. I didn't know what was going to happen, but I believed what I heard that day.
Not long after that, a guy walked up to me at school and introduced himself. He said, "Hi, my name is Mark, and I saw that you went forward and prayed to accept Jesus the other day."
I was a little resistant.
"Hey, I want to help you," he continued. "I want to take you to church."
"No," I said. "That's okay. I don't want to go to church."
But Mark was very persistent, and he wouldn't take no for an answer. Finally I relented. And not only did Mark take me to church, but he introduced me to other Christians. He had me over to his house for dinner with his parents, who also were Christians. I asked a lot of questions, and no question was too ridiculous to ask.
What Mark was doing was discipling me. And if he had not done that, I fear that I would have fallen through the cracks. Often after someone accepts Christ, he or she doesn't know what to do next. Mark helped me in that transition. And what Mark did for me, we need to do for others. That is what the Great Commission is.
If you're following Jesus as a real disciple, then you will be leading others to Christ. And if you're not leading others to Christ, are you really following Him as you ought to as a disciple? We must be salt in our world, and salt stimulates thirst in another person.
|
0.984194 |
Please help me identify my bear. This was my childhood bear, given to me when I was a baby. I was born @ 1960. My bear is about 13 inches tall without the ears. He has no tags or button. He looks like he has a bit of a hump on his back. I think he used to make a noise when I pushed on his tummy, but now it just feels like there is a mechanism in there, but no sound really comes out. He was never a particularly soft and cuddly bear, so I don't think I played with him very much. He's a bit stiff. His eyes look to me as though they are made of glass. I think he is made of mohair, but I can not tell what he is stuffed with. His arms and legs are jointed, and his head turns. Thick black thread define his nose, mouth, and toes. Thanks very much for any help in identifying my old bear!
|
0.999998 |
How Do I Choose the Best Burger Relish?
Gherkins or other pickled cucumbers are typically the base for burger relish.
Selecting the best burger relish can be a very subjective matter. There are several types of burger relish with various flavors, but they generally fall into two broad categories, sweet or savory. A sweet relish usually draws its main flavor from pickled ingredients, such as gherkins, and sugar that are mixed with vinegar and complementary herbs such as dill, fennel or caraway, or flavorings such as red onions. A savory burger relish generally can rely on a more traditional tomato and onion flavor, sometimes accented with Worcestershire sauce, garlic and peppers. Some relishes are wholly original inventions, such as Mexican chili relishes, hot pepper relish spreads and relishes that use large amounts of horseradish and mustard.
Many commercial producers of burger relish tend to adhere to a traditional combination of ingredients. This usually is some type of basic sweet relish made with pickles, vinegar, onions and other ingredients. This relish is then combined with tomatoes or tomato products such as ketchup to make a thick red spread. These tend to be fairly mild relishes with a pleasant sweet and sour taste that goes well with the cooked meat in hamburger and can make a good choice as a tangy condiment for a large gathering.
A homemade burger relish might be a good choice when the commercially available varieties are not satisfactory. A very simple relish can be made by mixing diced gherkins, onions and garlic with some ketchup and mustard seeds. For a similar relish that has a more rustic appeal, the ketchup can be replaced with finely diced tomatoes and tomatillos, along with vinegar and sugar.
When choosing the best burger relish, another element to consider is whether the relish has any special ingredients that might enhance or detract from the taste. Ingredients such as fennel, dill or caraway can produce a sweet flavor that is not enjoyed by everyone. Similarly, some relishes contain a large amount of hot peppers or hot sauces that can be unpleasant to people who do not enjoy spicy foods. One other aspect of the relish to be aware of is how many artificial chemicals or ingredients are added to attain a specific color or texture, because these can affect the overall taste.
No matter what burger relish is preferred, a few properties are important. The first is that the relish should be easily spreadable, with the pieces of vegetables in the relish being small enough so they can be eaten without problems. A good relish also should be specifically made for hamburgers, because certain types of general-purpose relishes might be far too sweet or sour to complement the meat. The relish also should have at least a bit of acidity in the mixture so it can cut through the sometimes heavy, meaty taste of the burger.
How Do I Choose the Best Hot Dog Relish?
When putting relish on my burger, I never took into consideration all the types that are available. Though I usually prefer sweet relish, I find that it doesn't work with some foods I've tried to use it on. For example, though sweet relish goes rather well on turkey burgers, I don't like to use it on beef burgers. Maybe this is because turkey burgers aren't as rich, so the condiments are more effective. On the other hand, because beef burgers have a lot more fat, you don't taste the sweet relish as much, and it just creates a mess.
|
0.991454 |
Which credit cards are most widely accepted in Paris? I'd prefer to use AMEX when possible.
In Europe, Visa and MasterCard are more widely accepted than American Express. Nonetheless, all major establishments will take AmEx.
|
0.943363 |
Back at my old workplace, one of my fellow engineers liked to tell people how great my commit messages were in Git. He was the kind to write “wip” (“work in progress”) before squashing all his commits. Every incremental, valuable change, for me, would be a separate commit.
I almost feel like the title of this post reads like a commit message. I tend to write fairly descriptive commit messages. I also wrote extensive and detailed diff summaries – something my fellow front-end developers would applaud, too.
This is just a post to outline the update I made to my blog layout recently, which only looks like a small change, but does encompass a bit extra in terms of updating some microdata and cleaning up some HTML.
I think the previous version had an even larger thumbnail at some point last year, but after realising that this didn’t look too crash-hot on retina MacBook Pros (like mine, hah), I reduced it to half its size.
Removed the light grey border from the thumbnail. I decided that it wasn’t necessary for an image so small.
Adjusted the font size of the post title to be smaller.
Changed the metadata to normal sentence case. Uppercase was probably hard to read.
Changed the font of the metadata. Just wanted to add a bit of interest, and this font has a slightly heavier weight so may be easier to read.
Aligned the text to the left, along with the thumbnail image. This saves vertical space, and left-aligned text is a little easier to read when there is a lot of it. I didn’t test this until just now, but if the title is extremely long, it does run on nicely to the next line without any issues. The view on tablet size is still centred. I may or may not change this – I have had a lot of compliments on the tablet view.
Used a · separator for the pieces of metadata. I didn’t want to use icons because I wanted to keep this as simple as possible without ‘noise’ and without any extra resources.
If there is no thumbnail, it will look like this post – the text will simply be aligned to the left. No space is reserved for the thumbnail.
Overall I saved a bit of vertical space and made the area more readable. If you have any suggestions for further improvements, let me know. I know the image captions could do with some work because as you can see from this post, images with white backgrounds look a bit odd with the caption sort of ‘floating’ underneath.
It looks really good! Little tweaks always make a website look brand new and fresh. I never use image captions, so I would not know what to tell you with that. What do you use for feature images? I do not want to use up my whole server just for feature images on my posts. I have been meaning to ask you. Also, do you use a compression plugin for them?
Thanks! I never used to use image captions either, but I felt that along with putting a descriptive alt attribute, I might as well. Better for accessibility! :P A lot of my older posts have no captions at all, but I have gotten into the habit now. It is a little bit time consuming and sometimes I am stumped for an appropriate caption.
I manually choose and crop my own feature images. :( I would like to write some logic that grabs the first image in the post (if the post has an image in it) but I haven’t tried yet. I know it will be challenging. I also only save and upload the original resolution of the images. I turned off the option in WordPress to save medium and smaller sizes because I didn’t need them and they took up space. I have considered using a CDN for images, but I don’t really like the idea of throwing my images away from my actual domain name.
I don’t use a compression plugin, but I use ImageOptim and TinyJPG/TinyPNG.com to compress my images as much as possible (without losing quality) before I upload them. I feel a bit safer doing it manually, so I can check images after they are compressed and make sure they haven’t lost any quality.
I am definitely going to download ImageOptim! I am trying to brush up on some PHP in order to maybe create a plugin or even just the function to pull the first image. Basically I want it more for social media purposes than anything else.
Go for it! It’s easy to use and does a fantastic job. Note that if you use TinyJPG or TinyPNG, they can result in ever-so-slightly lossy images.
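Since pulling the first image out of a post came up above, here is a minimal sketch of how that could look in WordPress PHP. This is only an illustration of the idea discussed in the comments, not code from this site; the function name my_first_image_src is made up, and a real plugin would probably also want to fall back to the post's featured image.

```php
<?php
// Hypothetical helper: return the src of the first <img> in a post's content,
// or null when the post contains no inline images.
function my_first_image_src( $post_id ) {
    $content = get_post_field( 'post_content', $post_id );
    if ( empty( $content ) ) {
        return null;
    }
    // Grab the first <img ... src="..."> occurrence. A regex is good enough
    // for typical post markup, though it is not a full HTML parser.
    if ( preg_match( '/<img[^>]+src=["\']([^"\']+)["\']/i', $content, $matches ) ) {
        return $matches[1];
    }
    return null;
}

// Usage, e.g. in a theme template or when building social meta tags:
// $image = my_first_image_src( get_the_ID() );
```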
|
0.999991 |
Dealing with Anxiety

One way to manage stress and anxiety is to keep your finances in order. If you plan your finances well and get them under control, you will eliminate a major source of worry. Start by setting up a budget of what you will need to spend over a given period of time. Creating a budget helps eliminate surprises, because things will be ready when you need them. Stick to your budget so you do not create unnecessary expenses. In addition, set some money aside for emergencies so you can take care of the unexpected things you did not put in your budget; this will save you from the financial worries that can lead to stress. It is also important to prioritize your schedule and take things step by step. Avoid overloading yourself with a long list of tasks to complete in a short time frame: organize the things you want to do from first to last, and finish one task before you move on to the next. If you prioritize your duties, you will not feel as stressed.
You should also seek solitude, which means getting away from the source of stress. Try to do something that diverts your attention, for example, taking a cold bath. Stepping away from stress gives your brain a chance to recharge.
Get some sleep. Try to sleep for a minimum of eight hours to give your brain an easier time dealing with the little things that life throws at you. Regular exercise is also a way of dealing with stress and anxiety: it helps keep your brain alert and stops you from concentrating on things that are not important. Exercise also increases the rate at which the heart pumps blood, and this improved blood flow helps the brain function better and eases anxiety. Another approach is to simplify your duties. Do not let work accumulate; that is when you start worrying about how to clear the mess. Proper planning and preparation will help you fight anxiety. Finally, make sure that you laugh more often and take deep breaths. Laughing and being happy is a good way of dealing with anxiety, as your brain will not have time to invent things that are not real, while taking a deep breath increases blood flow in the body and helps stimulate the body's stress-fighting hormones.
|
0.944723 |
BlackRock CEO Larry Fink thinks Americans are not saving enough for retirement. Fink said on CNBC Wednesday morning that the U.S. needs a mandatory savings policy to help Americans accumulate wealth for lengthening lifetimes that will require more retirement income. It is a theme Fink has spoken on before, but he took an impassioned tone on CNBC's Squawk Box, saying retirement is a bigger issue than tax policy.
Read more on the need to make financial literacy a national priority.
Fink, the head of the world's biggest money manager, also outlined a bullish stance on equities, and a cautious stance on bonds. "There's no question in my mind that equities remain to be fairly cheap," he told CNBC.
He added that he believes stock markets could grow 8 to 10% over the next six years, which potentially would put the Dow Jones Industrial Average above 28,000 by the end of that run.
Says man managing +$1 trillion in bonds: "@lebas_janney: Larry Fink via @CNBC: 'bonds are never risky if your needs are complete at maturity.'"
|
0.97181 |
Waves, tides and currents are three types of natural phenomena that occur on water, and while they are similar in nature, they are not the same thing. All three are related to bodies of water, but they differ in their causes, intensity and frequency, among other factors. Another common misconception is that the ocean itself generates waves, tides and currents; in fact, although these phenomena drive the movement of the sea, they are produced by outside forces. Waves, for example, are created by the action of wind on the surface of the ocean, while currents are driven by the contrast between the sun's heat at the equator and the cooler poles. Tides, on the other hand, are caused by gravitational forces from the moon and sun. All three contain some form of moving and potential energy, and slight changes can lead to much larger downstream effects that affect nearby communities and recreational users.
Waves are defined as the movement of water that occurs on the surface of water bodies like oceans, seas, lakes and rivers. While no two waves are identical, they share common traits like having a measurable height which is defined as the distance from its crest to its trough.
They are usually created by winds, which transfer energy to the water as they blow over it. This produces small water movements known as ripples. These ripples can subsequently grow in size, length and speed to form what we know as waves. Such waves are commonly known as ocean surface waves because they are generated by wind passing over the surface of the water. Waves are influenced by a range of factors such as wind speed, duration and distance, as well as by the width of the surrounding area and the depth of the water body itself. As the wind dies down, the height of the wave decreases; and while some waves are small and gentle, if the conditions are right, waves of up to 90 feet can form. Powerful waves such as tidal waves or tsunamis can also be formed as a result of earthquakes, landslides or volcanic eruptions.
There are many different types of waves, such as capillary waves, ripples, seas and swells, and they can take a range of shapes and sizes, from small waves to big swells that travel over long distances. The size and shape of a wave can also reveal its origin: a small, choppy wave was most likely formed locally by a storm, while large waves with high crests suggest origins far away, possibly in another hemisphere. The size of a wave is usually determined by the distance over which the wind blows across open water, the length of time the wind blows and the speed of the wind. The greater these parameters, the larger the wave.
Tides are formed as a result of centrifugal force and the gravitational attraction between the Earth, Moon and Sun, and are characterised by movements of water over extended periods of time. This rise and fall of water, or rather the difference between the crests and troughs, is what we define as tides.
The rotation of the Earth together with the gravitational force of the moon results in water being pulled towards the moon, causing the water to rise. As the moon rotates around the Earth, the areas experiencing this pull form what is known as high tides, while areas not feeling this pull experience a low tide. A similar effect is caused by the sun, although this pull is not as strong because the sun is further away from the Earth. Tides mostly occur in deep oceanic regions and are affected by a range of factors such as the alignment of the sun and moon, the pattern of tidal movements and the shape of the coastline.
Tides are categorized according to the number of high and low tides formed as well as their relative heights and as such can be classified as being semi-diurnal, diurnal or mixed. High tides are defined as when the crest of the wave reaches the coast while low tides are when the trough of the wave reaches the coast. Semidiurnal tides experience 2 highs and 2 lows of equal size every 24 hours and 50 minutes. Diurnal tides experience one high and one low while a mixed semidiurnal tide experiences 2 highs and 2 lows of different size every 24 hours and 50 minutes.
The large masses of water moving in a specific direction from one location to another are known as currents. They occur on open bodies of water like oceans and are usually measured in knots or meters per second.
Oceanic currents are directly influenced by three main factors: the rise and fall of the tide, wind, and thermohaline circulation. The rise and fall of tides can create currents near the shore, or in bays and estuaries. These are known as tidal currents and are the only type of current that changes in a regular, predictable pattern. Winds drive currents at or near the oceanic surface and can influence water movements on a localised or global scale. Temperature also plays a major role. Water near the poles is cold while water near the equator is warmer, and these differences in temperature are an important driver of currents. Cold water currents occur as the cold water near the poles sinks and moves towards the equator, while warm water currents move outward from the equator along the surface towards the poles to replace the sinking water. This mixing of warm and cold water causes currents, and as they move around the globe from hemisphere to hemisphere they also help to replenish oxygen supplies within water bodies.
Differences in temperature, density and salinity drive what is referred to as thermohaline circulation. Differences in water density resulting from temperature (thermo) and salinity (haline) will also cause changes in currents. These thermohaline circulation changes occur in different parts of the ocean, can occur at both deep and shallow oceanic levels, and can be long lasting or temporary. Additional factors that affect currents include rain runoff and ocean bottom topography: slopes, ridges and valleys on the bottom can affect the direction of currents.
These currents affect the Earth's climate by driving warm waters from the equator and cold waters from the poles around the Earth. For example, the warm Gulf Stream is known to bring milder weather to Norway than to New York, which lies further south. There are a range of different currents, such as 1) surface currents, which are affected by wind patterns and usually occur at depths of no more than 300 m, and 2) world oceanic currents, such as the warm Gulf Stream explained above and the El Niño currents.
Tides, waves and currents are completely different. They form under different conditions and are influenced by different factors. Waves are somewhat more noticeable than tides and currents while tides can often be seen on the shore. Understanding the differences between waves, tides and currents is imperative as it not only aids navigation but also helps people predict and measure them. Obtaining this information is useful as it allows individuals to direct cargo ships safely, determine the extent of an oil spill and best fishing spots, allows for tsunami tracking and aids in environmental restoration activities.
In summary: waves move from side to side; tides move up and down; and currents flow clockwise in the Northern Hemisphere and counterclockwise in the Southern Hemisphere, a result of the Coriolis effect.
Shalinee Naidoo. "Difference between waves, tides and currents." DifferenceBetween.net. June 22, 2017 < http://www.differencebetween.net/science/difference-between-waves-tides-and-currents/ >.
Indeed explicitly mentioned. Excellent site.
|
0.990551 |
One of colleague mentioned that I should buy Facebook likes to boost my websites ranking. Which service would you recommend and what will be the impact?
It's true that Google values social signals, but that doesn't mean it's going to rank your website based on your Facebook likes. What Google looks for is engagement on social channels around your brand. Since your "bought" Facebook likes are passive, they will not engage with your page, and hence they will not help you in any way to boost your ranking. Try creating unique content instead; it will help you get genuine social activity on your page and around the content you have posted.
Also, on another note, Facebook's algorithm decides how prominently your posts show up in users' feeds based on the conversions they generate, for example, the number of clicks on the links you have shared on your Facebook page. So having dormant users on your page is not only unhelpful but will also damage your Facebook conversions.
|
0.996766 |
Determines whether the specified absolute URI is disallowed for the host.
The check is disallowed for words.
Returns true if the specified absolute URI is disallowed for the host; otherwise, false.
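The fragment above reads like auto-generated documentation for a robots.txt-style check. Purely as an illustration of what such a check does, and not the original library's code, a prefix-based disallow test might look like the sketch below in PHP; the function and parameter names here are invented.

```php
<?php
// Illustrative only: a minimal robots.txt-style "is disallowed" check.
// $disallowedPaths would come from parsed robots.txt rules for the host.
function is_disallowed_for_host( string $absoluteUri, array $disallowedPaths ): bool {
    $path = parse_url( $absoluteUri, PHP_URL_PATH );
    if ( $path === null || $path === false ) {
        $path = '/';
    }
    foreach ( $disallowedPaths as $rule ) {
        // A URI is disallowed when its path starts with a disallowed prefix.
        if ( $rule !== '' && strpos( $path, $rule ) === 0 ) {
            return true;
        }
    }
    return false;
}

// Example: is_disallowed_for_host('https://example.com/private/page', ['/private/']) returns true.
```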
|
0.708772 |
It was in 1717 AD, when an era of comparative peace and harmony dawned on the European scene, that the Grand Lodge of England took shape. It is, therefore, of interest that within about 12 years thereafter a petition was sent by a few brethren in India to constitute a Provincial Grand Lodge in Calcutta. The Petition having been granted, a Provincial Grand Master was appointed in 1728 AD to supervise Masonic activities in India and the Far East.
A number of Lodges established in 18th and 19th century belonging to various Constitutions viz. Dutch, French, Scottish, Irish, Danish and English flourished in different parts of India. But the ones that survived till Independence of India from British Rule in 1947 belonged only to English, Irish & Scottish Constitutions.
It is not possible to say with any accuracy when, or to whom, the idea of the formation of a Grand Lodge of India first occurred. In the early 1950's the Indian Masonic Journal carried some correspondence, and at least one editorial leader, on the formation of a Grand Lodge of India. As might be expected, a wide variety of views were expressed in that correspondence and no action appears to have been taken by any of the responsible authorities.
It was in 1956 that the first real consideration was given to the establishment of a sovereign Grand Lodge of India and indeed, following a joint Conference in Dublin of The Grand Lodges of England, Ireland and Scotland in October of that year, it was agreed that the views of the Brethren in India should be sought. As far as Scotland is concerned, a poll which was taken in the Spring of 1957 showed that a considerable number of Lodges were in favour of a Grand Lodge of India. Two years later - in 1959 - the then Immediate Past Grand Master of Scotland - Lord Macdonald of Macdonald - accompanied by the Grand Secretary, Dr. Alexander F. Buchan, paid an official visit to India and took the opportunity of discussing with a number of Brethren the question of the formation of a Grand Lodge of India. Lord Macdonald was much impressed by the views put before him and on his return to Scotland he consulted the Grand Masters of England and Ireland as to what steps might be taken to permit of the Brethren in India having their own Grand Lodge. In 1959 at a Conference held in London, the Grand Masters of the three British Grand Lodges expressed their unanimous opinion that an Independent Grand Lodge of India was desirable and that its establishment should be gradually but actively pursued.
In January 1960, the District Grand Lodges in India under the three Constitutions were directed to nominate members of a Steering Committee under an appointed chairman.
The terms of reference to the Committee were, 'To consider the steps to be taken to establish a Grand Lodge of India and the advice to be given to our Grand Lodges thereon'. Lieutenant-General Sir Harold Williams, a Brother of the Irish Constitution, was appointed Chairman and the Steering Committee met frequently and discharged its duties with great assiduity.
In due course the Steering Committee submitted its report, which was accepted by the three Grand Masters in all but the most minor details. The report recommended, among other things, that all the Lodges under the three constitutions in India should be invited to consider and decide whether or not they wished to opt to form the new Grand Lodge. The Steering Committee's report also dealt with such important matters as a Declaration of Principles; a draft Book of Constitutions; the appointment of its first Grand Master; the Regional Organisation; the rights of individuals and Lodges; provisions relating to Finance, Buildings, Regalia and the future of local and district Funds; the consequences of setting up an independent Grand Lodge; and the procedure to be followed by individual Lodges. This Report was embodied in a Memorandum sent to all Lodges in India under cover of a Foreword, dated December 1960, signed by the three Grand Masters.
The Foreword stated, among other things, that the attitude of the three Grand Lodges with regard to an independent Grand Lodge of India was indicated in the terms of reference for the Steering Committee, but that it was for the Brethren in Lodges in India to decide for themselves whether to opt for or against joining such a body. Much preparatory work had been done by the Committee set up to advice the Grand Masters, but the all-important question had to be decided at Lodge level. If the Brethren in India decided in favour of an independent Grand Lodge, then the three Grand Lodges would accept the decision and would wish to establish the closest fraternal relations with the new Grand Lodge of India.
All Lodges which opted to form the new Grand Lodge of India would immediately after the date of the Inaugural Meeting, return their existing Charters and would come under the jurisdiction of the Grand Lodge of India from the date of the Inaugural Meeting. Lodges which opted before the 30th of September 1961 would be numbered serially according to the date of their original formation. Masonic funds, effects and properties of Lodges which opted to form The Grand Lodge of India, would continue to vest in those Lodges. After the Inaugural Meeting, the three United Kingdom Grand Lodges would not issue Charters for any new Lodges within India.
All the Lodges in India were directed in the Memorandum to meet and discuss and resolve on the question of joining a Grand Lodge of India. To ensure uniformity, the proposition to be placed before each Lodge would be, 'That this Lodge do opt to join the proposed Grand Lodge of India on its inauguration'. It was emphasized that before the vote was taken, every effort should be made by Masters to ensure that members were fully aware of their responsibility and appreciate what was involved. Adequate notice had to be given of the meeting at which the voting would take place. Voting was to be by secret ballot, and the proposition was to be determined by a majority of votes of members present, the Master having an additional casting vote in the event of voting being equal.
The memorandum stated in conclusion that the Grand Masters expressed the firm hope that minorities, in Lodges where the voting was not unanimous, would abide by the decision of the majority and unite with it in furthering the activities of the Lodge under whichever Grand Lodge, old or new, it thereby decided to place itself.
When all the Lodges, English, Irish and Scottish, had voted, it was found that approximately 50 per cent of the Lodges in each Constitution had opted to join the new Grand Lodge of India. In point of fact, the new Grand Lodge of India began life with one hundred and forty-five Lodges upon its Roll.
The consecration meeting took place at the Ashoka Hotel, New Delhi. An Occasional Lodge was opened with Right Worshipful Brother Kenneth Large, District Grand Master for Bengal, as Master. The Wardens' Chairs were filled by Brothers C.M. Shahani and W.G. Miller, from the Irish and Scottish Constitutions respectively.
After the Lodge had been opened in all three degrees, deputations from the Grand Lodges of Scotland, Ireland and England - in that order - were received. The deputations consisted of (from Scotland) - The Earl of Eglinton and Winton, Most Worshipful Grand Master Mason; Dr. Alexander F. Buchan, Right Worshipful Grand Secretary; George S. Draffen, Very Worshipful Junior Grand Deacon; and S. W. Love, Past Provincial Grand Master of Renfrewshire East. (from the Grand Lodge of Ireland) - Right Worshipful Brother George S. Gamble, Deputy Grand Master; Worshipful Brother Sir Basil A.T. McFarland, Bart, Provincial Grand Master of Donegal; and Worshipful Brother Canon R. R. Hartford, Past Grand Chaplain. (from the United Grand Lodge of England) - Right Worshipful Brother The Earl Cadogan, Deputy Grand Master; Very Worshipful Brother J. W. Stubbs, Grand Secretary; Very Worshipful Brothers Canon J. R. Robson and Canon Mortlock, Past Grand Chaplains; Very Worshipful Brother Frank W. R. Douglas, Grand Director of Ceremonies; and Worshipful Brothers H. G. Potts and Lt. Col. M. G. Edwards, Past Deputy Grand Directors of Ceremonies.
After the three deputations had been received and seated, the Grand Master Mason of Scotland proceeded to the consecration. Thereafter the Deputy Grand Master of Ireland officially constituted the new Grand Lodge saying: "In the name of the Grand Lodges of England, Ireland and Scotland, and by command of their Grand Masters, I constitute and form you, my good Brethren, into the Sovereign Grand Lodge of India, and you are empowered henceforth to exercise all the rights and privileges of a Grand Lodge according to the ancient usages and landmarks of the Craft. May the Grand Architect of the Universe prosper, direct and counsel you in all your proceedings." After the consecration and constitution, the Deputy Grand Master of England assumed the throne and installed Major General Dr. Sir Syed Raza Ali Khan, G.C.I.E., K.C.S.I., D.Litt., LL.D., His Highness the Nawab of Rampur, as the first Grand Master of the Grand Lodge of India.
Thereafter the new Grand Master announced his appointments as Deputy Grand Master and Assistant Grand Masters who were invested and installed. This was followed by the appointment of the Regional Grand Masters and the appointment and installation of the Grand Officers of the Grand Lodge of India.
Among the Officers of the new Grand Lodge of India it is of interest to observe that, following the custom of the Grand Lodge of Scotland, there is an office of 'Bearer of the Volume of the Sacred Law.' There were in fact five Brethren installed into this office, each Brother bearing a separate Volume of the Sacred Law - The Gita, The Koran, The Granth, The Zend Avesta and The Bible.
In addition to the three parent Grand Lodges, the M.W.Grand Master of the Grand Lodge of the State of Israel, the M.W.Past Grand Master of the Grand Lodge of Alberta (Canada) and about 1,491 Brethren from all over India were present at this historic event.
1. November 27: Inauguration of Regional Grand Lodge of Northern India by M.W.The Grand Master, M.W.Bro. Maj.Gen.Dr.Sir Syed Raza Ali Khan, H.H.The Nawab of Rampur.
2. December 2: Inauguration of the Regional Grand Lodge of Eastern India by R.W.Bro. Bhogilal C. Shah, Dy. G.M.
3. December 6: Inauguration of the Regional Grand Lodge of Western India by R.W.Bro. Bhogilal C. Shah, Dy.G.M.
4. December 9: Inauguration of the Regional Grand Lodge of Southern India by R.W.Bro.Bhogilal C.Shah, Dy.G.M.
5. May 14, 1962: Lodge Kumaon (Nainital, No. 1870 EC, consecrated on 12.8.1888) opted to join the Grand Lodge of India and was numbered No. 148 GLI, since in the meantime two Lodges had been added: Lodge Shanthi, consecrated on 9.12.1961, and Lodge Bhogilal Shah, granted its Warrant by GLI on 10.3.1962.
|
0.979579 |
If you've got an allergy to mold, take action to keep it from growing out of control in your home. The key to success is keeping things clean and dry. Put this checklist on your fridge to remind yourself of the steps you should take.
1. Clean weekly. Disinfect where mold grows -- in trash cans, sinks, and bathrooms.
2. Look for leaks. Check your roof and pipes beneath sinks and in the basement.
3. Dry damp areas quickly. Mold can start to grow in 24 to 48 hours.
4. Keep indoor humidity 50% or lower. Use a dehumidifier if you need it.
5. Don't overwater indoor plants. Damp soil grows mold.
6. Keep your fridge clean. Watch for signs of trouble in drip trays and on door seals.
7. Clean mold from your heating or AC ductwork. Hire a professional to do it.
8. Limit storage in damp basements or garages. Don't give the fungus a chance to grow.
9. Remove carpets in damp areas. Carpet can breed mold if you have it in your bathrooms or the basement.
10. Air out kitchens and bathrooms. Put in exhaust fans to vent moisture.
11. Move mold away. Keep compost piles, yard clippings, and firewood far from the home.
12. Make sure gutters are clean. If they're blocked, this type of fungus can grow.
13. Check your foundation. The ground should slope away from it. If it doesn't, water may drain into your basement.
14. Stock up on allergy medication, if needed. Be ready before symptoms strike.
American Lung Association: "Make Valentine's Day an Asthma-Friendly Day."
Asthma and Allergy Foundation of America: "Tips to Control Indoor Allergens."
EPA: "Moisture and Mold Prevention and Control Tips," "Mold Clean Up," "What to Wear When Cleaning Moldy Areas."
|
0.949509 |
As one of Cincinnati's premier mediation attorneys, Rodger Walk acts as a facilitator to resolve disputes between parties in a civil group setting, without resorting to litigation to determine the outcome in the courts.
What is mediation, you may ask. Isn't it like arbitration? The answer is no! Mediation, like arbitration, is an alternative dispute resolution (ADR) process used to resolve disputes between parties without resorting to litigation to determine the outcome in the courts. In mediation (which is always a confidential process), unlike arbitration or litigation, the mediator does not act as a trier of fact as would an arbitrator, a judge, or a jury. Rather, the mediator acts as a facilitator to the parties who are seeking to resolve their dispute. Acting as a neutral party, the mediator identifies the interests that are important to each of the parties and facilitates discussions, both in a group setting and in private with each of the parties involved, so that they, with the help and input of the mediator, can fully explore a resolution of their dispute that meets their various goals and interests while crafting an outcome that they control and that is mutually agreeable to them. The benefits to the parties are obvious, as they exercise input into and control of a process that they have mutually agreed is the best way to achieve a resolution of their dispute.
|
0.992416 |
Few people need convincing that animals bring them happiness and joy, but just why is that? This can be explained for a number of reasons, some of which just being the fact that animals are cute and/or fluffy! Pets make us feel good, any pet owner can agree. The simple sound of a cat purring or a dog panting just brings joy to people. I grew up with cats my whole life, and currently, I have two fluffy tabby cats who both bring me so much joy. I’ve also been a horseback rider for over 10 years, so my love for horses is strong as well. I consider myself to be a huge animal lover, so really almost any animal brings me joy. Here are some factors on why animals are good for the soul!
Let me rephrase that; pets ARE members of the family! Having a pet means you care for them and spend lots of time with them, just the same amount of time that you spend with your other family members. You talk to your pets, hang out with them, they live with you, and sometimes sleep with you. You put so much love towards your pets it’s hard to consider them as anything other than a member of the family! I know I love my cats the same as the rest of my family!
There's a lot of scientific research when it comes to the health benefits of owning an animal. Studies show that if you own a dog you're more likely to get out and exercise more; whether you're taking the dog on a walk or to the park, you'll feel more energized to get out of the house. Studies also show that children exposed to dogs and cats are less likely to develop allergic diseases, including allergies to animals and food and forms of skin irritation such as eczema. Sharing your life with your furry friends positively influences and helps develop kids' immune systems! Another study found that animals focus on the present, meaning that they don't look forward to "tomorrow" because they don't possess that form of self-awareness. That being said, they help older adults and people suffering from mental illnesses, such as depression, enjoy the present and not focus excessively on negative thoughts.
Animals provide companionship, helping you get over everyday loneliness. They always keep you company and let you be you, meaning that they'll never judge you, which is something humans sometimes fail at. One of the reasons I continue to horseback ride is not just because I love horses and think it's fun, but also because they make me feel safe. They bring me happiness whenever I feel down, and that goes for any animal. Simply seeing a dog walking down the road will bring me a sense of joy, and I hope everybody feels that way as well! Pets help to build relationships, not only with other animal lovers but in general, by "teaching" us the basics of bonding with someone. Just introduce a cat to a dog person: they'll be skeptical at first, but they'll soon love cats as well. In the same way, pets bring us closer together as humans.
What can I say? If seeing a cute, fluffy little Pomeranian doesn’t bring you instant joy, you’re crazy!
All animals are innocent and don’t always understand us, so it’s best to give them endless amounts of love, especially with all the joy they bring us. Animals have always been a huge part of my life, and I hope that they can be a huge part of yours as well! If you’re not a pet owner, get out and spend some time at an animal shelter, or better yet adopt a pet if you can! Just remember, animals should not be taken for granted and some do require a lot more care than others. Make sure you appreciate them and give them back all the love and joy that they bring to you!
Do you have a pet who brings you endless amounts of joy? Or an experience with an animal that still makes you smile to this day? Let me know in the comments below!
|
0.945484 |
What Airlines Fly From Ft. Lauderdale to Costa Rica?
Costa Rica lies about 1,120 miles south, southwest of Fort Lauderdale, Florida. Costa Rica is one of several small countries that sit in the isthmus that connects North and South America. Its eastern shore faces the Caribbean Sea, while the Pacific Ocean borders the west. It takes about two and a half hours to fly from Fort Lauderdale to Costa Rica. The Florida city is located on the state's southeast coast, just north of Miami. Costa Rica is in the Central time zone but does not recognize Central Daylight Time.
While charter and seasonal flights land at several airports within Costa Rica, only two airports have regularly scheduled year-round service from the U.S. The Juan Santamaria International Airport serves San Jose, Costa Rica's capital and largest city. San Jose is located in the center of the country and is a convenient airport for business in the capital, and tours within the country. The airport lies just outside the city, and it takes just 20 minutes to reach most area hotels via taxi. The Liberia International Airport sits in Guanacaste, a northwest Costa Rican state, and is close to national parks and Pacific Coast beaches. The airport is eight miles northwest of Liberia and used mostly by tourists. San Jose lies 135 miles southeast of Liberia.
Spirit Airlines (spirit.com) offers daily nonstop flights to San Jose from Fort Lauderdale. As of May 2011, the airline schedules a daily flight in the late morning, arriving just before noon. The return flight departs San Jose in the early afternoon. A second late evening flight operates from Fort Lauderdale six days a week. The return flight leaves San Jose at about 1 a.m., arriving in Fort Lauderdale just before 6 a.m.
Airlines consider Ft. Lauderdale and Miami to be co-terminals, meaning both airports are equivalent for fare construction purposes. Miami International Airport is just 27 miles south of the Ft. Lauderdale Airport. American Airlines (aa.com) and Taca Airlines (taca.com) serve Miami, and each has nonstop flights to San Jose. American also offers nonstop flights to Liberia, whereas Taca serves the city via connecting service in San Jose.
Several airlines serve Costa Rica from Fort Lauderdale via another U.S. city. This means passengers will fly from Fort Lauderdale to another city, then take a connecting flight to Costa Rica. American Airlines serves San Jose and Liberia via Dallas. Continental (continental.com) serves both cities via Houston. Delta (delta.com) and U.S. Airways (usairways.com) fly to both cities via Atlanta and Charlotte, North Carolina, respectively.
Low-cost airlines such as Spirit Airlines generally offer highly competitive fares, but similar fares might be available from airports with two or more airlines serving the same destination, such as from Miami to San Jose. Airlines often offer promotional fares via connecting cities when projections indicate seats might go empty.
Fulton, Jeff. "What Airlines Fly From Ft. Lauderdale to Costa Rica?" Travel Tips - USA Today, https://traveltips.usatoday.com/airlines-fly-ft-lauderdale-costa-rica-55831.html. Accessed 21 April 2019.
|
0.935192 |
In New York City, in the mid-1910s, Bert Kalmar has reached the height of his career as a vaudeville performer with his dancing partner and sweetheart, Jessie Brown. In between shows, Bert composes music and secretly indulges in another favorite hobby of his: performing magic acts. Bert and Jessie are in love, but when Bert proposes marriage, Jessie insists that they wait until he is through being "everything in show business all at once." Billed as "Kendall the Great," Bert occasionally performs magic acts in disguise at a Coney Island theater. One day, while preparing for his magic show, Bert meets Harry Ruby, a song plugger who plays the piano at the Coney Island theater. Harry is instructed by his boss to serve as the magician's assistant, but he bungles his job and turns the Kendall the Great show into a comic disaster. Bert is angered by the fiasco, but becomes distracted by a more pressing problem when he discovers that his agent, Charlie Kope, and Jessie, who were in the audience, now know about his moonlighting. Bert later tries to incorporate some of his magic show themes into his act with Jessie, but she flatly rejects his ideas. Bert and Jessie continue performing their vaudeville act until the day that Bert injures his knee in a backstage accident. Much to his distress, Bert is told by a doctor that his injury will preclude him from dancing for at least one year. Hoping that Bert will now have more time to devote to her, Jessie suggests that they resume their plans to marry, but Bert rejects the idea. Jessie then decides to leave Bert and tour on her own. A short time later, at Al Masters' music library, Bert hears a tuneful song being played on a piano in the next room and asks to meet the composer. The composer turns out to be Harry, and although Bert remembers his first disastrous encounter with him, he eventually forgives Harry and begins writing songs with him. Following their first song, "My Sunny Tennessee," Bert and Harry create one hit song after another, but Bert grows increasingly depressed over his separation from Jessie. One day, Harry tries to help Bert overcome his depression by taking him on a trip to Buffalo, where Jessie is performing her show. Bert and Jessie resume their romance, and Jessie returns to New York with Bert, pledging to support his songwriting partnership with Harry. Harry, meanwhile, begins a romance with Terry Lordel, a sultry singer who is merely using him to further her career. Realizing that Harry is blind to Terry's scheme, Bert decides to protect him from an inevitable heartbreak by sending him to Florida to spend time with his favorite baseball team, the Washington Senators. When Harry returns to New York, he discovers that Terry has left him for another man. Harry is heartbroken, but Bert forces him to overlook his love troubles and concentrate on his work. A short time later, Harry reads a play that Bert has written and certain that it will fail, secretly sabotages the financing to protect Bert. Soon after the opening of the stage show Animal Crackers , for which Harry and Bert have contributed songs, Harry falls in love with Eileen Percy, a beautiful actress. One evening, at a party, Bert discovers the truth about Harry's involvement in the sabotaging of his play, and demands that they break off their partnership. Bert moves to Hollywood and becomes a successful screenplay writer, while Harry continues to compose songs. 
Harry eventually marries Eileen, who, with help from Jessie, secretly arranges a reunion of Bert and Harry on Phil Regan's radio show. Bert and Harry commemorate their reunion by singing a medley of their songs, and Bert surprises Harry at the end when he sings Harry's composition "Three Little Words," to which he had secretly written lyrics.
|
0.964905 |
For the Scottish backing band, see Bilbo Baggins (band).
Bilbo Baggins is the title character and protagonist of J. R. R. Tolkien's 1937 novel The Hobbit, as well as a supporting character in The Lord of the Rings. In Tolkien's narrative conceit, in which all the writings of Middle-earth are translations from the fictitious volume of The Red Book of Westmarch, Bilbo is the author of The Hobbit and translator of various "works from the elvish" (as mentioned in the end of The Return of the King).
In The Hobbit, Bilbo Baggins, a hobbit in comfortable middle age, was hired as a "burglar" – despite his initial objections – by the wizard Gandalf and 13 Dwarves led by their king, Thorin Oakenshield. The Dwarves were on a quest to reclaim the Lonely Mountain and its treasures from the dragon Smaug. The adventure took Bilbo and his companions through the wilderness, to the elf haven of Rivendell, across the Misty Mountains, through the black forest of Mirkwood, to Lake-town in the middle of Long Lake, and eventually to the Mountain itself. There, after Smaug was killed and the Mountain was reclaimed, the Battle of Five Armies took place. In that battle, a host of Elves, Men, and Dwarves – with the help of Eagles and Beorn the shapeshifter – defeated a host of Goblins and Wargs. At the end of the story, Bilbo returned to his home in the Shire to find that several of his relatives – believing him to be dead – were trying to claim his home and possessions.
During his journey, Bilbo encountered other fantastic creatures, including Trolls, Elves, giant spiders, Beorn (a man who could change into a bear), Goblins, Eagles, Wargs, and a murderous creature named Gollum. Underground, near Gollum's lair under the Misty Mountains, Bilbo accidentally found a magic ring of invisibility that he used to escape from Gollum.
By the end of the journey, Bilbo had become wiser and more confident, having saved the day in many precarious situations. Bilbo's journey has been compared to a pilgrimage of grace. The Hobbit can be characterized as a "Christian bildungsroman which equates progress to wisdom gained in the form of a rite of passage". He rescued the Dwarves from giant spiders with the magic ring and a short Elven-sword that he had acquired. He used the magic ring to sneak around in dangerous places, and he used his wits to smuggle the 13 Dwarves out of the Wood-elves' prison. When tensions arose over ownership of the treasures beneath the Lonely Mountain, Bilbo used the Arkenstone, a stolen heirloom jewel, as leverage in an unsuccessful attempt to negotiate a compromise between the Dwarves, the Wood-elves, and the Men of Lake-town. In so doing, Bilbo strained his relationship with Thorin; however, the two were reconciled at Thorin's deathbed following the Battle of the Five Armies. In addition to becoming wealthy from his share of the Dwarves' treasure, Bilbo found that he had traded respectability for experience and wisdom. At the end of the book, Gandalf proclaimed that Bilbo was no longer the Hobbit that he had been.
The Fellowship of the Ring, the first volume of The Lord of the Rings, begins with Bilbo's "eleventy-first" (111th) birthday, 60 years after the beginning of The Hobbit. The main character of the novel is Frodo Baggins, Bilbo's cousin,[nb 1] who celebrates his 33rd birthday and legally comes of age on the same day.
In T.A. 2989 (S.R. 1389), Bilbo, a lifelong bachelor, adopted Frodo, the orphaned son of his first cousin Primula Brandybuck and his second cousin Drogo Baggins, and made him his heir. Though Frodo was actually "his first and second cousin once removed either way", the two regarded each other as uncle and nephew.
All this time Bilbo had kept his magic ring, with no idea of its significance, using it mostly to hide from his obnoxious cousins, the Sackville-Bagginses, when they came to visit. Gandalf's investigations revealed it to be the One Ring forged by the Dark Lord Sauron. The Ring had prolonged Bilbo's life beyond the normal hobbit span, and at 111 he still looked 50. While the Ring did not initially corrupt him as it had its previous owners, it was beginning to affect him; over the years, it had begun to prey on his mind when out of his sight, and he lost sleep and felt "thin, sort of stretched … like butter that has been scraped over too much bread".
On the night of his and Frodo's birthday, Bilbo threw himself a party and invited all of the Shire. He signed his home, Bag End, and estate over to Frodo. He then gave a farewell address to his neighbours, at the end of which he put on the Ring and vanished from sight. As Bilbo prepared finally to leave the house, he reacted with panic and suspicion when Gandalf tried to persuade him to leave the Ring with Frodo. Bilbo refused to give up the Ring, referring to it as his "precious" – just as Gollum had. Gandalf lost his temper with his old friend and talked some sense into him. Bilbo admitted he would have liked to be rid of the Ring, and he left it behind, becoming the first person to do so voluntarily. He left the Shire that night, and was never seen in Hobbiton again.
His earlier adventure, his eccentric habits as a hobbit, and his sudden disappearance led to the enduring figure of "Mad Baggins" in hobbit folklore, who disappeared with a flash and a bang and returned with gold and jewels.
Freed of the Ring's power over his senses, Bilbo travelled first to Rivendell, and then on to visit the dwarves of the Lonely Mountain. After he returned to Rivendell he spent much of the next 17 years living a pleasant life of retirement: eating, sleeping, writing poetry, and working on his memoirs, There and Back Again, known as The Hobbit. He became a scholar of Elven lore, leaving behind the Translations from the Elvish, which forms the basis of what is known to us as The Silmarillion.
When Frodo and his friends Samwise Gamgee, Meriadoc Brandybuck and Peregrin Took stopped in Rivendell on their quest to destroy the Ring, Bilbo was still alive but now visibly aged, the years having caught up with him after he surrendered the Ring. Upon seeing the Ring again, he suddenly tried to take it from Frodo; he returned to his senses when a terrified Frodo backed away, and he broke down in tears, apologizing for bringing the burden of the Ring onto Frodo.
According to Appendix C of The Lord of the Rings, Bilbo was born to Bungo Baggins and Belladonna Took in T.A. 2890, or S.R. 1290. The Lord of the Rings gives the date of Bilbo's birthday as 22 September, but the actual date in the Shire calendar was Halimath 22; Tolkien tells us in Appendix D that he "used our modern names" for the months "to avoid confusion, while the seasonal implications of our names are more or less the same", so that Halimath is translated as September, but that "the Shire dates were actually in advance of ours by some ten days, and our New Year's Day corresponded more or less to the Shire January 9".
The Bagginses of Bag End were one of the oldest, wealthiest, and most respectable Hobbit families in Hobbiton until the year 2941 (SR 1341), when Bilbo inexplicably disappeared on his adventure and was thought dead.
"All that is gold does not glitter"
"The Man in the Moon Stayed Up Too Late"
"The Road Goes Ever On"
Tolkien's posthumously published poem "Bilbo's Last Song", illustrated by Pauline Baynes, describes Bilbo's contemplation of his forthcoming voyage to the Undying Lands. The illustrations evoke his last ride in the company of Elrond from Rivendell to the Grey Havens, as described in The Lord of the Rings.
In the 1955–1956 BBC Radio serialisation of The Lord of the Rings, Bilbo was played by Felix Felton.
In the 1968 BBC Radio serialisation of The Hobbit, Bilbo was played by Paul Daneman.
Nicol Williamson portrayed Bilbo with a light West Country accent in the 1974 performance released on Argo Records.
In the 1977 Rankin/Bass animated version of The Hobbit, Bilbo was voiced by Orson Bean. Bean also voiced both the aged Bilbo and Frodo in the same company's 1980 adaptation of The Return of the King.
In Ralph Bakshi's 1978 animated version of The Lord of the Rings, Bilbo was voiced by Norman Bird. Billy Barty was the model for Bilbo, as well as Frodo and Sam, in the live-action recordings Bakshi used for rotoscoping.
In the BBC's 1981 radio serialisation of The Lord of the Rings, Bilbo is played by John Le Mesurier.
In the 1993 television miniseries Hobitit by Finnish broadcaster Yle, Bilbo is portrayed by Martti Suosalo.
Throughout the 2003 video game, the player controls Bilbo, voiced by Michael Beatie. The game follows the plot of the book, but adds elements of platform gameplay and various side-objectives alongside the main quests.
In The Lord of the Rings Online (2007) Bilbo resides in Rivendell, mostly playing riddle games with the Elf Lindir in the Hall of Fire. The game also includes multiple outlandish stories about Bilbo's adventures and his ultimate fate, told by various Hobbits in the Shire.
In Peter Jackson's films The Fellowship of the Ring (2001) and The Return of the King (2003), Bilbo is played by Ian Holm, who had played Frodo in the BBC radio series 20 years earlier. The movies omit the 17-year gap between Bilbo's 111th birthday and Frodo's departure from the Shire; as a result, Bilbo mentions in Rivendell that he was unable to revisit the Lonely Mountain before his retirement.
In Peter Jackson's The Hobbit film series, a prequel to The Lord of the Rings, the young Bilbo is portrayed by Martin Freeman while Ian Holm reprises his role as an older Bilbo in An Unexpected Journey (2012) and The Battle of the Five Armies (2014).
The International Astronomical Union names all colles (small hills) on Saturn's moon Titan after characters in Tolkien's work. In 2012, they named a hilly area "Bilbo Colles" after Bilbo Baggins.
[nb 1] Although Frodo referred to Bilbo as his "uncle", they were in fact first and second cousins, once removed either way (his paternal great-great-uncle's son's son and his maternal great-aunt's son).
Pearce, Joseph (2012). Bilbo's Journey: Discovering the Hidden Meaning of the Hobbit. Charlotte, NC: Saint Benedict Press. ISBN 978-1618900586.
Tolkien, J.R.R. "Prologue, Of the Ordering of the Shire". The Lord of the Rings.
The Lord of the Rings, Appendix D.
White, James (22 October 2010). "Martin Freeman Confirmed As Bilbo!". Empire. Retrieved 28 November 2010.
"Categories for Naming Features on Planets and Satellites". Gazetteer of Planetary Nomenclature. International Astronomical Union. Retrieved 29 December 2012.
"Bilbo Colles". Gazetteer of Planetary Nomenclature. International Astronomical Union. Retrieved 14 November 2012.
For an alternate, fuller, version of Bilbo's family tree see Rodovid Engine.
The Hobbit, or There and Back Again is a childrens fantasy novel by English author J. R. R. Tolkien. It was published on 21 September 1937 to wide acclaim, being nominated for the Carnegie Medal. The book remains popular and is recognized as a classic in childrens literature. The Hobbit is set in a time between the Dawn of Færie and the Dominion of Men, and follows the quest of home-loving hobbit Bilbo Baggins to win a share of the treasure guarded by Smaug the dragon. Bilbos journey takes him from light-hearted, rural surroundings into more sinister territory, the story is told in the form of an episodic quest, and most chapters introduce a specific creature or type of creature of Tolkiens geography. Bilbo gains a new level of maturity and wisdom by accepting the disreputable, fey, the story reaches its climax in the Battle of the Five Armies, where many of the characters and creatures from earlier chapters re-emerge to engage in conflict. Personal growth and forms of heroism are central themes of the story and these themes have led critics to view Tolkiens own experiences during World War I as instrumental in shaping the story.
The authors scholarly knowledge of Germanic philology and interest in fairy tales are often noted as influences, the publisher was encouraged by the books critical and financial success and, requested a sequel. As Tolkiens work progressed on the successor The Lord of the Rings and these few but significant changes were integrated into the second edition. Further editions followed with minor emendations, including those reflecting Tolkiens changing concept of the world into which Bilbo stumbled, the work has never been out of print. Its ongoing legacy encompasses many adaptations for stage, radio, board games, several of these adaptations have received critical recognition on their own merits. Bilbo Baggins, the titular protagonist, is a respectable, reserved hobbit, during his adventure, Bilbo often refers to the contents of his larder at home and wishes he had more food. Until he finds a ring, he is more baggage than help. Gandalf, an itinerant wizard, introduces Bilbo to a company of thirteen dwarves, during the journey the wizard disappears on side errands dimly hinted at, only to appear again at key moments in the story.
Sauron /ˈsaʊrɒn/ is the title character and main antagonist of J. R. R. Tolkiens The Lord of the Rings. In the same work, he is identified as the necromancer, in Tolkiens The Silmarillion, he is described as the chief lieutenant of the first Dark Lord, Morgoth. The being known as Sauron originated as an immortal spirit, in his origin, Sauron therefore perceived the Creator directly. As Tolkien noted, Sauron could not, of course, be a sincere atheist, though one of the minor spirits created before the world, he knew Eru, according to his measure. In the terminology of Tolkiens invented language of Quenya, these spirits were called Ainur. Those who entered the world were called Valar, especially the most powerful ones. The lesser beings who entered the world, of whom Sauron was one, were called Maiar, in Tolkiens letters, the author noted that Sauron was of course a divine person. Tolkien noted that he was of a far higher order than the Maiar who came to Middle-earth as the Wizards Gandalf and Saruman.
As created by Eru, the Ainur were all good and uncorrupt, as Elrond stated in The Lord of the Rings, rebellion originated with the Vala Melkor. According to a story meant as a parable of events beyond Elvish comprehension, Eru let his spirit-children perform a great Music, the Music of the Ainur, developing a theme revealed by Eru himself. For a while the choir made wondrous music, but Melkor tried to increase his own glory by weaving into his song thoughts. Straightway discord arose around him, and many that sang nigh him grew despondent, but some began to attune their music to his rather than to the thought which they had at first. However, Sauron was not a beginner of discord, and he knew more of the Music than did Melkor, whose mind had always been filled with his own plans. Apparently Sauron was not even one of the spirits that immediately began to attune their music to that of Melkor, the cosmic Music now represented the conflict between good and evil. Finally, Eru abruptly brought the Song of Creation to an end, to show the spirits, faithful or otherwise, what they had done, Eru gave independent being to the now-marred Music.
In J. R. R. Tolkiens fantasy world of Middle-earth, the Misty Mountains are an epic mountain range, and one of the most important features of Middle-earths geography. The mountain-chain is less known by its alternative names. One of these is Hithaeglir, this was misspelled as Hithaiglin on the original Lord of the Rings map, other alternative names are the Mountains of Mist or the Towers of Mist. The range stretched continuously for some 900 miles across the continent of Middle-earth, the Misty Mountains first appeared in print in Tolkiens 1937 book, The Hobbit. A vision of the mountains is invoked in the first chapter and they are encountered directly in chapter 4. Further information about the mountains was added in Tolkiens subsequent publications, the northernmost section of the Misty Mountains ran from Carn Dûm to Mount Gundabad, and was known as the Mountains of Angmar. Mount Gundabad was where Durin awoke according to legend, though it was an abode of Orcs, Mount Gram, another Orc nest, was not far away.
Mount Gundabad was at the point where the range nearly joined the westernmost extremity of the Grey Mountains. The strategic gap was about 10 miles wide. The greatest Dwarf realm in Middle-earth, Khazad-dûm, was located at the midpoint of the Misty Mountains. The area's three massive peaks - the Mountains of Moria - were Caradhras, Celebdil and Fanuidhol; under Celebdil was the main part of Khazad-dûm, which included the Endless Stair, built by the Dwarves from the foundations of the mountain to its summit. The southernmost peak of the Misty Mountains was Methedras; from here the great range finally subsided into foothills, and here the southernmost foothills of the Misty Mountains looked across the Gap of Rohan to the northernmost foothills of the White Mountains. The Misty Mountains had very few passes; the most important of these were the High Pass and the Redhorn Pass, and there was a pass at the source of the Gladden. Some of Middle-earth's notable valleys and dales lay in or close to the Misty Mountains; nearby lay Nan Curunír, where Isengard was built.
In J. R. R. Tolkien's legendarium, Elves are one of the races that inhabit a fictional Earth, often called Middle-earth, and set in the remote past. They appear in The Hobbit and in The Lord of the Rings, and Tolkien had been writing about Elves long before he published The Hobbit. The modern English word elf derives from the Old English word ælf. Tolkien would make it clear in a letter that his Elves differ from those of the better known lore, referring to Scandinavian mythology. By 1915, when Tolkien was writing his first elven poems, the words elf, fairy and gnome had many divergent and contradictory associations. (One of the last of the Victorian fairy-paintings was The Piper of Dreams by Estella Canziani.) According to Marjorie Burns, Tolkien eventually chose the term elf over fairy, but still retained some doubts, as in his early verse: "I hear the tiny horns Of enchanted leprechauns And the padded feet of many gnomes a-coming." As a philologist, Tolkien's interest in languages led him to invent several languages of his own as a pastime. In considering the nature of who might speak these languages, and what stories they might tell, he turned again to elves; some of the stories Tolkien wrote as elven history have been seen to be directly influenced by Celtic mythology.
For example, Flight of The Noldoli is based on the Tuatha Dé Danann and Lebor Gabála Érenn. John Garth sees that, with the underground enslavement of the Noldoli to Melkor, Tolkien was essentially rewriting Irish myth regarding the Tuatha Dé Danann into a Christian eschatology. The name Inwe, given by Tolkien to the eldest of the elves and his clan, is similar to the name found in Norse mythology as that of the god Ingwi-Freyr. Terry Gunnell claims that the relationship between beautiful ships and the Elves is reminiscent of the god Njörðr and the god Freyr's ship Skíðblaðnir. Tolkien also retains the usage of the French-derived term fairy for the same creatures. Tolkien wrote of them: "They are made by man in his own image and likeness and they are immortal, and their will is directly effective for the achievement of imagination and desire." In The Book of Lost Tales, Tolkien includes both the more serious type of elves, such as Fëanor and Turgon, alongside the frivolous, Jacobean type of elves, such as the Solosimpi.
Alongside the idea of the greater Elves, Tolkien developed the idea of children visiting Valinor; Elves would visit children at night and comfort them if they had been chided or were upset. This theme, linking elves with children's dreams and nocturnal travelling, was largely abandoned in Tolkien's writing. Of Celtic influences, Tolkien wrote: "I do know Celtic things, and feel for them a certain distaste, largely for their fundamental unreason. They have bright colour, but are like a broken stained glass window reassembled without design and they are in fact mad as your reader says — but I don't believe I am." Terry Gunnell notes that the titles of the Germanic gods Freyr and Freyja are given to Celeborn and Galadriel. According to Tom Shippey, the theme of diminishment from semi-divine Elf to diminutive Fairy resurfaces in The Lord of the Rings in the dialogue of Galadriel: "Yet if you succeed, our power is diminished, and Lothlórien will fade, and we must depart into the West, or dwindle to a rustic folk of dell and cave, slowly to forget and to be forgotten."
Gandalf /ˈɡændɑːlf/ is a fictional character and one of the protagonists in J. R. R. Tolkiens novels The Hobbit and The Lord of the Rings. He is a wizard, member of the Istari order, as well as leader of the Fellowship of the Ring, in The Lord of the Rings, he is initially known as Gandalf the Grey, but returns from death as Gandalf the White. Tolkien discusses Gandalf in his essay on the Istari, which appears in the work Unfinished Tales. Merry he could be, and kindly to the young and simple, yet quick at times to sharp speech and the rebuking of folly, but he was not proud, and sought neither power nor praise. Mostly he journeyed tirelessly on foot, leaning on a staff, as one of the Maiar, Gandalf would have participated in the Music of the Ainur at the creation of the world. However he does not attain any prominence until the Valar settle in Valinor, in Valinor, Gandalf was known as Olórin. As recounted in the Valaquenta in The Silmarillion, he was one of the Maiar of Valinor, specifically, of the people of the Vala Manwë, and was said to be the wisest of the Maiar.
He was associated with two other Valar: Irmo, in whose gardens he lived, and Nienna, the patron of mercy. When the Valar decided to send the order of the Wizards to Middle-earth in order to counsel and assist all those who opposed Sauron, Olórin was proposed by Manwë. Olórin initially begged to be excused, as he feared Sauron and lacked the strength to face him. As one of the Maiar, Gandalf was not a mortal Man but an angelic being who had taken human form. As one of those spirits, Olórin was in service to the Creator. Along with the other Maiar who entered into the world as the five Wizards, he took on the specific form of an aged old man as a sign of his humility. Gandalf the Grey was the last of the Istari, landing in the Havens of Mithlond. He seemed the oldest and least in stature of them, but Círdan the Shipwright felt that he had the highest inner greatness on their first meeting in the Havens, and gave him Narya, the Ring of Fire. Saruman, the chief Wizard, learned of the gift and resented it. Gandalf hid the ring well, and it was not widely known until he left with the other ring-bearers at the end of the Third Age that he had been its keeper. Gandalf's relationship with Saruman, the head of their Order, was strained.
In literature, a conceit is an extended metaphor with a complex logic that governs a poetic passage or entire poem. By juxtaposing and manipulating images and ideas in surprising ways, extended conceits in English are part of the poetic idiom of Mannerism, during the late sixteenth and early seventeenth century. In English literature the term is associated with the 17th century metaphysical poets. The metaphysical conceit differs from an analogy in the sense that it does not have a clear-cut relationship between the things being compared. An example of the latter occurs in John Donnes A Valediction, Forbidding Mourning, the metaphysical conceit is often imaginative, exploring specific parts of an experience. John Donnes The Flea is a poem seemingly about fleas in a bed, when Sir Philip Sidney begins a sonnet with the conventional idiomatic expression My true-love hath my heart and I have his. He takes the metaphor literally and teases out a number of possibilities in the exchange of hearts. The result is a fully formed conceit, the Petrarchan conceit is a form of love poetry wherein a mans love interest is referred to in hyperbole.
Middle-earth is the setting of much of J. R. R. Tolkien's legendarium. The term is equivalent to the term Midgard of Norse mythology, describing the human-inhabited world. Middle-earth is the north continent of Earth in an imaginary period of the Earth's past, in the sense of a secondary or sub-creational reality. Its general position is reminiscent of Europe, with the environs of the Shire intended to be reminiscent of England. In later ages, after Morgoth's defeat and expulsion from Arda, his place was taken by his lieutenant Sauron. The Valar withdrew from involvement in the affairs of Middle-earth after the defeat of Morgoth. The most important wizards were Gandalf the Grey and Saruman the White; Gandalf remained true to his mission and proved crucial in the fight against Sauron. Saruman became corrupted and sought to establish himself as a rival to Sauron for absolute power in Middle-earth. Other races involved in the struggle against evil were Dwarves and, most famously, Hobbits. The early stages of the conflict are chronicled in The Silmarillion, while the later stages of the struggle to defeat Sauron are told in The Hobbit and The Lord of the Rings.
Conflict over the possession and control of precious or magical objects is a theme in the stories. The First Age is dominated by the doomed quest of the elf Fëanor, in ancient Germanic mythology, the world of Men is known by several names, such as Midgard, Middenheim and Middengeard. The term Middle-earth, referred to as middle-world, was therefore not invented by Tolkien. It is found throughout the Modern English period as a development of the Middle English word middel-erde, Tolkien first encountered the term middangeard in an Old English fragment he studied in 1914, Éala éarendel engla beorhtast / ofer middangeard monnum sended. Hail Earendel, brightest of angels / above the middle-earth sent unto men and this quote is from the second of the fragmentary remnants of the Crist poems by Cynewulf. The name Éarendel was the inspiration for Tolkiens mariner Eärendil, who set sail from the lands of Middle-earth to ask for aid from the angelic powers, Tolkiens earliest poem about Eärendil, from 1914, the same year he read the Crist poems, refers to the mid-worlds rim.
The Shire is a region of J. R. R. Tolkiens fictional Middle-earth, described in The Lord of the Rings and other works. The Shire refers to an area settled exclusively by Hobbits and largely removed from the goings-on in the rest of Middle-earth and it is located in the northwest of the continent, in the large region of Eriador and the Kingdom of Arnor. In the languages invented by Tolkien, its name in Westron was Sûza Shire or Sûzat The Shire, while its name in Sindarin was i Drann. According to Tolkien, the Shire measured 40 leagues from the Far Downs in the west to the Brandywine Bridge in the east, and 50 leagues from the northern moors to the marshes in the south. This is confirmed in an essay by Tolkien on translating The Lord of the Rings, the Shire was originally divided into four Farthings. The outlying lands of Buckland and the Westmarch were formally added after the War of the Ring, within the Farthings there are some smaller unofficial clan homelands, the Tooks nearly all live in or near Tuckborough in Tookland, for instance.
A Hobbit surname often indicates where the family came from; Samwise Gamgee's last name derives from Gamwich. Buckland was named for the Oldbucks. The Shire is described as a small but beautiful and fruitful land, beloved by its inhabitants. The Hobbits had an agricultural system in the Shire but were not industrialised. The landscape included small pockets of forest, and various supplies were produced in the Shire, including cereals, fruit and pipe-weed. The original parts of the Shire were subdivided into four Farthings; the Three-Farthing Stone marked the tripoint where the borders of the Eastfarthing, Westfarthing and Southfarthing of the Shire came together, by the East Road. It is claimed that the Three-Farthing Stone was inspired by the Four Shire Stone. Buckland and the Westmarch were formally given to the hobbits as the East and West Marches of the Shire by King Elessar after the War of the Ring, in S. R. By then Buckland had been settled; Gorhendad Oldbuck led hobbits from the East Farthing across the river in T. A. 2340. There is no mention of settlement in the Westmarch until Elessar's gift, when Sam Gamgee's daughter Elanor and her husband Fastred settled there. The Northfarthing was the least populous part of the Shire.
Gollum is a fictional character from J. R. R. Tolkiens legendarium. He was introduced in the 1937 fantasy novel The Hobbit, and became an important supporting character in its sequel, Gollum was a Stoor Hobbit of the River-folk, who lived near the Gladden Fields. Originally known as Sméagol, he was corrupted by the One Ring, in Appendix F of The Lord of the Rings, the name Sméagol is said to be a translation of the actual Middle-earth name Trahald. The Ring, which Gollum referred to as my precious or precious, centuries of the Rings influence twisted Gollums body and mind, and, by the time of the novels, he had come to love and hate the Ring, just as he loved and hated himself. Throughout the story, Gollum was torn between his lust for the Ring and his desire to be free of it, Bilbo Baggins found the Ring and took it for his own, and Gollum afterwards pursued it for the rest of his life. Gollum finally seized the Ring from Frodo Baggins at the Cracks of Doom in Orodruin in Mordor, but he fell into the fires of the volcano, where both he and the Ring were destroyed.
Gollum was first introduced in the Hobbit as a small, slimy creature who lived on an island in the centre of an underground lake at the roots of the Misty Mountains. He survived on fish, which he caught from his small boat. Over the years, his eyes adapted to the dark and became lamp-like, Bilbo Baggins stumbled upon Gollums lair, having found Gollums ring in the network of goblin tunnels leading down to the lake. At his wits end in the dark, Bilbo agreed to a game with Gollum on the chance of being shown the way out of the mountains. In the new version Gollum pretended that he would show Bilbo the way out if he lost the riddle-game, discovering the Ring missing, he suddenly realized the answer to Bilbos last riddle — What have I got in my pocket. Bilbo inadvertently discovered the Rings power of invisibility as he fled, Gollum was convinced that Bilbo knew the way out all along, and hoped to intercept him near the entrance, lest the goblins apprehend Bilbo and find the Ring. Bilbo at first thought to kill Gollum in order to escape, but was overcome with pity, as Bilbo escaped, Gollum cried out, Thief, Thief.
Trolls are fictional characters in J. R. R. Tolkien's legendarium. They are portrayed as large humanoids of great strength and poor intellect. In The Hobbit, Bilbo Baggins and the Dwarf company encountered three trolls on their journey to Erebor. The trolls captured the Dwarves and prepared to eat them, but Bilbo managed to distract them until dawn; they spoke with thick Cockney accents and even had English names, Tom, Bert and William. In The Lord of the Rings, Treebeard remarked that Trolls were made in mockery of Ents, as Orcs were of Elves. Trolls' origins are detailed in The Silmarillion. Morgoth, the evil Vala, created the first Trolls before the First Age of Middle-earth; they were strong and vicious but stupid creatures. Their major weakness was that they turned to stone in sunlight. During the wars of Beleriand, Gothmog had a bodyguard of trolls. As Morgoth had ordered that Húrin be captured alive, Húrin managed to wipe out the trolls before being captured by orcs. Many trolls died in the War of Wrath, but some survived and joined Sauron; in the Second Age and Third Age, trolls were among Sauron's most dangerous warriors.
They could speak, and used a form of Westron. Hill-trolls in the Coldfells north of Rivendell killed Arador, Chieftain of the Rangers of the North, at the Black Gate the Army of the West fought hill-trolls of Gorgoroth, which are generally taken to be Olog-hai. Cave Trolls attacked the Fellowship of the Ring in Moria, One is described as having dark greenish scales and black blood. Their hide was so thick that when Boromir struck one in the arm his sword was notched, Frodo Baggins was able to impale the toeless foot of the same troll with the enchanted dagger Sting. Mountain Trolls are mentioned once, wielding the great battering ram Grond in shattering the gates of Minas Tirith, snow Trolls are mentioned only in the story of Helm Hammerhand. When Helm went out clad in white during the Long Winter to stalk and slay his enemies, otherwise nothing is known of them. Olog-hai are described in Appendix F of Return of the King and they were strong, agile and cunning trolls created by Sauron, not unlike the Uruk-hai.
Frodo Baggins is a fictional character in J. R. R. Tolkien's legendarium, and one of the main protagonists of The Lord of the Rings. Frodo is a hobbit of the Shire who inherits the One Ring from his cousin Bilbo Baggins, and he is mentioned in Tolkien's posthumously published works, The Silmarillion and Unfinished Tales. Frodo did not appear until the draft of A Long-Expected Party. In the fourth draft, he was renamed Bingo Bolger-Baggins, son of Rollo Bolger. Tolkien did not change the name to Frodo until the third phase of writing, when much of the narrative, as far as the hobbits' arrival in Rivendell, had already taken shape. Prior to this, the name Frodo had been used for the character who eventually became Peregrin Took. Frodo is introduced in The Fellowship of the Ring as the adoptive heir of Bilbo Baggins. At the age of 21 he was adopted by his cousin Bilbo, and he and Bilbo shared the same birthday, the 22nd of September. It was Bilbo who introduced the Elvish languages to Frodo. Frodo and Meriadoc Brandybuck are first cousins once removed, since Frodo is first cousin to Meriadoc's father, Saradoc Brandybuck.
Their common ancestors are Gorbadoc Brandybuck and Mirabella Took Brandybuck; Frodo is moreover second and third cousin to Meriadoc's mother, Esmeralda Took. Frodo is related to Peregrin Took as well, and even Fredegar Bolger is second cousin once removed to Frodo. Frodo shares a close relationship with his gardener Samwise Gamgee although they are not related. The Council of Elrond convened on the 25th of October, T. A. 3018, in Rivendell. The Fellowship of the Ring opens as Frodo came of age and Bilbo left the Shire for good on his one hundred and eleventh birthday. Frodo inherited Bag End and Bilbo's ring, which were introduced in The Hobbit. Gandalf, at the time, was not certain about the origin of the Ring, so he warned Frodo to avoid using it. Realizing that he was a danger to the Shire as long as he remained there with the Ring, Frodo decided to leave home and take the Ring to Rivendell, home of Elrond. He left the Shire with three companions: his gardener Samwise Gamgee and his cousins Meriadoc Brandybuck and Peregrin Took. They escaped just in time, for Sauron's most powerful servants, the Nine Nazgûl, had entered the Shire as Black Riders, looking for Bilbo and the Ring.
Men of the 1st Battalion, Lancashire Fusiliers in a communication trench near Beaumont Hamel, 1916. Photo by Ernest Brooks.
Mentioned at the beginning of The Lord of the Rings, the Ivy Bush is the closest public house to Birmingham Oratory which Tolkien attended while living near Edgbaston Reservoir. Perrott's Folly is nearby.
"Welcome to Hobbiton" sign in Matamata, New Zealand, where the film trilogy was shot.
An Air New Zealand Boeing 777-300ER with "The Airline of Middle-earth" livery to promote the film The Hobbit: An Unexpected Journey, at London Heathrow Airport.
Hobbit holes as they were filmed on a farm near Matamata, New Zealand.
A cave-troll in The Fellowship of the Ring.
One of the Olog-hai approaches Aragorn in The Return of the King.
Elves as portrayed in the 1977 Rankin-Bass version of The Hobbit.
The One Ring in Peter Jackson's films.
|
0.999476 |
Here are my new clothes - 2 fashion outfits.
Outfit 1: it includes a white t-shirt, a green tank, and opal cuffed crops jeans with 3 badges.
Outfit 2: it includes a white t-shirt, a pink cropped cardigan, and opal cuffed crops jeans with two badges.
|
0.965789 |
What is the best care for my skin after sunburn?
Whenever the individual threshold dosage is exceeded, the UV radiation of the sun causes erythema. Symptoms range from minor reddening to a distinct sunburn. Several skin care therapies have proved successful, such as active agents with echinacea extract and D-panthenol, which both quickly show positive results. The combination with a liposomal concentrate has synergistic effects, as the phosphatidylcholine it contains has additional anti-inflammatory effects because of its linoleic acid and choline content. Evening primrose oil and linseed oil encapsulated in phosphatidylcholine-containing nanoparticles also act as anti-inflammatories, due to their high content of gamma- and alpha-linolenic acid, respectively. The application of an aloe vera product in the form of a surface film protects the skin and provides a cooling effect.
As skin with erythema symptoms is extremely permeable (a fact which also applies to a barrier-damaged horny layer), preservatives and perfumes are contraindicated for individuals with sensitive skin.
|
0.975643 |
Wisdom explores, discerns, weighs, creates and envisions; it avoids jumping to conclusions and getting trapped by assumptions. Anything which helps us raise and care-fully consider a healthy range of factors, perspectives and options before and as we act qualifies as deliberation. So utilize and institutionalize diverse forms of such potent consideration.
If we are going to be wise we need to deepen into issues, we need to look underneath and beyond the obvious and into the non-obvious. The definition of deliberation I am using here is any process through which people are thinking carefully. They’re not being sloppy about their assumptions and perspectives. They are being mindful and heartful in the sense of bringing in their full awareness and caring impulses. They care about what happens. Such consideration is not taken lightly and done sloppily. There is a kind of intellectual and cognitive and heart based craftsmanship going on here. It’s like somebody who is a great artist or craftsman at something. They care about what is going to happen, and they put the attention, resources and time into it required to come up with really good results. So wisdom is about doing that with life’s issues and with how we deal with life’s issues.
Here are my thoughts about some of the words I use in the short description of this pattern: “Explored” in the pattern heart communicates an open-ended quality. We are exploring; we are not going to stop short. We are going to adventure in this and that and the other, and see what in fact is going on here.
“Discerning” implies looking clearly, not having sloppy perception, noting what’s important – and noting what is irrelevant that is being pushed into the issue and any manipulation taking place.
“Weighing” involves exploring how this idea or option is better or worse than that other one… how this approach makes more or less sense than that other one. Some people don’t like the term, since it has a weightiness, a heaviness to it. Now, I don’t think that deliberation necessarily requires heaviness, but it does require a lot of different comparing kinds of energy in your mind and heart. But it can also be creative: Although we can have deliberation that chooses between two or three options, we can also have deliberation that creates abundant options in search of what would be really good. I think the wisest forms of deliberation involve engagement with many options and creating new options. One process is “choice-creation” – creating options beyond those given to us.
And then there’s “not getting trapped by assumptions.” There are particular methods to help us surface our assumptions and look at them. If we are working together and we have disagreements and concerns about what we are saying to each other and we delve into what’s underneath all that, that can help us not be trapped by our own and each other’s assumptions.
Inviting diverse people to be included in the conversation – especially people who are very different from those who happen to be here now – can also help us not get trapped by our own assumptions. Our assumptions can blind us. Because they are assumptions – because we assume them – we look right through them: they’re transparent to us. Individuals have assumptions. Groups can have assumptions. Real wisdom requires recognizing that we have internal lenses that are causing us to see things in particular ways and miss other things. Since we want wisdom, we want to not miss anything that’s important if we can help it. So any work we can do to get beyond our fixed assumptions can be very productive.
Deliberation covers anything that helps us raise up, bring to our attention, deal with care, consider, and reflect on a healthy range of factors – which is ideally everything that is really relevant plus a few more things that may not seem relevant at first. I actually developed a whole theory of “relevance-plus”, meaning including everything that is relevant plus a few other things, because you can never know (in an interconnected world) what’s totally irrelevant. Having things which are not obviously relevant added into the mix often creates surprising breakthrough insights, often from some stranger’s perspective, as you will see in the rent-a-cop guard story in this pattern’s examples section.
So having a culture of deliberation is what a wise democracy is largely about.
Bohm Dialogue can help deliberation because it is specifically about bringing to the group’s attention the presence of assumptions in what is being said. It is about clarifying them and suspending them for group awareness and consideration, whether or not they are explicitly addressed by the group. Awareness of assumptions can help people avoid blind spots. And this is best done in a diverse group because assumptions tend to be invisible to those who hold them, while being vividly obvious for those who don’t share them.
Re “welcoming the stranger”, there is a great true story of people who are trying to discuss their shoe business and what it should do next. They’re holding this meeting in breakout groups in a big warehouse and a rent-a-cop guard wanders over to one of the breakout groups and suggests that they create a new kind of military-style boot for people like him, boots that are more comfortable than the usual boot but still have that militaristic image that supports their work. The people in the group did that and ended up generating millions of dollars in sales. The guard wasn’t even part of the meeting, but welcoming him in to share his insight and ideas created new options that weren’t there before.
|
0.999998 |
Back to School Gadgets and electronics for the school year!
It is almost back to school, or already back to school for some students in Canada or the US, so what back-to-school gadgets do I recommend? I recommend a large flash drive, a netbook, a tablet computer, a laptop, a voice recorder, and an MP3 player for students to use during the school year for studying and relaxing.
A large flash drive like the SanDisk Cruzer 16 GB Cruzer USB 2.0 Flash Drive is useful for backing up important assignments, and large media files.
Netbooks like the Toshiba NB505-N508BL 10.1-Inch Netbook (Blue) have good battery life and are light, so they won't be overly heavy when you just need to take notes in class but don't want to carry around a heavy laptop. A netbook is also good for surfing the web and doing other basic tasks.
Laptops and netbooks are great, but they are not very easy to hold with one hand to read, so I recommend a tablet computer like the Acer Iconia Tab A500-10S16u 10.1-Inch Tablet Computer (Aluminum Metallic) for use as an e-book reader, or to go online, use apps, watch video, or listen to music. Some tablets can also be used to take HD video and pictures.
A full size laptop like the Toshiba Satellite L675-S7108 17.3-Inch LED Laptop (Grey) is great for running more powerful programs like video editors, music editors, photo editors, web design software, games, and watching movies in full screen.
A portable MP3 player like the Creative Labs Zen Mozaic EZ300 8 GB MP3 Player (Black) is great for listening to audio books, lectures you downloaded or recorded with a voice recorder, or music when you are relaxing.
The S-Video to 3-RCA Composite AV Cable for Laptop PC TV lets you easily connect a laptop with an S-video output to a TV which only has the yellow video and red and white audio composite inputs.
This cable will help you convert your S-video signal to the yellow RCA composite signal for the TV, and the 3.5 mm headphone audio jack to red and white audio connectors, so you can play audio back on your TV's speaker system. You can also use the red and white cables to plug into your stereo or home theater sound system if it supports RCA inputs for sound.
You can also use this cable for a DVD player which only have S-video out, and no RCA out. The length of the cable is about 5 ft. long.
RCA composite cables are also much easier to plug into a TV, because each plug is a single thick connector, whereas S-video plugs have multiple pins which you have to align accurately with the corresponding holes in the TV input; with RCA connectors you just push each plug into one socket.
The S-Video to 3-RCA Composite AV Cable for Laptop PC TV is the perfect cable for easily plugging your laptop's S-video feed into a TV with a yellow RCA video input.
|
0.999999 |
I've heard: "I've to go the potty", "I have to meet Mr John", "Nature is calling me, I have to go", "I've to go to the rest room".
These sentences aren't formal, are they? Is there any other way that I can use it when I'm in a meeting?
Excuse me, I'm just going to the loo.
Excuse me, I'm just going to the toilet.
You could substitute "bathroom" for "toilet" if you wanted to be more euphemistic, but if you wanted to be less explicit, I'd recommend avoiding the word altogether. "Bathroom" is not generally used in normal/informal conversation in the UK to mean toilet ("bathroom" normally means the room in your house which contains the bath, shower, toilet, etc.) but might work in a more formal context.
To deal with one of the specific examples you gave, potty is not a word for toilets, it's a word for the pseudo-toilets small children use before they are used to using a real toilet (see pictures). Potty therefore is not a word you would ever use in a formal or informal context to refer to the normal act of going to the toilet. "Go to the potty" would be used, for example, when a mother was talking about her toddler. The key difference here is that a potty is actually a different thing to a toilet and you say "go to the toilet" when you're going to the toilet and "go to the potty" when you're going to use a potty.
In your examples I wouldn't put the first three in a 'formal' category. If you're in a meeting and must inform everyone why you are leaving, "Excuse me, I just need to use the rest room" would be a perfectly acceptable way to do this. Depending on the meeting you could probably just say "Excuse me for a moment" without feeling the need to tell everyone what you're doing.
"Use the bathroom" is the most common euphemism, at least in the UK.
"Use the gents / ladies" - this is slightly more chatty, possibly better suited to a business context.
"Use the facilities" - also common, but it avoids the issue so much it sounds a little silly to me.
In a highly formal context the whole issue would be avoided. If you're about to close a big deal, hold it! And you don't see the Queen asking Mr President to "use the John". Instead she tells one of her aides, who tells one of his aides.
They are not British, either. I mentioned that because you tagged your question as British English.
In the UK at least, the only context in which people talk about potties is when discussing little children who use an actual potty http://dictionary.cambridge.org/dictionary/british/potty_2 This has no meaning when discussing anyone else who uses an actual toilet.
Restroom is part of American English in particular http://dictionary.cambridge.org/dictionary/british/restroom?q=restroom , and is not used in the UK.
I don't know about your other two examples, but they are not British.
If you want a way to say that you need to use a toilet, particularly a formal way, there is nothing wrong with saying I need to go to the toilet or I have to go to the toilet. You could also say I have to go to the lavatory, but that is rather old-fashioned and not as common.
|
0.999966 |
1 Center for the Study of the First Americans, Department of Anthropology, Texas A&M University, 4352 TAMU, College Station, TX 77843–4352, USA.
2 Center for the Study of the First Americans, Departments of Anthropology and Geography, Texas A&M University, 4352 TAMU, College Station, TX 77843–4352, USA.
3 Department of Anthropology, University of Utah, Salt Lake City, UT 84122–0060, USA.
When did humans colonize the Americas? From where did they come and what routes did they take? These questions have gripped scientists for decades, but until recently answers have proven difficult to find. Current genetic evidence implies dispersal from a single Siberian population toward the Bering Land Bridge no earlier than about 30,000 years ago (and possibly after 22,000 years ago), then migration from Beringia to the Americas sometime after 16,500 years ago. The archaeological records of Siberia and Beringia generally support these findings, as do archaeological sites in North and South America dating to as early as 15,000 years ago. If this is the time of colonization, geological data from western Canada suggest that humans dispersed along the recently deglaciated Pacific coastline.
Explaining when and how early modern humans entered the New World and adapted to its varied environments is one of anthropology's most exciting and enduring questions. Until recently, it was generally believed that about 13.5 thousand years ago (ka) (1) the first migrants spread rapidly from Beringia to Tierra del Fuego in a few centuries, passing through an interior ice-free corridor in western Canada, becoming Clovis, and hunting to extinction the last of the New World's mega-mammals (2). Today, we realize that the peopling of the Americas was a much more complex process, because of two significant developments during the past decade. Molecular geneticists, using refined methods and an ever-increasing sample of living populations and ancient remains, are now capable of providing reliable information on the Old World origins of the first Americans, the timing of their initial migration to the New World, and the number of major dispersal events. Archaeologists have found new sites and reinvestigated old ones using new methods, to test whether a pre-13.5-ka population existed in North and South America, and to explain how early populations colonized its unpeopled landscapes (Fig. 1). Here, we review these developments and present a working model explaining the dispersal of modern humans across the New World. We focus primarily on molecular genetic, archaeological, and human skeletal evidence. We do not review the contributions of historical linguistics, because most linguists today are pessimistic about the use of their data to reconstruct population histories beyond about 8 ka (3).
Map showing location of archaeological sites mentioned in text (1, Yamashita-cho; 2, Tianyuan Cave; 3, Studenoe-2; 4, Mal'ta; 5, Nizhnii Idzhir; 6, Alekseevsk; 7, Nepa-1; 8, Khaergas Cave; 9, Diuktai Cave; 10, Byzovaia; 11, Mamontovaya Kurya; 12, Yana RHS; 13, Ushki; 14, Tuluaq; 15, Nogahabara I; 16, Nenana; 17, Swan Point; 18, Old Crow; 19, Bluefish Caves; 20, Kennewick; 21, Paisley Caves; 22, Spirit Cave; 23, Arlington Springs; 24, Calico; 25, Tule Spring; 26, Pendejo Cave; 27, La Sena and Lovewell; 28, Gault; 29, Schaefer, Hebior, and Mud Lake; 30, Meadowcroft Rockshelter; 31, Cactus Hill; 32, Topper; 33, Page-Ladson; 34, Tlapacoya; 35, Pedra Furada; 36, Lagoa Santa; 37, Pikimachay; 38, Quebrada Jaguay; 39, Quebrada Santa Julia; 40, Monte Verde; 41, Piedra Museo; 42, Cerro Tres Tatas and Cuevo Casa del Minero; 43, Fell's Cave).
All human skeletal remains from the Americas are anatomically modern Homo sapiens; thus, the peopling of the New World is best understood in the context of the evolution and dispersal of modern humans in the Old World. Modern human dispersal from Africa across Eurasia began by about 50 ka (4, 5) and culminated with colonization of the Americas. Evidence from nuclear gene markers, mitochondrial (mt)DNA, and Y chromosomes indicates that all Native Americans came from Asia (6, 7). Molecular genetic diversity among modern Native Americans fits within five mtDNA (A, B, C, D, and X) and two Y-chromosome (C and Q) founding haplogroups, and all of these are found among indigenous populations of southern Siberia, from the Altai to Amur regions (8–10). Of these haplogroups, only X is known from both central Asia and Europe; however, X is a large, diverse haplogroup with many lineages, and the lineage found in Native American populations is distinct from those in Eurasia (6, 11). Ancient DNA from early American skeletal remains (12, 13) and human coprolites (14) link the present and the past; these, too, have only yielded Native American haplogroups of Asian origin. Based on the modern and ancient DNA records, then, Asia was the homeland of the first Americans, not Europe, lending no support to the recently proposed "Solutrean hypothesis," that the progenitors of Clovis were derived from an Upper Paleolithic population on the Iberian Peninsula (15, 16).
Using contemporary mtDNA and Y-chromosome variation as a clock, geneticists calculate that modern humans dispersed into greater central Asia by 40 ka (4, 17, 18), setting the stage for the colonization of the Americas. Corroborating human skeletal evidence of this event, however, is scarce. The earliest modern human remains in Siberia are from Mal'ta and date to only 24 ka (19). In nearby eastern Asia, though, modern human fossils from Tianyuan Cave and Yamashita-cho are dated to the critical period, 39 to 36 ka (20), and in Siberia, archaeological evidence suggests that modern humans entered the region by 45 to 40 ka, when initial Upper Paleolithic technologies, tool forms, items of personal adornment, and art appeared for the first time (21). In Europe, archaeologists link the emergence of such behaviors to the spread of modern humans from southwestern Asia (22).
Establishing when central Asian and Native American haplogroup lineages last shared a common ancestor has proven to be difficult. Current coalescent estimates based on variation in extant mtDNA lineages set the event at 25 to 20 ka (4) or less than 20 ka (23), after the last glacial maximum (LGM), and estimates based on Y-chromosome variability suggest that divergence occurred after 22.5 ka, possibly as late as 20 to 15 ka (7, 24, 25). The differences in calculations are the result of several issues, including potential variation in mutation rates, variable and sometimes circular techniques of calibrating coalescent times to calendar years, time-dependency of mutation and/or substitution rates, and effects of genetic drift on the original founding population (4, 26).
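As a purely illustrative aside (not part of the original review), the sensitivity of such coalescent estimates to the assumed mutation rate can be sketched in a few lines of Python. The rho value, substitution rates, and genome length below are stand-in assumptions chosen for illustration, not figures taken from the studies cited.

def coalescence_age_years(mean_mutations_per_lineage, mutations_per_site_per_year, sites):
    # Rho-style molecular-clock estimate: average mutational distance from the
    # founding haplotype divided by the per-lineage substitution rate per year.
    rate_per_lineage_per_year = mutations_per_site_per_year * sites
    return mean_mutations_per_lineage / rate_per_lineage_per_year

rho = 3.0          # hypothetical mean number of substitutions from the founder type
sites = 16569      # approximate length of the human mtDNA genome in base pairs
for mu in (1.7e-8, 2.5e-8):   # two plausible per-site, per-year substitution rates
    print(round(coalescence_age_years(rho, mu, sites)))   # about 10,700 and 7,200 years

In this toy example, a roughly 50% difference in the assumed rate shifts the resulting date by thousands of years, which is one reason the published estimates discussed above (25 to 20 ka versus less than 20 ka) can diverge even when based on similar data.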
New analyses of haplogroup subclades help to resolve when modern humans subsequently spread from Beringia to the rest of the Americas. Three subclades of mtDNA subhaplogroup C1 are widely distributed among North, Central, and South Americans but absent in Asian populations, which suggests that they evolved after the central Asian–Native American split, as the first Americans were dispersing from Beringia (27). The estimated date of coalescence for these subclades is 16.6 to 11.2 ka, which suggests that the colonization of the Americas south of the continental ice sheets may have occurred sometime during the late-glacial period, thousands of years after the initial splitting of Asian and Native American lineages. Genetic simulation studies and analyses of the geographic structure of Native American mtDNA haplogroups further suggest that colonization from Beringia occurred earlier in this time frame (about 16 ka) than later, because late-entry, rapid-spread models (like the Clovis-First model) are not capable of generating the observed geographic distribution of genetic patterns in extant populations (28, 29).
The cranial morphology of the earliest Americans [i.e., “Paleoamericans” like Kennewick (Washington), Spirit Cave (Nevada), and Lagoa Santa (Brazil)] is significantly different from that of more recent Native Americans (30). Given the assumption that craniometric variation is neutral and therefore phylogenetically significant, the differences could reflect two successive migrations stemming from two geographically or temporally distinct sources (31–33). Accordingly, Paleoamericans came to the New World first and were later replaced by ancestors of modern Native Americans.
Genetic data do not support this model. All major Native American mtDNA and Y-chromosome haplogroups emerged in the same region of central Asia, and all share similar coalescent dates, indicating that a single ancient gene pool is ancestral to all Native American populations (6, 10, 16). Similarly, all sampled native New World populations (from Alaska to Brazil) share a unique allele at a specific microsatellite locus that is not found in any Old World populations (except Koryak and Chukchi of western Beringia), which implies that all modern Native Americans descended from a single source population (34). This history is further supported by ancient DNA studies showing that Paleoamericans carried the same haplogroups (and even subhaplogroups) as modern Native groups (12–14). Thus, although the Paleoamerican sample is still small, the morphological differences are likely the result of genetic drift and natural selection (30), not separate migrations.
A separate but related problem is whether some modern Native American populations resulted from migrations that occurred after initial human dispersal. Phylogenetic analyses of haplogroup lineages cannot easily discriminate between a single migration and multiple migrations of genetically distinct but closely related populations. For this, we need identification of specific mtDNA and Y-chromosome haplogroup subclades through analysis of the entire molecule (as well as detailed studies of nuclear genome variation). A recent study investigating mtDNA subclade distributions across Siberia (11) recognized two subclades of haplogroup D2, one among central Siberian groups (D2a) and the other among Chukchi, Siberian Eskimos, and Aleuts (D2b). These subclades share a coalescent date of 8 to 6 ka, which suggests that middle-Holocene ancestors of modern Eskimo-Aleuts spread from Siberia into the Bering Sea region and not vice versa, which supports earlier interpretations based on dental evidence (35).
To colonize the Americas, modern humans had to learn to subsist in the extreme environments of the Siberian Arctic. They did this by 32 ka. The evidence comes from the Yana Rhinoceros Horn site (RHS), which is located along the lower Yana River in northwest Beringia and contains a frozen, well-preserved cultural layer with stone artifacts and remains of extinct fauna (36). Most interesting are bi-beveled rods on rhinoceros horn and mammoth ivory, signs of a sophisticated Upper Paleolithic technology. Sites of similar age occur in subarctic central Siberia (Nepa, Alekseevsk) and arctic European Russia (Mamontovaya Kurya, Byzovaia) (21, 37), which suggests that people had become well-equipped to handle life in the far north shortly after arriving in south Siberia (22). Their spread into the Arctic occurred during a time of relatively warm climate before the LGM.
As yet, no unequivocal traces that the early people of Yana RHS explored farther east onto the Bering Land Bridge and crossed into Alaska and northwest Canada have been found, but hints of an early human presence may include the 28-ka mammoth-bone core and flake recovered from Bluefish Caves (Yukon Territory) and even older bone materials from along the nearby Old Crow River (38). These bones, however, lack associated stone artifacts and might be the result of natural bone breakage (39). Instead, the earliest reliable archaeological evidence from eastern Beringia comes from Swan Point (central Alaska), where a distinctive microblade and burin industry dates to 14 ka (40). The Swan Point artifacts share many technological qualities with late Upper Paleolithic sites in central Siberia (e.g., Studenoe-2, Nizhnii Idzhir, Khaergas Cave, Diuktai Cave) and appear to document the dispersal of microblade-producing humans from Siberia to Beringia during the late glacial.
After 14 ka, the Beringian archaeological record becomes much more complicated. The best-documented industries for this time are the Nenana complex of central Alaska (dating to 13.8 to 13 ka) and the early Ushki complex of Kamchatka (13 ka) (22, 41). These complexes contain small bifaces and unifaces made on blades and flakes, but they lack microblades and burins. The Sluiceway-Tuluaq complex (northwest Alaska) also may be contemporaneous to Nenana but is technologically distinct from it (22). These sites contain large lanceolate bifaces that appear to date to about 13.2 ka. Another site, Nogahabara I (west Alaska), contains a mixed array of artifacts (lanceolate bifaces, notched bifaces, and microblade cores) reportedly dated to 13.8 to 12.7 ka (42); however, this site must be viewed with caution because the artifacts and bones used for dating are from near-surface and surface contexts in a sand dune blowout, a context notorious for artifact redeposition and mixing. After 13 ka, microblade and burin technologies reappear, sometimes in combination with bifacial point technologies. Perhaps these changes through time and across space relate to cultural differences and population turnovers, but more likely they represent the development of a unique human adaptation to the rapidly changing shrub-tundra environment of late-glacial Beringia (22). A small number of undated fluted points similar to Clovis occur in Alaska (39), but their relation to Clovis points found south of the continental ice sheets is unknown and may represent the backward flow of technologies (or people) from mid-latitude North America to Beringia at the very end of the Pleistocene (22, 39).
Since 40 ka, the Cordilleran and Laurentide ice sheets covered much of Canada, but during warmer periods they retreated sufficiently to create ice-free corridors along the Pacific coast and Plains east of the Canadian Rockies. These corridors were the conduits through which the first humans spread from Beringia to the Americas. When humans arrived in arctic Siberia at Yana RHS 32 ka, contracted ice sheets left wide-open corridors through which humans could have passed, but by 24 ka the ice sheets had grown sufficiently to clog both passageways (43). Although isolated ice-free refugia probably existed in both corridors throughout the LGM, humans probably did not occupy these areas until the corridors reopened during the late glacial. Timing of the reopening of the coastal and interior corridors is still debated, because of imprecise dating and because the various Cordilleran glaciers reacted differently to climate change (43). Nonetheless, the coastal corridor appears to have become deglaciated and open to human habitation by at least 15 ka, whereas the interior corridor may not have opened until 14 to 13.5 ka (44, 45). The archaeological records of both corridors are still inadequate for addressing questions about the initial peopling of the Americas; however, the presence of human remains dating to 13.1 to 13 ka at Arlington Springs, on Santa Rosa Island off the coast of California, indicates that the first Americans used watercraft (46).
Clovis and its contemporaries. Discussion of the early archaeological record south of the Canadian ice sheets starts with Clovis, the best-documented early complex in the Americas. Radiocarbon dates obtained over the last 40 years from Clovis sites across North America suggested that the complex ranged in age from 13.6 to 13 ka (2); however, evaluation of the existing dates and new 14C assays reveals that Clovis more precisely dates from 13.2–13.1 to 12.9–12.8 ka (47), a shorter and younger time span for Clovis than earlier thought. The current evidence suggests Clovis flourished during the late Allerød interstadial and quickly disappeared at the start of the Younger Dryas stadial. The apparent simultaneous appearance of Clovis across much of North America suggests that it rapidly expanded across the continent, but the overlap in 14C dates between regions of North America makes it impossible to determine a point of origin or direction of movement.
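For readers unfamiliar with how the 14C assays mentioned above become ages, the following minimal sketch (illustrative only, and not the dating or calibration protocol used in the studies cited) shows the conventional radiocarbon-age calculation; the fraction-modern value is a made-up example.

import math

def conventional_radiocarbon_age(fraction_modern):
    # Conventional 14C age in radiocarbon years BP, by convention using the
    # Libby mean life of 8033 years.
    return -8033 * math.log(fraction_modern)

print(round(conventional_radiocarbon_age(0.2525)))   # about 11,060 radiocarbon years BP

Converting such a radiocarbon age into the calendar ("calibrated") ages quoted in the text, such as 13.2 to 12.8 ka, requires a calibration curve (for example IntCal), because atmospheric 14C production has varied over time; this calibration step is part of why re-evaluating existing dates can narrow an age range.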
With recently excavated Clovis assemblages, especially from the southeastern United States and Texas, we know unequivocally that Clovis is characterized by not only bifacial technology but also distinctive Upper Paleolithic blade technology (Fig. 2) (15, 48). The principal diagnostic artifact of Clovis is its lanceolate fluted projectile point, not just because of its form but also the technology used to produce it. Other tool forms were equally important, especially formal stone tools like end scrapers, as well as cylindrical rods made on ivory, antler, or bone. These rods were beveled at one or both ends and functioned as fore-shafts or projectile points, respectively (48).
The distinctive Clovis biface and blade technologies (schematic diagram with approximate scale). Clovis fluted points were manufactured by reduction of a large blank through a succession of stages including removal of broad thinning flakes across the entire face of the biface, end thinning at all stages, and final fluting of the finished piece (A). Thinning flakes were often utilized as tools. Clovis blades were detached from conical and wedge-shaped cores (B), the main distinction being that conical cores have blade removals around their entire circumference while wedge-shaped cores have a single front of blade removals. Blades are long, parallel-sided, curved in longitudinal cross section, and triangular or trapezoidal in transverse cross section; they were often used as tools. These specific artifacts are made on Edwards chert from the Gault site, Texas.
Traditionally, Clovis has been thought to represent a population of mobile hunter-gatherers because individual Clovis tools had multiple functions and were highly curated, which suggests that they were part of a conveniently transported tool kit (2). Many Clovis tools were made on high-quality stones like chert and obsidian procured hundreds of kilometers from where they were eventually discarded (48). Clovis sites are small and typically represent mammoth or mastodon kills, short-term camps, or caches. In the southeastern United States and Texas, however, enormous scatters of Clovis artifacts have been found that possibly represent quarry-habitation sites habitually used by Clovis people, from which they did not range great distances. At the Gault site (Texas), of 650,000 excavated artifacts (mostly debitage), 99% are made from local, on-site cherts; rare nonlocal materials are from sources only 70 km away (49).
Clovis points have long been known to be associated with remains of mammoth and mastodon (2), but the importance of proboscideans in Clovis subsistence remains uncertain. Optimal foraging theory has been used to predict that humans would not become proboscidean-hunting specialists (50), and certainly the recurrence of bison, deer, hares, reptiles, and amphibians indicates that, in some contexts, Clovis people did more than hunt mammoth and mastodon (51). However, at least 12 unequivocal Clovis proboscidean kill and butchery sites are known (52), an unusually high number for such a short period of time, given that there are only six proboscidean kill sites for the entire Eurasian Upper Paleolithic (53). In most areas of North America, Clovis people hunted mammoth and mastodon regularly, and they likely played some role in their extinction. It is not surprising that they also subsisted on a variety of other foods.
Most Clovis sites are in North America. Few Clovis artifacts have been found in Central and South America (54). Instead, a different complex of archaeological sites may mark this era south of Panama. At least six sites in South America (Cerro Tres Tetas, Cueva Casa del Minero, and Piedra Museo in Argentina and Fell's Cave, Quebrada Santa Julia, and Quebrada Jaguay in Chile) have multiple dates that overlap the known age of Clovis (47, 55, 56). These sites mostly contain undiagnostic flake tools and bifaces, but distinctive Fishtail points (some with fluted bases) were found in deposits dating to 13.1 to 12.9 ka at Fell's Cave and Piedra Museo. Although it has been suggested that Fishtail points postdate Clovis and were derived from it (54), the two may have shared an earlier, as yet unidentified progenitor. Among the newest Clovis-aged localities in South America is Quebrada Santa Julia, a stratified site with a well-preserved living floor and hearth dating to 13.1 ka (57). Associated with the hearth were a broken, nondiagnostic, fluted biface, several flake tools, a core, and nearly 200 flakes, as well as remains of extinct horse. Quebrada Santa Julia provides an unambiguous association of fluting technology and extinct fauna in South America.
Early occupations. Since the discovery and definition of Clovis, researchers have searched for evidence of an even older occupation of the Americas, but most sites dating before Clovis investigated between 1960 and 1995 [e.g., Calico (California), Tule Springs (Nevada), Pendejo Cave (New Mexico), Pedra Furada (Brazil), Pikimachay Cave (Peru), and Tlapacoya (Mexico)] have not held up to scientific scrutiny (2, 39). Perhaps the best candidate is the Monte Verde site (Chile), which contains clear artifacts in a sealed context and is dated to 14.6 ka (58). Despite criticism (59), its acceptance by most archaeologists means synchronous and possibly earlier sites should exist in North America. A few localities dating between 15 and 14 ka now seem to provide compelling evidence of an occupation before Clovis.
In the northern United States, the Schaefer and Hebior sites (Wisconsin) provide strong evidence of human proboscidean hunting or scavenging near the margin of the Laurentide ice sheet between 14.8 and 14.2 ka (60, 61). At each site, disarticulated remains of a single mammoth were sealed in pond clay and associated with unequivocal stone artifacts. The bones bear consistent signs of butchering—cut and pry marks made by stone tools (61). Critics suggest that the bone breakage and surface marring is the result of natural processes (2); however, it is difficult to reject the evidence from these sites because of the consistent patterning of the marks, low-energy depositional context, and associated stone tools. Even earlier evidence of humans in Wisconsin is suggested by what appear to be cut and pry marks on the lower limb bones of a mammoth recovered from Mud Lake. These bones date to 16 ka, but stone tools are absent (61).
Three other sites—Meadowcroft Rockshelter (Pennsylvania), Page-Ladson (Florida), and Paisley Cave (Oregon)—may provide additional evidence of humans in North America by about 14.6 ka. At Meadowcroft Rockshelter, artifacts occur in sediments that may be as old as 22 to 18 ka (62), but it is the record post-dating 15.2 ka that is especially interesting. This is the uppermost layer of lower stratum IIa, which produced a small lanceolate biface and is bracketed by dates of 15.2 and 13.4 ka. Acceptance of the site, however, hinges on resolution of dating issues (63).
At Page-Ladson, early materials occur in a buried geologic context within a sinkhole that is now submerged by the Aucilla River. Seven pieces of chert debitage, one expedient unifacial flake tool, and a possible hammerstone were associated with extinct faunal remains, including a mastodon tusk with six deep grooves at the point where the tusk emerged from the alveolus of the cranium (64). These grooves are interpreted to have been made by humans as the tusk was removed from its socket. Seven 14C dates for this horizon average about 14.4 ka, which suggests human occupation of the sinkhole during the late Pleistocene when the water table was lower than it is today. Page-Ladson may contain evidence of pre-Clovis humans, but, despite extensive reporting on the site, more details on artifact contexts and site formation processes are needed to permit objective evaluation of the record.
At Paisley Cave, three human coprolites are directly 14C dated to about 14.1 ka (14). The human origin of the coprolites is supported by ancient mtDNA analyses that showed they contained haplogroups A and B, but a complete report is not yet available.
The evidence for humans in the Americas even earlier than 15 ka is less secure, but recently has been presented for four sites: Cactus Hill (Virginia), La Sena (Nebraska), Lovewell (Kansas), and Topper (South Carolina). Cactus Hill is a sand-dune site with late prehistoric, Archaic, and Clovis levels. Potentially older artifacts, including small prismatic blade cores, blades, and two basally thinned bifacial points were recovered 10 to 15 cm below the Clovis level (65). Three 14C dates ranging from 20 to 18 ka are reported from the levels below Clovis, but there are also dates of 10.3 ka and later. Charcoal samples were not recovered from hearth features but occur as isolated fragments at the same level as the artifacts. The younger dates indicate translocation of charcoal from overlying sediments, and the older charcoal could be derived from sediments underlying the cultural layer (59, 63), but luminescence dates on the aeolian sands correlate with the older 14C results and indicate minimal mixing of the sediments (66). Even though much information has yet to be published about this site, the potential presence of a biface and blade assemblage stratigraphically below the site's Clovis assemblage is compelling.
An even older occupation has been proposed based on taphonomically altered mammoth bones at the La Sena and Lovewell sites that date from 22 to 19 ka (67). Neither site has yielded stone tools or evidence of butchering; however, many of the leg bones display percussion impact and flaking, which suggests that they were quarried and flaked by humans while they were in a fresh, green state, within a few years of the death of the animals. Clovis people periodically flaked bone in this fashion, as did Upper Paleolithic Beringians (2, 22); however, in those contexts humans left behind stone tools, whereas at La Sena and Lovewell stone tools are absent.
Currently, the oldest claim for occupation of North America is at the Topper site, located on a Pleistocene terrace overlooking the Savannah River. Clovis artifacts at Topper are found at the base of a colluvial deposit, and older artifacts are reported in underlying sandy alluvial sediments dated to about 15 ka (68). The proposed early assemblage is a smashed core and microlithic industry. Cores and their removals show no negative bulbs, and flakes and spalls were modified into small unifacial tools and “bend-break tools,” possibly used for working wood or bone. In 2004, similar-looking material was found in older alluvial deposits dating in excess of 50 ka (69). Given that the assemblage was not produced through conventional Paleolithic technologies and that the putative artifacts could have been produced through natural processes (specifically thermal spalling), evaluation of this site must await a complete lithic analysis.
Unquestionably, the human skeletal evidence across the Americas shows that the New World was populated by Homo sapiens. Although the crania of these early people look different from modern Native Americans, modern and ancient DNA studies show that they were genetically related. The earliest inhabitants of the Americas hailed from south Siberia (between the Altai Mountains and Amur valley) and ultimately descended from a population of modern humans who dispersed from Africa by 50 ka and appeared in central Asia by 40 ka. Thus, a maximum limiting age can be placed on the entry of people into the New World of no earlier than 40 ka. Any claims for an earlier migration should be viewed with skepticism.
Current molecular evidence implies that members of a single population left Siberia and headed east to the Americas sometime between about 30 and 13 ka (Fig. 3). Most studies suggest this event occurred after the LGM, less than 22 ka. Recent analyses of mtDNA and nuclear sequence data further suggest a dispersal south from Beringia after 16.6 ka (27), from a founding population of less than 5000 individuals (70). The genetic record has not revealed multiple late-Pleistocene migrations, but does distinguish a Holocene dispersal of Eskimo-Aleuts from northeast Asia. There is nothing in the modern or ancient genetic records to suggest a European origin for some Native Americans.
Combined, the molecular genetic and archaeological records from Siberia, Beringia, and North and South America suggest humans dispersed from southern Siberia shortly after the last glacial maximum (LGM), arriving in the Americas as the Canadian ice sheets receded and the Pacific coastal corridor opened, 15 ka.
At first glance, the genetic evidence would seem to mesh well with the traditional view that Clovis represents the first people to enter the Americas. Redating of Clovis from 13.2–13.1 to 12.9–12.8 ka indicates it is not only centuries younger than the late-glacial complexes of Alaska but also younger than even the most conservative estimate for the opening of the interior Canadian corridor. The Clovis-First model, however, requires all American sites older than Clovis to be rejected, and this appears to be no longer possible. The Clovis-First model does not explain the apparent synchroneity between Clovis and the early Paleo-Indian sites of South America. Finally, a late-entry and rapid dispersal of humans across the New World is inconsistent with the distribution of genetic variation observed in Native American populations today.
Humans possibly colonized the Americas before the LGM. They occupied western Beringia by 32 ka, and no glacial ice sheets would have blocked passage through western Canada during this relatively warm time. However, there is still no unequivocal archaeological evidence in the Americas to support such an early entry.
The most parsimonious explanation of the available genetic, archaeological, and environmental evidence is that humans colonized the Americas around 15 ka, immediately after deglaciation of the Pacific coastal corridor. Monte Verde, Schaefer, and Hebior point to a human presence in the Americas by 14.6 ka. Human occupations at Meadowcroft, Page-Ladson, and Paisley Cave also appear to date to this time. Together these sites may represent the new basal stratum of American prehistory, one that could have given rise to Clovis. Most mtDNA and Y-chromosome haplogroup coalescence estimates predict a 15-ka migration event, and it may correlate to the post-LGM dispersal of microblade-producing populations into northern Siberia and their eventual appearance in Beringia during the late glacial. The first Americans used boats, and the coastal corridor would have been the likely route of passage since the interior corridor appears to have remained closed for at least another 1000 years. Once humans reached the Pacific Northwest, they could have continued their spread southward along the coast to Chile, as well as eastward along the southern margin of the continental ice sheets, possibly following traces of mammoth and mastodon to Wisconsin. Clovis could have originated south of the continental ice sheets, and the dense Clovis quarry-campsites in the southeastern United States may be the result of a longer occupation there than in other regions. Alternatively, Clovis could be the result of a second dispersal event from Beringia to America—from the same ancestral gene pool as the first dispersing population—when the interior ice-free corridor opened, about 13.5 ka.
The peopling of the Americas debate is far from resolved. To move forward, we must continue to take an interdisciplinary scientific approach to the problem. Archaeological investigations will provide the empirical evidence of the first Americans, but this evidence must be objectively and rigorously evaluated. Geoarchaeological investigations have and will play a major role by documenting the geological and geochronological context of sites and developing predictive models to find early sites. The sparse evidence for pre–13 ka occupation of the Americas may be a problem of sampling and artifact recognition. Genetic studies will also be key as more is learned about modern and ancient haplogroup subclades in combination with full mtDNA genome sequencing and identification of patterns of nuclear DNA variation. The empirical data from these fields and other disciplines will ultimately provide the evidence needed to build and test models to explain the origins and dispersal of the first Americans.
All ages are presented as ka (thousands of calendar years ago). Dates relating to genetic events are in calendar years based on coalescent methods. Dates relating to archaeological events are derived by calibrating radiocarbon ages. Radiocarbon dates younger than 21,000 14C years ago were calibrated with Calib 5.0.1 (IntCal04 curve); older dates were calibrated by using CalPal Online (CalPal 2007 HULU curve).
G. A. Haynes, The Early Settlement of North America: The Clovis Era (Cambridge Univ. Press, Cambridge, 2002).
J. N. Hill, in The Settlement of the American Continents: A Multidisciplinary Approach to Human Biogeography, C. M. Barton, G. A. Clark, D. R. Yesner, G. A. Pearson, Eds. (Univ. of Arizona Press, Tucson, 2004), pp. 39–48.
P. Forster, Philos. Trans. R. Soc. London B Biol. Sci. 359, 255 (2004).
M. Metspalu, T. Kivisild, H.-J. Bandelt, M. Richards, R. Villems, Nucleic Acids Mol. Biol. 81, 181 (2006).
D. A. Merriwether, in Environment, Origins, and Population, D. H. Ubelaker, Ed., Handbook of North American Indians, vol. 3, W. C. Sturtevant, Ed. (Smithsonian Institution Press, Washington, DC, 2006), pp. 817–830.
T. M. Karafet, S. L. Zegura, M. F. Hammer, in Environment, Origins, and Population, D. H. Ubelaker, Ed., Handbook of North American Indians, vol. 3, W. C. Sturtevant, Ed. (Smithsonian Institution Press, Washington, DC, 2006), pp. 831–839.
M. V. Derenko et al., Am. J. Hum. Genet. 69, 237 (2001).
E. B. Starikovskaya et al., Ann. Hum. Genet. 69, 67 (2005).
S. L. Zegura, T. M. Karafet, L. A. Zhivotovsky, M. F. Hammer, Mol. Biol. Evol. 21, 164 (2004).
M. Derenko et al., Am. J. Hum. Genet. 81, 1025 (2007).
B. M. Kemp et al., Am. J. Phys. Anthropol. 132, 605 (2007).
D. G. Smith, R. S. Malhi, J. A. Eshleman, F. A. Kaestle, B. M. Kemp, in Paleoamerican Origins: Beyond Clovis, R. Bonnichsen, B. T. Lepper, D. Stanford, M. R. Waters, Eds. (Center for the Study of the First Americans and Texas A&M Univ. Press, College Station, TX, 2005), pp. 243–254.
D. L. Jenkins, in Paleoindian or Paleoarchaic? Great Basin Human Ecology at the Pleistocene/Holocene Transition, K. E. Graf, D. N. Schmitt, Eds. (Univ. of Utah Press, Salt Lake City, 2007), pp. 57–81.
B. Bradley, D. Stanford, World Archaeol. 36, 459 (2004).
S. Wang et al., PLoS Genet. 3(11), e185 (2007).
R. S. Wells et al., Proc. Natl. Acad. Sci. U.S.A. 98, 10244 (2001).
D. Comas et al., Eur. J. Hum. Genet. 12, 495 (2004).
M. P. Richards, P. B. Pettit, M. C. Stiner, E. Trinkaus, Proc. Natl. Acad. Sci. U.S.A. 98, 6528 (2001).
H. Shang, H. Tong, S. Zhang, F. Chen, E. Trinkaus, Proc. Natl. Acad. Sci. U.S.A. 104, 6573 (2007).
T. Goebel, Evol. Anthropol. 8, 208 (1999).
J. F. Hoffecker, S. A. Elias, Human Ecology of Beringia (Columbia Univ. Press, New York, 2007).
T. G. Schurr, S. T. Sherry, Am. J. Hum. Biol. 16, 420 (2004).
M.-C. Bortolini et al., Am. J. Hum. Genet. 73, 524 (2003).
M. Seielstad et al., Am. J. Hum. Genet. 73, 700 (2003).
S. Y. W. Ho, G. Larson, Trends Genet. 22, 79 (2006).
E. Tamm et al., PLoS ONE 2(9), e829 (2007).
A. G. Fix, Am. J. Phys. Anthropol. 128, 430 (2005).
D. H. O'Rourke, M. G. Hayes, S. W. Carlyle, Hum. Biol. 72, 15 (2000).
J. F. Powell, The First Americans: Race, Evolution, and the Origin of Native Americans (Cambridge Univ. Press, Cambridge, 2005).
R. L. Jantz, D. W. Owsley, Am. J. Phys. Anthropol. 114, 146 (2001).
R. González-José et al., Am. J. Phys. Anthropol. 128, 772 (2005).
W. A. Neves, M. Hubbe, L. B. Piló, J. Hum. Evol. 52, 16 (2007).
K. B. Schroeder et al., Biol. Lett. doi:10.1098/rsbl.2006.0609 (2007).
J. H. Greenberg, C. G. Turner II, S. L. Zegura, Curr. Anthropol. 27, 477 (1986).
V. V. Pitulko et al., Science 303, 52 (2004).
P. Pavlov, J. I. Svendsen, S. Indrelid, Nature 413, 64 (2001).
R. E. Morlan, Quat. Res. 60, 123 (2003).
E. J. Dixon, Bones, Boats, and Bison: Archeology and the First Colonization of Western North America (Univ. of Utah Press, Salt Lake City, 1999).
C. E. Holmes, B. A. Crass, paper presented at the 30th annual meeting of the Alaska Anthropological Association, Fairbanks, 27 to 29 March 2003.
T. Goebel, M. R. Waters, M. A. Dikova, Science 301, 501 (2003).
D. Odess, J. T. Rasic, Am. Antiq. 72, 691 (2007).
J. J. Clague, R. W. Mathewes, T. A. Ager, in Entering America: Northeast Asia and Beringia before the Last Glacial Maximum, D. B. Madsen, Ed. (Univ. of Utah Press, Salt Lake City, 2004), pp. 63–94.
C. A. S. Mandryk, H. Josenhans, D. W. Fedje, R. W. Mathewes, Quat. Sci. Rev. 20, 301 (2001).
A. S. Dyke, in Quaternary Glaciations—Extent and Chronology, Part II: North America, J. Ehlers, P. L. Gibbard, Eds. (Elsevier, Amsterdam, 2004), pp. 373–424.
J. R. Johnson, T. W. Stafford Jr., G. J. West, T. K. Rockwell, American Geophysical Union Joint Assembly, Acapulco, 22 to 25 May 2007, Eos 88(23), Jt. Assem. Suppl., Abstr. PP42A-03.
M. R. Waters, T. W. Stafford Jr., Science 315, 1122 (2007).
K. B. Tankersley, in The Settlement of the American Continents: A Multidisciplinary Approach to Human Biogeography, C. M. Barton, G. A. Clark, D. R. Yesner, G. A. Pearson, Eds. (Univ. of Arizona Press, Tucson, 2004), pp. 49–63.
M. B. Collins, in Foragers of the Terminal Pleistocene in North America, R. B. Walker, B. N. Driskell, Eds. (Univ. of Nebraska Press, Lincoln, 2007), pp. 59–87.
D. A. Byers, A. Ugan, J. Archaeol. Sci. 32, 1624 (2005).
M. D. Cannon, D. J. Meltzer, Quat. Sci. Rev. 23, 1955 (2004).
D. K. Grayson, D. J. Meltzer, J. Archaeol. Sci. 30, 585 (2003).
T. Surovell, N. Waguespack, P. J. Brantingham, Proc. Natl. Acad. Sci. U.S.A. 102, 6231 (2005).
J. E. Morrow, C. Gnecco, Eds., Paleoindian Archaeology: A Hemispheric Perspective (Univ. Press of Florida, Gainesville, 2006).
L. Miotti, M. C. Salemme, Quat. Int. 109-110, 95 (2003).
D. H. Sandweiss et al., Science 281, 1830 (1998).
D. Jackson, C. Méndez, R. Seguel, A. Maldonado, G. Vargas, Curr. Anthropol. 48, 725 (2007).
T. D. Dillehay, Ed., Monte Verde: A Late Pleistocene Settlement in Chile, vol. 2, The Archaeological Context and Interpretation (Smithsonian Institution Press, Washington, DC, 1997).
S. J. Fiedel, J. Archaeolog. Res. 8, 39 (2000).
D. J. Joyce, Quat. Int. 142-143, 44 (2006).
D. F. Overstreet, in Paleoamerican Origins: Beyond Clovis, R. Bonnichsen, B. T. Lepper, D. Stanford, M. R. Waters, Eds. (Center for the Study of the First Americans, Texas A&M Univ. Press, College Station, TX, 2005), pp. 183–195.
J. M. Adovasio, D. R. Pedler, in Entering America: Northeast Asia and Beringia before the Last Glacial Maximum, D. M. Madsen, Ed. (Univ. of Utah Press, Salt Lake City, 2004), pp. 139–158.
C. V. Haynes Jr., in Paleoamerican Origins: Beyond Clovis, R. Bonnichsen, B. T. Lepper, D. Stanford, M. R. Waters, Eds. (Center for the Study of the First Americans, Texas A&M Univ. Press, College Station, TX, 2005), pp. 113–132.
S. D. Webb, Ed., First Floridians and Last Mastodons: The Page-Ladson Site in the Aucilla River (Springer, Dordrecht, The Netherlands, 2005).
J. M. McAvoy, L. D. McAvoy, Eds., Archaeological Investigations of Site 44SX202, Cactus Hill, Sussex County, Virginia (Research Report Series No. 8, Virginia Department of Historic Resources, Richmond, 1997).
J. K. Feathers, E. J. Rhodes, S. Huot, J. M. McAvoy, Quat. Geochronol. 1, 167 (2006).
S. R. Holen, Quat. Int. 142-143, 30 (2006).
A. C. Goodyear, in Paleoamerican Origins: Beyond Clovis, R. Bonnichsen, B. T. Lepper, D. Stanford, M. R. Waters, Eds. (Center for the Study of the First Americans, Texas A&M Univ. Press, College Station, TX, 2005), pp. 103–112.
A. C. Goodyear, Legacy: S. Carolina Inst. Archaeol. Anthropol. 9-1/2, 1 (2005).
A. Kitchen, M. M. Miyamoto, C. J. Mulligan, PLoS One 3, e1596 (2008).
We thank J. Enk, S. Fiedel, K. Graf, H. Harpending, G. Haynes, E. Marchani, J. O'Connell, A. Scola, and J. Tackney for comments on early drafts of this paper. C. Pevny assisted in preparation of Fig. 2.
|
0.928877 |
What are the Requirements for Entrance into Two-Year Colleges?
Many students opt to enter two-year colleges before going on to pursue degrees at four-year colleges or universities. These students must meet certain entrance requirements, which are usually quite lenient, though they do vary from state to state and even from one school to another. Read on for more information regarding the requirements for entrance into two-year colleges.
Requirements for entrance into two-year colleges are less stringent than those of four-year colleges and universities. Most two-year colleges have an 'open door' policy, meaning anyone who meets a few simple entrance requirements will be admitted. Requirements for entrance into two-year colleges generally include educational background information, relevant test scores, and an admission application with supporting paperwork.
Common Courses: Most two-year degrees require you to take basic level courses in math and English (composition); language, history, geography, and IT courses also tend to be common core courses.
Online Availability: Many associate degree courses are fully available online, although proctored exams may be required during the course of study.
Concentrations: Concentrations are usually available in arts (Associate of Arts, AA), applied science (Associate of Applied Science, AAS), and science (Associate of Science, AS) degrees, with focus on subjects such as business administration, English, general studies, information technology, and education.
Possible Careers: Although two-year degrees also serve as a foundation for higher education opportunities, they can lead to careers in business, education, and a variety of other fields, depending on the area of specialization.
In most cases, applicants to any two-year junior, community or technical college in the United States must possess a high school diploma or equivalent. Some two-year colleges will consider an applicant without a diploma or GED (General Equivalency Diploma) if the applicant displays the ability or potential for college success. High school transcripts are often requested by some two-year colleges.
Submitting a completed application is the first step for those who meet the necessary education requirements for entrance into two-year colleges. Many colleges provide downloadable applications online. Some two-year colleges require that proof of financial support and an autobiographical essay accompany the completed application.
While SAT or ACT scores have little bearing on being admitted to a two-year college, many such institutions still ask that students submit them. These scores may be used to determine students' placement in English or math courses. The ACT Compass test is another exam sometimes used for placement in English and math courses.
Additionally, applicants who do not list English as their first language must take an English proficiency test. The two most common English proficiency tests are the TOEFL (Test of English as a Foreign Language) or IELTS (International English Language Testing System). Non-English speaking applicants must meet minimum testing scores on one of these tests to be considered for admission into a two-year college. These minimum scores may vary from one college to another.
|
0.970882 |
Amelia Earhart: The Final Flight (also known as Amelia Earhart) is a 1994 television film starring Diane Keaton, Rutger Hauer and Bruce Dern. It is based on Doris L. Rich's Amelia Earhart: A Biography. The film depicts events in the life of Amelia Earhart, focusing on her final flight and disappearance in 1937, with her exploits in aviation and her marriage to publisher G.P. Putnam being revealed in flashbacks. This film was not the first television dramatization of Earhart's life, as Amelia Earhart appeared in 1976, starring Susan Clark as Earhart and John Forsythe as her husband George Putnam.
In 1928, Amelia Earhart (Diane Keaton) gains fame by undertaking a transatlantic flight, albeit as a passenger. Her marriage to media tycoon George Palmer Putnam (Bruce Dern) and a series of record-breaking flights propel her to international fame as a long-distance flyer. With help from a close friend and adviser, Paul Mantz (Paul Guilfoyle), Earhart and her navigator, the hard-drinking Fred Noonan (Rutger Hauer), plan her longest flight ever: a round-the-world attempt in 1937. Interest in Japanese-held islands may have played a part in her disappearance; the massive search effort that follows is unsuccessful, but it solidifies Earhart's standing as an aviation icon.
Critics decried the inaccurate portrayals of historical figures such as Earhart and Putnam.
Principal photography began on October 18, 1993 with studio work as well as location shooting in both California and Quebec. Although a Beech D18 stood in for Earhart's famed Lockheed Model 10 Electra, the aircraft used in the 1937 circumnavigational flight of the globe, it was an adequate substitute. Well-known race pilot Steve Hinton, president of the Planes of Fame Air Museum and owner of Fighter Rebuilders, flew for the film.
Interest in the story of Amelia Earhart, especially with the release of Amelia in 2009, led film reviewers to recall the earlier Earhart portrayals. Rosalind Russell had played "an Earhart-esque flier in 1943's Flight for Freedom" and Susan Clark starred in the 1976 miniseries, Amelia Earhart.
Following closely the contemporary Earhart biographies that had appeared, Amelia Earhart: The Final Flight dramatized Earhart's final flight to the extent that more myth than fact comes through. Reviews of the performances in Amelia Earhart: The Final Flight were mixed, with some observers noting that the depictions were not true to the character of the historical figures that were portrayed.
Keaton's understated portrayal of Earhart resulted in nominations for a 1995 Golden Globe and a 1995 Emmy for Lead Actress in a Miniseries or Special, as well as a 1995 Screen Actors Guild Award nomination. Editor Michael D. Ornstein won the 1995 CableACE Award for Editing, while the production also garnered nominations for an American Society of Cinematographers (ASC) Award for Outstanding Achievement in Cinematography in Movies of the Week/Pilots and an Emmy nomination for Single Camera Editing in a Miniseries/Special for 1995.
↑ "Amelia Earhart." YourTrailers.net. Retrieved: May 2, 2012.
↑ "Amelia Earhart: The Final Flight (1994) Full credits." imdb. Retrieved: May 3, 2012.
↑ Hiltbrand, David. "Picks and Pans Review: Amelia Earhart: the Final Flight." People, Vol. 41, No. 22, June 13, 1994. Retrieved: May 3, 2012.
↑ "Amelia Earhart (1976): Miscellaneous Notes." Turner Classic Movies. Retrieved: May 3, 2012.
↑ Mikkelson, David and Barbara. "The Plane Truth." snopes.com, 2012. Retrieved: May 3, 2012.
↑ Germain, Scott. "Fighter Rebuilders." warbirdaeropress.com, 2006. Retrieved: May 4, 2012.
↑ "President." planesoffame.org. Retrieved: May 4, 2012.
↑ "About Susan Clark." yahoo.com. Retrieved: May 3, 2012.
↑ King, Susan. "Amelia Earhart's soaring spirit." Los Angeles Times, May 25, 2009. Retrieved: May 3, 2012.
↑ Butler 1989, p. 416.
↑ Lovell 2009, p. 351.
↑ Goldstein and Dillon 1997, pp. 273–274.
↑ Kennedy, David M. "She was betrayed by fame." The New York Times, November 26, 1989. Retrieved: May 2, 2012.
↑ Tucker, Ken. "TV Review: Amelia Earhart: The Final Flight (1994)." Entertainment Weekly, June 10, 1994. Retrieved: May 3, 2012.
↑ Phillips, Joseph. "Amelia Earhart - The Final Flight (1994) Diane Keaton." Rare TV on DVD. Retrieved: May 3, 2012.
↑ McCallion, Bernadette. "Amelia Earhart: The Final Flight: Overview." msn.com. Retrieved: May 3, 2012.
↑ "Awards for Amelia Earhart: The Final Flight (1994)." Turner Classic Movies. Retrieved: May 4, 2012.
↑ "Amelia Earhart: The Final Flight (1994) Awards." imdb. Retrieved: May 4, 2012.
Goldstein, Donald M. and Katherine V. Dillon. Amelia: The Centennial Biography of an Aviation Pioneer. Washington, D.C.: Brassey's, 2009, first edition 1997. ISBN 1-57488-134-5.
Lovell, Mary S. The Sound of Wings: The Life of Amelia Earhart. New York: St. Martin's Press, 1989. ISBN 0-312-03431-8.
|
0.959095 |
A government budget is an annual financial statement presenting the government's proposed revenues and spending for a financial year that is often passed by the legislature, approved by the chief executive or president and presented by the Finance Minister to the nation.
In the United Kingdom and Australia, the finance minister (called the "Chancellor of the Exchequer" and the "Treasurer" respectively) is in practice the most important cabinet post after the Prime Minister.
In the United States, the finance minister is called the "Secretary of the Treasury", though there is a separate and subordinate Treasurer of the United States, and it is the director of the Office of Management and Budget who drafts the budget.
This position in the federal government of the United States is analogous to the Minister of Finance in many other countries.
The position of the finance minister might be named for this portfolio, but it may also have some other name, like "Treasurer" or, in the United Kingdom, "Chancellor of the Exchequer". Due to a quirk of history, the Chancellor of the Exchequer is also styled Second Lord of the Treasury with the Prime Minister also holding the historic position of First Lord of the Treasury.
The chancellor is responsible for all economic and financial matters, equivalent to the role of finance minister in other nations.
This office is not equivalent to the usual position of the "Treasurer" in other governments; the closer equivalent of a Treasurer in the United Kingdom is the Chancellor of the Exchequer, who is the Second Lord of the Treasury.
Finance ministers can be unpopular if they must raise taxes or cut spending.
Finance ministers often dislike this practice, since it reduces their freedom of action.
This is a list of current finance ministers of the 193 United Nations member states, Holy See (Vatican City) and the State of Palestine.
A ministry of finance is a common type of government department, headed by a finance minister.
A finance minister's portfolio has a large variety of names across the world, such as "treasury", "finance", "financial affairs", "economy" or "economic affairs".
Finance ministers are also often found in governments of federated states or provinces of a federal country.
In these cases their powers may be substantially limited by superior legislative or fiscal policy, notably the control of taxation, spending, currency, inter-bank interest rates and the money supply.
It may also be a junior minister in the finance department; the British Treasury, for example, has four junior ministers.
In Hong Kong the finance minister is called the Financial Secretary, though there is a Secretary for the Treasury subordinate to him.
|
0.999928 |
Can we really give up the right to gun ownership without giving up other rights? Can we pretend not to know that any new, stricter regime of "gun control" enforced by the American capitalist state will result in a greater curtailment of many rights, in more surveillance, in more criminalization of dissident radicalism, directed fiercely and selectively against the opponents of racism and imperialism?
The anti-gun-rights position in general rests on this premise. I think it’s wrong-headed, and I do not see how one can deny that it is elitist and authoritarian. It’s a concept of the state that leftists should be working to extirpate from people’s minds, not helping to perpetuate in the name of ensuring their safety.
The fundamental problem with right-wing populist militancy is not the guns it may brandish, but the foolish and self-destructive mindset that underlies it. The problem with left-wing populist militancy is that there isn’t any. What passes for the left in this country is forbidden from imagining such a thing by its fundamental fear of breaking up either the Democratic Party Blue Tribe or the American liberal capitalist state, which it imagines can be turned back into a Good Daddy, if we just impeach the evil president and elect the good one.
There will be gun regulation, and there already is a lot of it. And, often, in those toddling towns where the regulation is "toughest," gun violence is highest. No reasonable polity would allow individuals to own tanks, or Stingers, or .50-caliber machine guns (or, pace Karl, cannons), and I would be good with banning those bump stocks and gat cranks. I can't go on about the dangers of states of mind, and then object to any notion of a background check. I do object, however, to those proposals that are silly ("military-style") and whose main purpose is to train citizens into more thorough compliance, to those that reject the fundamental right of gun ownership, and to those that will criminalize fifty million people.
|
0.998608 |
A National Forest in Nevada! Indeed, it's true! This area is also known as the Mount Charleston National Recreation Area, with the spectacular 11,900-foot Mount Charleston Peak. In fact, the name Nevada means "snow-capped" in Spanish. Thanks to its unique geographic history, Nevada has more mountain ranges than any other state in the nation. The Humboldt-Toiyabe National Forest, the largest national forest outside the State of Alaska, sprawls from eastern California and western Nevada to the northeastern boundary of the state and on down to southern Nevada, taking in the higher elevations of some of Nevada's most spectacular mountain ranges.
|
0.999932 |
Why do younger men date older women?
An older woman knows exactly what she wants from her partner and what she is ready to give him. Young men are drawn to the maturity and self-confidence of her personality, so they choose experience and beauty rather than the freshness of youth.
|
0.999996 |
Marita's little brother has left toys all over the living room floor! Fortunately, Marita has developed special robots to clean up the toys. She needs your help to determine which robots should pick up which toys.
There are T toys, each with an integer weight W[i] and an integer size S[i]. Robots come in two kinds: weak and small.
There are A weak robots. Each weak robot has a weight limit X[i], and can carry any toy of weight strictly less than X[i]. The size of the toy does not matter.
There are B small robots. Each small robot has a size limit Y[i], and can carry any toy of size strictly less than Y[i]. The weight of the toy does not matter.
Each of Marita's robots takes one minute to put each toy away. Different robots can put away different toys at the same time.
Your task is to determine whether Marita's robots can put all the toys away, and if so, the shortest time in which they can do this.
No robot can pick up the toy of weight 5 and size 3, and so it is impossible for the robots to put all of the toys away.
Line 1 will contain three integers A, B (0 ≤ A, B ≤ 50,000, and 1 ≤ A + B) and T (1 ≤ T ≤ 1,000,000), respectively representing the number of weak robots, the number of small robots, and the number of toys.
Line 2 will contain A integers, the values of X[i] (1 ≤ X[i] ≤ 2,000,000,000), that specify the weight limit for each weak robot.
Line 3 will contain B integers, the values of Y[i] (1 ≤ Y[i] ≤ 2,000,000,000), that specify the size limit of each small robot.
The next T lines will each contain two integers W[i] and S[i] (1 ≤ W[i], S[i] ≤ 2,000,000,000), the weight and size of each toy, respectively.
If A = 0 or B = 0, then the corresponding line (line 2 or line 3) should be empty.
The output should contain one integer - the smallest number of minutes required to put all of the toys away, or -1 if this is not possible.
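A sketch of one common approach (my own, not an official solution; all variable names below are assumptions): binary-search the number of minutes m, and for each candidate m run a greedy feasibility check. Weak robots, taken from weakest to strongest, repeatedly grab the largest-sized toys they can lift (those are the worst toys to leave behind), and whatever remains is dealt out to the small robots in decreasing order of size.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);

    int A, B, T;
    if (!(cin >> A >> B >> T)) return 0;
    vector<long long> X(A), Y(B);
    for (auto &x : X) cin >> x;
    for (auto &y : Y) cin >> y;
    vector<pair<long long, long long>> toys(T);   // (weight, size)
    for (auto &t : toys) cin >> t.first >> t.second;

    sort(X.begin(), X.end());                     // ascending: X.back() is the strongest weak robot
    sort(Y.rbegin(), Y.rend());                   // descending: Y.front() is the largest small robot
    sort(toys.begin(), toys.end());               // ascending by weight

    // If some toy is too heavy for every weak robot and too big for every
    // small robot, no schedule exists at all.
    for (auto &t : toys)
        if ((A == 0 || t.first >= X.back()) && (B == 0 || t.second >= Y.front())) {
            cout << -1 << "\n";
            return 0;
        }

    // Can everything be put away in m minutes?
    auto feasible = [&](long long m) -> bool {
        priority_queue<long long> pool;           // sizes of toys the robots seen so far can lift
        size_t idx = 0;
        for (int i = 0; i < A; i++) {             // weakest weak robot first
            while (idx < toys.size() && toys[idx].first < X[i])
                pool.push(toys[idx++].second);
            for (long long k = 0; k < m && !pool.empty(); k++)
                pool.pop();                       // take the biggest-sized toys available
        }
        vector<long long> leftover;               // toys the weak robots never pick up
        while (!pool.empty()) { leftover.push_back(pool.top()); pool.pop(); }
        while (idx < toys.size()) leftover.push_back(toys[idx++].second);
        sort(leftover.rbegin(), leftover.rend());
        for (size_t i = 0; i < leftover.size(); i++) {
            size_t r = i / (size_t)m;             // i-th biggest leftover toy goes to small robot i/m
            if (r >= (size_t)B || leftover[i] >= Y[r]) return false;
        }
        return true;
    };

    long long lo = 1, hi = T;                     // T minutes always suffice once the -1 check passes
    while (lo < hi) {
        long long mid = (lo + hi) / 2;
        if (feasible(mid)) hi = mid; else lo = mid + 1;
    }
    cout << lo << "\n";
    return 0;
}
```

Each feasibility check costs O(T log T) for the heap, so the whole search is O(T log² T) — the complexity mentioned in the comments below as being tight; replacing the heap with a linear pass over pre-sorted arrays drops one log factor if the time limit demands it.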
Very weak test data. I submitted a wrong greedy solution that should give TLE and WA. I got 76 points. Only 2 tests WA and 3 TLE.
Some people are only satisfied when they actually solve the problem, not when they get points.
Can't get accepted with O(n*(logn)^2)!!!
Why am I getting WA in test #6x only? I can't find any bugs in my code.
I can tell you that it's a randomly-generated case, and that you're vastly understating the amount of time required -- your answer is about half of the correct answer.
Since the IOI publishes its test data, you can always go look it up yourself.
Is there any input (A+B=0) ?
|
0.999543 |
And it's only August, guys.
It might only be the beginning of August, but there's already a trend emerging among the fall magazine covers, and it can be summarized in five words: Gucci looks 37 and 38.
The two closing looks from Gucci's fall 2014 show, white minidresses with crystal embellishment on the bodice, have already appeared on three covers, and we don't imagine that number going down any time soon. Why? Because Gucci has a history of racking up the most appearances on magazine covers season after season. Its ruffle-filled spring 2013 collection appeared on 111 covers, its fall 2012 collection nabbed 81, and its fall 2011 collection covered over 50 magazines – the most covers for a single brand in each of those seasons.
The appeal of the optical dresses is pretty easy to see – they're shiny, simple and universally flattering. For August, W styled Mila Kunis as a '60s icon in the dress. Teen Vogue also went the '60s route for its September cover starring Kendall Jenner, while Marie Claire chose to outfit Blake Lively as more of a bombshell than an ingenue.
Keep your eyes peeled for more – and we mean much more – and tell us your favorite Gucci cover so far in the comments.
We reviewed 154 covers from 10 leading U.S. fashion publications, and while some titles saw distinct improvement, others went in a disappointing, opposite direction.
Just when you thought Prada and Dolce & Gabbana's spring collections laid claim to all the glossy covers in 2011, another spring collection refuses to be ignored: Gucci's striking jewel-toned spring collection has already made it onto 50 covers so far this year. Marie Claire and Vogue have taken a particular liking to the Marrakech-inspired collection. Lara Stone donned look number two from the label's spring 2011 collection on the February issue of Vogue Paris. The look was so popular it showed up on at least five other covers including January Jones' April Marie Claire UK cover. Meanwhile, Elle US put Katy Perry in Gucci's bright colors for their March issue. It looks like the cover race just got a little more interesting. Prada, 48; Dolce & Gabbana, 42; Gucci, 50. But as they say, it ain't over 'til it's over.
|
0.968909 |
How can I create a brand new campaign in AdWords using adCore?
It's possible to create a brand new campaign in AdWords using adCore.
You would use this feature in cases where you created a new campaign in the 'create dynamic search campaign' section but have not yet made a campaign in AdWords to receive the information (keywords, ads and ad-groups).
To use this feature, go to the left navigation bar, click 'campaign settings', and then click '+ create new campaign'. Then give the campaign a name.
Note that this is the name the campaign will have in AdWords.
Once you click save, adCore will ask you to set the campaign settings. Go ahead and give the campaign the same settings you would in AdWords.
|
0.973977 |
Were witches real in the Middle Ages? This is a handbook on witchcraft first published in 1628, claiming to expose the entire practice and profession of witchcraft. Based on what we know today, the material may not be entirely accurate. The book is valuable, however, in the sense that it allows one to view the extreme superstition surrounding witchcraft and to better understand the degree of persecution that resulted.
|
0.931706 |
(CNN) The chairman of the Federal Communications Commission was awarded a handmade rifle by the National Rifle Association Friday at the Conservative Political Action Conference.
Ajit Pai, the FCC's chairman who oversaw the highly controversial repeal of the commission's net neutrality rules last year, was awarded the rifle, along with the NRA's "Charlton Heston Courage Under Fire Award," for his efforts last year during the repeal.
The award came as a surprise to Pai, who was expecting to give a speech unrelated to the NRA. The rifle is awarded "when someone has stood up under pressure with grace and dignity and principled discipline," said Carolyn Meadows, the second vice-president of the NRA.
Meadows added that she was unable to bring the gun onstage, due to CPAC's rules regarding weapons.
Instead, she said, the rifle -- a Kentucky handmade long gun -- will remain at the NRA's museum until Pai is able to get it.
"You'll love it," Meadows told Pai.
Pai came under harsh criticism last year after he supported the FCC's decision to repeal net neutrality rules. Before receiving the award, Dan Schneider, the American Conservative Union's executive director, said Pai is "the most courageous, heroic person I know."
"He has received countless death threats. His property has been invaded by the George Soros crowd," Schneider said. "He has a family, and his family has been abused in different ways."
Past recipients of the award include Rush Limbaugh, Phyllis Schlafly and Vice President Mike Pence.
|
0.830432 |
Muhammad Ali Pasha al-Mas'ud ibn Agha (Mehmet Ali Pasha in Albanian; Kavalalı Mehmet Ali Paşa in Turkish) (4 March 1769 – 2 August 1849) was an Ottoman Turk, of Albanian origin, who became an Ottoman Wāli, and self-declared Khedive of Egypt and Sudan. Though not a modern nationalist, he is regarded as the founder of modern Egypt because of the dramatic reforms in the military, economic and cultural spheres that he instituted. He also ruled Levantine territories outside Egypt. The dynasty that he established would rule Egypt and Sudan until the Egyptian Revolution of 1952.

Muhammad Ali was born in Kavala, in the Ottoman province of Macedonia (now a part of modern Greece), to Albanian parents. According to the many French, English and other western journalists who interviewed him, and according to people who knew him, the only language he knew fluently was Albanian. He was also competent in Turkish. The son of a tobacco and shipping merchant named Ibrahim Agha, his mother Zainab Agha was his uncle Husain Agha's daughter. Muhammad Ali was the nephew of the "Ayan of Kavalla" (Çorbaci) Husain Agha. When his father died at a young age, Muhammad was taken and raised by his uncle with his cousins. As a reward for Muhammad Ali's hard work, his uncle Çorbaci gave him the rank of "Bolukbashi" for the collection of taxes in the town of Kavala. After his promising success in collecting taxes, he gained Second Commander rank under his cousin Sarechesme Halil Agha in the Kavala Volunteer Contingent that was sent to re-occupy Egypt following Napoleon's withdrawal. He married Ali Agha's daughter, Emine Nosratli, a wealthy widow of Ali Bey.

In 1801, his unit was sent, as part of a larger Ottoman force, to re-occupy Egypt following a brief French occupation. The expedition landed at Aboukir in the spring of 1801. The French withdrawal left a power vacuum in the Ottoman province. Mamluk power had been weakened, but not destroyed, and Ottoman forces clashed with the Mamluks for power. During this period of anarchy Muhammad Ali used his loyal Albanian troops to play both sides, gaining power and prestige for himself. As the conflict drew on, the local populace grew weary of the power struggle. Led by the ulema, a group of prominent Egyptians demanded that the Wāli (governor), Ahmad Khurshid Pasha, step down and Muhammad Ali be installed as the new Wāli in 1805. The Ottoman Sultan, Selim III, was not in a position to oppose Muhammad Ali's ascension, thereby allowing Muhammad Ali to set about consolidating his position. During the infighting between the Ottomans and Mamluks between 1801 and 1805, Muhammad Ali had carefully acted to gain the support of the general public. By appearing as the champion of the people, Muhammad Ali was able to forestall popular opposition until he had consolidated power.

The Mamluks still posed the greatest threat to Muhammad Ali. They had controlled Egypt for more than 600 years, and over that time they had extended their rule extensively throughout Egypt. Muhammad Ali's approach was to eliminate the Mamluk leadership, then move against the rank and file. In 1811, Muhammad Ali invited the Mamluk leaders to a celebration held at the Cairo Citadel in honor of his son, Tusun, who was being appointed to lead a military expedition into Arabia. When the Mamluks arrived, they were trapped and killed. After the leaders were killed, Muhammad Ali dispatched his army throughout Egypt to rout the remainder of the Mamluk forces.

Muhammad Ali transformed Egypt into a regional power which he saw as the natural successor to the decaying Ottoman Empire. He summed up his vision for Egypt as follows: "I am well aware that the (Ottoman) Empire is heading by the day toward destruction... On her ruins I will build a vast kingdom... up to the Euphrates and the Tigris."
Sa'id of Egypt (1822–1863) was the Wāli of Egypt and Sudan from 1854 until 1863, officially owing fealty to the Ottoman Sultan but in practice exercising virtual independence.
He was the fourth son of Muhammad Ali Pasha. Sa'id was a Francophone, educated in Paris.
Under Sa'id's rule there were several law, land and tax reforms. Some modernization of Egyptian and Sudanese infrastructure also occurred using western loans.
Slave raids (the annual 'razzia') also ventured beyond Sudan into Kordofan and Ethiopia.
Facing European pressure to abolish official Egyptian slave raids in the Sudan, Sa'id issued a decree banning raids. Freelance slave traders ignored his decree.
Under Sa'id's rule the influence of sheikhs was curbed and many Bedouin reverted to nomadic raiding.
Sa'id died in January 1863 and was succeeded by his nephew Ismail.
Isma'il Pasha (İsmail Paşa in Turkish), known as Ismail the Magnificent (December 31, 1830 – March 2, 1895), was a Wāli and subsequently Khedive of Egypt and Sudan from 1863 until he was removed at the behest of the British in 1879.
His philosophy can be glimpsed in a statement he made in 1879: "My country (Egypt) is no longer in Africa; we are now part of Europe. It is therefore natural for us to abandon our former ways and to adopt a new system adapted to our social conditions."
In 1867, Isma'il succeeded in persuading the Ottoman Sultan Abdülaziz to grant a firman finally recognizing him as Khedive in exchange for an increase in the tribute.
Abbas II's relations with Sir Eldon Gorst were excellent, and they co-operated in appointing the cabinets headed by Butrus Ghali in 1908 and Muhammad Sa'id in 1910 and in checking the power of the Nationalist Party.
The appointment of Kitchener to succeed Gorst in 1911 displeased Abbas, and relations between him and the British deteriorated. Kitchener often complained about "that wicked little Khedive" and wanted to depose him.
Hussein Kamel was the son of Khedive Isma'il Pasha, who ruled Egypt from 1863 to 1879.
This brought to an end the legal fiction of Ottoman sovereignty over Egypt, which had been largely nominal since Muhammad Ali's seizure of power in 1805.
Upon Hussein Kamel's death, his only son, Prince Kamal al-Din Husayn, declined the succession, and Hussein Kamel's brother Ahmed Fuad ascended the throne as Fuad I.
The Mah'mal, or litter, is a wooden erection in pyramidal form and is hung by embroidered fabrics, which are very beautiful.
These hangings, or coverings, accompany the litter and are intended for the most sacred sanctuary of the interior of the Mosque at Mecca. . . . The ceremony we witnessed was the one observed in honor of taking the coverings from the citadel to the Mosque of Huseyn.
Here the sacred fabrics remain for two or three weeks, where they are embroidered and packed ready to accompany the great caravan of pilgrims to Mecca.
The coverings for the sanctuary at Mecca are sent every year from Cairo by the representative of the Sultan of Turkey.
The Mah'mal having made the pilgrimage to Mecca often is not only a symbol of, royalty, but is also regarded as a sacred relic.
Even the sight of it in the esteem of devout Muslims brings a blessing.
At the head of the procession we see soldiers who are followed by camels, highly decorated, and bearing on their humps palm-branches, with oranges attached.
Each section of the procession is preceded by a band of music, the largest being that which accompanies the Mah'mal.
The cavalcade moves very slowly. The people cheer the "Prince of the Pilgrimage" as he goes by between two camels, one in front of the other.
He is to conduct the expedition when it finally starts from the Birket-el-Hagg to Mecca. . . . An unusual commotion is created as the Mah'mal or litter goes by. It is seen far down the street on the back of a camel, swinging right and left, and up and down, as the ship of the desert makes its way through the sea of excited and tumultuous humanity on every side.
Prior to becoming sultan, Fuad had played a major role in the establishment of Cairo University. He became the university's first rector in 1908, and remained in the post until his resignation in 1913.
His full title was "His Majesty Farouk I, by the grace of God, King of Egypt and Sudan, Sovereign of Nubia, of Kordofan, and of Darfur."
Before his father's death, he was educated at the Royal Military Academy, Woolwich, England.
Built on the debris of a house owned by the Turkish Prince Abdeen Bey, Abdeen Palace is considered one of the most sumptuous palaces in the world in terms of its adornments, paintings, and large number of clocks scattered in the parlors and wings, most of which are decorated with pure gold.
Built by Khedive Ismail, to become the official government headquarters instead of the Citadel of Cairo (which had been the centre of Egyptian government since the Middle Ages), this palace was used as well for official events and ceremonies.
Construction started in 1863 and continued for 10 years and the palace was officially inaugurated in 1874. Erected on an area of 24 feddans, the palace was designed by the French architect Rousseau along with a large number of Egyptian, Italian, French and Turkish decorators. However, the palace’s garden was added in 1921 by Sultan Fuad I on an area of 20 feddans.
The cost of building the palace reached 700,000 Egyptian pounds in addition to 2 million pounds for its furnishing. More money was also spent on the palace’s alteration, preservation and maintenance by consecutive rulers. The palace includes 500 rooms.
Muhammad Ali built himself a retreat palace, or official residence away from the Citadel, in the district called Shubra al-Kheyma.
Shubra lies north of Bulaq in the vicinity of the Muqqattam hills, south of Cairo, a spot on the bank of the Nile, which he found perfect for the construction of his official residence away from the seat of government. Another probable reason for that location was that Bulaq, at the time, was already undergoing many urban changes given the considerable efforts Muhammad Ali had put into developing a modern industrial infrastructure area.
The Citadel is sometimes referred to as Mohamed Ali Citadel, because it contains the Mosque of Muhammad Ali of (or Mohamed Ali Pasha), which was built between 1828 and 1848, perched on the summit of the citadel.
This Ottoman mosque was built in memory of Tusun Pasha, Muhammad Ali's oldest son, who died in 1816. However, it also represents Muhammad Ali's efforts to erase symbols of the Mamluk dynasty that he replaced.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.