source | text |
---|---|
https://en.wikipedia.org/wiki/University%20of%20California%20Television
|
University of California Television (known simply as UCTV) is a 24-hour television channel presenting educational and enrichment programming from the campuses, national laboratories, and affiliated institutions of the University of California system. UCTV's non-commercial programming delivers science, health and medicine, public affairs, humanities, and the arts to a general audience, as well as specialized programming for health care professionals and teachers. Programming includes documentaries, lectures, debates, interviews, performances and more.
UCTV is an educational-access cable television channel (see "Where to watch" below). It can also be seen worldwide via live webstream and video-on-demand archives (Flash files), and offers both audio and video podcasts for downloading. UCTV programs are also available on YouTube, Apple Podcasts, Roku and Amazon Fire. UCTV was formerly available nationwide on Dish Network channel 9412; Dish terminated the service on March 1, 2012.
UCTV launched in January 2000 on the Dish Network and is based on the UC San Diego campus where UCSD-TV is also located. UCTV collects programming from each of the ten University of California campuses (UC Berkeley, UC Davis, UC Irvine, UC Los Angeles, UC Merced, UC Riverside, UC San Diego, UC San Francisco, UC Santa Barbara, UC Santa Cruz) and affiliated institutions (Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, UC Agriculture & Natural Resources, UC Office of the President, UC Sacramento Center, UC Washington DC Center).
Thematic programming
UCTV airs 25 hours of original programming per week on a rotating 24-hour schedule of three-hour thematic blocks in the areas of science, health and medicine, public affairs, humanities, and arts and music.
UCTV offers a one-hour program block for health care professionals called The Med Ed Hour (Tuesday through Thursday at noon Pacific), featuring medical programs for physicians, nurses and other health care professionals.
University of California
Internet television channels
|
https://en.wikipedia.org/wiki/CPRA
|
CPRA may refer to:
Central progressive retinal atrophy, a type of progressive retinal atrophy (an eye problem)
California Privacy Rights Act (a privacy and data protection law)
California Public Records Act (a freedom-of-information law)
|
https://en.wikipedia.org/wiki/CMBFAST
|
In physical cosmology, CMBFAST is a computer code, written by Uroš Seljak and Matias Zaldarriaga, for computing the anisotropy of the cosmic microwave background. It was the first efficient program to do so, reducing the time taken to compute the anisotropy from several days to a few minutes by using a novel semi-analytic line-of-sight approach.
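Schematically, and glossing over normalization conventions that vary between presentations, the line-of-sight approach writes each multipole as a single time integral of a source term against a spherical Bessel function, rather than evolving a full Boltzmann hierarchy for every multipole:
\Delta_\ell(k) = \int_0^{\tau_0} d\tau \, S(k,\tau) \, j_\ell\big(k(\tau_0 - \tau)\big), \qquad C_\ell \propto \int dk \, k^2 P(k) \, |\Delta_\ell(k)|^2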
References
The CMBFAST website on LAMBDA NASA project page.
Physical cosmology
|
https://en.wikipedia.org/wiki/MicroMUSE
|
MicroMUSE is a MUD started in 1990. It is based on the TinyMUSE system, which allows members to interact in a virtual environment called Cyberion City, as well as to create objects and modify their environment. MicroMUSE was conceived as an environment to allow people in far-flung locations to interact with each other, particularly college students with Internet access. A core group of users remains active.
History
1990
MicroMUSE was founded as MicroMUSH by the user known as "Jin" in the summer of 1990. Based upon TinyMUSH, MicroMUSH was centered around Cyberion City, a space station orbiting Earth of the 24th century. The initial MicroMUSH database was largely due to the efforts of Jin and the Wizards who went by the online aliases "Trout_Complex", "Coyote", "Opera_Ghost", "Snooze", "Wai", "Star" and "Mama.Bear". Larry "Leet" Foard and "Bard" (later known as "Michael") were, along with Jin, the primary programmers.
The focus, at the time, primarily was communication and creativity. Users were encouraged to build "objects" and were given extensive leeway to create and communicate with other members. At times, it could be compared to a high-tech version of the wild west.
1991
Typical problems of growth and success, over time, led to issues with computing resources. In April 1991, MicroMUSH moved to MIT. The name was officially changed to MicroMUSE during this same time period.
1992
Through 1992, the focus of MicroMUSE continued to change, though not very noticeably to existing users. New users were given a smaller "quota" of objects which they could build. The game was extremely popular at this point: users could log in at almost any time of day and find at least thirty active people.
1993
By the end of 1993, the space engine, which had been developed within the original theme of MicroMUSE, was moved out of MicroMUSE. The focus was shifting: it became less about creativity and communication between random people across the internet, and more about bringing in primary-school children. The object quota was reduced for all players, from as many as 100 down to 10. The game became more heavily censored, as some of the leadership began to push a K-12-friendly environment throughout the game. Long-time users who did not like the change, and spoke out against it, were often banned from the game altogether.
1994
By the end of 1994, any semblance of what MicroMUSE had been was almost gone. A charter and bylaws were created, which officially changed the focus of MicroMUSE. The second developer had left the project, and Frnkzk became the head developer of MicroMUSE. The guidance of a "mentor" was required for anyone not pre-screened by the administration. By this point, the focus was solely education.
Post-1994
The changes in focus and game policies, along with changing technology, caused a gradual decline in the number of core members using MicroMUSE.
Counter-Movement
As the game changed drastically, in 1993 and 1994,
|
https://en.wikipedia.org/wiki/List%20of%20Schools%20of%20the%20Sacred%20Heart
|
The School of the Sacred Heart is an international network of private Catholic schools that are run by or affiliated with the Society of the Sacred Heart, which was founded in France by Saint Madeleine Sophie Barat. Membership of the network exceeds 2800. The Schools of the Sacred Heart were brought to the United States by Saint Rose Philippine Duchesne, where the association became known as the Network of Sacred Heart Schools. Their philosophy has five goals:
Educate to establish a personal and active faith in God
Educate to establish deep respect for intellectual values
Educate to establish a social awareness which compels one to action
Educate to establish the building of a community with Christian values
Educate to establish personal growth in an atmosphere of wise freedom
List of Schools of the Sacred Heart
Africa
Chad
Lycée du Sacré-Cœur, Ndjamena
Congo
École Maternelle Bosangani, Gombé
Lycée Tuzayana & École Primaire Filles, Mboma
Egypt
Collège du Sacré-Cœur, Cairo
Collège du Sacre-Cœur, Heliopolis
Sacred Heart Girls' School, Alexandria
Kenya
Laini Saba Primary School, Nairobi
Sacred Heart Kyeni
Uganda
Agriculture School Bongorin
Kangole Girls' Senior Secondary School, Moroto
St. Charles Lwanga G.T.C. Kalunga, Masaka
Asia
Korea
Sacred Heart Girls' Middle School, Yongsan, Seoul
Sacred Heart Girls' High School, Yongsan, Seoul
India
Sacred Heart Convent Sr. Sec. School, Jagadhri, Haryana
Children of the New Dawn School, Vasoli
Prerana Nursery School, Pune
Sophia Nursery School, Mumbai
Sacred Heart Convent High School, Ahmednagar
Sacred Heart Convent High School, Nashik
Sacred Heart Convent School, Parichha, Uttar Pradesh
Sacred Heart High School, Kharagpur, West Bengal
Sacred Heart Convent School, Dhalli, Shimla, Himachal Pradesh
Sacred Heart Convent High School, Khajuraho, Chhatarpur, Madhya Pradesh
Pakistan
Sacred Heart Convent School, Lahore, Pakistan
Japan
University of the Sacred Heart, Shibuya, Tokyo Metropolitan Area
International School of the Sacred Heart, Shibuya, Tokyo Metropolitan Area
Sacred Heart School in Tokyo, Sankocho-Minato, Tokyo Metropolitan Area
Sacred Heart School of Fuji, City of Susono, Shizuoka Prefecture
Sacred Heart School of Obayashi, City of Takarazuka, Hyōgo Prefecture
Sacred Heart School of Sapporo, City of Sapporo, Hokkaido Prefecture
Taiwan
Sacred Heart Girls' High School, New Taipei City
Sacred Heart Primary School & Kindergarten, New Taipei City
Australia and New Zealand
Australia
Duchesne College, St. Lucia, Queensland
Kincoppal School, Rose Bay, Sydney, New South Wales
Sacré Cœur, Glen Iris, Victoria
Sancta Sophia College, Camperdown, New South Wales
Stuartholme School, Toowong, Queensland
New Zealand
Baradene College of the Sacred Heart, Auckland
Europe
Austria
Sacré Cœur Riedenburg, Bregenz
Sacré-Cœur Graz, Graz
Sacré-Cœur Pressbaum, Pressbaum
Campus Sacré-Cœur Wien, Wien
Campus Sacré-Cœur Wien Wäh
|
https://en.wikipedia.org/wiki/Organic%20computer
|
Organic computer may refer to:
Wetware computer, a computer made from biological materials
Organic computing, an emerging computing paradigm in which a system and its components and subsystems are well coordinated in a purposeful manner
|
https://en.wikipedia.org/wiki/Ordinal%20data%20type
|
In computer programming, an ordinal data type is a data type with the property that its values can be counted. That is, the values can be put in a one-to-one correspondence with the positive integers. For example, characters are ordinal because we can call 'A' the first character, 'B' the second, etc. The term is often used in programming for variables that can take one of a finite (often small) number of values. While the values are often implemented as integers (or similar types such as bytes) they are assigned literal names and the programming language (and the compiler for that language) can enforce that variables only be assigned those literals.
For instance in Pascal, one can define:
var
x: 1..10;
y: 'a'..'z';
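For comparison, a rough equivalent in Python uses the standard library's enum module (Python has no built-in subrange types, so this is only an approximation; the type and value names here are invented for illustration):
from enum import IntEnum

class Weekday(IntEnum):
    # An ordinal type: each named literal corresponds to an integer,
    # so values can be counted, ordered, and compared.
    MONDAY = 1
    TUESDAY = 2
    WEDNESDAY = 3

print(Weekday.TUESDAY < Weekday.WEDNESDAY)  # True: ordinal values are ordered
print([d.name for d in Weekday])            # literals in their defined order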
Data types
|
https://en.wikipedia.org/wiki/Pagination
|
Pagination, also known as paging, is the process of dividing a document into discrete pages, either electronic pages or printed pages.
In reference to books produced without a computer, pagination can mean the consecutive page numbering to indicate the proper order of the pages, which was rarely found in documents pre-dating 1500, and only became common practice c. 1550, when it replaced foliation, which numbered only the front sides of folios.
Pagination in word processing, desktop publishing, and digital typesetting
Word processing, desktop publishing, and digital typesetting are technologies built on the idea of print as the intended final output medium, although nowadays it is understood that plenty of the content produced through these pathways will be viewed onscreen as electronic pages by most users rather than being printed on paper.
All of these software tools are capable of flowing the content through algorithms to decide the pagination. For example, they all include automated word wrapping (to obviate hard-coded newline delimiters), machine-readable paragraphing (to make paragraph-ending decisions), and automated pagination (to make page-breaking decisions). All of those automated capabilities can be manually overridden by the human user, via soft hyphens (that is, inserting a hyphen which will only be used if the word is split over two lines, and thus not shown if not), manual line breaks (which force a new line within the same paragraph), hard returns (which force both a new line and a new paragraph), and manual page breaks.
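As a rough sketch of those automated decisions (illustrative Python, not any particular product's algorithm; the function and parameter names are invented), a naive pager might wrap words and then break the resulting line stream into fixed-size pages:
import textwrap

def paginate(paragraphs, width=60, lines_per_page=40):
    # Wrap each paragraph to the column width (automated word wrapping),
    # then greedily cut the line stream into pages (automated pagination).
    # Real systems add refinements such as widow/orphan control and honour
    # manual line and page breaks inserted by the user.
    lines = []
    for para in paragraphs:
        lines.extend(textwrap.wrap(para, width=width) or [""])
        lines.append("")  # blank line between paragraphs
    return [lines[i:i + lines_per_page]
            for i in range(0, len(lines), lines_per_page)]

pages = paginate(["First paragraph of a document.", "Second paragraph."],
                 width=30, lines_per_page=5)
print(len(pages), "page(s)")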
Pagination in print
Today printed pages are usually produced by outputting an electronic file to a printing device, such as a desktop printer or a modern printing press. These electronic files may for example be Microsoft Word, PDF or QXD files. They will usually already incorporate the instructions for pagination, among other formatting instructions. Pagination encompasses rules and algorithms for deciding where page breaks will fall, which depend partly on cultural considerations about which content belongs on the same page: for example one may try to avoid widows and orphans. Some systems are more sophisticated than others in this respect. Before the rise of information technology (IT), pagination was a manual process: all pagination was decided by a human. Today, most pagination is performed by machines, although humans often override particular decisions (e.g. by inserting a hard page break).
Pagination in electronic display
"Electronic page" is a term to encompass paginated content in presentations or documents that originate or remain as visual electronic documents. This is a software file and recording format term in contrast to electronic paper, a hardware display technology. Electronic pages may be a standard sized based on the document settings of a word processor file, desktop publishing application file, or presentation software file. Electronic pages may also be dynamic in size or content
|
https://en.wikipedia.org/wiki/John%20Seely%20Brown
|
John Seely Brown (born 1940), also known as "JSB", is an American researcher who specializes in organizational studies with a particular bent towards the organizational implications of computer-supported activities. Brown served as Director of Xerox PARC from 1990 to 2000 and as Chief Scientist at Xerox from 1992 to 2002; during this time the company played a leading role in the development of numerous influential computer technologies. Brown is the co-author of The Social Life of Information, a 2000 book which analyzes the adoption of information technologies.
Early life
John Seely Brown was born in 1940 in Utica, New York.
Brown graduated from Brown University in 1962 with degrees in physics and mathematics. He received a Ph.D. from the University of Michigan in computer and communication sciences in 1970.
Career
His research interests include the management of radical innovation, digital culture, ubiquitous computing, autonomous computing and organizational learning. JSB is also the namesake of John Seely Brown Symposium on Technology and Society, held at the University of Michigan School of Information. The first JSB symposium in 2000 featured a lecture by Stanford Professor of Law Lawrence Lessig, titled "Architecting Innovation," and a panel discussion, "The Implications of Open Source Software," featuring Brown, Lessig and the William D. Hamilton Collegiate Professor of Complex Systems at SI, Michael D. Cohen. Subsequent events were held in 2002, 2006 and 2008.
He has held several positions and roles, including:
Independent co-chair of the Deloitte Center for Edge Innovation (present)
Senior fellow, Annenberg Center for Communication at USC (present)
Chief scientist of Xerox Corporation (1992 – April 2002)
Director of the Xerox PARC research center (1990 – June 2000)
Cofounder of the Institute for Research on Learning
Board member of multiple companies and organizations, including Amazon, Corning, the MacArthur Foundation and Polycom
Advisory board member of several private companies, including Innovation Exchange and H5
Former board member of In-Q-Tel
Honors
IRI Medal from the Industrial Research Institute, 1999
Design Futures Council Senior Fellow
Honorary degrees
Publications
John Seely Brown, Douglas Thomas, A New Culture of Learning: Cultivating the Imagination for a World of Constant Change, CreateSpace 2011.
John Seely Brown, Foreword, in: Toru Iiyoshi and M. S. Vijay Kumar, Opening Up Education: The Collective Advancement of Education through Open Technology, Open Content, and Open Knowledge, The MIT Press 2010.
John Seely Brown, John Hagel III, Lang Davison, The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things In Motion, Basic Books 2010.
John Seely Brown, John Hagel III, How World Of Warcraft Promotes Innovation; in: Willms Buhse/Ulrike Reinhard: Wenn Anzugträger auf Kapuzenpullis treffen (When Suits Meet Hoodies), whois-Verlag 2009.
John Seely Brown, John Hagel III, The Only Sustainable Edge: Why Business Str
|
https://en.wikipedia.org/wiki/Grigore%20Moisil
|
Grigore Constantin Moisil (10 January 1906 – 21 May 1973) was a Romanian mathematician, computer pioneer, and titular member of the Romanian Academy. His research was mainly in the fields of mathematical logic (Łukasiewicz–Moisil algebra), algebraic logic, MV-algebra, and differential equations. He is viewed as the father of computer science in Romania.
Moisil was also a member of the Academy of Sciences of Bologna and of the International Institute of Philosophy. In 1996, the IEEE Computer Society awarded him posthumously the Computer Pioneer Award.
Biography
Grigore Moisil was born in 1906 in Tulcea into an intellectual family. His great-grandfather, Grigore Moisil (1814–1891), a clergyman, was one of the founders of the first Romanian high school in Năsăud. His father, Constantin Moisil (1876–1958), was a history professor, archaeologist and numismatist; as a member of the Romanian Academy, he filled the position of Director of the Numismatics Office of the Academy. His mother, Elena (1863–1949), was a teacher in Tulcea, later the director of "Maidanul Dulapului" school in Bucharest (now "Ienăchiță Văcărescu" school).
Grigore Moisil attended primary school in Bucharest, then high school in Vaslui and Bucharest between 1916 and 1922. In 1924 he was admitted to the Civil Engineering School of the Polytechnic University of Bucharest, and also to the Mathematics School of the University of Bucharest. He showed a stronger interest in mathematics, so he quit the Polytechnic University in 1929, despite already having passed all the third-year exams. In 1929 he defended his Ph.D. thesis, La mécanique analytique des systèmes continus (Analytical mechanics of continuous systems), before a commission led by Gheorghe Țițeica, with Dimitrie Pompeiu and Anton Davidoglu as members. The thesis was published the same year by the Gauthier-Villars publishing house in Paris, and received favourable comments from Vito Volterra, Tullio Levi-Civita, and Paul Lévy.
In 1930 Moisil went to the University of Paris for further study in mathematics, which he finalized the next year with the paper On a class of systems of equations with partial derivatives from mathematical physics. In 1931 he returned to Romania, where he was appointed in a teaching position at the Mathematics School of the University of Iași. Shortly after, he left for a one-year Rockefeller Foundation scholarship to study in Rome. In 1932 he returned to Iași, where he remained for almost 10 years, developing a close relationship with professor Alexandru Myller. He taught the first modern algebra course in Romania, named Logic and theory of proof, at the University of Iași. During that time, he started writing a series of papers based on the works of Jan Łukasiewicz in multi-valued logic. His research in mathematical logic laid the foundation for significant work done afterwards in Romania, as well as Argentina, Yugoslavia, Czechoslovakia, and Hungary. While in Iași, he completed research rem
|
https://en.wikipedia.org/wiki/Timeline%20%28disambiguation%29
|
A timeline is a graphical representation of a chronological sequence of events.
Timeline or time line may also refer to:
Computing and technology
TimeLine, project management software
TimeLine Product Group, former business unit of Symantec Corporation
Timeline (Facebook), a feature of the social network Facebook
Arts and media
Films
Timeline (2003 film), a film based on Michael Crichton's 1999 novel of the same name
Timeline (2014 film), a romantic-comedy-drama Thai film
Games
Timeline (video game), a 2000 video game published by Eidos Interactive and based on the eponymous 1999 Michael Crichton novel
Timeline (1985), a two-player chess variant designed by George Marino for Geo Games
TimeLine (2003), a 54-card board game designed by James Ernest for Cheapass Games
Music
Time Line (AD album), 1984
Time Lines, a 2005 album by Andrew Hill
Time-Line, a 1983 album by Renaissance
Timeline (Ayreon album), 2008
Timeline (Richard Marx album), 2000
Timeline (Mild High Club album), 2015
Time Line (Ralph Towner album), 2005
Timeline (The Vision Bleak album), 2016
Timeline (Yellowjackets album), 2011
Timeline, a member of the electro funk music group, Underground Resistance
Television
Timeline (TV series), a 1989 educational PBS TV show
Timeline, a BBC Two Scotland TV programme, the successor to Scotland 2016
Timeline, a 2014 gameshow hosted by Brian Conley
The Timeline, a documentary series developed by NFL Films
Other uses in arts and media
Timeline (novel), a 1999 science fiction novel by Michael Crichton
Transformers: Timelines, a collectible set of toys and fiction by Hasbro
See also
Alternate history
Chronology
List of timelines
Timecode
Timestream
Timetable (disambiguation)
|
https://en.wikipedia.org/wiki/Meta%20noise
|
Meta noise refers to inaccurate or irrelevant metadata. This is particularly prevalent in systems with a schema not based on a controlled vocabulary, such as certain folksonomies.
Examples:
misspelled tags (a misspelling of white, for example), or tags with multiple spellings (hip-hop and hip hop)
obviously inaccurate or joke tags (dog on a content object featuring only a cat)
on systems open to large user groups, tags which are understood by only a minority of users
Hidden benefit
Although the existence of meta noise may initially appear to detract from the value of metadata generally, meta noise allows less popular tags to be defined and used by a minority of users without damaging the validity or cohesion of what the majority of users would consider to be the most relevant or accurate metadata, thus actually increasing access to content.
References
External links
https://web.archive.org/web/20060814005830/http://www.win.tue.nl/SW-EL/2006/camera-ready/02-bateman_brooks_mccalla_SWEL2006_final.pdf
Metadata
|
https://en.wikipedia.org/wiki/Mercy%20Point
|
Mercy Point is an American science fiction medical drama, created by Trey Callaway, David Simkins, and Milo Frank, which originally aired for one season on United Paramount Network (UPN) from October 6, 1998, to July 15, 1999. With an ensemble cast led by Joe Morton, Maria del Mar, Alexandra Wilson, Brian McNamara, Salli Richardson, Julia Pennington, Gay Thomas, Jordan Lund, and Joe Spano, the series focuses on the doctors and nurses in a 23rd-century hospital space station located in deep space. The executive producers were Trey Callaway, Michael Katleman, Lee David Zlotoff, Joe Voci, and Scott Sanders.
Callaway adapted Mercy Point from his original screenplay, "Nightingale One". It was picked up by Mandalay Television, and the concept was eventually revised as a television project and renamed Mercy Point; production on the film project had ended due to the poor commercial performance of the 1997 film Starship Troopers. The television show was part of a three-million-dollar deal between Mandalay and Columbia TriStar Television to produce 200 hours of material. It was filmed in Vancouver to reduce production costs, the hospital sets being constructed on a series of sound stages. Director Joe Napolitano has praised the show for its use of a complete set to allow for more intricate directing. Despite Callaway envisioning Mercy Point as a companion to Star Trek: Voyager, it was paired with Moesha and Clueless as its lead-in on Tuesday nights. Initially focused on ethical and medical cases, the show's storylines gradually shifted toward relating the characters' personal relationships, to better fit UPN's primarily teen viewership.
Mercy Point was placed on hiatus after only three episodes were aired, and was replaced by the reality television series America's Greatest Pets and the sitcom Reunited. The show suffered from low ratings, with an average of two million viewers. The final four episodes of the series were broadcast in two 2-hour blocks on Thursday nights in July 1999. It has never been released on DVD or Blu-ray, but was made available to stream on Crackle. Critical response to Mercy Point was mixed; some commentators praised its characterization and use of science-fiction elements, while others found it to be uninteresting and unoriginal. Callaway stated that he had the potential story arcs for the full first season already planned before the show's cancellation.
Premise
Set in the year 2249, Mercy Point revolves around doctors and nurses working in a hospital space station in deep space. The "state-of-the-art hospital" is described as "the last stop for anything going out, the first stop for anything coming back" by one of the show's characters. It is noted for existing on the "fringes of the galaxy", on a colony called Jericho. The facility includes advanced medical equipment, such as "artificial wombs, holographic three-dimensional X-ray projections [and] zero-gravity operating tables". A talking computer known as Hippocrates, voiced
|
https://en.wikipedia.org/wiki/Hylophorbus
|
Hylophorbus is a genus of microhylid frogs endemic to New Guinea. The common name Mawatta frogs has been coined for them.
Molecular data suggest that Hylophorbus is monophyletic and that its sister taxon is Callulops.
Species
There are 12 recognized species:
References
External links
taxon Hylophorbus at http://www.eol.org.
Microhylidae
Amphibian genera
Amphibians of New Guinea
Taxa named by William John Macleay
Endemic fauna of New Guinea
|
https://en.wikipedia.org/wiki/Backjumping
|
In backtracking algorithms, backjumping is a technique that reduces search space, therefore increasing efficiency. While backtracking always goes up one level in the search tree when all values for a variable have been tested, backjumping may go up more levels. In this article, a fixed order of evaluation of variables is used, but the same considerations apply to a dynamic order of evaluation.
Definition
Whenever backtracking has tried all values for a variable without finding any solution, it reconsiders the last of the previously assigned variables, changing its value or further backtracking if no other values are to be tried. If x1, ..., xk is the current partial assignment and all values for xk+1 have been tried without finding a solution, backtracking concludes that no solution extending x1, ..., xk exists. The algorithm then "goes up" to xk, changing xk's value if possible, backtracking again otherwise.
The partial assignment x1, ..., xk is not always necessary in full to prove that no value of xk+1 leads to a solution. In particular, a prefix of the partial assignment may have the same property, that is, there exists an index j < k such that x1, ..., xj cannot be extended to form a solution with whatever value for xk+1. If the algorithm can prove this fact, it can directly consider a different value for xj instead of reconsidering xk as it would normally do.
The efficiency of a backjumping algorithm depends on how high it is able to backjump. Ideally, the algorithm could jump from xk+1 to whichever variable xj is such that the current assignment to x1, ..., xj cannot be extended to form a solution with any value of xk+1. If this is the case, xj is called a safe jump.
Establishing whether a jump is safe is not always feasible, as safe jumps are defined in terms of the set of solutions, which is what the algorithm is trying to find. In practice, backjumping algorithms use the lowest index they can efficiently prove to be a safe jump. Different algorithms use different methods for determining whether a jump is safe. These methods have different costs, but a higher cost of finding a higher safe jump may be traded off against the reduced amount of search due to skipping parts of the search tree.
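The following sketch (illustrative Python, not a verbatim algorithm from the literature) implements conflict-directed backjumping for a constraint satisfaction problem; conflicts(assignment, var, value) is a hypothetical helper that returns the set of earlier levels whose assignments clash with var = value, or an empty set if the value is consistent:
def solve(variables, domains, conflicts, assignment=None, level=0):
    """Return (solution, conflict_levels): a complete assignment and an empty
    set on success, or None and the set of levels responsible for failure."""
    if assignment is None:
        assignment = {}
    if level == len(variables):
        return dict(assignment), set()
    var = variables[level]
    conflict_set = set()
    for value in domains[var]:
        clash = conflicts(assignment, var, value)
        if clash:
            conflict_set |= clash  # remember which earlier levels blocked this value
            continue
        assignment[var] = value
        solution, child = solve(variables, domains, conflicts, assignment, level + 1)
        del assignment[var]
        if solution is not None:
            return solution, set()
        if level not in child:
            # Safe jump: the failure below does not involve this level, so skip
            # the remaining values here and let the jump propagate further up.
            return None, child
        conflict_set |= child - {level}
    # Dead end: every value failed; the levels collected here justify how far
    # the algorithm may safely jump back.
    return None, conflict_set
When a recursive call returns a conflict set that does not mention the current level, the early return realizes the backjump: the intervening levels are skipped without retrying their values.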
Backjumping at leaf nodes
The simplest condition in which backjumping is possible is when all values of a variable have been proved inconsistent without further branching. In constraint satisfaction, a partial evaluation is consistent if and only if it satisfies all constraints involving the assigned variables, and inconsistent otherwise. It might be the case that a consistent partial solution cannot be extended to a consistent complete solution because some of the unassigned variables may not be assigned without violating other constraints.
The condition in which all values of a given variable are inconsistent with the current partial solution is called a leaf dead end. This happens exactly when the variable is a leaf of the search tree, which corresponds to a node having only leaves as children.
The backjumpi
|
https://en.wikipedia.org/wiki/Sun-1
|
Sun-1 was the first generation of UNIX computer workstations and servers produced by Sun Microsystems, launched in May 1982. These were based on a CPU board designed by Andy Bechtolsheim while he was a graduate student at Stanford University and funded by DARPA. The Sun-1 systems ran SunOS 0.9, a port of UniSoft's UniPlus V7 port of Seventh Edition UNIX to the Motorola 68000 microprocessor, with no window system. Affixed to the case of early Sun-1 workstations and servers is a red bas relief emblem with the word SUN spelled using only symbols shaped like the letter U. This is the original Sun logo, rather than the more familiar purple diamond shape used later.
The first Sun-1 workstation was sold to Solo Systems in May 1982. The Sun-1/100 was used in the original Lucasfilm EditDroid non-linear editing system.
Models
Hardware
The Sun-1 workstation was based on the Stanford University SUN workstation designed by Andy Bechtolsheim (advised by Vaughan Pratt and Forest Baskett), a graduate student and co-founder of Sun Microsystems. At the heart of this design were the Multibus CPU, memory, and video display cards. The cards used in the Sun-1 workstation were a second-generation design with a private memory bus allowing memory to be expanded to 2 MB without performance degradation.
The Sun 68000 board introduced in 1982 was a powerful single-board computer. It combined a 10 MHz Motorola 68000 microprocessor, a Sun-designed memory management unit (MMU), 256 KB of zero wait state memory with parity, up to 32 KB of EPROM memory, two serial ports, a 16-bit parallel port and an Intel Multibus (IEEE 796 bus) interface on a single Multibus form-factor board.
By using the Motorola 68000 processor tightly coupled with the Sun-1 MMU, the Sun 68000 CPU board was able to support a multi-tasking operating system such as UNIX. It included an advanced Sun-designed multi-process two-level MMU with facilities for memory protection, code sharing and demand paging of memory. The Sun-1 MMU was necessary because the Motorola 68451 MMU did not always work correctly with the 68000 and could not always restore the processor state after a page fault.
The CPU board included 256 KB of memory which could be replaced or augmented with two additional memory cards for a total of 2 MB. Although the memory cards used the Multibus form factor, they only used the Multibus interface for power; all memory access was via the smaller private P2 bus. This was a synchronous private memory bus that allowed for simultaneous memory input/output transfers. It also allowed for full performance zero wait state operation of the memory. When installing the first 1 MB expansion board, either the 256 KB of memory on the CPU board or the first 256 KB on the expansion board had to be disabled.
On-board I/O included a dual serial port UART and a 16-bit parallel port. The serial ports were implemented with an Intel 8274 UART and later with a NEC D7201C UART. Serial port A was wired as a dat
|
https://en.wikipedia.org/wiki/Eel%20%28disambiguation%29
|
An eel is a fish in the order of Anguilliformes.
Eel, EEL or eels may also refer to:
Animals
Amphibians
Congo eel, amphibians of the genus Amphiuma (order Caudata)
Siren intermedia or two-legged eel or mud eel
Rubber eel, an aquatic caecilian of the family Typhlonectidae (order Gymnophiona)
Ray-finned fish
Electric eel, a genus of knifefish in the order Gymnotiformes
Deep-sea spiny eels, a common name for fish in the family Notacanthidae, order Notacanthiformes
Fire eel
Spiny eel
Swamp eel
People
Camille Henry (1933-1997), Canadian National Hockey League player nicknamed "The Eel"
Eric Moussambani (born 1978), Equatorial Guinean swimmer nicknamed "Eric the Eel"
Places
Eel Glacier, Washington state, United States
Eel Lake, Oregon, United States
Eels Lake, Ontario, Canada
Eel River (disambiguation)
Eel Township, Cass County, Indiana, United States
Crag Hill, Lake District, UK, a mountain formerly known as Eel Crag
Arts and entertainment
Eels (band), a musical group
"Eels" (The Mighty Boosh), a 2007 episode of The Mighty Boosh
The Eel (film), a 1997 Japanese film
USS Eel, a fictional submarine in the novel Run Silent, Run Deep and its film adaptation
"Eel", 3rd episode of Servant (TV series)
Fictional characters
Eel (comics), two Marvel Comics villains
Eel (G.I. Joe), a set of fictional characters in the G.I. Joe universe
The Eel (fictional character), a pulp fiction character
Eel O'Brien, real name of Plastic Man, a comic book superhero
Other uses
Eel as food
Electron energy loss spectroscopy
Entwicklung und Erprobung von Leichtflugzeugen, a German aircraft design concern based in Putzbrunn
Environmentally Endangered Lands, a wildland conservation program in Brevard County, Florida
Extensible Embeddable Language, a scripting and programming language
Parramatta Eels, an Australian rugby league club
USS Eel (SS-354), a projected United States Navy submarine
Elastic potential energy, sometimes abbreviated as "eel" in Physics
See also
EAL (disambiguation)
Eeles, a name
|
https://en.wikipedia.org/wiki/GURPS%20Alpha%20Centauri
|
GURPS Alpha Centauri is a sourcebook for GURPS Third Edition.
Contents
GURPS Alpha Centauri details the setting of the Sid Meier's Alpha Centauri computer game.
Publication history
Steve Jackson Games published GURPS Alpha Centauri, a sourcebook for the GURPS role-playing game set in the Alpha Centauri universe.
Reception
References
Alpha Centauri in fiction
Alpha Centauri
Role-playing games based on video games
Science fiction role-playing games
Role-playing game supplements introduced in 2002
|
https://en.wikipedia.org/wiki/Dynamic%20synchronous%20transfer%20mode
|
Dynamic synchronous transfer mode (DTM) is an optical networking technology standardized by the European Telecommunications Standards Institute (ETSI) in 2001 beginning with specification ETSI ES 201 803-1. DTM is a time-division multiplexing and a circuit-switching network technology that combines switching and transport. It is designed to provide a guaranteed quality of service (QoS) for streaming video services, but can be used for packet-based services as well. It is marketed for professional media networks, mobile TV networks, digital terrestrial television (DTT) networks, in content delivery networks and in consumer oriented networks, such as "triple play" networks.
History
The DTM architecture was conceived in 1985 and developed at the Royal Institute of Technology (KTH) in Sweden.
It was published in February 1996.
The research team was split into two spin-off companies, reflecting two different approaches to use the technology. One of these companies remains active in the field and delivers commercial products based on the DTM technology. Its name is Net Insight.
See also
Broadband Integrated Services Digital Network
References
Further reading
External links
IHS web page listing for ETSI ES 201 803- 6
Paper from the founder of the Topology (in postscript format)
Network protocols
Link protocols
|
https://en.wikipedia.org/wiki/Grotthuss%20mechanism
|
The Grotthuss mechanism (also known as proton jumping) is a model for the process by which an 'excess' proton or proton defect diffuses through the hydrogen bond network of water molecules or other hydrogen-bonded liquids through the formation and concomitant cleavage of covalent bonds involving neighboring molecules.
In his 1806 publication “Theory of decomposition of liquids by electrical currents”, Theodor Grotthuss proposed a theory of water conductivity. Grotthuss envisioned the electrolytic reaction as a sort of ‘bucket line’ where each oxygen atom simultaneously passes and receives a single hydrogen ion.
It was an astonishing theory to propose at the time, since the water molecule was thought to be OH not H2O and the existence of ions was not fully understood.
On its 200th anniversary, his article was reviewed by Cukierman.
Although Grotthuss was using an incorrect empirical formula of water, his description of the passing of protons through the cooperation of neighboring water molecules proved prescient.
Lemont Kier suggested that proton hopping may be an important mechanism for nerve transduction.
Proton transport mechanism and proton-hopping mechanism
The Grotthuss mechanism is now a general name for the proton-hopping mechanism. In liquid water the solvation of the excess proton is idealized by two forms: H9O4+ (the Eigen cation) or H5O2+ (the Zundel cation). While the transport mechanism is believed to involve the interconversion between these two solvation structures, the details of the hopping and transport mechanism are still debated.
Currently there are two plausible mechanisms:
Eigen to Zundel to Eigen (E–Z–E), on the basis of experimental NMR data,
Zundel to Zundel (Z–Z), on the basis of molecular dynamics simulation.
The calculated energetics of the hydronium solvation shells were reported in 2007 and it was suggested that the activation energies of the two proposed mechanisms do not agree with their calculated hydrogen bond strengths, but mechanism 1 might be the better candidate of the two.
By use of conditional and time-dependent radial distribution functions (RDF), it was shown that the hydronium RDF can be decomposed into contributions from two distinct structures, Eigen and Zundel. The first peak in g(r) (the RDF) of the Eigen structure is similar to the equilibrium, standard RDF, only slightly more ordered, while the first peak of the Zundel structure is actually split into two peaks. The actual proton transfer (PT) event was then traced (after synchronizing all PT events so that t=0 is the actual event time), revealing that the hydronium indeed starts from an Eigen state, and quickly transforms into the Zundel state as the proton is being transferred, with the first peak of g(r) splitting into two.
For a number of important gas phase reactions, like the hydration of carbon dioxide, a Grotthuss-like mechanism involving concerted proton hopping over several water molecules at the same time has been shown to describe
|
https://en.wikipedia.org/wiki/Uniform%20binary%20search
|
Uniform binary search is an optimization of the classic binary search algorithm invented by Donald Knuth and given in Knuth's The Art of Computer Programming. It uses a lookup table to update a single array index, rather than taking the midpoint of an upper and a lower bound on each iteration; therefore, it is optimized for architectures (such as Knuth's MIX) on which
a table lookup is generally faster than an addition and a shift, and
many searches will be performed on the same array, or on several arrays of the same length.
C implementation
The uniform binary search algorithm looks like this, when implemented in C.
#include <stdio.h>

#define LOG_N 42 /* must be at least floor(log2(N)) + 2 so the table can end with a 0 entry */

static int delta[LOG_N];
/* Fill delta[] with successively halved midpoint offsets for an array of
   length N; the table ends with a 0 entry, which signals an unsuccessful search. */
void make_delta(int N)
{
int power = 1;
int i = 0;
do {
int half = power;
power <<= 1;
delta[i] = (N + half) / power;
} while (delta[i++] != 0);
}
/* Search the sorted array a[] (of length N) for key using the delta table. */
int unisearch(int *a, int key)
{
int i = delta[0] - 1; /* midpoint of array */
int d = 0;
while (1) {
if (key == a[i]) {
return i;
} else if (delta[d] == 0) {
return -1;
} else {
if (key < a[i]) {
i -= delta[++d];
} else {
i += delta[++d];
}
}
}
}
/* Example of use: */
#define N 10
int main(void)
{
int a[N] = {1, 3, 5, 6, 7, 9, 14, 15, 17, 19};
make_delta(N);
for (int i = 0; i < 20; ++i)
printf("%d is at index %d\n", i, unisearch(a, i));
return 0;
}
References
Knuth. The Art of Computer Programming, Volume 3. Page 412, Algorithm C.
External links
An implementation of Knuth's algorithm in Pascal, by Han de Bruijn
An implementation of Knuth's algorithm in Go, by Adrianus Warmenhoven
Search algorithms
Articles with example C code
|
https://en.wikipedia.org/wiki/Vocal%20warm-up
|
A vocal warm-up is a series of exercises meant to prepare the voice for singing, acting, or other use.
There is very little scientific data about the benefits of vocal warm-ups. Relatively few studies have researched the effects of these exercises on muscle function and even fewer have studied their effect on singing-specific outcomes.
Description
Vocal warm-ups are intended to accomplish five things: a physical whole-body warm-up, preparing the breath, preparing the articulators and resonators, moving from the spoken register to the singing register (or an extended spoken register for acting), and preparing for the material that is going to be rehearsed or performed.
Physical whole-body warm-ups help prepare a singer or actor's body in many ways. Muscles all over the body are used when singing/acting. Stretching helps to activate and prepare the large muscle groups that take care of balance and posture, and the smaller muscle groups that are directly involved with breathing and facial articulation. Stretches of the abdomen, back, neck, and shoulders are important to avoid tension, which influences the sound of the voice through constriction of the larynx and/or breathing muscles. Actors (including opera singers or musical theatre performers) may need to do a more comprehensive physical warm-up if their role is demanding physically.
Preparing the breath involves not only stretching the many muscles involved with respiration, but preparing them to sustain exhalation during long singing/speaking passages. Specific training of the respiratory muscles is required for singers to take very quick deep breath and sustain their exhalation over many bars of music. A good vocal warm-up should include exercises such as inhaling for 4 counts, then exhaling for 8 counts (and slowly transitioning until the performer can inhale for 1 count and exhale for as long as possible); panting or puffing air are also used to engage in the intercostal muscles.
Vocal articulation is controlled by a variety of tissues, muscles, and structures (place of articulation), but can be basically understood as the effects of the lips, the teeth, and the tip of the tongue. Often we also try and use our jaw for articulation, which creates unnecessary tension in the facial muscles and tongue. A good vocal warm up will relax the jaw, while activating the lips and the tongue in a variety of exercises to stretch the muscles and prepare for the more defined vocal articulation that is required when singing or acting. These exercises may include tongue twisters, or the famous "me, may, ma, moh, moo" that many actors are seen doing in film.
Resonators are the hard and soft surfaces within the oral cavity that affect the sound waves produced during phonation. Hard surfaces, such as the hard palate, cannot be controlled by the singer, but soft surfaces, such as the soft palate, can be trained to change the timbre of the sound. A vocal warm up should include exercises which direct sound
|
https://en.wikipedia.org/wiki/1945%20Pacific%20typhoon%20season
|
The 1945 Pacific typhoon season was the first official season to be included in the West Pacific typhoon database. It was also the first season to name storms. It has no official bounds; it ran year-round in 1945, but most tropical cyclones tend to form in the northwestern Pacific Ocean between June and December. These dates conventionally delimit the period of each year when most tropical cyclones form in the northwestern Pacific Ocean.
The scope of this article is limited to the Pacific Ocean, north of the equator and west of the International Date Line. Storms that form east of the date line and north of the equator are called hurricanes; see 1945 Pacific hurricane season. The Fleet Weather Center/Typhoon Tracking Center, a predecessor agency of the Joint Typhoon Warning Center (JTWC), was established on the island of Guam in June 1945, after multiple typhoons, including Typhoon Cobra in the previous season and Typhoon Connie in this season, had caused a significant loss of men and ships. It did not take on primary responsibility for the West Pacific basin until the 1950 season. Instead, storms in this season were identified and named by the United States Armed Services, with names taken from a list publicly adopted before the season began. As the first season included in the West Pacific typhoon database, it was also the first in which the names of Western Pacific tropical cyclones were publicly preserved.
Systems
Tropical Storm Ann
The first named storm of the season, Tropical Storm Ann formed on April 19 at relatively low latitude of 9.5°N. Ann generally tracked westward and later reached its peak intensity on April 21, before weakening to a tropical depression on April 23. The storm began to curve north the next day, and overall did not affect any landmasses and dissipated on April 26.
Tropical Storm Betty
The second named storm of the season, Tropical Storm Betty formed on May 13, 1945, and began to move in a northeastern direction. It strengthened into a tropical storm only 18 hours later and continued on its path. However, the storm eventually moved further north, and into colder waters. Betty weakened into a tropical depression and dissipated on May 16, having not threatened land at all.
Typhoon Connie
A small yet powerful typhoon, Connie was first spotted on June 1 by the Fleet Weather Center on Guam, moving northeast. Winds were reported to have been as high as 140 mph. But by June 7, it began to weaken. Its final fate is unknown.
The U.S. Navy's Third Fleet was hit by Connie, and reporting about the storm frequently refers to it as Typhoon Viper. The same fleet had been hit, with great loss of life, by Cobra the previous year. Connie was less severe: only one officer and five seamen were lost or killed, and around 150 airplanes on the fleet's carriers were either lost or damaged.
Tropical Storm Doris
Tropical Storm Doris existed from June 18 to 2
|
https://en.wikipedia.org/wiki/Board%20representation%20%28computer%20chess%29
|
Board representation in computer chess is a data structure in a chess program representing the position on the chessboard and associated game state. Board representation is fundamental to all aspects of a chess program including move generation, the evaluation function, and making and unmaking moves (i.e. search) as well as maintaining the state of the game during play. Several different board representations exist. Chess programs often utilize more than one board representation at different times, for efficiency. Execution efficiency and memory footprint are the primary factors in choosing a board representation; secondary considerations are effort required to code, test and debug the application.
Early programs used piece lists and square lists, both array based. Most modern implementations use a more elaborate but more efficient bit array approach called bitboards which map bits of a 64-bit word or double word to squares of the board.
Board state
A full description of a chess position, i.e. the position "state", must contain the following elements:
The location of each piece on the board
Whose turn it is to move
Status of the 50-move draw rule. The name can be confusing: the rule counts 50 moves by each player, and therefore 100 half-moves, or ply. For example, if the previous 80 half-moves passed without a capture or a pawn move, the fifty-move rule will kick in after another twenty half-moves.
Whether each player has permanently lost the right to castle, both kingside and queenside.
If an en passant capture is possible.
Board representation typically does not include the status of the threefold repetition draw rule. To determine this rule, a complete history of the game from the last irreversible action (capture, pawn movement, or castling) needs to be maintained, so it is generally tracked in separate data structures. Without this information, a program may repeat a position despite having a winning advantage, resulting in an excessive number of draws.
The board state may also contain secondary derived information like which pieces attack a square; for squares containing pieces, which spaces are attacked or guarded by that piece; which pieces are pinned; and other convenient or temporary state.
The board state is associated with each node of the game tree, representing a position arrived at by a move, whether that move was played over the board, or generated as part of the program's search. It is conceptually local to the node, but may be defined globally, and incrementally updated from node to node as the tree is traversed.
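A minimal sketch of how the state elements listed above might be grouped (illustrative Python; the field names and square numbering are assumptions, not a standard), with piece locations held as 64-bit bitboards:
from dataclasses import dataclass, field

@dataclass
class BoardState:
    # One 64-bit integer per (colour, piece type); bit i set means a piece of
    # that kind stands on square i (0 = a1, ..., 63 = h8).
    bitboards: dict = field(default_factory=dict)
    white_to_move: bool = True
    castling_rights: str = "KQkq"  # which castling options remain for each side
    en_passant_square: int = -1    # -1 means no en passant capture is possible
    halfmove_clock: int = 0        # half-moves since the last capture or pawn move

    def fifty_move_draw_available(self) -> bool:
        # 50 moves by each player = 100 half-moves without a capture or pawn move.
        return self.halfmove_clock >= 100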
Types
Array based
Piece lists
Some of the very earliest chess programs working with extremely limited amounts of memory maintained serial lists (arrays) of the pieces in a conveniently searchable order, like largest to smallest; associated with each piece was its location on the board as well as other information, such as squares representing its legal moves. There were several lists, o
|
https://en.wikipedia.org/wiki/Common%20Vulnerabilities%20and%20Exposures
|
The Common Vulnerabilities and Exposures (CVE) system provides a reference method for publicly known information-security vulnerabilities and exposures. The United States' National Cybersecurity FFRDC, operated by The MITRE Corporation, maintains the system, with funding from the US National Cyber Security Division of the US Department of Homeland Security. The system was officially launched for the public in September 1999.
The Security Content Automation Protocol uses CVE, and CVE IDs are listed on Mitre's system as well as in the US National Vulnerability Database.
Background
A vulnerability is a weakness in a software system that enables unwarranted access. For example, software that processes credit cards must not allow people to read the card numbers it handles, yet an attacker might exploit a vulnerability to do exactly that. Keeping track of specific vulnerabilities is difficult because there are many pieces of software, often with many vulnerabilities of various types. CVE identifiers assign each publicly known vulnerability a unique formal name, establishing a common language for referring to them.
CVE identifiers
MITRE Corporation's documentation defines CVE Identifiers (also called "CVE names", "CVE numbers", "CVE-IDs", and "CVEs") as unique, common identifiers for publicly known information-security vulnerabilities in publicly released software packages. Historically, CVE identifiers had a status of "candidate" ("CAN-") and could then be promoted to entries ("CVE-"), but this practice was ended in 2005 and all identifiers are now assigned as CVEs. The assignment of a CVE number is not a guarantee that it will become an official CVE entry (e.g., a CVE may be improperly assigned to an issue which is not a security vulnerability, or which duplicates an existing entry).
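As an informal illustration (generic Python, not part of MITRE's specification), an identifier of the form CVE-year-sequence, where the sequence number has four or more digits, can be checked with a simple pattern:
import re

# CVE-YYYY-NNNN..., with a four-digit year and a sequence of at least four digits.
CVE_ID = re.compile(r"^CVE-\d{4}-\d{4,}$")

print(bool(CVE_ID.match("CVE-2014-0160")))  # True: well-formed identifier
print(bool(CVE_ID.match("CAN-2004-0001")))  # False: old candidate prefix, no longer used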
CVEs are assigned by a CVE Numbering Authority (CNA). While some vendors acted as a CNA before, the name and designation was not created until February 1, 2005. There are three primary types of CVE number assignments:
The Mitre Corporation functions as Editor and Primary CNA
Various CNAs assign CVE numbers for their own products (e.g., Microsoft, Oracle, HP, Red Hat)
A third-party coordinator such as CERT Coordination Center may assign CVE numbers for products not covered by other CNAs
When investigating a vulnerability or potential vulnerability it helps to acquire a CVE number early on. CVE numbers may not appear in the MITRE or NVD CVE databases for some time (days, weeks, months or potentially years) due to issues that are embargoed (the CVE number has been assigned but the issue has not been made public), or in cases where the entry is not researched and written up by MITRE due to resource issues. The benefit of early CVE candidacy is that all future correspondence can refer to the CVE number. Information on getting CVE identifiers for issues with open source projects is available from Red Hat and GitHub.
CVEs are for software that has been publicly re
|
https://en.wikipedia.org/wiki/Fityk
|
Fityk is a curve-fitting and data-analysis application, predominantly used to fit analytical, bell-shaped functions to experimental data. It is positioned to fill the gap between general plotting software and programs specific to one field, e.g. crystallography or XPS.
Originally, Fityk was developed to analyse powder diffraction data. It is also used in other fields that require peak analysis and peak-fitting, like chromatography or various kinds of spectroscopy.
Fityk is free and open source, distributed under the terms of GNU General Public License, with binaries/installers available free of charge on the project's website. It runs on Linux, macOS, Microsoft Windows, FreeBSD and other platforms. It operates either as a command line program or with a graphical user interface.
It is written in C++, uses wxWidgets, and provides bindings for Python and other scripting languages.
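For illustration only, here is generic SciPy code (not Fityk's own command language or Python API) that fits a single bell-shaped peak to noisy data by weighted least squares; curve_fit uses the Levenberg-Marquardt algorithm by default for unconstrained problems:
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, height, center, hwhm):
    # A bell-shaped model function of the kind used in peak fitting.
    return height * np.exp(-np.log(2) * ((x - center) / hwhm) ** 2)

x = np.linspace(-5, 5, 200)
y = gaussian(x, 10.0, 0.5, 1.2) + np.random.normal(scale=0.3, size=x.size)  # synthetic data

sigma = np.full(x.size, 0.3)  # per-point uncertainties used as weights
params, cov = curve_fit(gaussian, x, y, p0=[8.0, 0.0, 1.0], sigma=sigma)
print(params)  # fitted height, center, and half-width at half-maximum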
Features
three weighted least squares methods:
Levenberg-Marquardt algorithm,
Nelder-Mead method
Genetic algorithm
about 20 built-in functions and support for user-defined functions
equality constraints
data manipulations,
handling series of datasets,
automation of common tasks with scripts.
Alternatives
The programs LabPlot, MagicPlot and peak-o-mat have similar scope.
More generic data analysis programs with spread-sheet capabilities include the proprietary Origin and its clones QtiPlot (paid, closed source) and SciDAVis (non-paid, open source).
See also
Comparison of numerical analysis software
External links
References
2004 software
Data analysis software
Free plotting software
Free science software
Free software programmed in C++
Free software projects
Regression and curve fitting software
Software that uses wxWidgets
|
https://en.wikipedia.org/wiki/Transactional%20memory
|
In computer science and engineering, transactional memory attempts to simplify concurrent programming by allowing a group of load and store instructions to execute in an atomic way. It is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing.
Transactional memory systems provide high-level abstraction as an alternative to low-level thread synchronization. This abstraction allows for coordination between concurrent reads and writes of shared data in parallel systems.
Motivation
In concurrent programming, synchronization is required when parallel threads attempt to access a shared resource. Low-level thread synchronization constructs such as locks are pessimistic and prohibit threads that are outside a critical section from running the code protected by the critical section. The process of acquiring and releasing locks adds overhead, particularly in workloads with little conflict among threads. Transactional memory provides optimistic concurrency control by allowing threads to run in parallel with minimal interference. The goal of transactional memory systems is to transparently support regions of code marked as transactions by enforcing atomicity, consistency and isolation.
A transaction is a collection of operations that can execute and commit changes as long as a conflict is not present. When a conflict is detected, a transaction will revert to its initial state (prior to any changes) and will rerun until all conflicts are removed. Before a successful commit, the outcome of any operation is purely speculative inside a transaction. In contrast to lock-based synchronization where operations are serialized to prevent data corruption, transactions allow for additional parallelism as long as few operations attempt to modify a shared resource. Since the programmer is not responsible for explicitly identifying locks or the order in which they are acquired, programs that utilize transactional memory cannot produce a deadlock.
With these constructs in place, transactional memory provides a high-level programming abstraction by allowing programmers to enclose their methods within transactional blocks. Correct implementations ensure that data cannot be shared between threads without going through a transaction and produce a serializable outcome. For example, code can be written as:
def transfer_money(from_account, to_account, amount):
"""Transfer money from one account to another."""
with transaction():
from_account.balance -= amount
to_account.balance += amount
In the code, the block defined by "transaction" is guaranteed atomicity, consistency and isolation by the underlying transactional memory implementation and is transparent to the programmer. The variables within the transaction are protected from external conflicts, ensuring that either the correct amount is transferred or no action is taken at all. Note that concurrency relat
|
https://en.wikipedia.org/wiki/KWLC
|
KWLC (1240 AM) is a college radio station licensed to Decorah, Iowa, United States, owned by Luther College and operated by a staff of Luther students. The station's programming consists primarily of music, but also includes sports, religious services, and educational content. In September 2015, KWLC added a Sunday afternoon news program.
The station began broadcasting in 1926 and is said to be the oldest continually operating radio station in Iowa. It broadcasts on a frequency shared with local commercial station KDEC. In 2004, the station began webcasting.
References
External links
FCC timeshare documentation
WLC
Radio stations established in 1926
1926 establishments in Iowa
Luther College (Iowa)
|
https://en.wikipedia.org/wiki/KMEG
|
KMEG (channel 14) is a television station in Sioux City, Iowa, United States, affiliated with the digital multicast network Dabl. It is owned by Waitt Broadcasting, which maintains a shared services agreement (SSA) with Sinclair Broadcast Group, owner of Fox/MyNetworkTV/CBS affiliate KPTH (channel 44), for the provision of certain services. The two stations share studios along I-29 (postal address says Gold Circle) in Dakota Dunes, South Dakota; KMEG's transmitter is located in unincorporated Plymouth County, Iowa, east of James and US 75 along the Woodbury County line.
From its sign-on in 1967 to 2021, KMEG was the CBS affiliate in Sioux City. It was put on the air to provide the area with full three-network service for the first time. The station largely spent decades in third place under a succession of owners; it had no full-length local news programming from 1976 to 1999. KMEG briefly had the national spotlight in 1993 when its decision not to air the Late Show with David Letterman left Sioux City the only market where the show was not aired.
Waitt Broadcasting, the present owner, acquired KMEG in 1998. New studios were built in Dakota Dunes and a new news operation was started. In 2005, Waitt outsourced most station operations to KPTH owner Pappas Telecasting; that station changed hands in 2009 and again in 2013. CBS programming moved to a "CBS 14" subchannel of Sinclair-owned KPTH in 2021, leaving KMEG to broadcast national digital multicast television networks.
History
Early years
Medallion Broadcasters, Inc., applied to the Federal Communications Commission (FCC) in November 1966 seeking authority to build a television station on ultra high frequency (UHF) channel 14 in Sioux City. Medallion, a group of northwest Iowa residents, sought to bring the missing ABC network to Sioux City. The group was headed by Robert Donovan, longtime sales manager of one of the two existing stations in Sioux City, KVTV (channel 9, now KCAU-TV). The commission granted the application on February 15, 1967. Construction then began, with Medallion taking up space in a building at Seventh Street and Floyd Boulevard previously used by a coffee company. The call letters KMEG were selected to reflect that the station would broadcast with a megawatt, the first station in the region to do so.
While the station was in the construction phase, KVTV announced it would change network affiliations from CBS to ABC. Medallion then announced its intention to pursue the CBS affiliation for KMEG and signed an affiliation agreement.
KMEG began broadcasting on September 5, 1967, from a transmitter site on high ground east of Sioux City. Eighteen months after going on air, Medallion announced the sale of KMEG to John Fetzer; the FCC approved the deal and noted that Medallion had sustained heavy losses in starting up and running channel 14. However, much carried over from the station's founding ownership. Donovan remained with KMEG as station manager until 1983, while KMEG ha
|
https://en.wikipedia.org/wiki/Edinburgh%20Parallel%20Computing%20Centre
|
EPCC, formerly the Edinburgh Parallel Computing Centre, is a supercomputing centre based at the University of Edinburgh. Since its foundation in 1990, its stated mission has been to accelerate the effective exploitation of novel computing throughout industry, academia and commerce.
The University has supported high-performance computing (HPC) services since 1982. Through EPCC, it supports the UK's national high-end computing system, ARCHER (Advanced Research Computing High End Resource), and the UK Research Data Facility (UK-RDF).
Overview
EPCC's activities include: consultation and software development for industry and academia; research into high-performance computing; hosting advanced computing facilities and supporting their users; and training and education.
The Centre offers two Masters programmes: the MSc in High-Performance Computing and the MSc in High-Performance Computing with Data Science.
It is a member of the Globus Alliance and, through its involvement with the OGSA-DAI project, it works with the Open Grid Forum DAIS-WG.
Around half of EPCC's annual turnover comes from collaborative projects with industry and commerce. In addition to privately funded projects with businesses, EPCC receives funding from Scottish Enterprise, the Engineering and Physical Sciences Research Council and the European Commission.
History
EPCC was established in 1990, following on from the earlier Edinburgh Concurrent Supercomputer Project and chaired by Jeffery Collins from 1991. From 2002 to 2016 EPCC was part of the University's School of Physics & Astronomy, becoming an independent Centre of Excellence within the University's College of Science and Engineering in August 2016.
It was extensively involved in all aspects of Grid computing including: developing Grid middleware and architecture tools to facilitate the uptake of e-Science; developing business applications and collaborating in scientific applications and demonstration projects.
The Centre was a founder member of the UK's National e-Science Centre (NeSC), the hub of Grid and e-Science activity in the UK. EPCC and NeSC were both partners in OMII-UK, which offers consultancy and products to the UK e-Science community. EPCC was also a founder partner of the Numerical Algorithms and Intelligent Software Centre (NAIS).
EPCC has hosted a variety of supercomputers over the years, including several Meiko Computing Surfaces, a Thinking Machines CM-200 Connection Machine, and a number of Cray systems including a Cray T3D and T3E.
High-performance computing facilities
EPCC manages a collection of HPC systems including ARCHER (the UK's national high-end computing system) and a variety of smaller HPC systems. These systems are all available for industry use on a pay-per-use basis.
Current systems hosted by EPCC include:
ARCHER2: As of 2021, the ARCHER2 facility is based around an HPE Cray EX supercomputer that provides the central computational resource, with an estimated peak performance of 28 Peta
|
https://en.wikipedia.org/wiki/DCAS
|
DCAS may be:
DCAS keys, control keys on the computer keyboard
Deputy Chief of the Air Staff (disambiguation)
Derive computer algebra system
Double compare-and-swap
Downloadable Conditional Access System
New York City Department of Citywide Administrative Services
|
https://en.wikipedia.org/wiki/HIO
|
HIO may refer to:
Hillsboro Airport, in Washington County, Oregon, United States
Hypoiodous acid, an oxidising agent
Hybrid input-output algorithm, in coherent diffraction imaging
Oslo University College, the largest state university college in Norway
Østfold University College, a further and higher education institution in south-eastern Norway
Tsoa language, spoken in Botswana and Zimbabwe
|
https://en.wikipedia.org/wiki/Automatic%20Storage%20Management
|
Automatic Storage Management (ASM) is a feature provided by Oracle Corporation within the Oracle Database from release Oracle 10g (revision 1) onwards. ASM aims to simplify the management of database datafiles, control files and log files. To do so, it provides tools to manage file systems and volumes directly inside the database, allowing database administrators (DBAs) to control volumes and disks with familiar SQL statements in standard Oracle environments. Thus DBAs do not need extra skills in specific file systems or volume managers (which usually operate at the level of the operating system).
Features
IO channels can take advantage of data striping and software mirroring
DBAs can automate online redistribution of data, along with the addition and removal of disks/storage
the system maintains redundant copies and provides 3rd-party RAID functionality
Oracle supports third-party multipathing IO technologies (such as failover or load balancing to SAN access)
the need for hot spares diminishes
Architecture overview
ASM creates extents out of datafiles, log-files, system files, control files and other database structures. The system then spreads these extents across all disks in a "diskgroup". One can think of a diskgroup in ASM as a Logical Volume Manager volume group — with an ASM file corresponding to a logical volume. In addition to the existing Oracle background processes, ASM introduces two new ones - OSMB and RBAL. OSMB opens and creates disks in a diskgroup. RBAL provides the functionality of moving data between disks in a diskgroup.
Implementation and usage
Automatic Storage Management (ASM) simplifies administration of Oracle-related files by allowing the administrator to reference disk groups (rather than individual disks and files) which ASM manages. ASM extends the Oracle Managed Files (OMF) functionality
that also includes striping and mirroring to provide balanced and secure storage. DBAs can use the ASM functionality in combination with existing raw and cooked file-systems, along with OMF and manually managed files.
An ASM instance controls the ASM functionality. It is not a full database instance: it provides only the memory structures, and as such is very small and lightweight.
The main components of ASM are disk groups, each of which comprises several physical disks controlled as a single unit. The physical disks are known as ASM disks, while the files that reside on the disks are known as ASM files. The locations and names of the files are controlled by ASM, but user-friendly aliases and directory structures can be defined by the DBA for ease of reference.
The level of redundancy and the granularity of the striping can be controlled using templates. Oracle Corporation provides default templates for each file-type stored by ASM, but additional templates can be defined as needed.
Failure groups are defined within a disk group to support the required level of redundancy. For two-way mirroring, a disk group might cont
|
https://en.wikipedia.org/wiki/Common%20Type%20System
|
In Microsoft's .NET Framework, the Common Type System (CTS) is a standard that specifies how type definitions and specific values of types are represented in
computer memory. It is intended to allow programs written in different programming languages to easily share information. As used in programming languages, a type can be described as a definition of a set of values (for example, "all integers between 0 and 10"), and the allowable operations on those values (for example, addition and subtraction).
The specification for the CTS is contained in Ecma standard 335, "Common Language Infrastructure (CLI) Partitions I to VI." The CLI and the CTS were created by Microsoft, and the Microsoft .NET framework is an implementation of the standard.
Functions of the Common Type System
To establish a framework that helps enable cross-language integration, type safety, and high performance code execution.
To provide an object-oriented model that supports the complete implementation of many programming languages.
To define rules that languages must follow, which helps ensure that objects written in different languages can interact with each other.
The CTS also defines the rules that ensure that the data types of objects written in various languages are able to interact with each other.
The CTS also specifies the rules for type visibility and access to the members of a type, i.e. the CTS establishes the rules by which assemblies form scope for a type, and the Common Language Runtime enforces the visibility rules.
The CTS defines the rules governing type inheritance, virtual methods and object lifetime.
Languages supported by .NET can implement all or some common data types…
When rounding fractional values, the halfway-to-even ("banker's") method is used by default, throughout the Framework. Since version 2, "Symmetric Arithmetic Rounding" (round halves away from zero) is also available by programmer's option.
The CTS is also used to communicate with other languages.
Type categories
The common type system supports two general categories of types:
Value types Value types directly contain their data, and instances of value types are either allocated on the stack or allocated inline in a structure. Value types can be built-in (implemented by the runtime), user-defined, or enumerations.
Reference types Reference types store a reference to the value's memory address, and are allocated on the heap. Reference types can be self-describing types, pointer types, or interface types. The type of a reference type can be determined from values of self-describing types. Self-describing types are further split into arrays and class types. The class types are user-defined classes, boxed value types, and delegates.
The following example written in Visual Basic .NET shows the difference between reference types and value types:
Imports System
Class Class1
    Public Value As Integer = 0
End Class 'Class1
Class Test
    Shared Sub Main()
        Dim val1 As Integer = 0
        Dim val2 As Integer = val1      ' value type: val2 receives a copy of val1
        val2 = 123
        Dim ref1 As New Class1()
        Dim ref2 As Class1 = ref1       ' reference type: ref2 refers to the same object as ref1
        ref2.Value = 123
        Console.WriteLine("Values: {0}, {1}", val1, val2)            ' prints 0, 123
        Console.WriteLine("Refs: {0}, {1}", ref1.Value, ref2.Value)  ' prints 123, 123
    End Sub
End Class 'Test
|
https://en.wikipedia.org/wiki/Investment%20decisions
|
Investment decisions are made by investors and investment managers. These decisions are based on the findings of analysis tools applied to data available about the companies.
Investors commonly perform investment analysis by making use of fundamental analysis, technical analysis and gut feel.
Investment decisions are often supported by decision tools. Portfolio theory is often applied to help the investor achieve a satisfactory return compared to the risk taken.
Investment decision biases
Bad decisions are often followed by a feeling of investor's remorse.
See also
Behavioral finance
Cognitive bias
Relative strength
Ratio analysis
References
Investment
|
https://en.wikipedia.org/wiki/HRESULT
|
HRESULT is a computer programming data type that represents the completion status of a function.
It is used in the source code of applications targeting Microsoft Windows and earlier IBM/Microsoft OS/2 operating systems, but its design does not limit its use to these environments: it could be used in any system supporting 32-bit integers, which includes most modern computers.
The original purpose of HRESULT was to lay out ranges of status codes for both public and Microsoft internal use in order to prevent collisions between status codes in different subsystems of the OS/2 operating system.
An HRESULT is designed to simultaneously be both a simple numerical value and a structure of fields indicating severity, facility and status code.
Use of HRESULT is most commonly encountered in COM programming, where it forms the basis for a standardized error handling mechanism. But its use is not limited to COM. For example, it can be used as an alternative to the more traditional use of a Boolean pass/fail result.
Data structure
HRESULT is defined in a system header file as a 32-bit, signed integer and a value is often treated opaquely as an integer, especially in code that consumes a function that returns HRESULT. But a value consists of the following separate items:
Severity: indicates whether the function succeeded or failed
Facility: identifies the part of the system for which the status applies
Code: identifies a particular condition in the context of the facility
An HRESULT value is a structure with the following bit-fields:
S - Severity - indicates success (0) or failure (1)
R - Reserved portion of the facility code; corresponds to NT's second severity bit (1 - Severe Failure)
C - Customer. Specifies whether the value is Microsoft-defined (0) or customer-defined (1)
N - Reserved portion of the facility code; used to indicate a mapped NT status value
X - Reserved portion of the facility code; reserved for internal use; used to indicate a value that is not a status but is instead a message id for display strings
Facility - indicates the system service that is responsible for the status; examples:
1 - RPC
2 - Dispatch (COM dispatch)
3 - Storage (OLE storage)
4 - ITF (COM/OLE Interface management)
7 - Win32 (raw Win32 error codes)
8 - Windows
9 - SSPI
10 - Control
11 - CERT (Client or server certificate)
...
Code - the facility's status code
Numeric representation
An HRESULT value is sometimes displayed as a hexadecimal value with 8 digits.
Examples:
0x80070005
0x8 - Status: Failure
0x7 - Facility: win32
0x5 - Code: ERROR_ACCESS_DENIED (access denied)
0x80090032
0x8 - Status: Failure
0x9 - Facility: SSPI
0x32 - Code: The request is not supported
Sometimes an HRESULT value is shown as a signed integer, but this is less common and harder to read.
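Because the three fields occupy fixed bit positions, an HRESULT can be decoded with simple shifts and masks; the Windows headers provide HRESULT_SEVERITY, HRESULT_FACILITY and HRESULT_CODE macros for the same purpose. A small Python sketch of the extraction described above:

def decode_hresult(hr: int):
    """Split a 32-bit HRESULT into severity, facility and code."""
    hr &= 0xFFFFFFFF                # treat the value as an unsigned 32-bit integer
    severity = (hr >> 31) & 0x1     # 1 = failure, 0 = success
    facility = (hr >> 16) & 0x7FF   # 11-bit facility field
    code = hr & 0xFFFF              # 16-bit status code
    return severity, facility, code

print(decode_hresult(0x80070005))   # (1, 7, 5): failure, Win32 facility, access denied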
Name representation
An HRESULT is sometimes represented as a so-called name, an identifier with format Facility_Severity_Reason:
Facility is either the facility
|
https://en.wikipedia.org/wiki/Paul%20Larson
|
Paul Larson (Per-Åke Larson) is a computer scientist, best known for inventing the linear hashing algorithm with Witold Litwin. Paul Larson is currently a senior researcher in the Database Group of Microsoft Research. He is a frequent chair and committee member of conferences such as VLDB, SIGMOD, and ICDE.
In 2005 he was inducted as a Fellow of the Association for Computing Machinery.
References
Larson PA. "Dynamic Hash Tables." Communications of the ACM. April 1988, 31(4):446-57 pdf.
External links
Paul Larson MSR Page
UW MSR Summer Institute 2010
Year of birth missing (living people)
Living people
Microsoft employees
American computer scientists
Database researchers
Academic staff of the University of Waterloo
Fellows of the Association for Computing Machinery
|
https://en.wikipedia.org/wiki/SageMath
|
SageMath (previously Sage or SAGE, "System for Algebra and Geometry Experimentation") is a computer algebra system (CAS) with features covering many aspects of mathematics, including algebra, combinatorics, graph theory, numerical analysis, number theory, calculus and statistics.
The first version of SageMath was released on 24 February 2005 as free and open-source software under the terms of the GNU General Public License version 2, with the initial goals of creating an "open source alternative to Magma, Maple, Mathematica, and MATLAB". The originator and leader of the SageMath project, William Stein, was a mathematician at the University of Washington.
SageMath uses a syntax resembling Python's, supporting procedural, functional and object-oriented constructs.
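For example, a short interactive session illustrates this syntax; the specific computations below are illustrative and the outputs shown are those Sage prints:

sage: factor(2019)
3 * 673
sage: integrate(x^2, x)
1/3*x^3
sage: matrix([[1, 2], [3, 4]]).determinant()
-2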
Development
Stein realized when designing Sage that there were many open-source mathematics software packages already written in different languages, namely C, C++, Common Lisp, Fortran and Python.
Rather than reinventing the wheel, Sage (which is written mostly in Python and Cython) integrates many specialized CAS software packages into a common interface, for which a user needs to know only Python. However, Sage contains hundreds of thousands of unique lines of code adding new functions and creating the interfaces among its components.
Both students and professionals contribute to SageMath's development, which is supported by volunteer work and grants. However, it was not until 2016 that the first full-time Sage developer was hired (funded by an EU grant). The same year, Stein described his disappointment with the lack of academic funding and credentials for software development, citing it as the reason for his decision to leave his tenured academic position to work full-time on the project in a newly founded company, SageMath, Inc.
Achievements
2007: first prize in the scientific software division of Les Trophées du Libre, an international competition for free software.
2012: one of the projects selected for the Google Summer of Code.
2013: ACM/SIGSAM Jenks Prize.
Performance
Both binaries and source code are available for SageMath from the download page. If SageMath is built from source code, many of the included libraries such as OpenBLAS, FLINT, GAP (computer algebra system), and NTL will be tuned and optimized for that computer, taking into account the number of processors, the size of their caches, whether there is hardware support for SSE instructions, etc.
Cython can increase the speed of SageMath programs, as the Python code is converted into C.
Licensing and availability
SageMath is free software, distributed under the terms of the GNU General Public License version 3.
Although Microsoft was sponsoring a native version of SageMath for the Windows operating system, prior to 2016 there were no plans for a native port, and users of Windows had to use virtualization technology such as VirtualBox to run SageMath. SageMath 8.0 (July 2017), with devel
|
https://en.wikipedia.org/wiki/TMF%20Flanders
|
TMF was a Belgian pay television channel whose programming was centred towards pop music videoclips. TMF was operated by Viacom International Media Networks.
Originally an abbreviation of "The Music Factory", the channel was launched as TMF Vlaanderen in 1998, mainly due to the success of the eponymous Dutch music television channel. The station began broadcasting on October 3, 1998.
TMF Flanders recorded its programming mainly at the Eurocam Media Center in Lint, where the parent company was also based until mid-2013.
History
TMF Flanders was launched on October 3, 1998. On October 5, 2015, Viacom announced that TMF would stop broadcasting on November 1, 2015, which meant that two Flemish youth channels (TMF and competitor JIM) disappeared within a short period. On November 1, 2015, Comedy Central took over the channel, ending the last TMF service and retiring the brand completely.
On October 14, 2023, to celebrate the 25th birthday of the channel, Pickx revived TMF under the name "TMF For A Day".
See also
The Music Factory
References
External links
TMF Vlaanderen
Official Facebook page
Official Twitter page
Official Netlog page
Official Vimeo page
Music television channels
Television channels in Flanders
Defunct television channels in Belgium
Television channels and stations established in 1998
Television channels and stations disestablished in 2015
Music organisations based in the Netherlands
|
https://en.wikipedia.org/wiki/Salome%20%28software%29
|
SALOME is a multi-platform open source (LGPL-2.1-or-later) scientific computing environment, allowing the realization of industrial studies of physics simulations.
This platform, developed by a partnership between EDF and CEA, sets up an environment for the various stages of a study to be carried out: from the creation of the CAD model and the mesh to the post-processing and visualization of the results, including the sequence of calculation schemes. Other functionalities, such as uncertainty treatment and data assimilation, are also implemented.
SALOME does not contain a physics solver but it provides the computing environment necessary for their integration. The SALOME environment serves as a basis for the creation of disciplinary platforms, such as salome_meca (containing code_aster), salome_cfd (with code_saturne) and SALOME-HYDRO (with TELEMAC-MASCARET).
It is also possible to create tools for specific applications (for example civil engineering, fast dynamics in pipes or rotating machines, available in salome_meca) whose specialized graphical interfaces facilitate the performance of a study.
In addition to using SALOME through its graphical interface, most of the functionalities are available through a Python API. SALOME is available on its official website.
A SALOME Users’ Day takes place every year, featuring presentations on studies performed with SALOME in several application domains, either at EDF, CEA or elsewhere. The presentations of previous editions are available on the official website.
History and consortium
The development of SALOME was started around the year 2000 by a nine-member partnership including EDF, CEA and Open Cascade. The SALOME acronym means "Numerical Simulation by Computing Architecture in Open Source and with Evolving Methodology" (in French, « Simulation numérique par Architecture Logicielle en Open source et à Méthodologie d'Évolution »). Since 2020, the partnership focuses on industrial applications in the energy domain and is formed by EDF and CEA.
The MED format
The MED format (Modèle d’Échange des Données in French, for Data Exchange Model) is a specialization of the HDF5 standard. It is jointly owned by EDF and CEA. MED is SALOME's data exchange model. The MED data model offers a standardized representation of meshes and result fields that is independent of the simulated physics. The MED library is developed in C and C++ and has an API in C, FORTRAN and Python.
Available features
SALOME offers many features, including a powerful open source parametric CAD modeller, a multi-algorithm mesh generator/editor, a computational code supervisor, and many data analysis and processing tools.
Most of the modules are accessible both through the GUI and through Python scripts. However, some modules remain dedicated to purely scripted use (via Python scripts). Here is the list of modules available in SALOME 9.9 that are also accessible via Python scripts:
SHAPER: parametric and variational CAD generator of geome
|
https://en.wikipedia.org/wiki/Accelerator%20table
|
In Windows programming, an accelerator table allows an application to specify a list of accelerators (keyboard shortcuts) for menu items or other commands. For example, Ctrl+S is often used as a shortcut to the File→Save menu item, Ctrl+O is a common shortcut to the File→Open menu item, etc. An accelerator takes precedence over normal processing and can be a convenient way to program some event handling.
Accelerator tables are usually located in the resources section of the binary.
Accelerators and menus
Each accelerator is associated with a control ID, the same kind of IDs which are assigned to buttons, combo boxes, list boxes, and also menu items. In this way, GUI objects can be created which represent the same function as an accelerator.
Since using the menus, and therefore the mouse, is not always the most convenient option, it is important to give users a way to minimize mouse usage. For this reason, showing the accelerators in menus can be useful; it informs the user that shortcuts exist and that using the mouse is not always necessary.
Electron usage
The software framework Electron also uses the term "Accelerator" as the name for its API to specify keyboard shortcuts for menu items and program behaviors on multiple platforms, including those other than Windows.
See also
Keyboard shortcut
References
User interface techniques
|
https://en.wikipedia.org/wiki/Alan%20Shalleck
|
Alan J. Shalleck (November 14, 1929 – February 6, 2006) was an American writer and producer for children's programming on television, most known for his work on later Curious George books and the 1980s television shorts.
Shalleck studied drama at Syracuse University in Syracuse, New York and went to work for CBS in the 1950s, eventually becoming an associate producer on the children's television series Winky Dink and You. In the early sixties he moved to Montreal, where he produced "Like Young" at CFCF-TV, a highly successful teen music/dance show starring Jim McKenna that was eventually picked up and syndicated by Dick Clark Productions. Following his years at CBS and CFCF-TV, Shalleck was a producer at The Network for Continuing Medical Education and then formed his own production company (AJ Shalleck Productions), producing a number of low-budget children's animated films and television episodes.
In 1977, he approached Margret Rey about producing a television series based on Curious George, which led to the 1980 television show. Shalleck and Rey wrote more than 100 short episodes for the series. In addition, they collaborated on a number of children's books and audiobooks. (Some of these books list Rey as the author and Shalleck as the editor, while others reverse the credits.)
In his retirement, Shalleck created the company "Reading By GRAMPS" and visited local elementary schools, bookstores, and other events to read books to children and promote literacy. However, he also experienced financial problems and was forced to supplement his income with part-time jobs. He most recently worked as a bookseller for Borders Books in Boynton Beach, Florida.
On February 7, 2006, 3 days before the theatrical release of a Curious George animated motion picture, Shalleck's body was discovered, partially hidden, at his home in Boynton Beach, Florida, a victim of a robbery/homicide. His attackers were tracked down using the victim's phone records. They confessed to the crime.
On October 19, 2007, one of Shalleck's murderers, 31-year-old Rex Ditto, was sentenced to life in prison and is not eligible for parole. Ditto's co-defendant, Vincent Puglisi, was convicted of first-degree murder and robbery with a deadly weapon on June 24, 2008. He was sentenced in July 2008 to life in prison and is also not eligible for parole.
References
American children's writers
Television producers from New York (state)
1929 births
2006 deaths
2006 murders in the United States
American murder victims
Syracuse University alumni
Deaths by stabbing in Florida
People murdered in Florida
Male murder victims
People from Boynton Beach, Florida
Curious George
Television producers from Florida
|
https://en.wikipedia.org/wiki/Packet%20crafting
|
Packet crafting is a technique that allows network administrators to probe firewall rule-sets and find entry points into a targeted system or network. This is done by manually generating packets to test network devices and behaviour, instead of using existing network traffic. Testing may target the firewall, IDS, TCP/IP stack, router or any other component of the network. Packets are usually created by using a packet generator or packet analyzer which allows for specific options and flags to be set on the created packets. The act of packet crafting can be broken into four stages: Packet Assembly, Packet Editing, Packet Play and Packet Decoding. Tools exist for each of the stages - some tools are focused only on one stage while others such as Ostinato try to encompass all stages.
Packet assembly
Packet Assembly is the creation of the packets to be sent. Some popular programs used for packet assembly are Hping, Nemesis, Ostinato, Cat Karat packet builder, Libcrafter, libtins, PcapPlusPlus, Scapy, Wirefloss and Yersinia. Packets may be of any protocol and are designed to test specific rules or situations. For example, a TCP packet may be created with a set of erroneous flags to ensure that the target machine sends a RESET command or that the firewall blocks any response.
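As an illustration of the assembly stage, the following Scapy snippet builds a TCP segment with an unusual flag combination and sends it; the destination address, port and timeout are placeholders, and sending raw packets normally requires administrative privileges.

from scapy.all import IP, TCP, sr1

# Craft a TCP segment with both SYN and FIN set -- an illegal combination
# that a well-behaved stack or firewall should reject or drop.
probe = IP(dst="192.0.2.10") / TCP(dport=80, flags="SF")
reply = sr1(probe, timeout=2)        # send once and wait for a single reply
if reply is None:
    print("No response (filtered or dropped)")
else:
    reply.show()                     # dissect and print the response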
Packet editing
Packet Editing is the modification of created or captured packets. This involves modifying packets in manners which are difficult or impossible to do in the Packet Assembly stage, such as modifying the payload of a packet. Programs such as Scapy, Ostinato, Netdude allow a user to modify recorded packets' fields, checksums and payloads quite easily. These modified packets can be saved in packet streams which may be stored in pcap files to be replayed later.
Packet play
Packet Play or Packet Replay is the act of sending a pre-generated or captured series of packets. Packets may come from Packet Assembly and Editing or from captured network attacks. This allows for testing of a given usage or attack scenario for the targeted network. Tcpreplay is the most common program for this task since it is capable of taking a stored packet stream in the pcap format and sending those packets at the original rate or a user-defined rate. Scapy also supports send functions to replay any saved packets/pcap. Ostinato added support for pcap files in version 0.4. Some packet analyzers are also capable of packet replay.
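For the replay stage, a captured pcap file can also be resent with a few lines of Scapy; the file name and interface below are placeholders, and tcpreplay remains the usual choice when precise control over timing and rates is needed.

from scapy.all import rdpcap, sendp

packets = rdpcap("attack_trace.pcap")   # load a previously captured packet stream
sendp(packets, iface="eth0")            # replay the packets at layer 2 on the chosen interface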
Packet decoding
Packet Decoding is the capture and analysis of the network traffic generated during Packet Play. In order to determine the targeted network's response to the scenario created by Packet Play, the response must be captured by a packet analyzer and decoded according to the appropriate specifications. Depending on the packets sent, a desired response may be no packets were returned or that a connection was successfully established, among others. The most famous tools for that task are Wireshark and Scapy.
See also
Comparison of packet analyzers
Replay at
|
https://en.wikipedia.org/wiki/UnitingCare%20Australia
|
UnitingCare Australia is the national body for the UnitingCare network, made up of the Uniting Church in Australia's (UCA) community services agencies.
It is a sister body to UnitingJustice Australia, and UnitingWorld. All are agencies of the Uniting Church in Australia, National Assembly.
UnitingCare Australia advocates on behalf of the UnitingCare network to the Australian Federal Government.
UnitingCare network
UnitingCare is a brand name under which many Uniting Church community services agencies operate although they may be agencies of the respective Synods, or separate legal entities. Together with agencies under the Uniting Church in Australia without the UnitingCare brand, the agencies form the UnitingCare network.
The network is one of Australia's largest non-government community services provider networks, with over 1,600 sites Australia-wide. The UnitingCare network has 40,000 employees and 30,000 volunteers nationally, and provides services to children, young people and families, people with disabilities, and older Australians, in urban, rural and remote communities, including residential and community care, child care, homelessness prevention and support, family support, domestic violence and disability services.
Examples of non-UnitingCare branded agencies within the UnitingCare network include Uniting NSW.ACT, Uniting WA, Juniper (WA), Somerville Community Services (NT), and Uniting Communities (SA). The network also includes the Uniting Missions Network, made up of 34 missions such as the Wesley Missions in Queensland and NSW, and Blue Care in Queensland.
Mandate
UnitingCare Australia's mandate is:
To take up community service issues within the theological framework of the Uniting Church, particularly the Church's social justice perspectives.
To develop and reflect on the policies and practices of the Uniting Church in community services.
To pursue appropriate issues within the Uniting Church, with Government and the community sector, with the Australian community and with other parts of the church.
National Director
The first National Director of UnitingCare Australia was Libby Davies AM, appointed in 1994 until 2001. Lin Hatfield Dodds was National Director until July 2016. Martin Cowling acted as the National Director between June 2016 and December 2016. Claerwen Little took up the position of National Director on 6 February 2017.
See also
Wesley Mission
Prahran Mission
The Wayside Chapel
References
External links
UnitingCare Australia
Uniting NSW.ACT
Uniting WA
Uniting Missions Network
UnitingJustice Australia
UnitingWorld
Blue Care
Uniting Church in Australia
Health charities in Australia
Social work organisations in Australia
Medical and health organisations based in the Australian Capital Territory
|
https://en.wikipedia.org/wiki/EOSDIS
|
The Earth Observing System Data and Information System (EOSDIS) is a key core capability in NASA’s Earth Science Data Systems Program. Designed and maintained by Raytheon Intelligence & Space, it is a comprehensive data and information system designed to perform a wide variety of functions in support of a heterogeneous national and international user community.
EOSDIS provides a spectrum of services; some services are intended for a diverse group of casual users while others are intended only for a select cadre of research scientists chosen by NASA's peer-reviewed competitions, and then many fall somewhere in between. The primary services provided by EOSDIS are User Support, Data Archive, Management and Distribution, Information Management, and Product Generation, all of which are managed by the Earth Science Data and Information System (ESDIS) Project.
Overview
EOSDIS ingests, processes, archives, and distributes data from a large number of Earth-observing satellites, and provides end-to-end capabilities for managing NASA's Earth science data from various sources – satellites, aircraft, field measurements, and various other programs. For the Earth Observing System (EOS) satellite missions, EOSDIS provides capabilities for command and control, scheduling, data capture and initial (Level 0) processing.
These capabilities, constituting the EOSDIS Mission Operations, are managed by the Earth Science Mission Operations (ESMO) Project. NASA network capabilities transport the data to the science operations facilities. EOSDIS consists of a set of processing facilities and Distributed Active Archive Centers distributed across the United States. These processing facilities and DAACs serve hundreds of thousands of users around the world, providing hundreds of millions of data files each year covering many Earth science disciplines. The EOSDIS project as of September 2012 reported it contained approximately 10 PB of data in its database with ingestion of approximately 8.5 TB daily.
The remaining capabilities of EOSDIS constitute the EOSDIS Science Operations, which are managed by the Earth Science Data and Information System (ESDIS) Project. These capabilities include: generation of higher level (Level 1-4) science data products for EOS missions; archiving and distribution of data products from EOS and other satellite missions, as well as aircraft and field measurement campaigns. The EOSDIS science operations are performed within a distributed system of many interconnected nodes (Science Investigator-led Processing Systems and distributed, discipline-specific, Earth science Distributed Active Archive Centers) with specific responsibilities for production, archiving, and distribution of Earth science data products. The Distributed Active Archive Centers serve a large and diverse user community (as indicated by EOSDIS performance metrics) by providing capabilities to search and access science data products and specialized services.
History
From early
|
https://en.wikipedia.org/wiki/Menthol%20%28data%20page%29
|
This page provides supplementary chemical data on Menthol.
Material Safety Data Sheet
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Datasheet (MSDS) for this chemical from a reliable source such as SIRI or the links below, and follow its directions.
Baker MSDS (l-form)
Fisher MSDS (DL or racemic form)
Fisher MSDS (l-form)
Ambix MSDS Menthol Eucalyptus ointment
Structure and properties
Thermodynamic properties
Spectral data
References
Chemical data pages
Chemical data pages cleanup
|
https://en.wikipedia.org/wiki/Peirce%27s%20criterion
|
In robust statistics, Peirce's criterion is a rule for eliminating outliers from data sets, which was devised by Benjamin Peirce.
Outliers removed by Peirce's criterion
The problem of outliers
In data sets containing real-numbered measurements, the suspected outliers are the measured values that appear to lie outside the cluster of most of the other data values. The outliers would greatly change the estimate of location if the arithmetic average were to be used as a summary statistic of location. The problem is that the arithmetic mean is very sensitive to the inclusion of any outliers; in statistical terminology, the arithmetic mean is not robust.
In the presence of outliers, the statistician has two options. First, the statistician may remove the suspected outliers from the data set and then use the arithmetic mean to estimate the location parameter. Second, the statistician may use a robust statistic, such as the median statistic.
Peirce's criterion is a statistical procedure for eliminating outliers.
Uses of Peirce's criterion
The statistician and historian of statistics Stephen M. Stigler wrote the following about Benjamin Peirce:
"In 1852 he published the first significance test designed to tell an investigator whether an outlier should be rejected (Peirce 1852, 1878). The test, based on a likelihood ratio type of argument, had the distinction of producing an international debate on the wisdom of such actions (Anscombe, 1960, Rider, 1933, Stigler, 1973a)."
Peirce's criterion is derived from a statistical analysis of the Gaussian distribution. Unlike some other criteria for removing outliers, Peirce's method can be applied to identify two or more outliers.
"It is proposed to determine in a series of observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are as many as such observations. The principle upon which it is proposed to solve this problem is, that the proposed observations should be rejected when the probability of the system of errors obtained by retaining them is less than that of the system of errors obtained by their rejection multiplied by the probability of making so many, and no more, abnormal observations."
Hawkins provides a formula for the criterion.
Peirce's criterion was used for decades at the United States Coast Survey.
"From 1852 to 1867 he served as the director of the longitude determinations of the U. S. Coast Survey and from 1867 to 1874 as superintendent of the Survey. During these years his test was consistently employed by all the clerks of this, the most active and mathematically inclined statistical organization of the era."
Peirce's criterion was discussed in William Chauvenet's book.
Applications
An application for Peirce's criterion is removing poor data points from observation pairs in order to perform a regression between the two observations (e.g., a linear regression). Peirce's criterion does not depend on observation
|
https://en.wikipedia.org/wiki/Reliable%20Server%20Pooling
|
Reliable Server Pooling (RSerPool) is a computer protocol framework for management of and access to multiple, coordinated (pooled) servers. RSerPool is an IETF standard, which has been developed by the IETF RSerPool Working Group and documented in RFC 5351, RFC 5352, RFC 5353, RFC 5354, RFC 5355 and RFC 5356.
Introduction
In the terminology of RSerPool a server is denoted as a Pool Element (PE). In its Pool, it is identified by its Pool Element Identifier (PE ID), a 32-bit number. The PE ID is randomly chosen upon a PE's registration to its pool. The set of all pools is denoted as the Handlespace. In older literature, it may be denoted as Namespace. This denomination has been dropped in order to avoid confusion with the Domain Name System (DNS). Each pool in a handlespace is identified by a unique Pool Handle (PH), which is represented by an arbitrary byte vector. Usually, this is an ASCII or Unicode name of the pool, e.g. "DownloadPool" or "WebServerPool".
Each handlespace has a certain scope (e.g. an organization or company), denoted as Operation Scope. It is explicitly not a goal of RSerPool to manage the global Internet's pools within a single handlespace. Due to the localisation of Operation Scopes, it is possible to keep the handlespace "flat". That is, PHs do not have any hierarchy in contrast to the Domain Name System with its top-level and sub-domains. This constraint results in a significant simplification of handlespace management.
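The relationship between pool handles, pool elements and their identifiers can be illustrated with a small in-memory model. This is only a toy sketch of the data model described above; it does not implement the ASAP or ENRP wire protocols, and the class and method names are assumptions.

import secrets

class Handlespace:
    """Toy handlespace: pool handle (bytes) -> {PE ID -> PE address}."""
    def __init__(self):
        self.pools = {}

    def register(self, pool_handle: bytes, address: str) -> int:
        pe_id = secrets.randbits(32)            # PE ID is a randomly chosen 32-bit number
        self.pools.setdefault(pool_handle, {})[pe_id] = address
        return pe_id

    def deregister(self, pool_handle: bytes, pe_id: int) -> None:
        self.pools.get(pool_handle, {}).pop(pe_id, None)

hs = Handlespace()
pe = hs.register(b"DownloadPool", "10.0.0.1:8000")   # a PE registers itself in a pool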
Within an operation scope, the handlespace is managed by redundant Pool Registrars (PR), also denoted as ENRP servers or Name Servers (NS). PRs have to be redundant in order to avoid a PR to become a Single Point of Failure (SPoF). Each PR of an operation scope is identified by its Registrar ID (PR ID), which is a 32-bit random number. It is not necessary to ensure uniqueness of PR IDs. A PR contains a complete copy of the operation scope's handlespace. PRs of an operation scope synchronize their view of the handlespace using the Endpoint Handlespace Redundancy Protocol (ENRP). Older versions of this protocol use the term Endpoint Namespace Redundancy Protocol; this naming has been replaced to avoid confusion with DNS, but the abbreviation has been kept. Due to handlespace synchronization by ENRP, PRs of an operation scope are functionally equal. That is, if any of the PRs fails, each other PR is able to seamlessly replace it.
Using the Aggregate Server Access Protocol (ASAP), a PE can add itself to a pool or remove itself from one by requesting a registration or deregistration from an arbitrary PR of the operation scope. In case of successful registration, the PR chosen for registration becomes the PE's Home-PR (PR-H). A PR-H not only informs the other PRs of the operation scope about the registration or deregistration of its PEs, it also monitors the availability of its PEs by ASAP Keep-Alive messages. A keep-alive message from a PR-H has to be acknowledged by the PE within a certain time interval. If the PE fa
|
https://en.wikipedia.org/wiki/David%20Eppstein
|
David Arthur Eppstein (born 1963) is an American computer scientist and mathematician. He is a Distinguished Professor of computer science at the University of California, Irvine. He is known for his work in computational geometry, graph algorithms, and recreational mathematics. In 2011, he was named an ACM Fellow.
Biography
Born in Windsor, England, in 1963, Eppstein received a B.S. in Mathematics from Stanford University in 1984, and later an M.S. (1985) and Ph.D. (1989) in computer science from Columbia University, after which he took a postdoctoral position at Xerox's Palo Alto Research Center. He joined the UC Irvine faculty in 1990, and was co-chair of the Computer Science Department there from 2002 to 2005. In 2014, he was named a Chancellor's Professor. In October 2017, Eppstein was one of 396 members elected as fellows of the American Association for the Advancement of Science.
Eppstein is also an amateur digital photographer as well as a Wikipedia editor and administrator with over 200,000 edits.
Research interests
In computer science, Eppstein's research has included work on minimum spanning trees, shortest paths, dynamic graph data structures, graph coloring, graph drawing and geometric optimization. He has published also in application areas such as finite element meshing, which is used in engineering design, and in computational statistics, particularly in robust, multivariate, nonparametric statistics.
Eppstein served as the program chair for the theory track of the ACM Symposium on Computational Geometry in 2001, the program chair of the ACM-SIAM Symposium on Discrete Algorithms in 2002, and the co-chair for the International Symposium on Graph Drawing in 2009.
Selected publications
Republished in
Books
See also
Eppstein's algorithm
References
External links
David Eppstein's profile at the University of California, Irvine
1963 births
Living people
American computer scientists
British emigrants to the United States
Cellular automatists
Columbia School of Engineering and Applied Science alumni
Fellows of the American Association for the Advancement of Science
Fellows of the Association for Computing Machinery
Graph drawing people
Graph theorists
Palo Alto High School alumni
People from Irvine, California
Recreational mathematicians
Stanford University School of Humanities and Sciences alumni
Researchers in geometric algorithms
University of California, Irvine faculty
Science bloggers
Scientists at PARC (company)
American Wikimedians
20th-century American scientists
21st-century American scientists
|
https://en.wikipedia.org/wiki/Cuckoo%20hashing
|
Cuckoo hashing is a scheme in computer programming for resolving hash collisions of values of hash functions in a table, with worst-case constant lookup time. The name derives from the behavior of some species of cuckoo, where the cuckoo chick pushes the other eggs or young out of the nest when it hatches in a variation of the behavior referred to as brood parasitism; analogously, inserting a new key into a cuckoo hashing table may push an older key to a different location in the table.
History
Cuckoo hashing was first described by Rasmus Pagh and Flemming Friche Rodler in a 2001 conference paper. The paper was awarded the European Symposium on Algorithms Test-of-Time award in 2020.
Operations
Cuckoo hashing is a form of open addressing in which each non-empty cell of a hash table contains a key or key–value pair. A hash function is used to determine the location for each key, and its presence in the table (or the value associated with it) can be found by examining that cell of the table. However, open addressing suffers from collisions, which happens when more than one key is mapped to the same cell.
The basic idea of cuckoo hashing is to resolve collisions by using two hash functions instead of only one. This provides two possible locations in the hash table for each key. In one of the commonly used variants of the algorithm, the hash table is split into two smaller tables of equal size, and each hash function provides an index into one of these two tables. It is also possible for both hash functions to provide indexes into a single table.
Lookup
Cuckoo hashing uses two hash tables, T1 and T2. Assuming r is the length of each table, the hash functions for the two tables are h1 and h2, where x is a key from the stored set S and x may be stored only in cell h1(x) of T1 or in cell h2(x) of T2. The lookup operation is as follows:
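In Python-like form (in this sketch, T1 and T2 are lists of length r, h1 and h2 are the two hash functions, and None marks an empty cell; these representation choices are assumptions of the sketch):

def lookup(x, T1, T2, h1, h2):
    """A key can only live in one of its two candidate cells."""
    return T1[h1(x)] == x or T2[h2(x)] == x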
The logical or denotes that the value of the key x is found in either T1 or T2; the lookup therefore takes O(1) time in the worst case.
Deletion
Deletion is performed in O(1) time since no probing is involved (not considering the cost of the shrinking operation if the table is too sparse).
Insertion
When inserting a new item x, the first step is to examine whether cell h1(x) of table T1 is occupied; if it is not, the item is inserted there. If the cell is occupied, the item already stored there (call it y) is evicted and x is inserted at T1[h1(x)]. The evicted item y is then inserted into the other table by the same procedure, and the process continues until an empty position is found. To avoid a possible infinite loop, a maximum number of iterations (MaxLoop) is specified: if the number of iterations exceeds this fixed threshold, both hash tables T1 and T2 are rehashed with new hash functions and the insertion procedure is repeated.
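A Python sketch of this procedure, under the same representation assumptions as the lookup sketch above (None marks an empty cell, max_loop is the rehash threshold, and rehash is assumed to rebuild both tables and re-insert the pending key):

def insert(x, T1, T2, h1, h2, max_loop, rehash):
    """Insert key x, displacing residents until an empty nest is found."""
    # Assumes x is not already present (a prior lookup can guarantee this).
    for _ in range(max_loop):
        pos = h1(x)
        x, T1[pos] = T1[pos], x     # swap x with whatever occupies its nest in T1
        if x is None:               # the nest was empty: insertion finished
            return
        pos = h2(x)
        x, T2[pos] = T2[pos], x     # the evicted key tries its nest in T2
        if x is None:
            return
    rehash(x)                       # threshold exceeded: rebuild both tables with new hash functions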
During the displacement loop, the "cuckoo approach" of kicking out whichever key already occupies the target cell takes place until every key has its own "nest", i.e. until the item is inserted into a spot in one of the two tables; the notation e
|
https://en.wikipedia.org/wiki/Michael%20Bywater
|
Michael Bywater (born 11 May 1953) is an English non-fiction writer and broadcaster. He has worked for many London newspapers and periodicals and contributed to the design of computer games.
Biography
Bywater was educated at the independent Nottingham High School and at Corpus Christi College, Cambridge. He was a long-running columnist for The Independent on Sunday and an early futurist for The Observer. He spent ten years on the staff of Punch, where he wrote a regular computer column and the anonymous "Bargepole" column. He wrote regularly for The Times and had been a contributing editor to Cosmopolitan and Woman's Journal. He also writes regularly on high-tech subjects for The Daily Telegraph and a wide variety of technology magazines. He is termed a cultural critic for the New Statesman. In 1998 he was part of BBC Radio 4's five-part political satire programme Cartoons, Lampoons, and Buffoons. He also supervises on the Tragedy paper for a number of Cambridge colleges and in 2006 was Writer-in-Residence at Magdalene College, Cambridge. Bywater was the inspiration for his close friend Douglas Adams's character Dirk Gently.
Bywater was previously identified as a young fogey. In The Young Fogey Handbook (Poole, Dorset: Javelin Books, 1985), author Suzanne Lowry writes: "Michael Bywater, 30-year old Punch columnist and former trendy who once worked in films, made bold to criticise Burberrys for the inferior quality of their product - the trench coats are not what they were in the days of the trenches. Burberrys riposted that indeed they could live up to their past, and made Bywater a coat to the 1915 design devised by Kitchener and Burberry – complete with camel hair lining to protect a gentleman officer's flesh on the field..."
Games, books, music
In the mid-1980s, Bywater co-designed and co-wrote several interactive fiction games. He collaborated with Douglas Adams on Bureaucracy and the never-completed Milliways: The Restaurant At The End Of The Universe for Infocom, and with Anita Sinclair on Jinxter for Magnetic Scrolls. He revisited computer games in the late 1990s as a member of the writing team on another Douglas Adams project, Starship Titanic.
Bywater's book Lost Worlds, on the human tendency to nostalgia, appeared in 2004. His subsequent Big Babies, on the infantilisation of Western culture, was published in November 2006. A book on his journeys round the Australian Outback in a Cessna 172 continues to be a work in progress, due out "soon".
Bywater played church organ with Gary Brooker for the "Within Our House" charity concert.
Personal life
He has one daughter.
References
External links
Essay taken from Bywater's book, "Big Babies"
1953 births
British writers
Fellows of Magdalene College, Cambridge
Living people
People educated at Nottingham High School
|
https://en.wikipedia.org/wiki/Pool%20Registrar
|
In computing, a Pool Registrar (PR) is a component of the reliable server pooling (RSerPool) framework which manages a handlespace. PRs are also denoted as ENRP server or Name Server (NS).
The responsibilities of a PR are the following:
Register Pool Elements into a handlespace,
Deregister Pool Elements from a handlespace,
Monitor Pool Elements by keep-alive messages,
Provide handle resolution (i.e. server selection) to Pool Users,
Audit the consistency of a handlespace between multiple PRs,
Synchronize a handlespace with another PR.
Standards Documents
Aggregate Server Access Protocol (ASAP)
Endpoint Handlespace Redundancy Protocol (ENRP)
Aggregate Server Access Protocol (ASAP) and Endpoint Handlespace Redundancy Protocol (ENRP) Parameters
Reliable Server Pooling Policies
External links
Thomas Dreibholz's Reliable Server Pooling (RSerPool) Page
IETF RSerPool Working Group
Internet protocols
Internet Standards
|
https://en.wikipedia.org/wiki/Compaq%20Presario
|
Presario is a discontinued line of consumer desktop computers and notebooks originally produced by Compaq. The Presario family of computers was introduced in September 1993.
In the mid-1990s, Compaq began manufacturing PC monitors under the Presario brand. A series of all-in-one units, containing both the PC and the monitor in the same case, were also released.
After Compaq merged with HP in 2002, the Presario line of desktops and laptops were sold concurrently with HP’s other products, such as the HP Pavilion. The Presario laptops subsequently replaced the then-discontinued HP OmniBook line of notebooks around that same year.
The Presario brand name continued to be used for low-end home desktops and laptops from 2002 up until the Compaq brand name was discontinued by HP in 2013.
Desktop PC series
Compaq Presario 2100
Compaq Presario 2200
Compaq Presario 2240
Compaq Presario 2254
Compaq Presario 2256
Compaq Presario 2285V
Compaq Presario 2286
Compaq Presario 2288
Compaq Presario 4108
Compaq Presario 4110
Compaq Presario 4160
Compaq Presario 4505
Compaq Presario 4508
Compaq Presario 4528
Compaq Presario 4532
Compaq Presario 4540
Compaq Presario 4600
Compaq Presario 4620
Compaq Presario 4712
Compaq Presario 4800
Compaq Presario 5000 series
Compaq Presario 5000
Compaq Presario 5006US
Compaq Presario 5008US
Compaq Presario 5000A
Compaq Presario 5000T
Compaq Presario 5000Z
Compaq Presario 5010
Compaq Presario 5030
Compaq Presario 5050
Compaq Presario 5080
Compaq Presario 5100 series
Compaq Presario 5150
Compaq Presario 5170
Compaq Presario 5184
Compaq Presario 5185
Compaq Presario 5190
Compaq Presario 5200 series
Compaq Presario 5202
Compaq Presario 5222
Compaq Presario 5240
Compaq Presario 5280
Compaq Presario 5285
Compaq Presario 5360
Compaq Presario 5400
Compaq Presario 5460
Compaq Presario 5477
Compaq Presario 5500
Compaq Presario 5520
Compaq Presario 5599
Compaq Presario 5600 series
Compaq Presario 5660
Compaq Presario 5670
Compaq Presario 5686
Compaq Presario 5690
Compaq Presario 5695
Compaq Presario 5600i
Compaq Presario 5700N
Compaq Presario 5710
Compaq Presario 5726
Compaq Presario 5868
Compaq Presario 5900T
Compaq Presario 6000 series
Compaq Presario 6300US
Compaq Presario 6310US
Compaq Presario 6320US
Compaq Presario 7000 series
Compaq Presario 7000US
Compaq Presario 7002US
Compaq Presario 7003US
Compaq Presario 7006US
Compaq Presario 7000T
Compaq Presario 7000Z
Compaq Presario 7470
Compaq Presario 7478
Compaq Presario 7594
Compaq Presario 7596
Compaq Presario 7940
Compaq Presario 8000 series
Compaq Presario 8017US
Compaq Presario 8022US
Compaq Presario 8000T
Compaq Presario 8000Z
Compaq Presario 9232
Compaq Presario 9234
Compaq Presario 9546
Compaq Presario CQ3180AN
Compaq Presario CQ5814
Compaq Presario EZ2000 series
Compaq Presario EZ2000
Compaq Presario EZ2200
Compaq Presario SG1008IL
Compaq Presario SG3730IL
Compaq Presario SR1000 series
|
https://en.wikipedia.org/wiki/Constraint%20learning
|
In constraint satisfaction backtracking algorithms, constraint learning is a technique for improving efficiency. It works by recording new constraints whenever an inconsistency is found. This new constraint may reduce the search space, as future partial evaluations may be found inconsistent without further search. Clause learning is the name of this technique when applied to propositional satisfiability.
Definition
Backtracking algorithms work by choosing an unassigned variable and recursively solving the problems obtained by assigning a value to this variable. Whenever the current partial solution is found inconsistent, the algorithm goes back to the previously assigned variable, as dictated by the recursion. A constraint learning algorithm differs in that it tries to record some information, before backtracking, in the form of a new constraint. This can reduce the further search because the subsequent search may encounter another partial solution that is inconsistent with this new constraint. If the algorithm has learned the new constraint, it will backtrack from this solution, whereas the original backtracking algorithm would continue searching.
If the partial solution x1=a1, ..., xk=ak is inconsistent, the problem instance implies the constraint stating that x1=a1, ..., xk=ak cannot all be true at the same time. However, recording this constraint is not useful, as this partial solution will not be encountered again due to the way backtracking proceeds.
On the other hand, if a subset of this evaluation is inconsistent, the corresponding constraint may be useful in the subsequent search, as the same subset of the partial evaluation may occur again in the search. For example, the algorithm may encounter an evaluation extending the subset of the previous partial evaluation. If this subset is inconsistent and the algorithm has stored this fact in the form of a constraint, no further search is needed to conclude that the new partial evaluation cannot be extended to form a solution.
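A toy Python sketch of the idea, assuming a simple encoding of constraints as (scope, predicate) pairs: whenever a constraint is violated, the offending sub-assignment is stored as a learned "nogood" and checked before exploring later branches. In this simplified version the learned constraint only spares re-evaluating the original predicate; real solvers extract much smaller conflict sets, as discussed below.
# Sketch of backtracking with a very simple form of constraint learning.
# The problem encoding and the learning rule are illustrative assumptions,
# not a production solver.

def solve(variables, domains, constraints, assignment=None, learned=None):
    assignment = assignment or {}
    learned = learned if learned is not None else []

    def violated(cons):
        scope, pred = cons
        if all(v in assignment for v in scope):
            return not pred(*(assignment[v] for v in scope))
        return False

    # Check original constraints; on conflict, learn the offending sub-assignment.
    for cons in constraints:
        if violated(cons):
            scope, _ = cons
            learned.append({v: assignment[v] for v in scope})  # the nogood
            return None
    # Check previously learned nogoods (prunes without re-testing predicates).
    for nogood in learned:
        if all(assignment.get(v) == val for v, val in nogood.items()):
            return None

    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        result = solve(variables, domains, constraints, assignment, learned)
        if result is not None:
            return result
        del assignment[var]
    return None

# Example: three pairwise-different variables; early conflicts are recorded.
variables = ["x", "y", "z"]
domains = {"x": [1, 2], "y": [1, 2], "z": [1, 2, 3]}
constraints = [(("x", "y"), lambda a, b: a != b),
               (("y", "z"), lambda a, b: a != b),
               (("x", "z"), lambda a, b: a != b)]
print(solve(variables, domains, constraints))   # {'x': 1, 'y': 2, 'z': 3}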
Efficiency of constraint learning
The efficiency gain of constraint learning is balanced between two factors. On one hand, the more often a recorded constraint is violated, the more often backtracking avoids doing useless search. Small inconsistent subsets of the current partial solution are usually better than large ones, as they correspond to constraints that are easier to violate. On the other hand, finding a small inconsistent subset of the current partial evaluation may require time, and the benefit may not be balanced by the subsequent reduction of the search time.
Size is however not the only feature of learned constraints to take into account. Indeed, a small constraint may be useless in a particular state of the search space because the values that violate it will not be encountered again. A larger constraint whose violating values are more similar to the current partial assignment may be preferred in such cases.
Various constraint learning techniques exist, differing in strictness of
|
https://en.wikipedia.org/wiki/Transfers%20per%20second
|
In computer technology, transfers per second and its more common secondary terms gigatransfers per second (abbreviated as GT/s) and megatransfers per second (MT/s) are informal terms that refer to the number of operations transferring data that occur in each second in some given data-transfer channel. It is also known as the sample rate, i.e. the number of data samples captured per second, each sample normally occurring at the clock edge. The terms are neutral with respect to the method of physically accomplishing each such data-transfer operation; nevertheless, they are most commonly used in the context of transmission of digital data. 1 MT/s is 10^6, or one million, transfers per second; similarly, 1 GT/s means 10^9, or equivalently in the US/short scale, one billion transfers per second.
Units
The choice of the symbol T for transfer conflicts with the International System of Units, in which T stands for the tesla unit of magnetic flux density (so "Megatesla per second" would be a reasonable unit to describe the rate of a rapidly changing magnetic field, such as in a pulsed field magnet or kicker magnet).
These terms alone do not specify the bit rate at which binary data is being transferred, because they do not specify the number of bits transferred in each transfer operation (known as the channel width or word length). In order to calculate the data transmission rate, one must multiply the transfer rate by the information channel width. For example, a data bus eight bytes (64 bits) wide by definition transfers eight bytes in each transfer operation; at a transfer rate of 1 GT/s, the data rate would be 8 × 10^9 B/s, i.e. 8 GB/s, or approximately 7.45 GiB/s. The bit rate for this example is 64 Gbit/s (8 × 8 × 10^9 bit/s).
The formula for a data transfer rate is: Channel width (bits/transfer) × transfers/second = bits/second.
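A short Python calculation reproducing the figures of the example above (the values are those of the 64-bit-wide, 1 GT/s case):
# Applying the formula: channel width (bits/transfer) x transfers/second = bits/second
channel_width_bits = 64          # 8 bytes per transfer
transfers_per_second = 1e9       # 1 GT/s

bits_per_second = channel_width_bits * transfers_per_second
bytes_per_second = bits_per_second / 8

print(f"{bits_per_second:.0f} bit/s")            # 64 Gbit/s
print(f"{bytes_per_second / 1e9:.0f} GB/s")      # 8 GB/s (decimal)
print(f"{bytes_per_second / 2**30:.2f} GiB/s")   # about 7.45 GiB/s (binary)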
Expanding the width of a channel, for example that between a CPU and a northbridge, increases data throughput without requiring an increase in the channel's operating frequency (measured in transfers per second). This is analogous to increasing throughput by increasing bandwidth but leaving latency unchanged.
The units usually refer to the "effective" number of transfers, or transfers perceived from "outside" of a system or component, as opposed to the internal speed or rate of the clock of the system. One example is a computer bus running at double data rate where data is transferred on both the rising and falling edge of the clock signal. If its internal clock runs at 100 MHz, then the effective rate is 200 MT/s, because there are 100 million rising edges per second and 100 million falling edges per second of a clock signal running at 100 MHz.
Buses like SCSI and PCI fall in the megatransfer range of data transfer rate, while newer bus architectures like the PCI-X, PCI Express, Ultra Path, and HyperTransport / Infinity Fabric operate at the gigatransfer rate.
See also
Data-rate units
Data transmission, also known as digi
|
https://en.wikipedia.org/wiki/A4018%20road
|
The A4018 is an A-road connecting the city centre of Bristol to the M5 motorway at Cribbs Causeway. It is one of the four principal roads which link central Bristol to the motorway network (the others being the M32 motorway, the A38 and the Portway).
Route
The A4018 runs from a junction with the A4 and A38 at The Centre to junction 17 of the M5 motorway at Cribbs Causeway. The route includes Park Street and Whiteladies Road. It then passes over part of Durdham Down on Westbury Road, then along Falcondale Road and Passage Road through Westbury-on-Trym and Brentry. The final part of the A4018 is Cribbs Causeway, near Catbrain. Part of the road forms the boundary of the Westbury-on-Trym electoral ward in Bristol.
History
The original route of the A4018 went from Bristol to Avonmouth via Durdham Down and Shirehampton Road, the main road between Bristol and Avonmouth before the Portway was opened in 1926. By the 1940s only the route from the centre of Bristol to Durdham Down was designated the A4018, and the remainder of the route had been redesignated the B4054. In 1959 Passage Road was widened and rebuilt, and by 1962 the route of the A4018 was extended from Durdham Down to Cribbs Causeway along the former route of the B4055 (Westbury Road), unclassified roads (Falcondale Road and Passage Road) and a further part of the B4055 (Cribbs Causeway), linking with the New Filton Bypass which ran from Cribbs Causeway to the A38 north of Patchway. In December 1971 the New Filton Bypass was incorporated into the M5 motorway, and the A4018, by then dualled from Cribbs Causeway to Westbury-on-Trym, became the principal road linking the motorway to west Bristol.
Places of interest
Sites close to the route of the road include Blaise Castle, an Iron Age hill fortification.
References
Roads in England
Roads in Bristol
|
https://en.wikipedia.org/wiki/Ken%20Hamblin
|
Ken Loronzo Hamblin II (born October 22, 1940), the self-titled Black Avenger, was host of the Ken Hamblin Show, which was syndicated nationally on Entertainment Radio Networks. His show peaked in the 1990s, but he left the air, without warning, in July 2003 due to a contractual dispute with his syndicator, the American Views Radio Network. Hamblin, based in Denver, Colorado, is the author of the books Pick a Better Country: An Unassuming Colored Guy Speaks His Mind about America and Plain Talk and Common Sense from the Black Avenger.
Early career
The child of immigrant parents from Barbados, Hamblin is a policeman's son. He served in the United States Army's 101st Airborne Division before becoming a photographer for the Detroit Free Press. In the late 1960s Hamblin was a producer and film cameraman with the public television channel in Detroit, WTVS, Channel 56. An event Hamblin captured exclusively was the release of poet John Sinclair from prison after serving time for marijuana possession. Hamblin began his radio career in the 1970s. Hamblin has said he was once sympathetic to the radical left, including the Black Panthers, and gave them favorable coverage. He eventually came to the opinion the left had failed to bring about the type of America it spoke of, and he began to move to the conservative side of the spectrum. Hamblin is a licensed fixed-wing pilot and a motorcycle owner. He is a father and grandfather.
The Ken Hamblin Show
Hamblin had a long-running local talk program on powerful KOA radio in Denver, a clear-channel station heard across the western and central United States. Hamblin hosted the early evening shift, which he worked the evening of June 18, 1984, when Alan Berg, one of the station's biggest and most controversial hosts, was gunned down. He gained national attention when his show, then carried on another Denver radio station, was broadcast on C-SPAN during the early 1990s. He was heard on KNUS and KXKL radio in Denver, as well as across the nation. After his show was syndicated, he was heard across the United States on about 200 radio stations.
In 1999, Hamblin was named one of Colorado's Top 100 most influential media personalities.
Hamblin's show had several unique features: playing various versions of "The Star-Spangled Banner" at the beginning of the show; playing "Taps" for fallen law enforcement officers; and announcing the execution of convicts on death row, often with a clip from the movie Unforgiven, saying "It's a hell of a thing killin' a man; you take away all he's got, and all he's ever gonna have." The execution segment was also notable for featuring "Another One Bites the Dust" by Queen. Hamblin frequently referred to liberals as "egg-sucking dogs", and sometimes challenged listeners to call in to "Name one major American city that improved morally, socially, and economically after the city elected a liberal black mayor ('You can't do it')". He has also been an outspoken critic of Louis Farrakhan and the N
|
https://en.wikipedia.org/wiki/MIMIC
|
MIMIC, known in capitalized form only, is a former simulation computer language developed in 1964 by H. E. Petersen, F. J. Sansom and L. M. Warshawsky of the Systems Engineering Group within the Air Force Materiel Command at Wright-Patterson AFB in Dayton, Ohio, United States. It is an expression-oriented continuous block simulation language, but capable of incorporating blocks of FORTRAN-like algebra.
MIMIC is a further development of MIDAS (Modified Integration Digital Analog Simulator), which represented analog computer design. Written completely in FORTRAN except for one routine in COMPASS, and running on Control Data supercomputers, MIMIC is capable of solving much larger simulation models.
With MIMIC, ordinary differential equations describing mathematical models in several scientific disciplines, such as engineering, physics, chemistry, biology, economics and the social sciences, can easily be solved by numerical integration, and the results of the analysis are listed or drawn in diagrams. It also enables the analysis of nonlinear dynamic conditions.
The MIMIC software package, written as FORTRAN overlay programs, executes input statements of the mathematical model in six consecutive passes. Simulation programs written in MIMIC are compiled rather than interpreted. The core of the simulation package is a variable-step numerical integrator using the fourth-order Runge-Kutta method. Many useful functions related to electrical circuit elements exist besides the mathematical functions found in most scientific programming languages. There is no need to sort the statements in order of the dependencies among the variables, since MIMIC does this internally.
Parts of the software organized in overlays are:
MIMIN (input) – reads in the user simulation program and data,
MIMCO (compiler) – compiles the user program and creates an in-core array of instructions,
MIMSO (sort) – sorts the instructions array according to the dependencies of the variables,
MIMAS (assembler) – converts the BCD instructions into machine-oriented code,
MIMEX (execute) – executes the user program by integrating,
MIMOUT (output) – puts out the data as a list or diagram of data.
Example
Problem
Consider a predator-prey model from the field of marine biology to determine the dynamics of fish and shark populations. As a simple model, we choose the Lotka–Volterra equation and the constants given in a tutorial.
If
f(t): Fish population over time (fish)
s(t): Shark population over time (sharks)
df/dt or ḟ: growth rate of fish population (fish/year)
ds/dt or ṡ: growth rate of shark population (sharks/year)
α: growth rate of fish in the absence of sharks (1/year)
β: death rate per encounter of fish with sharks (1/sharks and year)
γ: death rate of sharks in the absence of their prey, fish (1/year)
ε: efficiency of turning predated fish into sharks (sharks/fish)
then
ḟ = α f − β f s
ṡ = ε β f s − γ s
with initial conditions
f(0) = f0, s(0) = s0.
The problem's constants are given as:
f0 = 600 fish
s0 = 50 sharks
α = 0.7 fish/year
β = 0.007 fish/shark and year
γ = 0.5 shar
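As a rough illustration of the kind of numerical integration MIMIC performs, the following Python sketch integrates the same predator-prey model with the classical fourth-order Runge-Kutta method. It is not a MIMIC program; the step size, the efficiency value ε and the unit of γ are assumptions made here for illustration.
# Sketch (in Python, not MIMIC) of integrating the predator-prey model above
# with the classical fourth-order Runge-Kutta method.
ALPHA, BETA, GAMMA, EPS = 0.7, 0.007, 0.5, 0.1   # EPS (efficiency) assumed

def derivatives(f, s):
    df = ALPHA * f - BETA * f * s          # fish growth minus predation losses
    ds = EPS * BETA * f * s - GAMMA * s    # shark growth from predation minus deaths
    return df, ds

def rk4_step(f, s, h):
    k1f, k1s = derivatives(f, s)
    k2f, k2s = derivatives(f + h / 2 * k1f, s + h / 2 * k1s)
    k3f, k3s = derivatives(f + h / 2 * k2f, s + h / 2 * k2s)
    k4f, k4s = derivatives(f + h * k3f, s + h * k3s)
    f += h / 6 * (k1f + 2 * k2f + 2 * k3f + k4f)
    s += h / 6 * (k1s + 2 * k2s + 2 * k3s + k4s)
    return f, s

f, s = 600.0, 50.0          # initial populations
h, t = 0.01, 0.0            # step size in years (assumed)
while t < 30.0:
    f, s = rk4_step(f, s, h)
    t += h
print(f"after {t:.0f} years: {f:.0f} fish, {s:.0f} sharks")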
|
https://en.wikipedia.org/wiki/GDB%20Human%20Genome%20Database
|
The GDB Human Genome Database was a community curated collection of human genomic data. It was a key database in the Human Genome Project and was in service from 1989 to 2008.
History
In 1989 the Howard Hughes Medical Institute provided funding to establish a central repository for human genetic mapping data. This project ultimately resulted in the creation of the GDB Human Genome DataBase in September 1990. In order to ensure a high degree of quality, records within GDB were subjected to a curation process by human genetics specialists, including the HUGO Gene Nomenclature Committee.
Established under the leadership of Peter Pearson and Dick Lucier, GDB received financial support from the US Department of Energy and the National Institutes of Health. Located at the Johns Hopkins University School of Medicine, GDB became a source of high quality mapping data which were made available both online as well as through numerous printed publications. The project was supported internationally by the EU, Japan, and other countries.
The GDB had several directors in its time: Peter Pearson, David T. Kingsbury, Stanley Letovsky, Peter Li, and A. Jamie Cuticchia.
Funds from the US Department of Energy that were previously allocated for GDB were transferred in 1998 due to the shift in emphasis in the human genome project. However that same year, A. Jamie Cuticchia obtained funding from Canadian public and private sources to continue the operations of GDB. While the data curation continued to be performed at Johns Hopkins, GDB central operations were moved to The Hospital for Sick Children (HSC) in Toronto, Ontario, Canada. In November 2001, the HSC fired Cuticchia due to a dispute over the GDB website domain name.
In 2003 RTI International became the new host for GDB where it continued to be maintained as a public resource; GDB was closed in 2008 after control of the project reverted to Johns Hopkins.
References
External links
https://web.archive.org/web/19970605132915/http://www.gdb.org/ archived version of the GDB website (1997)
Genome databases
|
https://en.wikipedia.org/wiki/Pure-FTPd
|
Pure-FTPd is a free (BSD license) FTP Server with a strong focus on software security. It can be compiled and run on a variety of Unix-like computer operating systems including Linux, OpenBSD, NetBSD, FreeBSD, DragonFly BSD, Solaris, Tru64, Darwin, Irix and HP-UX. It has also been ported to Android.
History
Pure-FTPd is based on Troll-FTPd, written by Arnt Gulbrandsen while he was working at Trolltech from 1995 to 1999. When Gulbrandsen stopped maintaining Troll-FTPd, Frank Denis created Pure-FTPd in 2001, and it is currently developed by a team led by Denis.
See also
List of FTP server software
vsftpd
References
External links
Official Webpage
FTP server software
Free server software
Free file transfer software
IRIX software
Software using the BSD license
Free software programmed in C
|
https://en.wikipedia.org/wiki/Algebraic%20specification
|
Algebraic specification is a software engineering technique for formally specifying system behavior. It was a very active subject of computer science research around 1980.
Overview
Algebraic specification seeks to systematically develop more efficient programs by:
formally defining types of data, and mathematical operations on those data types
abstracting implementation details, such as the size of representations (in memory) and the efficiency of obtaining outcome of computations
formalizing the computations and operations on data types
allowing for automation by formally restricting operations to this limited set of behaviors and data types.
An algebraic specification achieves these goals by defining one or more data types, and specifying a collection of functions that operate on those data types. These functions can be divided into two classes:
Constructor functions: Functions that create or initialize the data elements, or construct complex elements from simpler ones. The set of available constructor functions is implied by the specification's signature. Additionally, a specification can contain equations defining equivalences between the objects constructed by these functions. Whether the underlying representation is identical for different but equivalent constructions is implementation-dependent.
Additional functions: Functions that operate on the data types, and are defined in terms of the constructor functions.
Examples
Consider a formal algebraic specification for the boolean data type.
One possible algebraic specification may provide two constructor functions for the data-element: a true constructor and a false constructor. Thus, a boolean data element could be declared, constructed, and initialized to a value. In this scenario, all other connective elements, such as XOR and AND, would be additional functions. Thus, a data element could be instantiated with either "true" or "false" value, and additional functions could be used to perform any operation on the data element.
Alternatively, the entire system of boolean data types could be specified using a different set of constructor functions: a false constructor and a not constructor. In that case, an additional function true could be defined to yield the value not false, and the equation true = not(false) should be added.
The algebraic specification therefore describes all possible states of the data element, and all possible transitions between states.
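A minimal Python sketch of the second specification is given below: false and not are the constructors, true is an additional function, and equivalent constructions are identified by normalising terms with the equation not(not(b)) = b. Both the encoding and that extra equation are illustrative assumptions rather than the output of any algebraic-specification tool.
# Sketch of the boolean specification with constructors false and not.
from dataclasses import dataclass

class Bool:                # base class for boolean terms
    pass

@dataclass(frozen=True)
class False_(Bool):        # constructor: false
    pass

@dataclass(frozen=True)
class Not(Bool):           # constructor: not
    arg: Bool

def true() -> Bool:        # additional function: true = not(false)
    return Not(False_())

def normalise(term: Bool) -> Bool:
    """Apply the assumed equation not(not(b)) = b until no longer possible."""
    if isinstance(term, Not):
        inner = normalise(term.arg)
        if isinstance(inner, Not):
            return inner.arg
        return Not(inner)
    return term

# Two syntactically different constructions of the same value are equal
# after normalisation:
assert normalise(Not(Not(true()))) == normalise(true())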
For a more complicated example, the integers can be specified (among many other ways, and choosing one of the many formalisms) with two constructors
1 : Z
(_ - _) : Z × Z -> Z
and three equations:
(1 - (1 - p)) = p
((1 - (n - p)) - 1) = (p - n)
((p1 - n1) - (n2 - p2)) = (p1 - (n1 - (p2 - n2)))
It is easy to verify that the equations are valid, given the usual interpretation of the binary "minus" function. (The variable names have been chosen to hint at positive and negative contributions to the value.) Wi
|
https://en.wikipedia.org/wiki/T.120
|
T.120 is a suite of point-to-multipoint communication protocols for teleconferencing, videoconferencing, and computer-supported collaboration. It provides for application sharing, online chat, file sharing, and other functions. The protocols are standardised by the ITU Telecommunication Standardization Sector (ITU-T).
T.120 has been implemented in various real-time collaboration programmes, including WebEx and NetMeeting. IBM Sametime switched from the T.120 protocols to HTTP(S) in version 8.5.
The prefix T designates the ITU subcommittee that developed the standard, but it is not an abbreviation. The ITU (re)assigns these prefixes to committees incrementally and in alphabetic order.
The T.123 standard specifies that T.120 protocols use network port 1503 when communicating over TCP/IP.
Components
See also
References
External links
ITU-T recommendations
ITU-T T Series Recommendations
|
https://en.wikipedia.org/wiki/Computer%20Consoles%20Inc.
|
Computer Consoles Inc. or CCI was a telephony and computer company located in Rochester, New York, United States, which did business first as a private, and then ultimately a public company from 1968 to 1990. CCI provided worldwide telephone companies with directory assistance equipment and other systems to automate various operator and telephony services, and later sold a line of 68k-based Unix computers and the Power 6/32 Unix supermini.
History
Computer Consoles Inc. (CCI, incorporated May 20, 1968) was founded by three Xerox employees, Edward H. Nutter, Alfred J. Moretti, and Jeffrey Tai, to develop one of the earliest versions of a smart computer terminal, principally for the telephony market. Raymond J. Hasenauer (Manufacturing), Eiji Miki (Electronic design), Walter Ponivas (Documentation) and James M. Steinke (Mechanical design) joined the company at its inception. Due to the state of the art in electronics at the time, this smart terminal was the size of an average sized office desk.
Automating Operator Services
Due to the success of the smart computer terminal, and the expertise the company gained in understanding Operator Services, the company started development programs to offer networked computer systems that provided contract managed access time, specified as a guaranteed number of seconds to paint the operator's first screen of information, to various telephony databases such as directory assistance and intercept messages. The largest such system was designed and installed for British Telecom to provide initially Directory Assistance throughout Great Britain and Ireland. These systems combined Digital Equipment Corporation PDP-11 computers with custom hardware and software developed by CCI.
Automatic Voice Response
To provide higher levels of automation to operator services, CCI introduced in the early 1980s various Automatic Voice Response (AVR) systems tightly integrated with its popular Directory Assistance systems. AVR provided voice response of the customer requested data, almost universally starting the prompt with a variant of the phrase, "The number is". Early systems were based on very small vocabulary synthesised speech chips, follow-on systems utilized 8-bit PCM, and later ADPCM voice playback using audio authored either by CCI or the local phone company.
Digital Switching
To provide even higher levels of automation, CCI started a very aggressive program in the early 1980s to develop a PCM digital telephone switching system targeted for automated, user defined call scenarios. Initial installations handled intercept and calling card calls by capturing multi-frequency and DTMF audio band signaling via the DSP based multi-frequency receiver board. Later systems added speaker independent speech recognition via a quad digital audio processor board to initially automate collect calls.
PERPOS, Perpetual Processing Operating System
To provide better control over transaction processing, significant improvements in fa
|
https://en.wikipedia.org/wiki/Mighty%20Mike
|
Mighty Mike may refer to:
A re-release of the computer game Power Pete
Michael van Gerwen, darts player
Mighty Mike McGee, slam poet
Mike Anchondo, boxer
Mike Arnaoutis, boxer
Mike Van Sant, drag racer
Mike Cuozzo, saxophonist
"Mighty Mike C", a member of the Fearless Four
Mighty Mike (TV series), a French CGI-animated series
|
https://en.wikipedia.org/wiki/Job%20queue
|
In system software, a job queue (also batch queue or input queue) is a data structure maintained by job scheduler software containing jobs to run.
Users submit their programs that they want executed, "jobs", to the queue for batch processing.
The scheduler software maintains the queue as the pool of jobs available for it to run.
Multiple batch queues might be used by the scheduler to differentiate types of jobs depending on parameters such as:
job priority
estimated execution time
resource requirements
The use of a batch queue gives these benefits:
sharing of computer resources among many users
time-shifts job processing to when the computer is less busy
avoids idling the compute resources without minute-by-minute human supervision
allows around-the-clock high utilization of expensive computing resources
Any process that comes to the CPU should wait in a queue.
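A minimal Python sketch of such a queue, ordering jobs by the parameters listed above (priority first, then estimated execution time); the field names and the submit/next_job interface are illustrative assumptions rather than any particular scheduler's API.
# Minimal sketch of a batch/job queue ordered by priority, then estimated time.
import heapq
import itertools

class JobQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker, preserves submission order

    def submit(self, name, priority, est_seconds):
        # Lower numbers run first for both priority and estimated time.
        heapq.heappush(self._heap, (priority, est_seconds, next(self._counter), name))

    def next_job(self):
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[-1]

queue = JobQueue()
queue.submit("nightly-backup", priority=5, est_seconds=3600)
queue.submit("payroll-run", priority=1, est_seconds=600)
print(queue.next_job())   # payroll-run (higher priority, i.e. lower number)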
See also
Command pattern
Command queue
Job scheduler
Priority queue
Task queue
Job scheduling
|
https://en.wikipedia.org/wiki/List%20of%20former%20WB%20affiliates
|
This is a list of stations which were affiliated with The WB in the United States at the time of the network's closure. The WB shut down September 17, 2006. Former affiliates of The WB became affiliates of The CW, MyNetworkTV, another network, reverted to independent status, or shut down entirely. Some WB affiliates dropped WB programming on September 5, 2006 in favor of MyNetworkTV.
From January 1995 to September 2006, Tribune Broadcasting was an investor in The WB, along with the Warner Bros. division of Time Warner. Tribune held an initial 12.5 percent stake in the network at its launch, and later increased it to 22 percent; most of Tribune's television properties were key WB affiliates but not owned-and-operated stations of the network as Time Warner had controlling interest in it. On January 24, 2006, Warner Bros. Television announced that they would merge The WB with the CBS-owned United Paramount Network to form a new programming service called the CW. All but three of Tribune's WB stations joined the CW on September 18, 2006, through ten-year agreements. Tribune does not have an ownership interest in the CW. In late March 2008 Tribune announced that San Diego affiliate KSWB-TV would switch its network affiliation to Fox in August of that year. The future status of the CW affiliation in San Diego remained unclear until early July when the network named the soon-to-be-displaced Fox affiliate, Tijuana-licensed XETV, as its new affiliate. Stations in bold are Tribune owned and operated stations.
Alabama
Birmingham WTTO1 21/WDBB 171 (1997-2006)
Dothan WBDO Cable 3/63 (1998-2006)
Huntsville/Decatur/Florence WAWB2 (no analog channel; broadcast only on digital subchannel of WZDX and on area cable systems) (1999-2006)
Mobile WBQP-CD 12 (1995-1996)
Mobile WFGX3 35 (1996-2001)
Mobile WBPG1 55 (2001-2006)
Montgomery WBMY Cable 14
Alaska
Anchorage KWBX Cable Channel 9 (1998-2006)
Fairbanks KWBX
Juneau KWJA
Arizona
Phoenix KTVK3 3 (January-September 1995)
Phoenix KASW1 61 (September 1995-2006)
Tucson KWBA1 58 (1998-2006)
Yuma KWUB Cable 6
Arkansas
Camden KKYK-TV 49 (1999-2001)
Fort Smith KBBL-TV2 34 (2004-2006)
Jonesboro KJOS Cable 60
Little Rock KKYK-LP 20 (1997-1999)
Little Rock KWBF2 42 (2001-2006)
California
Bakersfield KWFB cable 12 (1998-2006)
Chico KIWB cable 10 (1998-2006)
Eureka KWBT
Los Angeles KTLA-TV1 5
Monterey KMWB cable 14
Palm Springs KCWB
San Diego KSWB-TV1 69 (cable 5)
San Francisco-Oakland-San Jose KOFY-TV3 20
San Francisco-Oakland-San Jose KNTV 11 (2000-2001)
Sacramento-Stockton KMAX-TV 31 (1995-1998)
Sacramento-Stockton KQCA-TV2 58 (1998-2006)
Merced-Fresno KGMC 43 (1995-1997)
Clovis-Fresno KNSO 51 (1998-2001)
Sanger-Fresno KFRE-TV1 59 (2001-2006)
Santa Barbara/Santa Maria/San Luis Obispo KWCA Cable 5
Colorado
Denver KWGN-TV1 2
Grand Junction KWGJ
Connecticut
Hartford-New Haven WTNH 8 (January-April 1995) Secondary
Hartford-New Haven WTVU/WBNE 59 (April 1995-2001)
Hartford-New Haven WTXX1 20 (2001-2006)
District of Colum
|
https://en.wikipedia.org/wiki/Gems%20TV
|
Gems TV was a jewellery manufacturer and reverse auction TV shopping network headquartered in Chanthaburi, Thailand. It began its operations in October 2004 in the UK, and then expanded to Germany, America, Japan and China. Gems TV was formed from the merger of Thaigem Limited and Eagle Road Studios, which formed Gems TV UK Limited, which eventually became a subsidiary of Gems TV Holdings Limited when the company expanded to other countries.
For the fiscal year ending 30 June 2009, revenues amounted to $162.16 million, down 31% from the previous year, with a gross profit of $53 million.
Since the closure of Gems TV USA and the sale of Gems TV (UK) and Gems London (Gems TV Japan) in 2010, the company no longer focuses on jewellery production and the sales through their own shopping channels.
History
Gems TV began in October 2004, in the UK from the joint venture of Eagle Road Studios and Thaigem. The partnership came about from Thaigem initially stocking Eagle Road Studios' channel Snatch It! and after the success of the jewellery on the channel, one of Eagle Road's other channels Factory Outlet was replaced with 'Gems.tv'. Eagle Road Studios ran the channel alongside 'Deal Of The Day' and 'Snatch It!'.
In April 2005, Eagle Road Studios announced that 'Snatch It!' was to close down and be replaced with a second jewellery channel, focusing solely on Sterling Silver after a successful trial run on the channel. During the closing-down process of Snatch It!, the new channel was being named 'Gems TV Silver'. However, when the second channel officially launched on 12 May 2005, Gems TV was rebranded as Gems TV Gold and received a completely new identity and studio. The second channel was simply known as 'Gems TV'. Both channels ran alongside 'Deal Of The Day', which at the same time was replaced with the mobile phone shopping channel 'MyPhone.tv'. In June 2005, it was announced that both Thaigem and Eagle Road Studios had merged to form 'Gems TV UK Limited' and as a result, on 19 July 2005, the 'MyPhone.tv' channel was sold off to Canis Media Group.
Gems TV Holdings was listed on the Singapore Stock Exchange in November 2006 and at this time Gems TV employed over 2200 people worldwide dropping to around 1,100 people worldwide by 2010.
Jewellery production
Gems TV owns its gem production facilities in Chanthaburi, Thailand. The Gems TV company (now also known by the parent company name of TGGC Limited, or Gemporia) buys cut and polished gems, crafts its products, and then sells them through its various television channels; hence the motto, '[C]utting out the middlemen', and its claim that it can consistently undercut high street prices.
The company claims to sell the world's widest variety of gems, including rarities such as Block D Tanzanite. The channel utilizes a falling price - or 'reverse' - auction game.
Sale of Gems TV (UK) Limited
On 18 June 2010, Gems TV Holdings sold Gems TV (UK) Limited (a wholly owned subsidiary) to The Colourful Comp
|
https://en.wikipedia.org/wiki/CAP%20Scientific
|
CAP Scientific Ltd was a British defence software company, and was part of CAP (Computer Analysts and Programmers) Group plc. In 1988, CAP Group merged with the French firm Sema-Metra SA to form Sema Group plc. In 1991 Sema Group put most of its defence operations (CAP Scientific Ltd and YARD Ltd) into a joint venture with British Aerospace called BAeSEMA, which British Aerospace bought out in 1998. Parts of the former CAP Scientific are now BAE Systems (Insyte).
Formation of CAP Scientific
CAP Scientific was formed in 1979 by four colleagues who had previously worked in Scicon, a BP subsidiary. Seeking to start a specialist software company for defence applications in the United Kingdom, they approached CAP-CPP, a commercial software house, to back a start-up operation.
By 1985, CAP Scientific had established significant work in several areas. It had a strong naval business based on supporting the Admiralty Research Establishment. This Maritime Technology business applied the technologies fostered in research contracts on major development programmes. CAP worked with Vosper Thornycroft Controls to develop machinery control and surveillance systems for the Royal Navy's new generation ships and submarines.
An associated Naval Command Systems business had built a strong Action Information Organisation design team, working with both surface and submarine fleets, and a Land Air Systems business also took research and development contracts and was prime contractor for the British Army's Brigade and Battlegroup Trainer (BBGT). The non-defence scientific sector was addressed by setting up a Scientific Systems business with expertise in energy generation and conservation. In that year, CAP Scientific established the Centre for Operational Research and Defence Analysis (CORDA) as an independent unit to provide impartial assistance for investment appraisal.
At that time military computer systems were purpose-built by major contractors, and CAP Scientific's strategy was to form joint ventures with companies which had market access but could not afford the investment to move into the new technology of microprocessors and distributed systems.
The Falklands breakthrough, and DCG
In its early years, CAP Scientific took time to establish itself, but in 1982 there came a breakthrough. While the UK was mustering its naval taskforce for the Falklands War, it became clear that for some purposes the Royal Navy needed more computational power. An Urgent Operational Requirement was raised to provide improved fire control solutions for RN Sub-Harpoon. Working in frantic haste, CAP's engineers were able to add an experimental Digital Equipment Corporation PDP-8 installation into a Royal Navy submarine before she sailed to the South Atlantic. This was one of the first examples of commercial off-the-shelf equipment being employed for military use. The success of this experimental deployment led to the development of a standard RN submarine fit, DCG, which allowed ex
|
https://en.wikipedia.org/wiki/Contiki%20%28disambiguation%29
|
Contiki can refer to:
Contiki, an open-source operating system designed for computers with limited memory.
Contiki Tours, a series of bus holidays operated by Contiki Holidays for 18-35s.
Con-Tici or Kon-Tiki, an old name for the Andean deity Viracocha
See also
Kontiki (disambiguation)
|
https://en.wikipedia.org/wiki/IBM%20TPNS
|
Teleprocessing Network Simulator (TPNS) is an IBM licensed program, first released in 1976 as a test automation tool to simulate the end-user activity of network terminal(s) to a mainframe computer system, for functional testing, regression testing, system testing, capacity management, benchmarking and stress testing.
In 2002, IBM re-packaged TPNS and released Workload Simulator for z/OS and S/390 (WSim) as a successor product.
History
Teleprocessing Network Simulator (TPNS) Version 1 Release 1 (V1R1) was introduced as Program Product 5740-XT4 in February 1976, followed by four additional releases up to V1R5 (1981).
In August 1981, IBM announced TPNS Version 2 Release 1 (V2R1) as Program Product 5662-262, followed by three additional releases up to V2R4 (1987).
In January 1989, IBM announced TPNS Version 3 Release 1 (V3R1) as Program Product 5688-121, followed by four additional releases up to 1996.
In December 1997, IBM announced a Service Level 9711 Functional and Service Enhancements release.
In September 1998, IBM announced the TPNS Test Manager as a usability enhancement to automate the test process further in order to improve productivity through a logical flow, and to streamline TPNS-based testing of IBM 3270 applications or CPI-C transaction programs.
In December 2001, IBM announced a Service Level 0110 Functional and Service Enhancements release.
In August 2002, IBM announced Workload Simulator for z/OS and S/390 (WSim) V1.1 as Program Number 5655-I39, a re-packaged successor product to TPNS, alongside the WSim Test Manager V1.1, a re-packaged successor to the TPNS Test Manager.
In November 2012, IBM announced a maintenance update of Workload Simulator for z/OS and S/390 (WSim) V1.1, to simplify the installation of updates to the product.
In December 2015, IBM announced enhancements to Workload Simulator for z/OS and S/390 (WSim) V1.1, providing new utilities for TCP/IP data capture and script generation.
Features
Simulation support
Teleprocessing Network Simulator (TPNS)
TPNS supports the simulation of a wide range of networking protocols and devices: SNA/SDLC, start-stop, BSC, TWX, TTY, X.25 Packet Switching Network, Token Ring Local Area Networking, and TCP/IP servers and clients (Telnet 3270 & 5250, Telnet Line Mode Network Virtual Terminal, FTP and simple UDP clients). TPNS can also simulate devices using the Airline Line Control (ALC) and the HDLC protocols. The full implementation of SNA in TPNS enables it to simulate all LU types (including LU6.2 and CPI-C), PU types (including PU2.1), and SSCP functions. Finally, TPNS also provides extensive user exit access to its internal processes to enable the simulation of user-defined (home-grown) line disciplines, communications protocols, devices (terminals and printers) and programs.
TPNS is therefore the appropriate test tool for installations that need to test:
either the entire system configuration path of hardware and software components, from the teleprocessing
|
https://en.wikipedia.org/wiki/The%20WB%20100%2B%20Station%20Group
|
The WB 100+ Station Group (originally called The WeB from its developmental stages until March 1999) was a national programming service of The WB—owned by the Warner Bros. Entertainment division of Time Warner, the Tribune Company, and group founder and longtime WB network president Jamie Kellner—intended primarily for American television markets ranked #100 and above by Nielsen Media Research estimates. Operating from September 21, 1998 to September 17, 2006, The WB 100+ comprised an affiliate group that was initially made exclusively of individually branded cable television channels serving areas that lacked availability for a locally based WB broadcast affiliate and supplied a nationalized subfeed consisting of WB network and syndicated programs; in the network's waning years, the WB 100+ group began maintaining primary affiliations on full-power and low-power stations in certain markets serviced by the feed.
The WB 100+ Station Group was also essentially structured as a de facto national feed of The WB, and maintained a master schedule of syndicated and brokered programs for broadcast on all affiliates of the feed outside of time periods designated for The WB's prime time, daytime and Saturday morning programming. Programming and promotional services for The WB 100+ were housed at The WB's corporate headquarters in Burbank, California; engineering and master control operations were based at the California Video Center in Los Angeles.
History
Pre-launch
The history of The WB 100+ can be traced back to a charter affiliation agreement reached on December 3, 1993, between The WB and Tribune Broadcasting (whose corporate parent, the Tribune Company (later Tribune Media), held minority ownership in the network), which resulted in Tribune's Chicago television flagship WGN-TV carrying The WB's prime time programming (the Kids' WB block – which debuted in September 1995, eight months after The WB's launch – would air instead on independent station WCIU-TV before moving to WGN-TV in September 2004).
Through that deal, WGN's national superstation feed (later separately branded as WGN America and operating as a conventional basic cable channel) would act as a default WB affiliate for select markets where the network would have difficulty securing an affiliation with a broadcast television station at The WB's launch on January 11, 1995 (either due to the lack of available over-the-air stations or the absence of a secondary affiliation with an existing station within the market). This arrangement was conceived to give the network enough time to find affiliates in those "white areas" (a term referring to areas in which a national broadcaster does not have market clearance), allowing the WGN superstation feed to nationally distribute The WB's programming to a broader audience than would be possible without such an agreement in the interim. Some cable providers also carried either KTLA (for areas in the Pacific Time Zone) or WPIX (for areas in the Eastern
|
https://en.wikipedia.org/wiki/Northern%20Virginia%20trolleys
|
The Northern Virginia trolleys were the network of electric passenger rails that moved people around the Northern Virginia suburbs of Washington, D.C., from 1892 to 1941. They consisted of six lines operated by as many as three separate companies connecting Rosslyn, Great Falls, Bluemont, Mount Vernon, Fairfax City, Camp Humphries and Nauck across the Potomac River to the District of Columbia.
After early success, the trolleys struggled. They were unable to set their own prices and found it difficult to compete with automobiles and buses. As roads were paved and improved, they gradually lost customers. A final blow came in 1932, when they were forced to give up their direct connection to Washington, D.C.; much of the system was shut down that year. The Great Depression led to further contractions of the system. The last passenger service was terminated in 1941.
Northern Virginia's trolleys were originally operated by three companies that all planned to operate within the District of Columbia, but were never integrated into the Washington streetcar network. Two companies were founded in 1892: the Washington, Arlington and Falls Church Railway Company and the Washington, Arlington and Mount Vernon Railway. Their tracks were laid when most of Northern Virginia was undeveloped and had few streets and roads. As a result, the trolleys mostly operated on private right-of-ways that their companies leased or owned. After they began operating, a number of communities developed along their routes.
In 1910, following bankruptcy, they merged into one system, the Washington-Virginia Railway. Twelve years later, that company went into receivership. In 1927, two companies emerged. They were eventually purchased or transformed into bus companies and by the end of 1939 were no longer operating trolleys. A third company operated electric cars from 1911 to 1936 as the Washington and Old Dominion Railway; then from 1936 to 1941, and again briefly in 1943, as the Washington and Old Dominion Railroad.
At its peak, the system consisted of lines that ran from downtown D.C. to Fort Humphries/Mount Vernon, to Fairfax via Clarendon and to Rosslyn; from Rosslyn to Fairfax and Nauck; from Alexandria to Bluemont via Bon Air; from Georgetown to Bon Air; and from Georgetown to Great Falls.
The major lines of the Washington-Virginia Railway converged at Arlington Junction, which was located in the northwest corner of present-day Crystal City south of the Pentagon and in Rosslyn at the south end of the Aqueduct Bridge, near the spot where the Key Bridge is now. There it had a terminal next to the Rosslyn station of the W&OD.
From Arlington Junction, the W-V Railway's trolleys crossed the Potomac River near the site of the present 14th Street bridges over the 1872 Long Bridge and then, beginning in 1906, the old Highway Bridge. They traveled to a terminal in downtown Washington along Pennsylvania Avenue NW, and D Street NW, between 12th and Streets NW, on a site that is
|
https://en.wikipedia.org/wiki/Nin
|
Nin or NIN may refer to:
National identification number, a system used by governments around the world to keep track of their citizens
National Information Network
National Institute of Nutrition, Hyderabad, an institution in Hyderabad, India
Netherlands Institute for Neuroscience, a neuroscience research institute in Amsterdam, the Netherlands
Nine Inch Nails, an American industrial rock band founded by Trent Reznor
NIN (magazine), a Serbian political magazine
NIN (cuneiform), the Sumerian sign for lady
NIN (gene), a human gene
Nin (surname), a surname
Nion or Nin, a letter in the Ogham alphabet
Akira Nishitani (a.k.a. Nin or Nin-Nin), co-creator of the game Street Fighter II
Anaïs Nin, French-Cuban author
Nin, Croatia, a town in the Zadar County in Croatia
Bishop Gregory of Nin, an important figure in the 10th century ecclesiastical politics of Dalmatia.
See also
Nin (surname)
National Insurance number
|
https://en.wikipedia.org/wiki/Two-dimensional%20nuclear%20magnetic%20resonance%20spectroscopy
|
Two-dimensional nuclear magnetic resonance spectroscopy (2D NMR) is a set of nuclear magnetic resonance spectroscopy (NMR) methods which give data plotted in a space defined by two frequency axes rather than one. Types of 2D NMR include correlation spectroscopy (COSY), J-spectroscopy, exchange spectroscopy (EXSY), and nuclear Overhauser effect spectroscopy (NOESY). Two-dimensional NMR spectra provide more information about a molecule than one-dimensional NMR spectra and are especially useful in determining the structure of a molecule, particularly for molecules that are too complicated to work with using one-dimensional NMR.
The first two-dimensional experiment, COSY, was proposed by Jean Jeener, a professor at the Université Libre de Bruxelles, in 1971. This experiment was later implemented by Walter P. Aue, Enrico Bartholdi and Richard R. Ernst, who published their work in 1976.
Fundamental concepts
Each experiment consists of a sequence of radio frequency (RF) pulses with delay periods in between them. The timing, frequencies, and intensities of these pulses distinguish different NMR experiments from one another. Almost all two-dimensional experiments have four stages: the preparation period, where a magnetization coherence is created through a set of RF pulses; the evolution period, a determined length of time during which no pulses are delivered and the nuclear spins are allowed to freely precess (rotate); the mixing period, where the coherence is manipulated by another series of pulses into a state which will give an observable signal; and the detection period, in which the free induction decay signal from the sample is observed as a function of time, in a manner identical to one-dimensional FT-NMR.
The two dimensions of a two-dimensional NMR experiment are two frequency axes representing a chemical shift. Each frequency axis is associated with one of the two time variables, which are the length of the evolution period (the evolution time) and the time elapsed during the detection period (the detection time). They are each converted from a time series to a frequency series through a two-dimensional Fourier transform. A single two-dimensional experiment is generated as a series of one-dimensional experiments, with a different specific evolution time in successive experiments, with the entire duration of the detection period recorded in each experiment.
The end result is a plot showing an intensity value for each pair of frequency variables. The intensities of the peaks in the spectrum can be represented using a third dimension. More commonly, intensity is indicated using contour lines or different colors.
Homonuclear through-bond correlation methods
In these methods, magnetization transfer occurs between nuclei of the same type, through J-coupling of nuclei connected by up to a few bonds.
Correlation spectroscopy (COSY)
The first and most popular two-dimensional NMR experiment is the homonuclear correlation spectroscopy (CO
|
https://en.wikipedia.org/wiki/Metropolitan%20Community%20College
|
Metropolitan Community College may refer to:
Metropolitan Community College (Nebraska), a three-campus public community college in Omaha, Nebraska
Metropolitan Community College (Missouri), a network of five community colleges in Kansas City, Missouri
Metropolitan Community College (Illinois), a community college in East St. Louis, Illinois from 1996 to 1998
|
https://en.wikipedia.org/wiki/Arping
|
arping is a computer software tool for discovering and probing hosts on a computer network. Arping probes hosts on the examined network link by sending link layer frames using the Address Resolution Protocol (ARP) request method, addressed to a host identified by the MAC address of its network interface. The utility program may use ARP to resolve an IP address provided by the user.
The function of arping is analogous to the utility ping that probes the network with the Internet Control Message Protocol (ICMP) at the Internet Layer of the Internet Protocol Suite.
Two popular arping implementations exist. One is part of Linux iputils suite, and cannot resolve MAC addresses to IP addresses. The other arping implementation, written by Thomas Habets, can ping hosts by MAC address as well as by IP address, and adds more features. Having both arping implementations on a system may introduce conflicts. Some Linux distros handle this by removing iputils arping along with dependent packages like NetworkManager if Habets's arping is installed. Others (e.g. Debian-based distros like Ubuntu) have iputils-arping split into a separate package to avoid this problem.
In networks employing repeaters that implement proxy ARP, the ARP response may originate from such proxy hosts and not directly from the probed target.
Example
Example session output of arping from iputils:
ARPING 192.168.39.120 from 192.168.39.1 eth0
Unicast reply from 192.168.39.120 [00:01:80:38:F7:4C] 0.810ms
Unicast reply from 192.168.39.120 [00:01:80:38:F7:4C] 0.607ms
Unicast reply from 192.168.39.120 [00:01:80:38:F7:4C] 0.602ms
Unicast reply from 192.168.39.120 [00:01:80:38:F7:4C] 0.606ms
Sent 4 probes (1 broadcast(s))
Received 4 response(s)
Example session output from Thomas Habets's arping:
ARPING 192.168.16.96
60 bytes from 00:04:5a:4b:b6:ec (192.168.16.96): index=0 time=292.000 usec
60 bytes from 00:04:5a:4b:b6:ec (192.168.16.96): index=1 time=310.000 usec
60 bytes from 00:04:5a:4b:b6:ec (192.168.16.96): index=2 time=256.000 usec
^C
--- 192.168.16.96 statistics ---
3 packets transmitted, 3 packets received, 0% unanswered (0 extra)
See also
ArpON
arpwatch
References
External links
arping by Thomas Habets
iputils suite (including arping)
arping source on github
Internet Protocol based network software
Free network management software
|
https://en.wikipedia.org/wiki/Fair%20queuing
|
Fair queuing is a family of scheduling algorithms used in some process and network schedulers. The algorithm is designed to achieve fairness when a limited resource is shared, for example to prevent flows with large packets or processes that generate small jobs from consuming more throughput or CPU time than other flows or processes.
Fair queuing is implemented in some advanced network switches and routers.
History
The term fair queuing was coined by John Nagle in 1985 while proposing round-robin scheduling in the gateway between a local area network and the internet to reduce network disruption from badly-behaving hosts.
A byte-weighted version was proposed by Alan Demers, Srinivasan Keshav and Scott Shenker in 1989, and was based on the earlier Nagle fair queuing algorithm. The byte-weighted fair queuing algorithm aims to mimic a bit-per-bit multiplexing by computing theoretical departure date for each packet.
The concept has been further developed into weighted fair queuing, and the more general concept of traffic shaping, where queuing priorities are dynamically controlled to achieve desired flow quality of service goals or accelerate some flows.
Principle
Fair queuing uses one queue per packet flow and services them in rotation, such that each flow can "obtain an equal fraction of the resources".
The advantage over conventional first in first out (FIFO) or priority queuing is that a high-data-rate flow, consisting of large packets or many data packets, cannot take more than its fair share of the link capacity.
Fair queuing is used in routers, switches, and statistical multiplexers that forward packets from a buffer. The buffer works as a queuing system, where the data packets are stored temporarily until they are transmitted.
With a link data-rate of R, at any given time the N active data flows (the ones with non-empty queues) are serviced each with an average data rate of R/N. In a short time interval the data rate may fluctuate around this value since the packets are delivered sequentially in turn.
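A minimal Python sketch of this principle: one FIFO queue per flow, with backlogged flows served in rotation. Packet sizes are ignored here; the byte-weighted variant described below additionally accounts for them. The class and method names are illustrative assumptions.
# Sketch of per-flow round-robin (Nagle-style) fair queuing.
from collections import deque

class FairQueue:
    def __init__(self):
        self.queues = {}        # flow id -> deque of packets
        self.active = deque()   # flows with at least one queued packet

    def enqueue(self, flow_id, packet):
        q = self.queues.setdefault(flow_id, deque())
        if not q:
            self.active.append(flow_id)   # flow becomes backlogged
        q.append(packet)

    def dequeue(self):
        """Serve the next backlogged flow in round-robin order."""
        if not self.active:
            return None
        flow_id = self.active.popleft()
        packet = self.queues[flow_id].popleft()
        if self.queues[flow_id]:
            self.active.append(flow_id)   # still backlogged: back of the rotation
        return packet

fq = FairQueue()
fq.enqueue("flow-a", "pkt-a1")
fq.enqueue("flow-b", "pkt-b1")
fq.enqueue("flow-a", "pkt-a2")
print(fq.dequeue(), fq.dequeue(), fq.dequeue())   # pkt-a1 pkt-b1 pkt-a2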
Fairness
In the context of network scheduling, fairness has multiple definitions. Nagle's article uses round-robin scheduling of packets, which is fair in terms of the number of packets, but not in terms of bandwidth use when packets have varying size. Several formal notions of fairness measure have been defined, including max-min fairness, worst-case fairness, and the fairness index.
Generalisation to weighted sharing
The initial idea gives to each flow the same rate. A natural extension consists in letting the user specify the portion of bandwidth allocated to each flow leading to weighted fair queuing and generalized processor sharing.
A byte-weighted fair queuing algorithm
This algorithm attempts to emulate the fairness of bitwise round-robin sharing of link resources among competing flows. Packet-based flows, however, must be transmitted packetwise and in sequence. The byte-weighted fair queuing algorithm selects transmission orde
|
https://en.wikipedia.org/wiki/Universal%20hashing
|
In mathematics and computing, universal hashing (in a randomized algorithm or data structure) refers to selecting a hash function at random from a family of hash functions with a certain mathematical property (see definition below). This guarantees a low number of collisions in expectation, even if the data is chosen by an adversary. Many universal families are known (for hashing integers, vectors, strings), and their evaluation is often very efficient. Universal hashing has numerous uses in computer science, for example in implementations of hash tables, randomized algorithms, and cryptography.
Introduction
Assume we want to map keys from some universe into bins (labelled ). The algorithm will have to handle some data set of keys, which is not known in advance. Usually, the goal of hashing is to obtain a low number of collisions (keys from that land in the same bin). A deterministic hash function cannot offer any guarantee in an adversarial setting if , since the adversary may choose to be precisely the preimage of a bin. This means that all data keys land in the same bin, making hashing useless. Furthermore, a deterministic hash function does not allow for rehashing: sometimes the input data turns out to be bad for the hash function (e.g. there are too many collisions), so one would like to change the hash function.
The solution to these problems is to pick a function randomly from a family of hash functions. A family of functions H = {h : U → [m]} is called a universal family if, for all x, y ∈ U with x ≠ y, Pr_{h∈H}[h(x) = h(y)] ≤ 1/m.
In other words, any two different keys of the universe collide with probability at most 1/m when the hash function h is drawn uniformly at random from H. This is exactly the probability of collision we would expect if the hash function assigned truly random hash codes to every key.
Sometimes, the definition is relaxed by a constant factor, only requiring collision probability O(1/m) rather than ≤ 1/m. This concept was introduced by Carter and Wegman in 1977, and has found numerous applications in computer science.
If we have an upper bound of ε on the collision probability, we say that we have ε-almost universality. So for example, a universal family has 1/m-almost universality.
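As an illustrative example (a sketch under the assumption of integer keys, not part of the article's exposition at this point), the classic Carter–Wegman multiply-add-mod-prime construction gives a universal family; in Python:

import random

# Illustrative family: h_{a,b}(x) = ((a*x + b) mod P) mod m.
# For a prime P larger than any key, drawing a uniformly from {1, ..., P-1}
# and b from {0, ..., P-1} gives a universal family: any two distinct keys
# collide with probability at most 1/m over the choice of the function.

P = 2_147_483_647  # a Mersenne prime; keys are assumed to lie in {0, ..., P-1}

def random_hash(m):
    # Draw one function h : {0, ..., P-1} -> {0, ..., m-1} from the family.
    a = random.randrange(1, P)  # a must be non-zero
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = random_hash(1024)
print(h(42), h(43))  # over the random choice of h, these collide with probability at most 1/1024

Because the function is chosen only after the adversary has fixed the keys, the 1/m collision bound holds no matter which keys are supplied.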
Many, but not all, universal families have the following stronger uniform difference property:
For all x, y ∈ U with x ≠ y, when h is drawn randomly from the family H, the difference h(x) − h(y) mod m is uniformly distributed in [m].
Note that the definition of universality is only concerned with whether h(x) = h(y), which counts collisions. The uniform difference property is stronger.
(Similarly, a universal family can be XOR universal if, for all x, y ∈ U with x ≠ y, the value h(x) ⊕ h(y) is uniformly distributed in [m], where ⊕ is the bitwise exclusive or operation. This is only possible if m is a power of two.)
An even stronger condition is pairwise independence: we have this property when, for all x, y ∈ U with x ≠ y, the probability that x and y hash to any given pair of hash values z1, z2 is as if they were perfectly random: Pr_{h∈H}[h(x) = z1 and h(y) = z2] = 1/m². Pairwise independence is sometimes called strong universality.
Another property is uniformity. We say
|
https://en.wikipedia.org/wiki/1989%20in%20New%20Zealand%20television
|
This is a list of New Zealand television-related events in 1989.
Events
3 April – Network News at Six was reduced in duration from an hour to 30 minutes; Holmes premiered on TV One and screened at 6.30pm (right after Network News at Six); and the regional news programmes – Top Half (Auckland), Today Tonight (Wellington), The Mainland Touch (Christchurch) and The South Tonight (Dunedin) – were transferred to Network Two at 5.45pm.
3 April – New Zealand quiz show Sale of the Century premiered and screened weeknights at 7pm on Network Two (right after the Australian soap Neighbours). By the end of July, the show was transferred to TV One and Neighbours was moved to a 'double episode' format from 6.30-7.30pm.
August – Network Two was renamed Channel 2, although the on-screen branding remained "Network Two" until October.
October – A new look for Channel 2 was unveiled.
6 November – Breakfast television – weekdays from 6.30am and weekends from 7am – was introduced to Channel 2 with an early morning news service called Breakfast News with Tom Bradley as anchor and Penelope Barr as weather presenter. Breakfast News initially aired as a half hour bulletin on Channel 2 at 7am, with a five-minute news and weather update at 8am, before switching to five-minute news and weather bulletins at 7am, 7.30am, 8am and 8.30am by January 1990. Cartoons, Sesame Street and British sitcom reruns were shown throughout the morning, although Sesame Street was still shown on Tuesday and Thursday afternoons until December 1989, and Aerobics Oz Style and the US soaps Santa Barbara and Days of Our Lives were transferred from TV One to Channel 2.
11 November – Saturday morning television was introduced to Channel 2 with a brand new wrapper programme called The Breakfast Club with Jason Gunn as host. What Now and other children's programmes on weekend mornings were transferred from TV One to Channel 2.
26 November – TV3 commenced broadcasting with a two-hour Grand Preview from 8pm.
27 November – TV3's regular programming began at 7am with The Early Bird Show and news updates on the half hour.
5 December – Australian soap Home and Away premiered in New Zealand with the series initially transmitting on TV3 in a double episode format on Tuesdays and Wednesdays at 7.30-8.30pm.
8 December – The final editions of Top Half (Auckland) and Today Tonight (Wellington) were broadcast on Channel 2 at 5.45pm.
Debuts
Domestic
13 February – After 2 (Network Two) (1989-1991)
13 February – 3:45 Live (Network Two) (1989-1990)
2 April – CV (Network Two) (1989)
3 April – Holmes (TV One) (1989-2004)
3 April – Sale of the Century (Network Two) (1989-1993)
5 April – Shark in the Park (TV One) (1989-1991)
13 April – Missing (TV One) (1989)
7 May – LIFE (Life in the Fridge Exists) (Network Two) (1989-1991)
12 June – The Mostly Useful Job Guide (Network Two) (1989)
18 June – Don't Tell Me (Network Two) (1989)
18 June – Strangers (Network Two) (1989)
9 July – The Shad
|
https://en.wikipedia.org/wiki/Bernard%20Ponsonby
|
Bernard Ponsonby is a Scottish broadcast journalist for regional news and current affairs programming for STV. He joined the station in 1990 and was appointed political editor in 2000, following the retirement of longstanding political editor Fiona Ross. Since 2019, Ponsonby has been Special Correspondent for STV News.
Early life
Ponsonby was born in Castlemilk, Glasgow. He was educated at Trinity High School, Rutherglen, and Strathclyde University. He has been a supporter of Celtic F.C. since boyhood.
Ponsonby was brought up in the Garngad, an area of Glasgow known by locals as the Garden of God. The area has a large proportion of Roman Catholics of Irish descent and a strong base of support for Celtic F.C.
Political career
Ponsonby joined the Social Democratic Party (SDP) as a young man, and upon leaving university was briefly employed as a researcher for the former MP Dr Dickson Mabon. After the SDP merged with the Liberal Party in 1988, he stood for the Liberal Democrats – then styled as the "Democrats" – in that year's Govan by-election, losing his deposit with a 4.1% share of the vote. Following that he became the party's press officer in Scotland.
Television career
Ponsonby joined Scottish Television (STV) in 1990. For seven years, he presented the channel's flagship political programme Platform. He currently reports and provides political commentary for all three editions of the station's flagship regional news programme, STV News at Six, in the North, East and West of Scotland. He has also contributed to the weekly political programme Politics Now, for which he became presenter in January 2009, until the programme's end in 2011. He now commentates on the replacement programme Scotland Tonight.
Ponsonby co-presented the political programme Scottish Questions (1992–93), was the lead presenter on Scottish Voices (1994–95), co-presented Trial By Night (1993–96) and more recently, Seven Days (2000–2001).
Ponsonby has produced several documentary programmes in the Scottish Reporters series and produced two political documentaries (The Dewar Years and The Salmond Years) on two of Scotland's most influential politicians of the postwar period.
In 2002, Ponsonby was arrested for drunk driving and convicted of being over three times the legal drink limit.
In May 2009, Ponsonby became the first journalist in the UK to report the resignation of the speaker of the House of Commons and Glasgow North East MP, Michael Martin – the first speaker to be forced from office since 1695.
On 5 August 2014, Ponsonby moderated Salmond & Darling: The Debate, the first head-to-head televised debate between First Minister Alex Salmond and Alistair Darling ahead of the forthcoming Scottish independence referendum.
The Prime Minister's office refused to allow Ponsonby to interview David Cameron on STV about the Scottish independence referendum.
Ponsonby stood down as Political Editor of STV News
|
https://en.wikipedia.org/wiki/Hartmut%20Esslinger
|
Hartmut Esslinger (born 5 June 1944) is a German-American industrial designer and inventor. He is best known for founding the design consultancy frog, and his work for Apple Computers in the early 1980s.
Life and career
Esslinger was born in Beuren (Simmersfeld), in Germany's Black Forest. At age 25, Esslinger finished his studies at the Hochschule für Gestaltung Schwäbisch Gmünd in Schwäbisch Gmünd. After facing vicious criticism of a radio clock he designed while in school and the disapproval of his mother (who burned his sketchbooks), he started his own design agency in 1969, Esslinger Design, later renamed Frogdesign. For his first client, German avant-garde consumer electronics company Wega, he created the first "full plastics" color TV and HiFi series "Wega system 3000". His work for Wega won him instant international fame. In 1974, Esslinger was hired by Sony – Sony also acquired Wega shortly after – and he was instrumental in creating a global design image for Sony, especially with the Sony Trinitron and personal music products. The Sony-Wega Music System Concept 51K was acquired by the Museum of Modern Art, New York. In 1976, Esslinger also worked for Louis Vuitton.
In 1982 he entered into an exclusive $2,000,000 per year contract with Apple Computer to create a design strategy which transformed Apple from a "Silicon Valley Start-Up" into a global brand. Setting up shop in California for the first time, Esslinger and Frogdesign created the "Snow White design language" which was applied to all Apple product lines from 1984 to 1990, commencing with the Apple IIc and including the Macintosh computer. The original Apple IIc was acquired by the Whitney Museum of Art in New York and Time voted it Design of the Year. Soon after Steve Jobs' departure, Esslinger broke his own contract with Apple and followed Jobs to NeXT. Other major client engagements include Lufthansa's global design and brand strategy, SAP's corporate identity and software user interface, Microsoft Windows branding and user interface design, Siemens, NEC, Olympus, HP, Motorola and General Electric.
In December 1990 Esslinger was featured on the cover of BusinessWeek, the only living designer thus honored since Raymond Loewy in 1934. The cover included the headline "Rebel with a cause," referencing his controversial personality and desire to be seen as a "non-conformist" within the field of design, as well as the movie Rebel Without a Cause, which Esslinger has described as his first American movie and a cultural inspiration.
Esslinger is a founding Professor of the Karlsruhe University of Arts and Design, Germany, and since 2006 he is a Professor for convergent industrial design at the University of Applied Arts in Vienna, Austria. In 1996, Esslinger was awarded an honorary doctorate of Fine Arts by the Parsons School of Design, New York City. Since 2012 Esslinger has served as a DeTao Master of Industrial Design with The Beijing DeTao Masters Academy (DTMA) in Shanghai,
|
https://en.wikipedia.org/wiki/IBM%20LPFK
|
The Lighted Program Function Keyboard (LPFK) is a computer input device manufactured by IBM that presents an array of buttons associated with lights.
Each button is associated with a function in the supporting software, and its light is switched on or off according to whether that function is available in the application's current context, giving the user graphical feedback on the set of available functions. The button-to-function mapping is usually customizable.
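As a rough illustration only (the class and method names below are hypothetical and not part of IBM's interface), the button-to-function mapping and the light feedback can be modeled as a small table that the application refreshes whenever its context changes:

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class LPFKModel:
    # Toy model of an LPFK: each key index maps to a function, and its
    # light mirrors whether that function is available in the current context.
    bindings: Dict[int, Callable[[], None]] = field(default_factory=dict)
    lights: Dict[int, bool] = field(default_factory=dict)

    def bind(self, key: int, action: Callable[[], None]) -> None:
        self.bindings[key] = action          # customizable mapping
        self.lights[key] = False

    def refresh(self, available: Callable[[int], bool]) -> None:
        # The application decides, per context, which functions are usable;
        # the key lights reflect that decision.
        for key in self.bindings:
            self.lights[key] = available(key)

    def press(self, key: int) -> None:
        if self.lights.get(key):
            self.bindings[key]()             # only lit keys trigger their function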
External links
http://brutman.com/IBM_LPFK/IBM_LPFK.html
Computer keyboards
LPFK
|
https://en.wikipedia.org/wiki/Rhine-Ruhr%20S-Bahn
|
The Rhine-Ruhr S-Bahn is a polycentric and electrically driven S-Bahn network covering the Rhine-Ruhr Metropolitan Region in the German federated state of North Rhine-Westphalia. This includes most of the Ruhr (and cities such as Dortmund, Duisburg and Essen), the Berg cities of Wuppertal and Solingen and parts of the Rhineland (with cities such as Cologne and Düsseldorf). The easternmost city within the S-Bahn Rhine-Ruhr network is Unna, the westernmost city served is Mönchengladbach.
The S-Bahn operates in the areas of the Verkehrsverbund Rhein-Ruhr and Verkehrsverbund Rhein-Sieg tariff associations, touching areas of the Aachener Verkehrsverbund (AVV) at Düren and Westfalentarif at Unna. The network was established in 1967 with a line connecting Ratingen Ost to Düsseldorf-Garath.
The system consists of 16 lines. By system length, it is the second-largest S-Bahn network in Germany, behind only the S-Bahn Mitteldeutschland. Most lines are operated by DB Regio NRW, while the S28 is operated by Regiobahn and the S7 by Vias. The S19 will run 24/7 along its full 17-station route between Düren and Hennef, not only between Cologne Hbf and Cologne/Bonn Airport.
Rolling stock history
Age of steam
The predecessor of the S-Bahn was the so-called Bezirksschnellverkehr between the cities of Düsseldorf and Essen, which consisted of steam-powered push-pull trains, mainly hauled by Class 78, since 1951 also Class 65 engines.
Early electric years
The first S-Bahn lines were operated using Silberling cars and Class 141 locomotives. However these were not suited for operations on a rapid transit network and were soon replaced by Class 420 electric multiple units.
Originally designed for the Munich S-Bahn, the Class 420 was judged in the mid-1970s to be unsuitable for the network, mainly due to being uncomfortable and lacking on-board toilets and not being walk-through, since one could travel rather long distances on the Rhine-Ruhr network.
The x-Wagen era
Constructing an improved version of the 420 with the tentative designation Class 422 was discussed, but in 1978 the Deutsche Bundesbahn commissioned a batch of coaches from Duewag and MBB. These lightweight and modern coaches were designated as x-Wagen ("x-car") after their classification code Bx. Among the design elements inherited from the recent LHB prototype carriages were the bogies with disc brakes and rubber airbag shock absorbers that also included automated level control, ensuring level boarding from S-Bahn platforms with a standard height of 96 cm regardless of varying passenger loading.
In late 1978, the first prototypes of 2nd class type Bx 794.0 cars and Bxf 796.0 control cars were handed over to DB, followed by split first/second class cars type ABx 791.0 in early 1979. The prototypes were successful, so from 1981 to 1994 several series were commissioned, with some going to the Nuremberg S-Bahn system.
The x-Wagen were mechanically coupled to form fixed sets of typically one ABx car, one
|
https://en.wikipedia.org/wiki/KNAT-TV
|
KNAT-TV (channel 23) is a religious television station in Albuquerque, New Mexico, United States, owned and operated by the Trinity Broadcasting Network (TBN). The station's transmitter is located on Sandia Crest.
The station formerly operated from a studio located on Coors Boulevard in northwestern Albuquerque. That facility was one of several closed by TBN in 2019 following the Federal Communications Commission (FCC)'s abolition of the "Main Studio Rule", which required full-service television stations like KNAT-TV to maintain facilities in or near their communities of license.
History
KMXN-TV
Channel 23 began broadcasting as KMXN-TV on August 10, 1975. It was owned by Spanish Television of New Mexico, headed by state senator Odis Echols, and affiliated with the Spanish International Network, broadcasting from a transmitter atop the Western Bank Building.
Problems emerged with the station's management more than a year after it began operations. At the start of 1977, Herbert Taylor, a former officer of Spanish Television of New Mexico, sued Echols, fellow state senator C. B. Trujillo of Taos, and John Aragon, stockholder and president of New Mexico Highlands University, alleging that the three were using KMXN-TV to provide advertising kickbacks and for other political purposes. The First National Bank sued the station in December 1977, claiming it had defaulted on a $67,500 loan made in March 1976; by that time, Echols had stepped down.
Channel 23 also began to branch out beyond Spanish-language shows. When ABC affiliate KOAT-TV (channel 7) decided not to air Monday Night Baseball, KMXN-TV stepped in to carry it instead; the station then added high school football games.
KLKK-TV
In 1978, Eddie Peña began buying out the partners of Spanish Television of New Mexico. Peña was granted a construction permit the next year to move the transmitter from downtown to Sandia Crest, the main tower site for the Albuquerque area.
Peña also prepared a total relaunch of channel 23's programming. The station shifted to an English-language independent—New Mexico's first—on May 19, 1980, and took on the call letters KLKK-TV. As part of the changes, channel 23 disaffiliated from SIN, which Peña blamed for providing Latin American programming that was not well received in the Albuquerque market. Local productions included pre-existing shows from the Hispanic Chamber of Commerce that had aired on KMXN-TV, as well as Pueblo Speaks, focusing on Native American issues, and a live call-in show. SIN would not be gone from Albuquerque for long, as a translator carrying the network began broadcasting in August.
Not long after the relaunch, Peña began seeking buyers. Rumors circulated as early as the spring of 1981 that channel 23 would be sold. When fired general manager Milt Ledet sued the station for breach of contract at year's end, he revealed that a sale was near, and that he was entitled to two percent of the proceeds. While a $7 million purchase by Malcolm Gla
|
https://en.wikipedia.org/wiki/National%20Center%20for%20Data%20Mining
|
The National Center for Data Mining (NCDM) is a center of the University of Illinois at Chicago (UIC), established in 1998 to serve as a resource for research, standards development, and outreach for high performance and distributed data mining and predictive modeling.
NCDM won the High Performance Bandwidth Challenge at SuperComputing '06 in Tampa, FL and recently demonstrated the use of UDP Data Transport.
External links
National Center for Data Mining
SC06 Bandwidth Challenge Results
University of Illinois Chicago
|
https://en.wikipedia.org/wiki/Center%20for%20Computer-Assisted%20Legal%20Instruction
|
The Center for Computer-Assisted Legal Instruction, also known as CALI, is a 501(c)(3) nonprofit that does research and development in online legal education. CALI publishes over 1,200 interactive tutorials, free casebooks, and develops software for experiential learning. Over 90% of US law schools are members which provide students with unlimited and free access to these materials.
CALI was incorporated in 1982 in the state of Minnesota by the University of Minnesota Law School and Harvard Law School. The cost of membership to CALI is US$8,000 per year for US law schools; free for legal-aid organizations, library schools, state and county law librarians; and US$250 per year for law firms, paralegal programs, undergraduate departments, government agencies, individuals, and other organizations.
Services
CALI Lessons
CALI Lessons are interactive tutorials written by law faculty covering various law study material in 20–40 minute lessons.
CALIcon Conference
CALI's CALIcon is a two-day conference where faculty, law librarians, tech staff and educational technologists gather to share ideas, experiences and expertise. Exhibitors have included legal and education researchers as well as law companies.
CALI first hosted The Conference for Law School Computing in 1991 (then known as the Conference for Law School Computing Professionals) at Chicago-Kent. From 1991 to 1994 the conference was hosted at Chicago-Kent, and since 1995 the conference has been hosted on-site by various CALI member law schools.
References
External links
CALI's website
Legal research
Charities based in Minnesota
Legal education in the United States
Organizations established in 1982
Harvard Law School
1982 establishments in Minnesota
|
https://en.wikipedia.org/wiki/Dome%20magnifier
|
A dome magnifier is a dome-shaped magnifying device made of glass or acrylic plastic, used to enlarge words on a page or computer screen. They are plano-convex lenses: the flat (planar) surface is placed on the object to be magnified, and the convex (dome) surface provides the enlargement. They usually provide between 1.8× and 6× magnification. Dome magnifiers are often used by the visually impaired. They are good for reading maps or basic text and their inherent 180° design naturally amplifies illumination from ambient side-light. They are suitable for people with tremors or impaired motor skills, because they are held in contact with the page during use.
See also
Reading stone
References
Magnifiers
Magnifier, dome
|
https://en.wikipedia.org/wiki/Assortative%20mixing
|
In the study of complex networks, assortative mixing, or assortativity, is a bias in favor of connections between network nodes with similar characteristics. In the specific case of social networks, assortative mixing is also known as homophily. The rarer disassortative mixing is a bias in favor of connections between dissimilar nodes.
In social networks, for example, individuals commonly choose to associate with others of similar age, nationality, location, race, income, educational level, religion, or language as themselves. In networks of sexual contact, the same biases are observed, but mixing is also disassortative by gender – most partnerships are between individuals of opposite sex.
Assortative mixing can have effects, for example, on the spread of disease: if individuals have contact primarily with other members of the same population groups, then diseases will spread primarily within those groups. Many diseases are indeed known to have differing prevalence in different population groups, although other social and behavioral factors affect disease prevalence as well, including variations in quality of health care and differing social norms.
Assortative mixing is also observed in other (non-social) types of networks, including biochemical networks in the cell, computer and information networks, and others.
Of particular interest is the phenomenon of assortative mixing by degree, meaning the tendency of nodes with high degree to connect to others with high degree, and similarly for low degree. Because degree is itself a topological property of networks, this type of assortative mixing gives rise to more complex structural effects than other types. Empirically it has been observed that most social networks mix assortatively by degree, but most networks of other types mix disassortatively, although there are exceptions.
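As a minimal sketch (assuming an undirected edge list and no graph library), mixing by degree can be summarized as the Pearson correlation between the degrees at the two ends of each edge:

from statistics import mean

def degree_assortativity(edges):
    # Pearson correlation between the degrees of the two endpoints of each
    # undirected edge. Positive -> assortative mixing by degree, negative ->
    # disassortative, near zero -> no degree correlation.
    # (Undefined for regular graphs, where the degree variance is zero.)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1

    # Each edge contributes both orderings so the measure is symmetric.
    xs, ys = [], []
    for u, v in edges:
        xs += [degree[u], degree[v]]
        ys += [degree[v], degree[u]]

    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(degree_assortativity([(0, 1), (0, 2), (0, 3), (0, 4)]))  # star graph -> -1.0

A star graph, where a single high-degree hub connects only to degree-one leaves, gives a coefficient of -1.0, the strongly disassortative extreme.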
See also
Assortative mating
Assortativity
Complex network
Friendship paradox
Graph theory
Heterophily
Homophily
Preferential attachment
References
Network theory
Social science methodology
Social networks
Epidemiology
Organization
|
https://en.wikipedia.org/wiki/Open%20for%20Business%20%28blog%29
|
Open for Business (OFB) was an online news blog with a technology focus. It featured articles on topics including computers, technology, politics, current events, theology and philosophy. The site also contained a fiction section with short stories and poetry.
History
OFB was founded on October 5, 2001 as the "open-source migration guide". It was started by Timothy R. Butler after a mailing list discussion, and featured articles and white papers discussing migration to Linux. Originally, OFB featured very little original content, instead mimicking Slashdot and similar sites that included little more than a few small comments on the articles posted. Steven Hatfield helped add postings to the site.
The site then started to add free and open source software news. About a month after the site was founded, the first original editorial content appeared and OFB continued to publish approximately one original work per month after that. In late April 2002, Butler announced a relaunch of the site that included a reduction in links to other sites and a further increase in original content. The relaunch also brought forth the first version of a blue sphere logo and the new tagline "the Independent Journal of Open Source Migration".
On July 4, 2002, Open for Business, LinuxandMain.com, KernelTrap and Device Forge's LinuxDevices and DesktopLinux.com formed LinuxDailyNews (LDN), an aggregation site that was intended to help increase the publicity of independent open source news sites. LDN featured a center column that showed story highlights and two side columns that displayed all stories from the member sites in blocks. The site was launched as part of DesktopLinux.com's "wIndependence Day" promotion and had an early spike in popularity following a mention on Slashdot. In subsequent months, the site's traffic decreased. It was taken down in 2004 after a hacker managed to deface the site; although plans existed to restore the site, they were never followed through with and Device Forge assumed the rights to LDN's domain name.
In February 2003, the site finalized its transition to an original content provider, as opposed to a site of links, by moving non-original content to a separate "News Watch" section. New contributing editor Ed Hurst began a series on his switch to FreeBSD in September 2003, beginning a long running series of FreeBSD articles that Hurst continued to add until October 2006. Butler also began OFB's coverage of Mac OS X computers. OFB's second associate editor Eduardo Sánchez returned in mid-2004 as a contributing editor. Hurst was promoted to associate editor simultaneously.
The site continued in a similar fashion, with its mix of coverage on Linux, BSD and Mac OS X through early 2006. During this period it changed its motto to "the Independent Journal of Open Standards and Open Source". Due to other obligations, the site's editors ceased writing content for the site in early 2006, though it remained open during this period.
On its fift
|
https://en.wikipedia.org/wiki/Valmet%20Nr%20I
|
Nr I is a class of articulated six-axle (B′2′B′ wheel arrangement), chopper-driven tram operated by Helsinki City Transport on the Helsinki tram network. All trams of this type were built by the Finnish metal industry corporation Valmet between the years 1973 and 1975.
Between 1993 and 2004 all trams in the class were modernised by HKL and redesignated as Nr I+ class. Currently HKL classifies them as NRV I.
Overview
The Nr I was the first type of articulated tram operated by the HKL. Its design was based on the GT6 type trams built by Duewag for various cities in western Europe since 1956, but the Nr I incorporated several technological innovations that had not been available when the GT6 was designed. The Nr I trams were delivered by Valmet between 1973 and 1975, with the first seven delivered in 1973, a further 18 in 1974 and the final 15 in 1975. The Nr I was the first mass-produced tram type in the world to feature thyristor chopper control. The first tram of this class entered revenue-earning service on 16 December 1973 on line 10. Although the trams of this type are numbered 31 to 70, the first unit was not the 31st tram to be used by the HKL: the HKL tram numbering system had been reset in 1959, with the numbering of new trams delivered that year beginning from 1.
In the early 1980s the city of Gothenburg, a forerunner in the creation of modern light rail systems in Europe, wished to purchase Nr I-based trams from Valmet for its own tram network. However, due to pressure from the Swedish government, Göteborgs spårvägar were forced to place an order with the Swedish manufacturer ASEA instead. In Helsinki, a further developed version of the Nr I, the Nr II class, was delivered by Valmet to HKL between 1983 and 1987. The Nr II class trams have an identical external appearance and a very similar interior layout to the Nr I class.
From November 1993 onwards, starting with tram number 45, all Nr I units were modernised by HKL into the Nr I+ class. The modernisation included updates to the trams' technical equipment, changes to the interior layout, the addition of electronic displays showing the name of the next stop, and the replacement of the original seats with new ones. The last tram to be modernised was number 53, in July 2004.
A second modernisation process, labelled a "life extension programme" by HKL, began in 2005. Like the earlier process, this programme includes updating much of the technical equipment and making changes to the interiors. Additionally, the chassis of the trams will be sand-blasted and given a new surface finish. For some trams the life extension programme will be carried out in Germany.
Liveries
For the Nr I type trams, HKL decided to adopt a new livery. Instead of the traditional green/yellow colours, the new trams were painted light grey, with orange stripes running along the top and bottom of the carriage. In 1986 HKL decided to abandon the unpopular orange/grey livery, and by 1995 all trams of this type
|
https://en.wikipedia.org/wiki/Polity%20data%20series
|
The Polity data series is a data series in political science research. Along with the V-Dem Democracy indices project and Democracy Index (The Economist), Polity is among prominent datasets that measure democracy and autocracy.
The Polity study was initiated in the late 1960s by Ted Robert Gurr and is now continued by Monty G. Marshall, one of Gurr's students. It was sponsored by the Political Instability Task Force (PITF) until February 2020. The PITF is funded by the Central Intelligence Agency.
The data series has been criticized for its methodology, Americentrism, and connections to the CIA. Seva Gunitsky, an assistant professor at the University of Toronto, stated that the data series was appropriate "for research that examines constraints on governing elites, but not for studying the expansion of suffrage over the nineteenth century".
Scoring chart
Scores for 2018
Criticism
The 2002 paper "Conceptualizing and Measuring Democracy" claimed several problems with commonly used democracy rankings, including Polity, opining that the criteria used to determine "democracy" were misleadingly narrow.
The Polity data series has been criticized by Fairness & Accuracy in Reporting for its methodology and determination of what is and isn't a democracy. FAIR has criticized the data series for Americentrism with the United States being shown as the only democracy in the world in 1842, being given a nine out of ten during slavery, and a ten out of ten during the Jim Crow era. The organization has also been critical of the data series for ignoring European colonialism in Africa and Asia with those areas being labeled as no data before the 1960s. FAIR has also been critical of the data series' connection to the Central Intelligence Agency. Max Roser, the founder of Our World in Data, stated that Polity IV was far from perfect and was concerned at the data series' connections with the Central Intelligence Agency.
Seva Gunitsky, an assistant professor at the University of Toronto, wrote in The Washington Post where he stated that "Polity IV measures might be appropriate for research that examines constraints on governing elites, but not for studying the expansion of suffrage over the nineteenth century". Gunitsky was critical of the data series for ignoring suffrage.
See also
Democracy-Dictatorship Index
Democracy Index
Democracy Ranking
V-Dem Democracy indices
List of democracy indices
List of freedom indices
Freedom in the World
References
External links
Polity IV Project webpage
Democracy
|
https://en.wikipedia.org/wiki/Computer%20Lib/Dream%20Machines
|
Computer Lib/Dream Machines is a 1974 book by Ted Nelson, printed as a two-front-cover paperback to indicate its "intertwingled" nature. Originally self-published by Nelson, it was republished with a foreword by Stewart Brand in 1987 by Microsoft Press.
In Steven Levy's book Hackers, Computer Lib is described as "the epic of the computer revolution, the bible of the hacker dream. [Nelson] was stubborn enough to publish it when no one else seemed to think it was a good idea."
Published just before the release of the Altair 8800 kit, Computer Lib is often considered the first book about the personal computer.
Background
Prior to the initial release of Computer Lib/Dream Machines, Nelson was working on the first hypertext project, Project Xanadu, founded in 1960. An integral part of the Xanadu vision was computing technology and the freedom he believed came with it. These ideas were later compiled and elaborated upon in the 1974 text, around the time when locally networked computers had appeared and Nelson came to see global networks as a space for the hypertext system.
Synopsis
Computer Lib
In Computer Lib: You can and must understand computers NOW, Nelson covers both the technical and political aspects of computers.
Nelson attempts to explain computers to the laymen during a time when personal computers had not yet become mainstream and anticipated the machine being open for anyone to use. Nelson writes about the need for people to understand computers more deeply than was generally promoted as computer literacy, which he considers a superficial kind of familiarity with particular hardware and software. His rallying cry "Down with Cybercrud" is against the centralization of computers such as that performed by IBM at the time, as well as against what he sees as the intentional untruths that "computer people" tell to non-computer people to keep them from understanding computers.
Dream Machines
Dream Machines: New Freedom through Computer Screens – A Minority Report is the flip side of Computer Lib. Nelson explores what he believes is the future of computers and the alternative uses for them. This side was his counterculture approach to how computers had typically been used.
Nelson covers the flexible media potential of the computer, which was shockingly new at the time. He saw the use of hypermedia and hypertext, both terms he coined, being beneficial for creativity and education. He urged readers to look at the computer not as just a scientific machine, but as an interactive machine that can be accessible to anyone.
In this section, Nelson also described the details of Project Xanadu. He proposed the idea of a future Xanadu Network, where users could shop at Xanadu stands and access material from global storage systems.
Format
Both the 1974 and 1987 editions have an unconventional layout, with two front covers. The Computer Lib cover features a raised fist in a computer. Once flipped over, the Dream Machines cover shows a man wit
|
https://en.wikipedia.org/wiki/Three%20Angels%20Broadcasting%20Network
|
The Three Angels Broadcasting Network, or 3ABN, is a Christian media television and radio network which broadcasts Seventh-day Adventist religious, music and health-oriented programming, based in West Frankfort, Illinois, United States. Although it is not formally tied to any particular church or denomination, much of its programming focuses on Seventh-day Adventist theology and Adventist doctrine.
History
Three Angels Broadcasting Network is located in West Frankfort, Illinois. In July 2017, 3ABN announced the sale of 60 low-powered television (LPTV) stations and 10 LPTV construction permits to Edge Spectrum. In October 2017, 3ABN announced the sale of 14 LPTV stations to HC2 Holdings.
Programming
The stated goal of 3ABN's programming is a blend of family and social programs, health and lifestyle, gospel music, and a wide variety of Bible-based presentations.
3ABN maintains several distinct subchannels, separated by language and format.
3ABN (the flagship service with a mixture of programs from the other subchannels)
3ABN Proclaim! (all-televangelism)
3ABN Latino Network (Spanish language)
3ABN Latino Radio Network (Spanish language)
3ABN Radio Network
3ABN Radio Music Channel
3ABN Russia (Russian language)
3ABN Russia Radio Network (Russian language)
3ABN Français Network (French language)
3ABN International Network (partial simulcast of the main 3ABN with some foreign programming)
3ABN Dare to Dream Network ("urban Christian lifestyle")
3ABN Kids Network (children's programming, also covers the network's E/I liabilities)
3ABN Praise Him Music Network (worship music)
3ABN Australia Radio Network
3ABN Plus (3ABN+, a free subscription service offering live streams of all 3ABN television and radio networks, plus video on demand)
As of early 2009, 3ABN's main TV channel had 69% original programming; 3ABN Latino had 67% original programming; and 3ABN Russia had 100% original programming.
The 3ABN International network has the same/similar lineup of programs as 3ABN's flagship network. 3ABN International carries "3ABN Now", the flagship program and some other programming produced by 3ABN Australia.
In addition to producing programming at its world headquarters in West Frankfort, Illinois, 3ABN also produces and carries programming from its international branches: the Three Angels Russian Evangelism Centre in Nizhny Novgorod, Russia, and the 3ABN Australia Production Centre in Morisset, New South Wales, Australia.
Availability
3ABN television networks are available worldwide through various platforms, including international satellites, DISH Network (United States), local downlink stations and over-the-air stations (United States), cable television, the Internet, YouTube, Facebook, the 3ABN+ app for Apple and Android mobile devices, Roku, Amazon Fire TV, smart TVs, Android TV, Apple TV (4th and 5th generation and later), FaithStream (Australia), MySDATV, In
|
https://en.wikipedia.org/wiki/Bart%20Has%20Two%20Mommies
|
"Bart Has Two Mommies" is the fourteenth episode of the seventeenth season of the American animated television series The Simpsons. It originally aired on the Fox network in the United States on March 19, 2006. In the episode, Marge babysits for Flanders' sons while Bart is kidnapped by a chimpanzee.
Plot
The Simpson family attend a church fundraiser for a new steeple. Ned Flanders wins a rubber duck racing contest and is awarded a computer, although he gives it to Marge because he does not have any use for it. Marge babysits Rod and Todd while Ned attends a left-handed convention to repay the favor. She finds that all the games they play are boring and overly safe, such as a "sitting still contest," and helps Rod and Todd have fun by encouraging them to liven up. At the left-handed convention, Ned meets baseball hall of famer Randy Johnson (who was then playing for the New York Yankees), who is there to pitch his own line of teddy bears called Randy Johnson's Southpawz. After asking Johnson if he has any mailman teddy bears, the pitcher tells Ned that a teddy bear can't be a mailman. Johnson then asks Ned how many doctor bears he wants as they come in a box of 1,000. When Ned tells him he only wants one box, this makes Johnson angry with Ned.
With Marge spending so much time at the Flanders' house, Homer, Bart, and Lisa go to an animal sanctuary for retired film animals. Bart sees an elderly female chimpanzee named Toot-Toot and offers her some ice cream, only to be taken into her cage and "adopted." Ned comes home and sees Todd wearing a Band-Aid, having injured himself during one of Marge's games. Marge encourages Ned to let his kids start taking more risks, showing him a flyer for a child-safe activity center.
Marge takes Rod and Todd to the activity center. Ned follows her and is surprised to see Rod climbing a structure, yelling that he will get hurt. Rod gets worried and falls, chipping a tooth against the structure. A news broadcast plays about Bart's kidnapping, surprising Marge and causing Ned to view her as a bad mother. Following this, he starts child-proofing the house, although Rod and Todd protest and tell him that they liked having Marge over.
Lisa suggests that Toot-Toot is keeping Bart captive because her real son has gone missing. When Marge goes into the cage to talk to Toot-Toot, she escapes and climbs atop the unfinished church steeple. With Toot-Toot's son, Mr. Teeny, Rod climbs up the steeple and Ned encourages him. Toot-Toot happily reunites with Mr. Teeny and lets Bart go. In a mid-credits scene, Maude Flanders looks down from Heaven, proud that Rod is growing up.
Cultural references
The episode title refers to the book Heather Has Two Mommies.
Left-handed pitcher Randy Johnson makes a cameo appearance at the Left-Handed convention selling his own line of left-handed teddy bears.
Ned sings "Welcome to the Jungle" by Guns N' Roses with alternate lyrics as "Welcome to the Jungle Gym" while child-proofing the backyar
|
https://en.wikipedia.org/wiki/Wizard%20%282005%20video%20game%29
|
Wizard is a video game created in 1980 for the Atari Video Computer System (later renamed the Atari 2600) by Chris Crawford while working for Atari, Inc. The game was not advertised or released by Atari. Wizard uses a 2K ROM, making it the last Atari 2600 game developed by Atari with less than 4K. Wizard was eventually released as part of the Atari Flashback 2 package in 2005.
Gameplay
The player is a wizard from Irata (Atari spelled backwards) and battles imps in a maze. The battle is not symmetric: the player is faster than the enemy, but the enemy can pass through walls and fire faster than the player. There is no need to aim, as the player's shots are automatically fired in the direction of the enemy. The enemy remains invisible while behind a wall. The game also features heartbeat audio that becomes louder as the player gets closer to the enemy.
Development
The production of Wizard is detailed extensively in the book Chris Crawford on Game Design. Crawford wanted to write software for the new Atari home computers, but Atari management required developers for the system to create an Atari VCS game first.
Wizard was never published for the Atari VCS. It was included with the Atari Flashback 2, 25 years after it was written. Chris Crawford learned about the release in an email from a fan. Crawford's original prototype did not contain a two-player mode, but the game released with the Atari Flashback 2 does.
References
External links
The Wizard Game Manual
2005 video games
Atari games
Cancelled Atari 2600 games
Atari 2600 games
Maze games
Chris Crawford (game designer) games
Video games developed in the United States
|
https://en.wikipedia.org/wiki/Mary%20Beth%20Decker
|
Mary Beth Decker (born January 11, 1981) is an American former model and television personality who attended Texas A&M University. She was the "Cyber Girl of the Week" for Playboy in the fourth week of September 2002, and "Cyber Girl of the Month" for January 2003 as well as a cast member on a season of MTV's show Road Rules, Road Rules: South Pacific.
Personal life
Decker was born in Houston, Texas. While at Texas A&M University she worked as a bartender at The Tap, a bar located in College Station, Texas.
She was featured in two issues of Playboy: October 2002, as part of the Girls of the Big 12 (as an A&M student), and March 2004 as a Cyber Girl. She has appeared on the HDNet show Get Out!, in the season three Costa Rica episode.
On August 31, 2007, she gave birth to a boy named Gavin.
References
External links
Living people
Road Rules cast members
Texas A&M University alumni
1981 births
|
https://en.wikipedia.org/wiki/Narrow-gauge%20railways%20in%20Saxony
|
The narrow-gauge railways in Saxony were once the largest single-operator narrow-gauge railway network in Germany. In Saxony, the network peaked shortly after World War I with over of tracks. At first, it was primarily created to connect the small towns and villages in Saxony – which had formed a viable industry in the 19th century – to already established standard-gauge railways. But even shortly after 1900, some of the railways would become important for tourism in the area.
History
Beginnings
Around 1875, the Royal Saxon State railway network, unlike those of other states in Germany, had already expanded to cover most of the territory of Saxony. Due to the mountainous terrain, any further expansion came with a disproportionate increase in cost. To keep costs down, most new track projects were planned and executed as branch lines, with smaller curve radii, simpler operating rules and unsupervised stations and yards as the primary means of saving money. However, to connect the small towns and villages in the deep and narrow Ore Mountain valleys with their diverse industry, standard-gauge tracks were only feasible with an enormous amount of technical and financial investment. Therefore, the directorate of the Royal Saxon State Railways, following the example of the existing Bröl Valley Railway and Upper Silesian Railway, decided in favor of narrow-gauge railways.
The first narrow-gauge railway in Saxony opened in 1881 between Wilkau-Haßlau and Kirchberg. In addition, the Weißeritztalbahn and the Mügeln railway network were already under construction. Many additional narrow-gauge railways, such as the Thumer Netz, were built in short order, almost all of them using a standardized track gauge. In the meantime, standard-gauge projects in Saxony were scaled back to tracks that connected already existing standard-gauge railways, or where the transfer of goods between the standard and narrow tracks was not feasible or profitable.
Expansion before World War I
Within 20 years, the Saxon narrow-gauge railway network had almost reached its final size. After 1900, only few additional railways were added. Most were just additions to existing lines that brought operational advantages.
Although the narrow-gauge network made very little profit, it was very important for the industrial development of Saxony. Without the narrow-gauge tracks – that permitted industrial sidings to small companies in narrow and steep valleys – an industrial development in the poor Ore Mountain area of Saxony would have hardly been possible.
However, it was soon evident that the narrow-gauge railways were not always up to the task of meeting all cargo demands. In particular, transloading freight at the breaks of gauge was time-consuming and expensive. To avoid additional cargo handling on the Dresden-Klotzsche–Königsbrück line, a container system ("Umsetzkästen") was tested in which the whole cargo box of a freight car was transferred between standard and narrow-gauge frames
|
https://en.wikipedia.org/wiki/Sun386i
|
The Sun386i (codenamed Roadrunner) is a discontinued hybrid UNIX workstation/PC compatible computer system produced by Sun Microsystems, launched in 1988. It is based on the Intel 80386 microprocessor but shares many features with the contemporary Sun-3 series systems.
Hardware
Unlike the Sun-3 models, the Sun386i has a PC-like motherboard and "mini-tower"-style chassis. Two variants were produced, the Sun386i/150 and the Sun386i/250 with a 20 or 25 MHz CPU respectively. The motherboard includes the CPU, 80387 FPU, 82380 timer/DMA/interrupt controller and a custom Ethernet IC called BABE ("Bus Adapter Between Ethernet"). Floppy disk, SCSI, RS-232 and Centronics parallel interfaces are also provided, as are four ISA slots (one 8-bit, three 16-bit) and four proprietary 32-bit "local" bus slots. The latter are used for RAM and frame buffer cards.
Two types of RAM card are available, a 4 or 8 MB card, and the "XP Cache" card, incorporating up to 8 MB with an 82385 cache controller and 32 KB of cache SRAM. Up to two memory cards can be installed, to give a maximum RAM capacity of 16 MB.
Mass storage options are either 91 or 327 MB internal SCSI hard disks and a 1.44 MB 3.5-in floppy drive. A storage expansion box that holds two more disks can be mounted to the top of the chassis.
Frame buffer options include the 1024×768 or 1152×900-pixel monochrome BW2 card, the 8-bit color CG3 with similar resolutions, or the accelerated 8-bit color CG5, otherwise known as the Roadracer or GXi framebuffer. This uses the TI TMS34010 graphics processor and has a resolution of 1152×900. In addition, a "SunVGA" accelerator card can be installed in an ISA expansion slot, allowing a DOS session to display a full VGA window on the desktop.
The Sun386i introduced the Sun Type 4 keyboard, a hybrid of the earlier Type 3 and PC/AT layouts. This was later used for the SPARCstation line of workstations.
Software
The Sun386i's firmware is similar to the Sun-3's "PROM Monitor". A 386 port of SunOS is the native operating system. SunOS releases 4.0, 4.0.1 and 4.0.2 support the architecture. A beta version of SunOS 4.0.3 for the Sun386i also exists but was not generally available, except possibly to the U.S. government. Included with SunOS are the SunView GUI and the VP/ix MS-DOS emulator. This runs as a SunOS process and thus allows multiple MS-DOS sessions to be run simultaneously, a major selling point of the Sun386i. Unix long file names are accessed using a mapping to DOS 8.3 filenames, the file names being modified to include a tilde and to be unique as far as possible. This system is similar to, but predates, that used for long file names in Microsoft's VFAT. Special drive letters are used, including H: for the user's home directory and D: for the current working directory when the DOS shell is started. The C: drive corresponds to a file in the Unix file system which appears to DOS as a 20 MB hard disk. This is used especially for the installation of copy-protected soft
|
https://en.wikipedia.org/wiki/SDU
|
Sdu or SDU may refer to:
Communications
Satellite Data Unit, a part of a satellite telecommunication system for aircraft
Service Data Unit, a telecommunications term related to the layered protocol concept
Universities
University of Southern Denmark, Danish: Syddansk Universitet (SDU)
Süleyman Demirel University, a university in Isparta, Turkey
Suleyman Demirel University, a university in Almaty, Kazakhstan
Shandong University (山东大学 SDU), a university in Shandong, China
Sanda University (上海杉达学院 SDU), a university in Shanghai, China
Other
Single dwelling unit, a single-family, free-standing residential building (home). It is defined in opposition to a multi-family residential dwelling (e.g. apartment).
Special Detective Unit, a specialist branch of the Garda Síochána, Ireland's national police
Special Duties Unit, a paramilitary special force of the Hong Kong Police Force
Surveillance Detection Unit, a surveillance program connected to US embassies.
Santos Dumont Airport, the smaller of the two airports in Rio de Janeiro, Brazil (IATA code)
SDU: Sex Duties Unit, a 2013 Hong Kong action comedy film
Social Development Unit, a matchmaking agency in Singapore
Social Democratic Union (disambiguation), a name of a number of political parties
Sonic Diver Unit, special mecha unit piloted by the Sky Girls (Japanese anime)
Sodium diuranate, a uranium salt that is an intermediate in the production of the metal
Sdu (publishing company), a Dutch publishing company, formerly the Staatsdrukkerij en Uitgeverij
Sewer Dosing Unit, a plumbing device that facilitates sewage disposal with low liquid-flow rates
Sweden Democratic Youth, the former youth league of the Swedish political party Sweden Democrats
State Disbursement Unit, a government agency in the United States that handles child support payments
NHS Sustainable Development Unit in the United Kingdom
|
https://en.wikipedia.org/wiki/Shebang%20%28Unix%29
|
In computing, a shebang is the character sequence consisting of the characters number sign and exclamation mark (#!) at the beginning of a script. It is also called sharp-exclamation, sha-bang, hashbang, pound-bang, or hash-pling.
When a text file with a shebang is used as if it is an executable in a Unix-like operating system, the program loader mechanism parses the rest of the file's initial line as an interpreter directive. The loader executes the specified interpreter program, passing to it as an argument the path that was initially used when attempting to run the script, so that the program may use the file as input data. For example, if a script is named with the path path/to/script, and it starts with the line #!/bin/sh, then the program loader is instructed to run the program /bin/sh, passing path/to/script as the first argument.
The shebang line is usually ignored by the interpreter, because the "#" character is a comment marker in many scripting languages; some language interpreters that do not use the hash mark to begin comments still may ignore the shebang line in recognition of its purpose.
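For example, the following file (a hypothetical script, not drawn from the article) is both directly executable on a Unix-like system and a valid Python program, because Python treats the first line as an ordinary comment:

#!/usr/bin/env python3
# Saved as hello.py and marked executable (chmod +x hello.py), this file can
# be run as ./hello.py: the loader reads the first line, starts
# /usr/bin/env python3, and passes ./hello.py to it as an argument.
# To the Python interpreter itself, the shebang line is just a comment.

import sys

print("interpreter:", sys.executable)
print("script path passed by the loader:", sys.argv[0])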
Syntax
The form of a shebang interpreter directive is as follows:
#! interpreter [optional-arg]
in which interpreter is a path to an executable program. The space between #! and interpreter is optional. There can be any number of spaces or tabs either before or after interpreter. The optional-arg will include any extra spaces up to the end of the line.
In Linux, the file specified by interpreter can be executed if it has the execute rights and is one of the following:
a native executable, such as an ELF binary
any kind of file for which an interpreter was registered via the binfmt_misc mechanism (such as for executing Microsoft .exe binaries using wine)
another script starting with a shebang
On Linux and Minix, an interpreter can also be a script. A chain of shebangs and wrappers yields a directly executable file that gets the encountered scripts as parameters in reverse order. For example, if file /bin/A is an executable file in ELF format, file /bin/B contains the shebang #!/bin/A, and file /bin/C contains the shebang #!/bin/B, then executing file /bin/C resolves to /bin/B /bin/C, which finally resolves to /bin/A /bin/B /bin/C.
In Solaris- and Darwin-derived operating systems (such as macOS), the file specified by interpreter must be an executable binary and cannot itself be a script.
Examples
Some typical shebang lines:
#!/bin/sh – Execute the file using the Bourne shell, or a compatible shell, assumed to be in the /bin directory
#!/bin/bash – Execute the file using the Bash shell
#!/usr/bin/pwsh – Execute the file using PowerShell
#!/usr/bin/env python3 – Execute with a Python interpreter, using the env program search path to find it
#!/bin/false – Do nothing, but return a non-zero exit status, indicating failure. Used to prevent stand-alone execution of a script file intended for execution in a specific context, such as by the . command from sh/bash, source from csh/tcsh, or as a
|