https://en.wikipedia.org/wiki/Access%20control%20matrix
|
In computer science, an access control matrix or access matrix is an abstract, formal security model of the protection state in computer systems that characterizes the rights of each subject with respect to every object in the system. It was first introduced by Butler W. Lampson in 1971.
An access matrix can be envisioned as a rectangular array of cells, with one row per subject and one column per object. The entry in a cell – that is, the entry for a particular subject-object pair – indicates the access mode that the subject is permitted to exercise on the object. Each column is equivalent to an access control list for the object; and each row is equivalent to an access profile for the subject.
Definition
According to the model, the protection state of a computer system can be abstracted as a set of objects O, that is, the set of entities that need to be protected (e.g. processes, files, memory pages), and a set of subjects S, that consists of all active entities (e.g. users, processes). Further there exists a set of rights R of the form r(s, o), where s ∈ S, o ∈ O and r(s, o) ⊆ R. A right thereby specifies the kind of access a subject is allowed to exercise on an object.
Example
In this matrix example there exist two processes, two assets, a file, and a device. The first process is the owner of asset 1, has the ability to execute asset 2, read the file, and write some information to the device, while the second process is the owner of asset 2 and can read asset 1.
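As an illustration only (the subject and object names are taken from the example above; the sparse-dictionary representation is a generic sketch, not part of the model's definition), such a matrix can be stored as a mapping from subject-object pairs to sets of rights:

```python
# Sparse encoding of the example access matrix described above.
# Each row of the matrix is a subject's access profile; each column is
# an object's access control list.
access_matrix = {
    ("process_1", "asset_1"): {"own"},
    ("process_1", "asset_2"): {"execute"},
    ("process_1", "file"):    {"read"},
    ("process_1", "device"):  {"write"},
    ("process_2", "asset_1"): {"read"},
    ("process_2", "asset_2"): {"own"},
}

def allowed(subject, obj, right):
    """Return True if `subject` may exercise `right` on `obj`."""
    return right in access_matrix.get((subject, obj), set())

assert allowed("process_1", "file", "read")
assert not allowed("process_2", "device", "write")
```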
Utility
Because it does not define the granularity of protection mechanisms, the Access Control Matrix can be used as a model of the static access permissions in any type of access control system. It does not model the rules by which permissions can change in any particular system, and therefore only gives an incomplete description of the system's access control security policy.
An Access Control Matrix should be thought of only as an abstract model of permissions at a given point in time; a literal implementation of it as a two-dimensional
|
https://en.wikipedia.org/wiki/WebML
|
WebML (Web Modeling Language) is a visual notation and a methodology for designing complex data-intensive Web applications. It provides graphical, yet formal, specifications, embodied in a complete design process, which can be assisted by visual design tools.
In 2013, WebML was extended to cover a wider spectrum of front-end interfaces, resulting in the Interaction Flow Modeling Language (IFML), adopted as a standard by the Object Management Group (OMG).
This method has five models: structure, derivation, composition, navigation, and presentation. These models are developed in an iterative process.
Concepts
WebML enables designers to express the core features of a site at a high level without committing to detailed architectural details. WebML concepts are associated with an intuitive graphic representation, which can be easily supported by CASE tools and effectively communicated to the non-technical members of the site development team (e.g., the graphic designers and the content producers). WebML also supports an XML syntax, which can instead be fed to software generators for automatically producing the implementation of a Web site. The specification of a site in WebML consists of four orthogonal perspectives:
Structural Model: it expresses the data content of the site, in terms of the relevant entities and relationships. WebML does not propose yet another language for data modeling, but is compatible with classical notations like the E/R model, the ODMG object-oriented model, and UML class diagrams.
Hypertext Model: it describes one or more hypertexts that can be published in the site. Each different hypertext defines a so-called site view. Site view descriptions in turn consist of two sub-models.
Composition Model: it specifies which pages compose the hypertext, and which content units make up a page.
Navigation Model: it expresses how pages and content units are linked to form the hypertext. Links are either non-contextual, when they connect s
|
https://en.wikipedia.org/wiki/Ocean%20fertilization
|
Ocean fertilization or ocean nourishment is a type of technology for carbon dioxide removal from the ocean based on the purposeful introduction of plant nutrients to the upper ocean to increase marine food production and to remove carbon dioxide from the atmosphere. Ocean nutrient fertilization, for example iron fertilization, could stimulate photosynthesis in phytoplankton. The phytoplankton would convert the ocean's dissolved carbon dioxide into carbohydrate, some of which would sink into the deeper ocean before oxidizing. More than a dozen open-sea experiments confirmed that adding iron to the ocean increases photosynthesis in phytoplankton by up to 30 times.
This is one of the more well-researched carbon dioxide removal (CDR) approaches; however, it would only sequester carbon on a timescale of 10–100 years, depending on ocean mixing times. While surface ocean acidity may decrease as a result of nutrient fertilization, when the sinking organic matter remineralizes, deep ocean acidity will increase. A 2021 report on CDR indicates that there is medium-high confidence that the technique could be efficient and scalable at low cost, with medium environmental risks. One of the key risks of nutrient fertilization is nutrient robbing, a process by which excess nutrients used in one location for enhanced primary productivity, as in a fertilization context, are then unavailable for normal productivity downstream. This could result in ecosystem impacts far outside the original site of fertilization.
A number of techniques, including fertilization by the micronutrient iron (called iron fertilization) or with nitrogen and phosphorus (both macronutrients), have been proposed. However, research in the early 2020s suggested that it could only permanently sequester a small amount of carbon, so it is unlikely to play a major role in carbon sequestration.
Rationale
The marine food chain is based on photosynthesis by marine phytoplankton that combine carbon with in
|
https://en.wikipedia.org/wiki/Pazend
|
Pazend () or Pazand (; ) is one of the writing systems used for the Middle Persian language. It was based on the Avestan alphabet, a phonetic alphabet originally used to write Avestan, the language of the Avesta, the primary sacred texts of Zoroastrianism.
Pazend's principal use was for writing the commentaries (Zend) on and/or translations of the Avesta. The word "Pazend" ultimately derives from the Avestan words paiti zainti, which can be translated as either "for commentary purposes" or "according to understanding" (phonetically).
Pazend had the following characteristics, both of which are to be contrasted with Pahlavi, which is one of the other systems used to write Middle Persian:
Pazend was a variant of the Avestan alphabet (Din dabireh), which was a phonetic alphabet. In contrast, Pahlavi script was only an abjad.
Pazend did not have ideograms. In contrast, ideograms were an identifying feature of the Pahlavi system, and these huzvarishn were words borrowed from Semitic languages such as Aramaic that continued to be spelled as in Aramaic (in Pahlavi script) but were pronounced as the corresponding word in Persian.
In combination with its religious purpose, these features constituted a "sanctification" of written Middle Persian. The use of the Avestan alphabet to write Middle Persian required the addition of one symbol to the Avestan alphabet: this character, representing a Middle Persian phoneme, had not previously been needed.
Following the fall of the Sassanids, after which Zoroastrianism came to be gradually supplanted by Islam, Pazend lost its purpose and soon ceased to be used for original composition. In the late 11th or early 12th century, Indian Zoroastrians (the Parsis) began translating Avestan or Middle Persian texts into Sanskrit and Gujarati. Some Middle Persian texts were also transcribed into the Avestan alphabet. The latter process, being a form of interpretation, was known as 'pa-zand'. "Pazand texts, transcribed phonetically, re
|
https://en.wikipedia.org/wiki/Index%20of%20biotechnology%20articles
|
Biotechnology is a technology based on biology, especially when used in agriculture, food science, and medicine.
Of the many different definitions available, the one formulated by the UN Convention on Biological Diversity is one of the broadest:
"Biotechnology means any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use." (Article 2. Use of Terms)
More about Biotechnology...
This page provides an alphabetical list of articles and other pages (including categories, lists, etc.) about biotechnology.
A
Agrobacterium -- Affymetrix -- Alcoholic beverages -- :Category:Alcoholic beverages -- Amgen -- Antibiotic -- Artificial selection
B
Biochemical engineering -- Biochip -- Biodiesel -- Bioengineering -- Biofuel -- Biogas -- Biogen Idec -- Bioindicator -- Bioinformatics -- :Category:Bioinformatics -- Bioleaching -- Biological agent -- Biological warfare -- Bioluminescence -- Biomimetics -- Bionanotechnology -- Bionics -- Biopharmacology -- Biophotonics -- Bioreactor -- Bioremediation -- Biostimulation -- Biosynthesis -- Biotechnology -- :Category:Biotechnology -- :Category:Biotechnology companies -- :Category:Biotechnology products -- Bt corn
C
Cancer immunotherapy -- Cell therapy -- Chimera (genetics) -- Chinese hamster -- Chinese Hamster Ovary cell -- Chiron Corp. -- Cloning -- Compost -- Composting -- Convention on Biological Diversity -- Chromatography
D
Directive on the patentability of biotechnological inventions -- DNA microarray -- Dwarfing
E
Enzymes -- Electroporation -- Environmental biotechnology -- Eugenics
F
Fermentation -- :Category:Fermented foods
G
Gene knockout -- Gene therapy -- Genentech -- Genetic engineering -- Genetically modified crops -- Genetically modified food -- Genetically modified food controversies -- Genetically modified organisms -- Genetics -- Genomics -- Genzyme -- Global Knowledge Center on Crop Biotechnology -- Glycomic
|
https://en.wikipedia.org/wiki/Shridhar%20Chillal
|
Shridhar Chillal (born 29 January 1937) is an Indian man from the city of Pune who held the world record for the longest fingernails ever grown on a single hand, with a combined length of 909.6 centimeters (358.1 inches). Chillal's longest single nail is his thumbnail, measuring 197.8 centimeters (77.87 inches). He stopped cutting his nails in 1952.
Although proud of his record-breaking nails, Chillal has faced increasing difficulties due to the weight of his fingernails, including disfigurement of his fingers and loss of function in his left hand. He claims that nerve damage to his left arm from the nails' immense weight has also caused deafness in his left ear.
Chillal has appeared in films and television displaying his nails, such as Jackass 2.5.
On 11 July 2018, Chillal had his fingernails cut with a power tool at the Ripley's Believe It or Not! museum in New York City, where the nails were put on display. A technician wearing protective gear cut the nails during a "nail clipping ceremony".
See also
Lee Redmond, who held the record for the longest fingernails on both hands.
References
External links
Shridhar Chillal on IMDb
1937 births
Living people
People from Pune
World record holders
Biological records
20th-century Indian photographers
|
https://en.wikipedia.org/wiki/Extension%20%28telephone%29
|
In residential telephony, an extension telephone is an additional telephone wired to the same telephone line as another. In mid-20th-century telephone jargon, the first telephone on a line was a "main station" and subsequent ones were "extensions" (sometimes also used as intercoms). Such extension phones allow making or receiving calls in different rooms, for example in a home, but any incoming call would ring all extensions and any one extension being in use would cause the line to be busy for all users. Some telephones intended for use as extensions have built-in intercom features; a key telephone system for a small business may offer two to five lines, lamps indicating lines already in use, the ability to place calls on 'hold', and an intercom on each of the multiple extensions.
In business telephony, a telephone extension may refer to a phone on an internal telephone line attached to a private branch exchange (PBX) or Centrex system. The PBX operates much as a community switchboard does for a geographic telephone numbering plan and allows multiple lines inside the office to connect without each phone requiring a separate outside line. In these systems, one usually has to dial a number (typically 9 in North America, 0 in Europe) to tell the PBX to connect with an outside landline (also called DDCO, or Direct Dial Central Office) to dial an external number. Within the PBX, the user merely dials the extension number to reach any other user directly. For inbound calls, a switchboard operator or automated attendant may request the number of the desired extension or the call may be completed with direct inbound dialing, if outside numbers are assigned to individual extensions.
An off-premises extension, where a worker at a remote location employs a telephone configured to appear as if it were an extension located at the main business site, may be created in analog telephony by using a leased line to connect the extension to the main enterprise system. Voice over IP makes th
|
https://en.wikipedia.org/wiki/Bicomplex%20number
|
In abstract algebra, a bicomplex number is a pair (w, z) of complex numbers constructed by the Cayley–Dickson process that defines the bicomplex conjugate (w, z)* = (w, −z), and the product of two bicomplex numbers as (u, v)(w, z) = (uw − vz, uz + vw).
Then the bicomplex norm is given by (w, z)*(w, z) = (w, −z)(w, z) = (w² + z², 0),
a quadratic form in the first component.
The bicomplex numbers form a commutative algebra over C of dimension two, which is isomorphic to the direct sum of algebras C ⊕ C.
The product of two bicomplex numbers yields a quadratic form value that is the product of the individual quadratic forms of the numbers: (uw − vz)² + (uz + vw)² = (u² + v²)(w² + z²).
A verification of this property of the quadratic form of a product refers to the Brahmagupta–Fibonacci identity. This property of the quadratic form of a bicomplex number indicates that these numbers form a composition algebra. In fact, bicomplex numbers arise at the binarion level of the Cayley–Dickson construction based on C with norm z².
The general bicomplex number (w, z) can be represented by the 2 × 2 matrix with rows (w, iz) and (iz, w), which has determinant w² − (iz)² = w² + z². Thus, the composing property of the quadratic form concurs with the composing property of the determinant.
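A minimal computational sketch (not from the article) of the definitions above: pairs of complex numbers with the stated product, conjugate and quadratic form, checked against the composition property.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Bicomplex:
    """Bicomplex number (w, z), with w and z ordinary complex numbers."""
    w: complex
    z: complex

    def conjugate(self):
        # bicomplex conjugate: (w, z)* = (w, -z)
        return Bicomplex(self.w, -self.z)

    def __mul__(self, other):
        # product: (u, v)(w, z) = (uw - vz, uz + vw)
        return Bicomplex(self.w * other.w - self.z * other.z,
                         self.w * other.z + self.z * other.w)

    def quadratic_form(self):
        # first component of (w, z)*(w, z), i.e. w^2 + z^2
        return self.w ** 2 + self.z ** 2

# Composition property: N(ab) = N(a) N(b), cf. the Brahmagupta-Fibonacci identity.
a = Bicomplex(1 + 2j, 3 - 1j)
b = Bicomplex(-2 + 1j, 0.5 + 4j)
assert abs((a * b).quadratic_form()
           - a.quadratic_form() * b.quadratic_form()) < 1e-9
```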
Bicomplex numbers feature two distinct imaginary units. Multiplication being associative and commutative, the product of these imaginary units must have positive one for its square. Such an element as this product has been called a hyperbolic unit.
As a real algebra
Bicomplex numbers form an algebra over C of dimension two, and since C is of dimension two over R, the bicomplex numbers are an algebra over R of dimension four. In fact the real algebra is older than the complex one; it was labelled tessarines in 1848 while the complex algebra was not introduced until 1892.
A basis for the tessarine 4-algebra over R specifies z = 1 and z = −i, giving the matrices with rows (0, i), (i, 0) and (0, 1), (1, 0) respectively, which multiply according to the table given. When the identity matrix is identified with 1, a tessarine can be written t = w + z j.
History
The subject of multiple imaginary units was examined in the 1840s. In a long series "On quaternions, or on a new
|
https://en.wikipedia.org/wiki/Structural%20testing
|
Structural testing is the evaluation of an object (which might be an assembly of objects) to ascertain its characteristics of physical strength. Testing includes evaluating compressive strength, shear strength, and tensile strength, all of which may be conducted to failure or to some satisfactory margin of safety. Evaluations may also be indirect, using techniques such as X-ray, ultrasound, and ground-penetrating radar, among others, to assess the quality of the object.
Structural engineers conduct structural testing to evaluate material suitability for a particular application and to evaluate the capacity of existing structures to withstand foreseeable loads.
Items may include buildings (or components), bridges, airplane wings or other types of structures.
See also
Structural analysis
Structural load
References
Structural engineering
Product testing
|
https://en.wikipedia.org/wiki/9-Pin%20Protocol
|
The Sony 9-Pin Protocol or P1 protocol is a two-way communications protocol to control advanced video recorders. Sony introduced this protocol to control reel-to-reel type C video tape recorders (VTR) as well as videocassette recorders (VCR). It uses a DE-9 D-sub connector with 9 pins (hence the name), where bi-directional communication takes place over a four-wire cable according to the RS-422 standard.
While nowadays all post-production editing is done with a non-linear editing system, in those days editing was done linearly, using online editing. Editing machines relied heavily on the 9-Pin Protocol to remotely control automatic players and recorders.
Many modern hard disk recorders and solid-state drive recorders can still emulate a 1982 Sony BVW-75 Betacam tape recorder.
Sony's standard also specifies a pinout:
This 9-pin RS-422 pinout has become a de facto standard, used by most brands in the broadcast industry. In the new millennium, RS-422 has slowly been phased out in favor of Ethernet for control functions. However, the ease of troubleshooting it offers means it will stay around for a long time.
In broadcast automation the Video Disk Control Protocol (VDCP) uses the 9-Pin Protocol to play out broadcast programming schedules.
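As a hedged illustration only: the article does not give serial settings or command bytes, so the values below (38.4 kbaud with odd parity, a length nibble in the first command byte, a modulo-256 checksum, and 0x20 0x01 as PLAY) are assumptions based on commonly published descriptions of the protocol, and the device path is invented.

```python
import serial  # pyserial

def build_frame(cmd1, cmd2, data=b""):
    """Assemble one command frame.
    Assumed framing: CMD1's low nibble carries the data byte count,
    then CMD2, the data bytes, and a checksum equal to the low 8 bits
    of the sum of all preceding bytes."""
    frame = bytes([cmd1 | len(data), cmd2]) + data
    return frame + bytes([sum(frame) & 0xFF])

# Commonly cited RS-422 settings for this protocol (assumption).
port = serial.Serial("/dev/ttyUSB0", baudrate=38400, bytesize=8,
                     parity=serial.PARITY_ODD, stopbits=1, timeout=0.1)
port.write(build_frame(0x20, 0x01))  # 0x20 0x01 is widely documented as PLAY
print(port.read(16))                 # deck's acknowledgement / status bytes
```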
External links
Sony 9-Pin Remote Protocol (Archived)
Copy of Sony 9-Pin Remote Protocol
Brainboxes serial port 9-pin protocol support
Drastic support of 9-pin protocol
Blackmagic Decklink (a video capture/generation card) support of 9-pin protocol
Blackmagic Hyperdeck (an SSD recorder) support of 9-pin protocol
Ross Kiva (a presentation server) RS-422 9-pin connector
JLCooper
Grass Valley K2 Summit (a media server) RS-422 connections
References
Protocol of Remote-1 (9-pin) Connector, 2nd Edition, Sony, document number 9-977-544-13
Communications protocols
Serial buses
Television terminology
|
https://en.wikipedia.org/wiki/External%20ray
|
An external ray is a curve that runs from infinity toward a Julia or Mandelbrot set.
Although this curve is only rarely a half-line (ray), it is called a ray because it is an image of a ray.
External rays are used in complex analysis, particularly in complex dynamics and geometric function theory.
History
External rays were introduced in Douady and Hubbard's study of the Mandelbrot set.
Types
Criteria for classification:
plane : parameter or dynamic
map
bifurcation of dynamic rays
Stretching
landing
plane
External rays of (connected) Julia sets on dynamical plane are often called dynamic rays.
External rays of the Mandelbrot set (and similar one-dimensional connectedness loci) on parameter plane are called parameter rays.
bifurcation
Dynamic ray can be:
bifurcated = branched = broken
smooth = unbranched = unbroken
When the filled Julia set is connected, there are no branching external rays. When the Julia set is not connected, some external rays branch.
stretching
Stretching rays were introduced by Branner and Hubbard:
"The notion of stretching rays is a generalization of that of external rays for the Mandelbrot set to higher degree polynomials."
landing
Every rational parameter ray of the Mandelbrot set lands at a single parameter.
Maps
Polynomials
Dynamical plane = z-plane
External rays are associated to a compact, full, connected subset of the complex plane as:
the images of radial rays under the Riemann map of the complement of the set
the gradient lines of the Green's function of the set
field lines of the Douady–Hubbard potential
an integral curve of the gradient vector field of the Green's function on a neighborhood of infinity
External rays, together with equipotential lines of the Douady–Hubbard potential (its level sets), form a new polar coordinate system for the exterior (complement) of the set.
In other words, the external rays define a vertical foliation which is orthogonal to the horizontal foliation defined by the level sets of the potential.
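A rough numerical sketch (not part of the article): for the Mandelbrot set, the Green's function (Douady–Hubbard potential) of the exterior can be approximated by iterating z → z² + c and using G(c) ≈ log|z_n| / 2^n once the orbit escapes; its level sets are the equipotential lines mentioned above, and external rays are the curves orthogonal to them.

```python
import math

def mandelbrot_potential(c, max_iter=200, escape_radius=1e10):
    """Approximate the Green's function of the complement of the
    Mandelbrot set at the parameter c.
    Returns 0.0 for points that never escape (treated as inside the set)."""
    z = 0j
    for n in range(1, max_iter + 1):
        z = z * z + c
        if abs(z) > escape_radius:
            # G(c) ~ log|z_n| / 2^n once |z_n| is large
            return math.log(abs(z)) / 2 ** n
    return 0.0

print(mandelbrot_potential(0.5 + 0.5j))   # outside the set: positive potential
print(mandelbrot_potential(-0.1 + 0.1j))  # inside the set: 0.0
```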
Uniformization
|
https://en.wikipedia.org/wiki/Fontenelle%20Reservoir
|
Fontenelle Reservoir is an artificial reservoir located in southwest Wyoming. It lies almost entirely within Lincoln County, although the east end of the Fontenelle Dam and a tiny portion of the reservoir are actually in northwestern Sweetwater County. Impounded by Fontenelle Dam, the reservoir acts primarily as a storage reservoir for the U.S. Bureau of Reclamation's Colorado River Storage Project, retaining Wyoming water in the state as a means of asserting Wyoming's water rights, with a secondary purpose of power generation. Water from Fontenelle Reservoir is used in local industries such as mining and power generation. Although initially projected to provide irrigation water for agriculture, the irrigation component was downgraded after difficulties with efficient irrigation in Wyoming's high semi-desert became apparent.
Plagued by chronic leakage problems at the dam, the reservoir was hurriedly emptied in 1965 and 1986 amid concerns about dam failure. The reservoir has facilities for recreation, with boat launching ramps and campgrounds. Fishing is available for brown, cutthroat and rainbow trout.
The land used for the Fontenelle Reservoir and dam was previously the Stepp Ranch, owned by one of the few black ranching families in Wyoming in the 1960s. The Stepps fought for their land in court, but ultimately lost. The land had been in the Stepp family since the turn of the 19th century.[4]
See also
List of largest reservoirs of Wyoming
Black Pioneers, Images of the Black Experience on the North American Frontier
References
External links
Fontenelle Dam at the U.S. Bureau of Reclamation
Seedskadee Project at the U.S. Bureau of Reclamation
Fontenelle Reservoir at recreation.gov
Lakes of Lincoln County, Wyoming
Reservoirs in Wyoming
Lakes of Sweetwater County, Wyoming
Colorado River Storage Project
|
https://en.wikipedia.org/wiki/Arithmetica%20Universalis
|
Arithmetica Universalis ("Universal Arithmetic") is a mathematics text by Isaac Newton. Written in Latin, it was edited and published by William Whiston, Newton's successor as Lucasian Professor of Mathematics at the University of Cambridge. The Arithmetica was based on Newton's lecture notes.
Whiston's original edition was published in 1707. It was translated into English by Joseph Raphson, who published it in 1720 as the Universal Arithmetick. John Machin published a second Latin edition in 1722.
None of these editions credit Newton as author; Newton was unhappy with the publication of the Arithmetica, and so refused to have his name appear. In fact, when Whiston's edition was published, Newton was so upset he considered purchasing all of the copies so he could destroy them.
The Arithmetica touches on algebraic notation, arithmetic, the relationship between geometry and algebra, and the solution of equations. Newton also applied Descartes' rule of signs to imaginary roots. He also offered, without proof, a rule to determine the number of imaginary roots of polynomial equations. A rigorous proof of Newton's counting formula for equations up to and including the fifth degree was published by James Joseph Sylvester in 1864.
References
The Arithmetica Universalis from the Grace K. Babson Collection, including links to PDFs of English and Latin versions of the Arithmetica
Centre College Library information on Newton's works
External links
Arithmetica Universalis (1707), first edition
Universal Arithmetick (1720), English translation by Joseph Raphson
Arithmetica Universalis (1722), second edition
1707 books
1720 books
Mathematics books
Books by Isaac Newton
18th-century Latin books
|
https://en.wikipedia.org/wiki/Deborah%20M.%20Gordon
|
Deborah M. Gordon (born December 30, 1955) is a biologist, appointed as a professor in the Department of Biology at Stanford University.
Major research
Gordon studies ant colony behavior and ecology, with a particular focus on red harvester ants. She focuses on the developing behavior of colonies, even as individual ants change functions within their own lifetimes.
Gordon's fieldwork includes a long-term study of ant colonies in Arizona. She is the author of numerous articles and papers as well as the book Ants at Work for the general public, and she was profiled in The New York Times Magazine in 1999.
In 2012, she found that the foraging behavior of red harvester ants matches the TCP congestion control algorithm.
Education
Gordon received a Ph.D. in zoology from Duke in 1983, an M.Sc. in Biology from Stanford in 1977 and a bachelor's degree from Oberlin College, where she majored in French.
She was a junior fellow of the Harvard Society of Fellows.
Awards and recognition
In 1993, Gordon was named a Stanford MacNamara Fellow. In 1995 Gordon received an award for teaching excellence from the Phi Beta Kappa Northern California Association. In 2001 Gordon was awarded a Guggenheim fellowship from the John Simon Guggenheim Memorial Foundation. In 2003, Gordon was invited to speak at a TED conference. She is also an adviser to the Microbes Mind Forum.
Bibliography
References
External links
The Gordon Lab
1955 births
Living people
Myrmecologists
American entomologists
Women entomologists
Harvard Fellows
Duke University alumni
Oberlin College alumni
Stanford University alumni
Stanford University Department of Biology faculty
Center for Advanced Study in the Behavioral Sciences fellows
|
https://en.wikipedia.org/wiki/List%20of%20backmasked%20messages
|
The following is an incomplete list of backmasked messages in music.
See also
Backmasking
Phonetic reversal
Hidden message
Subliminal message
References
External links
Backmask Online — clips and analysis of various alleged and actual backmasked messages
Jeff Milner's Backmasking Page — a Flash player which allows backwards playback of various alleged and actual messages with and without lyrics; the focus of the Wall Street Journal article
Audio engineering
Urban legends
Music-related lists
Perception
Popular music
|
https://en.wikipedia.org/wiki/Truth%20predicate
|
In formal theories of truth, a truth predicate is a fundamental concept based on the sentences of a formal language as interpreted logically. That is, it formalizes the concept that is normally expressed by saying that a sentence, statement or idea "is true."
Languages which allow a truth predicate
Based on "Chomsky Definition", a language is assumed to be a countable set of sentences, each of finite length, and constructed out of a countable set of symbols. A theory of syntax is assumed to introduce symbols, and rules to construct well-formed sentences. A language is called fully interpreted if meanings are attached to its sentences so that they all are either true or false.
A fully interpreted language L which does not have a truth predicate can be extended to a fully interpreted language Ľ that contains a truth predicate T, i.e., the sentence A ↔ T(⌈A⌉) is true for every sentence A of Ľ, where T(⌈A⌉) stands for "the sentence (denoted by) A is true". The main tools to prove this result are ordinary and transfinite induction, recursion methods, and ZF set theory.
See also
Pluralist theory of truth
References
Mathematical logic
Theories of truth
Predicate
|
https://en.wikipedia.org/wiki/Soil%20ecology
|
Soil ecology is the study of the interactions among soil organisms, and between biotic and abiotic aspects of the soil environment. It is particularly concerned with the cycling of nutrients, formation and stabilization of the pore structure, the spread and vitality of pathogens, and the biodiversity of this rich biological community.
Overview
Soil is made up of a multitude of physical, chemical, and biological entities, with many interactions occurring among them. Soil is a variable mixture of broken and weathered minerals and decaying organic matter. Together with the proper amounts of air and water, it supplies, in part, sustenance for plants as well as mechanical support.
The diversity and abundance of soil life exceeds that of any other ecosystem. Plant establishment, competitiveness, and growth is governed largely by the ecology below-ground, so understanding this system is an essential component of plant sciences and terrestrial ecology.
Features of the ecosystem
Moisture is a major limiting factor on land. Terrestrial organisms are constantly confronted with the problem of dehydration. Transpiration or evaporation of water from plant surfaces is an energy dissipating process unique to the terrestrial environment.
Temperature variations and extremes are more pronounced in the air than in the water medium.
On the other hand, the rapid circulation of air throughout the globe results in a ready mixing and remarkably constant content of oxygen and carbon dioxide.
Although soil offers solid support, air does not. Strong skeletons have evolved in both land plants and animals, and special means of locomotion have also evolved in the latter.
Land, unlike the ocean, is not continuous; there are important geographical barriers to free movement.
The nature of the substrate, although important in water, is especially vital in a terrestrial environment. Soil, not air, is the source of highly variable nutrients; it is a highly developed ecological subsystem.
|
https://en.wikipedia.org/wiki/VAXmate
|
VAXmate was an IBM PC/AT compatible personal computer introduced by Digital Equipment Corporation in September 1986. The replacement for the Rainbow 100, it was, in its standard form, the first commercial diskless personal computer.
OS and files
The operating system and files could be served from a VAX/VMS server running the company's VAX/VMS Services for MS-DOS software, which went through several name changes, finally becoming Pathworks. Alternatively, an optional expansion box containing either a 20 MB or 40 MB hard disk could be purchased, which allowed it to operate as a more conventional stand-alone PC.
Original specifications
The basic system contained an 8 MHz Intel 80286 CPU with 1 Mbyte of RAM, a 1.2 MB RX33 5¼-inch floppy disk drive, a 14-inch (diagonal) amber or green monochrome CRT and a thinwire Ethernet interface, all contained in the system unit. It was also provided with a parallel printer port and a serial communications port. A separate mouse and LK250 keyboard were used with the device.
As well as the expansion box, an 80287 numeric coprocessor could be ordered as an option, and the memory could be expanded by 2 MB with another option to 3 MB. In North America, an internal modem was also available.
DECstation
It was superseded by the DECstation 200 and 300 in January 1989.
References
Notes
External links
VAXmate at research.microsoft.com
IBM PC compatibles
DEC computers
Computer-related introductions in 1986
|
https://en.wikipedia.org/wiki/JXplorer
|
JXplorer is a free, open-source client for browsing Lightweight Directory Access Protocol (LDAP) servers and LDAP Data Interchange Format (LDIF) files. It is released under an Apache-equivalent license. JXplorer is written in Java and is platform independent, configurable, and has been translated into a number of languages. In total, as of 2018, JXplorer has been downloaded over 2 million times from SourceForge and is bundled with several Linux distributions.
Several common Linux distributions include JXplorer Software for LDAP server administration. The software also runs on BSD-variants, AIX, HP-UX, OS X, Solaris, Windows (2000, XP) and z/OS.
Key features are:
SSL, SASL and GSSAPI
DSML
LDIF
Localisation (currently available in German, French, Japanese, Traditional Chinese, Simplified Chinese, Hungarian);
Optional LDAP filter constructor GUI; extensible architecture
The primary authors and maintainers are Chris Betts and Trudi Ersvaer, originally both working in the CA (then Computer Associates) Directory (now CA Directory) software lab in Melbourne, Australia. Version 3.3, the '10th Anniversary Edition' was released in July 2012.
See also
List of LDAP software
References
External links
Directory services
|
https://en.wikipedia.org/wiki/Network%20Device%20Control%20Protocol
|
Network Device Control Protocol (NDCP) was designed by Laurent Grumbach, who at the time was an engineer with Harris Broadcast. Prior to that he had worked for Louth Automation, which was acquired by Harris. NDCP was designed to be a network-based protocol instead of the traditional serial connection protocols to broadcast devices. NDCP was an XML-compliant protocol loosely based on the concepts of SOAP. The intent was that vendors would standardize their broadcast devices on a single protocol instead of each vendor offering proprietary protocols for their devices. The use of a network-based protocol would also allow the devices to be remote from the controlling application and not limited by the connection length of an RS-422 serial line.
External links
Harris Corporation Launches New, Network-Based Automation Protocol for Controlling Broadcast Audio and Video Devices
RDD 38:2016 - SMPTE Registered Disclosure Docs - Networked Device Control Protocol — Message Data Structure and Method of Communication
Network protocols
|
https://en.wikipedia.org/wiki/Oil%27s%20Well
|
Oil's Well (a pun on "all's well") is a video game published by Sierra On-Line in 1983. The game was written for the Atari 8-bit family by Thomas J. Mitchell. Oil's Well is similar to the 1982 arcade game Anteater, re-themed to be about drilling for oil instead of a hungry insectivore. Ports were released in 1983 for the Apple II and Commodore 64, in 1984 for ColecoVision and the IBM PC (as a self-booting disk), then in 1985 for MSX and the Sharp X1. A version with improved visuals and without Mitchell's involvement was released for MS-DOS in 1990.
Gameplay
The player collects oil for a drilling operation by moving the drill head through a maze using four directional control buttons. The drill bit is trailed by a pipeline connecting it to the base. Subterranean creatures populate the maze; the head can destroy the creatures, but the pipeline is vulnerable. As the player traverses the maze, the pipe grows longer, but pressing a button quickly retracts the head. There are 8 levels to play through.
Reception
Dave Stone reviewed the game for Computer Gaming World, and stated that "The action's well-paced, the difficulty progressive. While getting to a higher level is somewhat dependent on getting the right breaks — good eye-hand coordination, timing, and strategy are essential."
Ahoy! stated that while the Commodore version's graphics and sounds were only "serviceable; gameplay is, in my experience, unique ... Recommended". InfoWorld called the IBM PCjr version "a clever, basic game".
The U.S. gaming magazine Computer Games awarded Oil's Well the 1984 Golden Floppy Award for Excellence, in the category of "Maze Game of the Year."
Legacy
Although Oil's Well was itself a clone of Anteater, several later clones borrowed its theme: Pipeline Run for the Commodore 64 in 1990 and Oilmania for the Atari ST in 1991.
References
External links
Oil's Well at Atari Mania
Commodore 64 video at archive.org
Review in GAMES magazine
1983 video games
Apple II games
At
|
https://en.wikipedia.org/wiki/Gaussian%20grid
|
A Gaussian grid is used in the earth sciences as a gridded horizontal coordinate system for scientific modeling on a sphere (i.e., the approximate shape of the Earth). The grid is rectangular, with a set number of orthogonal coordinates (usually latitude and longitude).
At a given latitude (or parallel), the gridpoints are equally spaced. Along a longitude (or meridian), on the other hand, the gridpoints are unequally spaced. The spacing between grid points is defined by Gaussian quadrature. By contrast, in the "normal" geographic latitude-longitude grid, gridpoints are equally spaced along both latitudes and longitudes. Gaussian grids also have no grid points at the poles.
In a regular Gaussian grid, the number of gridpoints along the longitudes is constant, usually double the number along the latitudes. In a reduced (or thinned) Gaussian grid, the number of gridpoints in the rows decreases towards the poles, which keeps the gridpoint separation approximately constant across the sphere.
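As an illustrative sketch (not from the article): the Gaussian latitudes of an N-latitude grid are the arcsines of the Gauss–Legendre quadrature nodes of degree N, which NumPy provides directly; they are unequally spaced along a meridian and exclude the poles, while the longitudes are equally spaced.

```python
import numpy as np

def gaussian_latitudes(n_lat):
    """Return the n_lat Gaussian latitudes in degrees, south to north."""
    nodes, _weights = np.polynomial.legendre.leggauss(n_lat)
    return np.degrees(np.arcsin(nodes))

lats = gaussian_latitudes(64)           # e.g. a regular 128 x 64 Gaussian grid
lons = np.arange(128) * (360.0 / 128)   # longitudes are equally spaced
print(lats[:3], lats[-3:])              # no gridpoints at +/-90 degrees
```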
Examples of Gaussian grids
CCCma global climate models of climate change
[96×48]
[128×64]
European Centre for Medium-Range Weather Forecasts
192×96
320×160
512×256
640×320
800×400
1024×512
1600×800
2048×1024
2560×1280
Features for ERA-40 grids
See also
Global climate model
Spectral method
Spherical harmonics
References
NCAR Command Language documentation
W.M. Washington and C.L. Parkinson, 2005. An Introduction to Three-Dimensional Climate Modeling. Sausalito, CA, University Science Books. 368 pp.
Hortal, Mariano, and A. J. Simmons, 1991. Use of reduced Gaussian grids in spectral models. Monthly Weather Review 119.4 : 1057-1074.
Geodesy
Geographic coordinate systems
|
https://en.wikipedia.org/wiki/Sound%20Blaster%20Live%21
|
Sound Blaster Live! is a PCI add-on sound card from Creative Technology Limited for PCs. Moving from ISA to PCI allowed the card to dispense with onboard memory, storing digital samples in the computer's main memory and then accessing them in real time over the bus. This allowed for a much wider selection of, and longer-playing, samples. It also included higher quality sound output at all levels, quadraphonic output, and a new MIDI synthesizer with 64 sampled voices. The Live! was introduced in August 1998 and variations on the design remained Creative's primary sound card line into the early 2000s.
Overview
Sound Blaster Live! (August 1998) saw the introduction of the EMU10K1 audio processor. Manufactured in a 0.35 µm 3-metal-layer CMOS process, it is a 2.44 million transistor ASIC rated at 1000 MIPS. The EMU10K1 featured hardware acceleration for DirectSound and EAX 1.0 and 2.0 (environmental audio extensions), along with a high-quality 64-voice MIDI sample-based synthesizer and an integrated FX8010 DSP chip for real-time digital audio effects.
A major design change from its predecessor (the EMU8000) was that the EMU10K1 used system memory, accessed over the PCI bus, for the wavetable samples, rather than using expensive on-board memory. This was possible at this point because systems were being equipped with far more RAM than previously, and PCI offered far faster and more efficient data transfer than the old ISA bus.
The integrated FX8010 was a 32-bit programmable processor with 1 kilobyte of instruction memory. It provided real-time postprocessing effects (such as reverb, flanging, or chorus). This capability let users select a pre-defined listening environment from a control-panel application (concert hall, theater, headphones, etc.) It also provided hardware-acceleration for EAX, Creative's environmental audio technology. The Effect algorithms were created by a development system that integrated into Microsoft Developer Studio. The effects were written i
|
https://en.wikipedia.org/wiki/Sound%20Blaster%20Audigy
|
Sound Blaster Audigy is a product line of sound cards from Creative Technology. The flagship model of the Audigy family used the EMU10K2 audio DSP, an improved version of the SB-Live's EMU10K1, while the value/SE editions were built with a less-expensive audio controller.
The Audigy family is available for PCs with a PCI or PCI Express slot, or a USB port.
First generation
The Audigy cards equipped with EMU10K2 (CA0100 chip) could process up to 4 EAX environments simultaneously with its on-chip DSP and native EAX 3.0 ADVANCED HD support, and supported from stereo up to 5.1-channel output. The audio processor could mix up to 64 DirectSound3D sound channels in hardware, up from Live!'s 32 channels.
Creative Labs advertised the Audigy as a 24-bit sound card, a controversial marketing claim for a product that did not support end-to-end playback of 24-bit/96 kHz audio streams. The Audigy and Live shared a similar architectural limitation: the audio transport (DMA engine) was fixed to 16-bit sample precision at 48 kHz. So despite its 24-bit/96 kHz high-resolution DACs, the Audigy's DSP could only process 16-bit/48 kHz audio sources. This fact was not immediately obvious in Creative's literature, and was difficult to ascertain even upon examination of the Audigy's spec sheets. (A resulting class-action settlement with Creative later awarded US customers a 35% discount on Creative products, up to a maximum discount of $65.)
Aside from the lack of an end-to-end path for 24-bit audio, Dolby Digital (AC-3) and DTS passthrough (to the S/PDIF digital out) had issues that have never been resolved.
The Audigy card supports the professional ASIO 1 driver interface natively, making it possible to obtain low latencies from Virtual Studio Technology (VST) instruments. Some versions of the Audigy featured an external breakout box with connectors for S/PDIF, MIDI, IEEE 1394, analog and optical signals. The ASIO and breakout box features were an attempt to tap into the "home studio" ma
|
https://en.wikipedia.org/wiki/Sound%20Blaster%2016
|
The Sound Blaster 16 is a series of sound cards by Creative Technology, first released in June 1992 for PCs with an ISA or PCI slot. It was the successor to the Sound Blaster Pro series of sound cards and introduced CD-quality digital audio to the Sound Blaster line. For optional wavetable synthesis, the Sound Blaster 16 also added an expansion-header for add-on MIDI-daughterboards, called a Wave Blaster connector, and a game port for optional connection with external MIDI sound modules.
The Sound Blaster 16 retained the Pro's OPL-3 support for FM synthesis, and was mostly compatible with software written for the older Sound Blaster and Sound Blaster Pro sound cards. The SB16's MPU-401 emulation was limited to UART (dumb) mode only, but it was sufficient for most MIDI software. When a daughterboard, such as the Wave Blaster, Roland SCB-7, Roland SCB-55, Yamaha DB50XG, Yamaha DB60XG was installed on the Sound Blaster, the Wave Blaster behaved like a standard MIDI device, accessible to any MPU-401 compatible MIDI software.
The Sound Blaster 16 was hugely popular. Creative's audio revenue grew from US$40 million per year to nearly US$1 billion following the launch of the Sound Blaster 16 and related products. Rich Sorkin was General Manager of the global business during this time, responsible for product planning, product management, marketing and OEM sales. Due to its popularity and wide support, the Sound Blaster 16 is emulated in a variety of virtualization and/or emulation programs, such as DOSBox, QEMU, Bochs, VMware and VirtualBox, with varying degrees of faithfulness and compatibility.
Features
The ASP or CSP chip added some new features to the Sound Blaster line, such as hardware-assisted speech synthesis (through the TextAssist software), QSound audio spatialization technology for digital (PCM) wave playback, and PCM audio compression and decompression. Software needed to be written to leverage its unique abilities, yet the offered capabilities lacked compe
|
https://en.wikipedia.org/wiki/Sound%20Blaster%20AWE64
|
The Sound Blaster Advanced Wave Effects 64 (AWE64) is an ISA sound card from Creative Technology. It is an add-on board for PCs. The card was launched in November 1996.
Overview
The Sound Blaster AWE64 is significantly smaller than its predecessor, the Sound Blaster AWE32. It offers a similar feature set, but also has a few notable improvements.
AWE64 has support for greater polyphony than the AWE32. Unfortunately, these additional voices are achieved via software-based processing on the system CPU. The technology, called WaveGuide, synthesizes the instrument sounds rather than using stored instrument patches like the hardware voices. This not only demands more processing power from the host system, but also is not of equal quality to available SoundFonts. The inability to adjust synthesis parameters, unlike with the hardware portion of the AWE64, also limited the WaveGuide function's usefulness.
Another improvement comes from better on-board circuitry that increases the signal-to-noise ratio and overall signal quality compared to the frequently quite noisy AWE32 and Sound Blaster 16 boards. This improvement is most notable with the AWE64 Gold, because of its superior gold plated RCA connector outputs. The improvement also comes from increased integration of components on AWE64 compared to its predecessors. Increased integration means the board can be simpler and trace routing to components is reduced, decreasing the amount of noise-inducing signal travel. This also made it possible to reduce the size of AWE64's board noticeably, compared to AWE32.
The Sound Blaster AWE32 boards allowed sample RAM expansion through the installation of 30-pin fast-page DRAM SIMMs. These SIMMs were commodity items during the time of AWE32 and AWE64, because they were used for many other applications, including plain system RAM. As such, Creative had no control over their sale. So, with the AWE64, Creative moved to proprietary RAM expansion modules which only they manufactured and sold. These memor
|
https://en.wikipedia.org/wiki/Sound%20Blaster%20AWE32
|
The Sound Blaster AWE32 is an ISA sound card from Creative Technology. It is an expansion board for PCs and is part of the Sound Blaster family of products. The Sound Blaster AWE32, introduced in March 1994, was a near full-length ISA sound card, measuring 14 inches (356 mm) in length, due to the number of features included.
Sound Blaster AWE32
Backward compatibility
The AWE32's digital audio section was basically an entire Sound Blaster 16, and as such, was compatible with Creative's earlier Sound Blaster 2.0 (minus the C/MS audio chips). Its specifications included 16-bit 44.1 kHz AD/DA conversion with real-time on-board compression / decompression and the Yamaha OPL3 FM synthesizer chip. However, compatibility was not always perfect and there were situations where various bugs could arise in games. Many of the Sound Blaster AWE32 cards had codecs that supported bass, treble, and gain adjustments through Creative's included mixer software. There were many variants and revisions of the AWE32, however, with numerous variations in audio chipset, amplifier selection and design, and supported features. For example, the Sound Blaster AWE32 boards that utilize the VIBRA chip do not have bass and treble adjustments.
MIDI capability
The Sound Blaster AWE32 included two distinct audio sections; one being the Creative digital audio section with their audio codec and optional CSP/ASP chip socket, and the second being the E-mu MIDI synthesizer section. The synthesizer section consisted of the EMU8000 synthesizer and effects processor chip, 1 MB EMU8011 sample ROM, and a variable amount of RAM (none on the SB32, 512 KB on the AWE32; RAM was expandable to 28 MB on both cards). These chips comprised a powerful and flexible sample-based synthesis system, based on E-mu's high-end sampler systems such as the E-mu Emulator III and E-mu Proteus. The effects processor generated various effects (i.e. reverb and chorus) and environments on MIDI output, similar to the later EAX standar
|
https://en.wikipedia.org/wiki/Mass%20flow%20controller
|
A mass flow controller (MFC) is a device used to measure and control the flow of liquids and gases. A mass flow controller is designed and calibrated to control a specific type of liquid or gas at a particular range of flow rates. The MFC can be given a setpoint from 0 to 100% of its full scale range but is typically operated in the 10 to 90% of full scale where the best accuracy is achieved. The device will then control the rate of flow to the given setpoint. MFCs can be either analog or digital. A digital flow controller is usually able to control more than one type of fluid whereas an analog controller is limited to the fluid for which it was calibrated.
All mass flow controllers have an inlet port, an outlet port, a mass flow sensor and a proportional control valve. The MFC is fitted with a closed loop control system which is given an input signal by the operator (or an external circuit/computer) that it compares to the value from the mass flow sensor and adjusts the proportional valve accordingly to achieve the required flow. The flow rate is specified as a percentage of its calibrated full scale flow and is supplied to the MFC as a voltage signal.
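A toy sketch of the closed-loop behaviour just described (not a real device driver; the gain, the simulated plant and all names are invented): the controller repeatedly compares the sensed flow to the setpoint and nudges the proportional valve until the error vanishes.

```python
def run_mfc_loop(setpoint_pct, read_flow_pct, set_valve_pct,
                 gain=0.5, steps=100):
    """Simple proportional control loop for a mass flow controller.
    setpoint_pct: desired flow as a percentage of calibrated full scale.
    read_flow_pct / set_valve_pct: callables talking to the sensor and valve."""
    valve = 0.0
    for _ in range(steps):
        error = setpoint_pct - read_flow_pct()
        valve = min(100.0, max(0.0, valve + gain * error))
        set_valve_pct(valve)
    return valve

# Tiny simulated "plant": the measured flow lags the valve opening.
state = {"flow": 0.0}
read = lambda: state["flow"]
def actuate(valve_pct):
    state["flow"] += 0.3 * (valve_pct - state["flow"])

print(run_mfc_loop(40.0, read, actuate))  # settles near the 40% setpoint
```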
Mass flow controllers require the supply gas or liquid to be within a specific pressure range. Low pressure will starve the MFC of fluid and cause it to fail to achieve its setpoint. High pressure may cause erratic flow rates. There are many different technologies which can help to measure the flow of the fluids and eventually help in controlling flow. Those technologies define the types of mass flow controllers, and they include differential pressure (ΔP), differential temperature (ΔT), Coriolis, ultrasonic, electromagnetic, turbine, etc.
See also
Control valve
Coriolis
Flow control valve
Flow limiter
Flow measurement
Mass flow meter
Thermal mass flow meter
Mass flow rate
References
External links
How a Mass Flow Controller works video
Thermal Mass Flow Meter / Controller (Principle of operation)
|
https://en.wikipedia.org/wiki/Skypix
|
Skypix is the name of a markup language used to encode graphics content such as changeable fonts, mouse-controlled actions, animations and sound for bulletin board systems. The system was written by Michael Cox on the Amiga in 1987, and first hosted on the Atredes BBS system, which was later renamed Skyline. Skypix allowed BBS sysops to create interactive BBS systems with graphics, fonts, mouse-controlled actions, animations and sound.
Skypix used an extension of the ANSI graphics system that added new instructions. The graphics were normally created using Skypaint, which could generate Skypix files directly from a familiar-looking paint program. The files could be placed in the system and any Skypix-enabled terminal program would notice the encoding and recreate the graphics.
The underlying BBS software could be programmed in the ARexx language (a variant of REXX for the Amiga). This resulted in an enthusiastic group of Skypix hobbyists.
Skypix was available only on the Amiga computer, hosted on the Skyline BBS and accessed using the Skyterm terminal emulator. Skypix support was later implemented in JR-Comm by Johnathan Radigan. At one time over a thousand Skyline systems were operating the world over. Amiga inventor Jay Miner himself ran a Skyline system for a time.
With the terminal program JR-Comm, other BBS software programs started to support Skypix. C-Net Amiga Pro BBS Software was one of them. Today there are several of these boards still alive using Telnet. One of these boards still offers Skypix graphics when using JR-Comm.
References
External links
"BBS: The Documentary, An Overview of BBS Programs", expanded view. This document is part of the making of the DVD.
(RIPscrip Graphic Protocol Specs, briefly citing Skypix as a primitive markup language)
MC MicroComputer magazine issue no. 97, June 1990, Italy; PDF collection of the magazine MC-Microcomputer (in Italian), briefly citing Skypix as an emerging, promising language for BBSes.
"Comp
|
https://en.wikipedia.org/wiki/Alternating%20permutation
|
In combinatorial mathematics, an alternating permutation (or zigzag permutation) of the set {1, 2, 3, ..., n} is a permutation (arrangement) of those numbers so that each entry is alternately greater or less than the preceding entry. For example, the five alternating permutations of {1, 2, 3, 4} are:
1, 3, 2, 4 because 1 < 3 > 2 < 4,
1, 4, 2, 3 because 1 < 4 > 2 < 3,
2, 3, 1, 4 because 2 < 3 > 1 < 4,
2, 4, 1, 3 because 2 < 4 > 1 < 3, and
3, 4, 1, 2 because 3 < 4 > 1 < 2.
This type of permutation was first studied by Désiré André in the 19th century.
Different authors use the term alternating permutation slightly differently: some require that the second entry in an alternating permutation should be larger than the first (as in the examples above), others require that the alternation should be reversed (so that the second entry is smaller than the first, then the third larger than the second, and so on), while others call both types by the name alternating permutation.
The determination of the number An of alternating permutations of the set {1, ..., n} is called André's problem. The numbers An are known as Euler numbers, zigzag numbers, or up/down numbers. When n is even the number An is known as a secant number, while if n is odd it is known as a tangent number. These latter names come from the study of the generating function for the sequence.
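As a sketch (not from the article), the zigzag numbers A_n can be computed with the boustrophedon (Seidel–Entringer–Arnold) recurrence and cross-checked by brute-force enumeration for small n; A_4 = 5 matches the five permutations listed above.

```python
from itertools import permutations

def zigzag_numbers(n):
    """Return [A_0, ..., A_n], the Euler zigzag numbers, via the
    boustrophedon (Seidel-Entringer-Arnold) triangle."""
    result = [1]          # A_0 = 1
    row = [1]
    for _ in range(n):
        prev = row[::-1]
        row = [0]
        for x in prev:    # accumulate partial sums in alternating direction
            row.append(row[-1] + x)
        result.append(row[-1])
    return result

def is_up_down(p):
    """True if p alternates as p1 < p2 > p3 < p4 ..."""
    return all((p[i] < p[i + 1]) if i % 2 == 0 else (p[i] > p[i + 1])
               for i in range(len(p) - 1))

print(zigzag_numbers(7))  # [1, 1, 1, 2, 5, 16, 61, 272]
assert sum(is_up_down(p) for p in permutations(range(1, 5))) == 5
```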
Definitions
A permutation c1, ..., cn is said to be alternating if its entries alternately rise and descend. Thus, each entry other than the first and the last should be either larger or smaller than both of its neighbors. Some authors use the term alternating to refer only to the "up-down" permutations for which c1 < c2 > c3 < c4 > ⋯, calling the "down-up" permutations that satisfy c1 > c2 < c3 > c4 < ⋯ by the name reverse alternating. Other authors reverse this convention, or use the word "alternating" to refer to both up-down and down-up permutations.
There is a simple one-to-one correspondence
|
https://en.wikipedia.org/wiki/Ball%20screw
|
A ball screw (or ballscrew) is a mechanical linear actuator that translates rotational motion to linear motion with little friction. A threaded shaft provides a helical raceway for ball bearings which act as a precision screw. As well as being able to apply or withstand high thrust loads, they can do so with minimum internal friction. They are made to close tolerances and are therefore suitable for use in situations in which high precision is necessary. The ball assembly acts as the nut while the threaded shaft is the screw.
In contrast to conventional leadscrews, ball screws tend to be rather bulky, due to the need to have a mechanism to recirculate the balls.
History
The ball screw was invented independently by H.M. Stevenson and D. Glenn who were issued in 1898 patents 601,451 and 610,044 respectively.
Early precise screwshafts were produced by starting with a low-precision screwshaft, and then lapping the shaft with several spring-loaded nut laps. By rearranging and inverting the nut laps, the lengthwise errors of the nuts and shaft were averaged. Then, the very repeatable shaft's pitch is measured against a distance standard. A similar process is sometimes used today to produce reference standard screw shafts, or master manufacturing screw shafts.
Design
Low friction in ball screws yields high mechanical efficiency compared to alternatives. A typical ball screw may be 90 percent efficient, versus 20 to 25 percent efficiency of an Acme lead screw of equal size. Lack of sliding friction between the nut and screw lends itself to extended lifespan of the screw assembly (especially in no-backlash systems), reducing downtime for maintenance and parts replacement, while also decreasing demand for lubrication. This, combined with their overall performance benefits and reduced power requirements, may offset the initial costs of using ball screws.
Ball screws may also reduce or eliminate backlash common in lead screw and nut combinations. The balls may be p
|
https://en.wikipedia.org/wiki/Network%20load%20balancing
|
Network load balancing is the ability to balance traffic across two or more WAN links without using complex routing protocols like BGP.
This capability balances network sessions like Web, email, etc. over multiple connections in order to spread out the amount of bandwidth used by each LAN user, thus increasing the total amount of bandwidth available. For example, a user has a single WAN connection to the Internet operating at 1.5 Mbit/s. They wish to add a second broadband (cable, DSL, wireless, etc.) connection operating at 2.5 Mbit/s. This would provide them with a total of 4 Mbit/s of bandwidth when balancing sessions.
Session balancing does just that: it balances sessions across each WAN link. When Web browsers connect to the Internet, they commonly open multiple sessions, one for the text, another for an image, another for some other image, etc. These sessions can be balanced across the available connections. An FTP application only uses a single session so it is not balanced; however, if a secondary FTP connection is made, then it may be balanced so that the traffic is distributed across two of the various connections and thus provides an overall increase in throughput.
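A minimal sketch of session balancing (invented names, not any particular product's algorithm): each new session is assigned to whichever WAN link currently has the most spare capacity, so many small sessions spread across the links and the usable bandwidth approaches their sum.

```python
class WanLink:
    def __init__(self, name, capacity_mbps):
        self.name = name
        self.capacity = capacity_mbps
        self.allocated = 0.0   # bandwidth already claimed by sessions

    @property
    def spare(self):
        return self.capacity - self.allocated

def assign_session(links, est_mbps):
    """Place a new session on the link with the most spare capacity."""
    link = max(links, key=lambda l: l.spare)
    link.allocated += est_mbps
    return link

links = [WanLink("t1", 1.5), WanLink("cable", 2.5)]
for _ in range(8):                       # eight browser sessions of ~0.5 Mbit/s
    print(assign_session(links, 0.5).name)
```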
Additionally, network load balancing is commonly used to provide network redundancy so that in the event of a WAN link outage, access to network resources is still available via the secondary link(s). Redundancy is a key requirement for business continuity plans and generally used in conjunction with critical applications like VPNs and VoIP.
Finally, most network load balancing systems also incorporate the ability to balance both outbound and inbound traffic. Inbound load balancing is generally performed via dynamic DNS which can either be built into the system, or provided by an external service or system. Having the dynamic DNS service within the system is generally thought to be better from a cost savings and overall control point of view.
Microsoft NLB
Microsoft has also purch
|
https://en.wikipedia.org/wiki/Microinjection
|
Microinjection is the use of a glass micropipette to inject a liquid substance at a microscopic or borderline macroscopic level. The target is often a living cell but may also include intercellular space. Microinjection is a simple mechanical process usually involving an inverted microscope with a magnification power of around 200x (though sometimes it is performed using a dissecting stereo microscope at 40–50x or a traditional compound upright microscope at similar power to an inverted model).
For processes such as cellular or pronuclear injection the target cell is positioned under the microscope and two micromanipulators—one holding the pipette and one holding a microcapillary needle usually between 0.5 and 5 µm in diameter (larger if injecting stem cells into an embryo)—are used to penetrate the cell membrane and/or the nuclear envelope. In this way the process can be used to introduce a vector into a single cell. Microinjection can also be used in the cloning of organisms, in the study of cell biology and viruses, and for treating male subfertility through intracytoplasmic sperm injection (ICSI).
History
The use of microinjection as a biological procedure began in the early twentieth century, although even through the 1970s it was not commonly used. By the 1990s, its use had escalated significantly and it is now considered a common laboratory technique, along with vesicle fusion, electroporation, chemical transfection, and viral transduction, for introducing a small amount of a substance into a small target.
Basic types
There are two basic types of microinjection systems. The first is called a constant flow system and the second is called a pulsed flow system. In a constant flow system, which is relatively simple and inexpensive though clumsy and outdated, a constant flow of a sample is delivered from a micropipette and the amount of the sample which is injected is determined by how long the needle remains in the cell. This system typically requires
|
https://en.wikipedia.org/wiki/Tree%20of%20Life%20Web%20Project
|
The Tree of Life Web Project is an Internet project providing information about the diversity and phylogeny of life on Earth.
This collaborative, peer-reviewed project began in 1995 and is written by biologists from around the world. The site has not been updated since 2011; however, the pages are still accessible.
The pages are linked hierarchically, in the form of the branching evolutionary tree of life, organized cladistically. Each page contains information about one particular group of organisms and is organized according to a branched tree-like form, thus showing hypothetical relationships between different groups of organisms.
In 2009 the project ran into funding problems at the University of Arizona. Pages and Treehouses that were submitted took considerably longer to be approved, as they were reviewed by a small group of volunteers, and apparently around 2011 all activity ended.
History
The idea of this project started in the late 1980s. David Maddison was working on a computer program MacClade during his PhD research. This is an application that gives insight into species' phylogenetic trees. He wanted to extend this program with a feature that allowed the user to browse through phylogenetic trees and zoom into other lower or higher taxa.
This kind of browsing, however, was not well suited to a stand-alone application, so the researchers came up with the idea of moving the application onto the World Wide Web; this was realized in 1995. From 1996 to 2011, over 300 biologists from around the globe added taxon web pages to the phylogeny browser.
Quality
To ensure the quality of the ToL project, the board made use of peer review. Pages submitted for review were sent to two or three researchers specializing in the particular subject. It is possible to visit the personal page of each author; if this is not accessible, the author's institution is always given in a footnote. The entire tree structure, which contained 35,960 species at the website's demise, is available
|
https://en.wikipedia.org/wiki/Curiously%20recurring%20template%20pattern
|
The curiously recurring template pattern (CRTP) is an idiom, originally in C++, in which a class X derives from a class template instantiation using X itself as a template argument. More generally it is known as F-bound polymorphism, and it is a form of F-bounded quantification.
History
The technique was formalized in 1989 as "F-bounded quantification." The name "CRTP" was independently coined by Jim Coplien in 1995, who had observed it in some of the earliest C++ template code
as well as in code examples that Timothy Budd created in his multiparadigm language Leda. It is sometimes called "Upside-Down Inheritance" due to the way it allows class hierarchies to be extended by substituting different base classes.
The Microsoft Implementation of CRTP in Active Template Library (ATL) was independently discovered, also in 1995, by Jan Falkin, who accidentally derived a base class from a derived class. Christian Beaumont first saw Jan's code and initially thought it could not possibly compile in the Microsoft compiler available at the time. Following the revelation that it did indeed work, Christian based the entire ATL and Windows Template Library (WTL) design on this mistake.
General form
// The Curiously Recurring Template Pattern (CRTP)
template <class T>
class Base
{
// methods within Base can use template to access members of Derived
};
class Derived : public Base<Derived>
{
// ...
};
Some use cases for this pattern are static polymorphism and other metaprogramming techniques such as those described by Andrei Alexandrescu in Modern C++ Design.
It also figures prominently in the C++ implementation of the Data, Context, and Interaction paradigm.
In addition, CRTP is used by the C++ standard library to implement the std::enable_shared_from_this functionality.
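As a minimal illustrative sketch of the static polymorphism use case described in the next section (the Shape/Square names are invented for this example and come from no particular library), the base template calls a member of the derived class through a static_cast, so the call is resolved at compile time without a virtual table:

// Static polymorphism via CRTP: Base calls into Derived without virtual dispatch.
#include <iostream>

template <class Derived>
class Shape
{
public:
    double area() const
    {
        // Resolved at compile time; no virtual table is involved.
        return static_cast<const Derived*>(this)->areaImpl();
    }
};

class Square : public Shape<Square>
{
public:
    explicit Square(double side) : side_(side) {}
    double areaImpl() const { return side_ * side_; }

private:
    double side_;
};

int main()
{
    Square s(3.0);
    std::cout << s.area() << "\n"; // prints 9
    return 0;
}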
Static polymorphism
Typically, the base class template will take advantage of the fact that member function bodies (definitions) are not instantiated until long after their declarations, and will u
|
https://en.wikipedia.org/wiki/Standard%20time%20%28manufacturing%29
|
In industrial engineering, the standard time is the time required by an average skilled operator, working at
a normal pace, to perform a specified task using a prescribed method. It includes appropriate allowances for the person to recover from fatigue and, where necessary, an additional allowance to cover contingent elements which may occur but have not been observed.
Standard time = normal time + allowance
Where;
Normal time = average time × rating factor (take rating factor between 1.1 and 1.2)
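A minimal worked example of these formulas (the observed times, the rating factor of 1.1, and the 15 % allowance below are assumed illustrative values, not prescribed ones):

// Compute standard time = normal time + allowance from observed cycle times.
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    // Observed cycle times for one task, in minutes (assumed example data).
    std::vector<double> observed = {4.1, 3.9, 4.0, 4.2, 3.8};

    const double average = std::accumulate(observed.begin(), observed.end(), 0.0)
                           / observed.size();

    const double ratingFactor = 1.1;              // between 1.1 and 1.2, per the text
    const double normalTime = average * ratingFactor;

    const double allowance = 0.15 * normalTime;   // e.g. a 15 % allowance (assumed)
    const double standardTime = normalTime + allowance;

    std::cout << "Average observed time: " << average << " min\n";
    std::cout << "Normal time:           " << normalTime << " min\n";
    std::cout << "Standard time:         " << standardTime << " min\n";
    return 0;
}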
Usage of the standard time
Staffing (or workforce planning): the number of workers required cannot accurately be determined unless the time required to process the existing work is known.
Line balancing (or production leveling): the correct number of workstations for optimum work flow depends on the processing time, or standard, at each workstation.
Materials requirement planning (MRP): MRP systems cannot operate properly without accurate work standards.
System simulation: simulation models cannot accurately simulate operation unless times for all operations are known.
Wage payment: comparing expected performance with actual performance requires the use of work standards.
Cost accounting: work standards are necessary for determining not only the labor component of costs, but also the correct allocation of production costs to specific products.
Employee evaluation: in order to assess whether individual employees are performing as well as they should, a performance standard is necessary against which to measure the level of performance.
Techniques to establish a standard time
The standard time can be determined using the following techniques:
Time study
Predetermined motion time system aka PMTS or PTS
Standard data system
Work sampling
Method of calculation
The Standard Time is the product of three factors:
Observed time: The time measured to complete the task.
Performance rating factor: The pace the person is
|
https://en.wikipedia.org/wiki/Capstone%20%28cryptography%29
|
Capstone is a United States government long-term project to develop cryptography standards for public and government use. Capstone was authorized by the Computer Security Act of 1987, driven by the National Institute of Standards and Technology (NIST) and the National Security Agency (NSA); the project began in 1993.
Project
The initiative involved four standard algorithms: a data encryption algorithm called Skipjack (together with the Clipper chip that implemented it), a digital signature algorithm (the Digital Signature Algorithm, DSA), a hash function (SHA-1), and a key exchange protocol. Capstone's first implementation was in the Fortezza PCMCIA card. All Capstone components were designed to provide 80-bit security.
The initiative encountered massive resistance from the cryptographic community, and eventually the US government abandoned the effort. The main reasons for this resistance were concerns about Skipjack's design, which was classified, and the use of key escrow in the Clipper chip.
References
External links
EFF archives on Capstone
National Security Agency encryption devices
History of cryptography
|
https://en.wikipedia.org/wiki/RSVP-TE
|
Resource Reservation Protocol - Traffic Engineering (RSVP-TE) is an extension of the Resource Reservation Protocol (RSVP) for traffic engineering. It supports the reservation of resources across an IP network. Applications running on IP end systems can use RSVP to indicate to other nodes the nature (bandwidth, jitter, maximum burst, and so forth) of the packet streams they want to receive. RSVP runs on both IPv4 and IPv6.
RSVP-TE generally allows the establishment of Multiprotocol Label Switching (MPLS) label-switched paths (LSPs), taking into consideration network constraint parameters such as available bandwidth and explicit hops.
History
The Internet Engineering Task Force (IETF) MPLS working group deprecated the Constraint-based Routing Label Distribution Protocol (CR-LDP) and decided to focus purely on RSVP-TE. The operational overhead of RSVP-TE will generally be higher than that of the more widely deployed Label Distribution Protocol (LDP). This is a classic trade-off between complexity and optimality in the use of technologies in telecommunications networks.
Standards
- RSVP-TE: Extensions to RSVP for LSP Tunnels
- The Multiprotocol Label Switching (MPLS) Working Group decision on MPLS signaling protocols
- Fast Reroute Extensions to RSVP-TE for LSP Tunnels
- Exclude Routes - Extension to Resource ReserVation Protocol-Traffic Engineering (RSVP-TE)
- Crankback Signaling Extensions for MPLS and GMPLS RSVP-TE
- Inter-Domain MPLS and GMPLS Traffic Engineering—Resource Reservation Protocol-Traffic Engineering (RSVP-TE) Extensions
- Encoding of Attributes for Multiprotocol Label Switching (MPLS) Label Switched Path (LSP) Establishment Using Resource ReserVation Protocol-Traffic Engineering (RSVP-TE)
- Node Behavior upon Originating and Receiving Resource Reservation Protocol (RSVP) Path Error Messages
- Generalized MPLS (GMPLS) Protocol Extensions for Multi-Layer and Multi-Region Networks (MLN/MRN)
References
Further reading
Internet architectu
|
https://en.wikipedia.org/wiki/Quasi-peak%20detector
|
A quasi-peak detector is a type of electronic detector or rectifier. Quasi-peak detectors for specific purposes have usually been standardized with mathematically precisely defined dynamic characteristics of attack time, integration time, and decay time or fall-back time.
Quasi-peak detectors play an important role in electromagnetic compatibility (EMC) testing of electronic equipment, where allowed levels of electromagnetic interference (EMI), also called radio frequency interference (RFI), are given with reference to measurement by a specified quasi-peak detector. This was originally done because the quasi-peak detector was believed to better indicate the subjective annoyance level experienced by a listener hearing impulsive interference to an AM radio station. Over time standards incorporating quasi-peak detectors as the measurement device were extended to frequencies up to 1 GHz, although there may not be any justification beyond previous practice for using the quasi-peak detector to measure interference to signals other than AM radio. The quasi-peak detector parameters to be used for EMC testing vary with frequency. Both CISPR and the U.S. Federal Communications Commission (FCC) limit EMI at frequencies above 1 GHz with reference to an average-power detector, rather than quasi-peak detector.
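As a rough numeric sketch of the fast-attack/slow-decay weighting such detectors apply (illustrative only; the sample rate, time constants, and pulse train below are arbitrary assumptions rather than values from CISPR or FCC standards), a quasi-peak reading rises quickly towards the input envelope and falls back slowly, so a higher pulse repetition rate yields a higher reading:

// Toy quasi-peak weighting applied to the envelope of an impulse train.
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
    const double fs = 10000.0;        // sample rate, Hz (assumed)
    const double tauAttack = 0.001;   // fast charge time constant, s (assumed)
    const double tauDecay  = 0.160;   // slow discharge time constant, s (assumed)
    const double aAttack = std::exp(-1.0 / (fs * tauAttack));
    const double aDecay  = std::exp(-1.0 / (fs * tauDecay));

    // Envelope of a train of short impulses: 1 ms bursts every 100 ms (assumed).
    std::vector<double> envelope(static_cast<size_t>(fs), 0.0);
    for (size_t i = 0; i < envelope.size(); ++i)
        if (i % 1000 < 10) envelope[i] = 1.0;

    // Quasi-peak weighting: rise quickly toward the input, fall back slowly.
    double qp = 0.0, qpMax = 0.0;
    for (double x : envelope) {
        const double a = (x > qp) ? aAttack : aDecay;
        qp = a * qp + (1.0 - a) * x;
        qpMax = std::max(qpMax, qp);
    }
    std::cout << "quasi-peak reading: " << qpMax << "\n"; // below the true peak of 1.0
    return 0;
}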
Conceptually, a quasi-peak detector for EMC testing works like a peak detector followed by a lossy integrator. A voltage impulse entering a narrow-band receiver produces a short-duration burst oscillating at the receiver centre frequency. The peak detector is a rectifier followed by a low-pass filter to extract a baseband signal consisting of the slowly (relative to the receiver centre frequency) time-varying amplitude of the impulsive oscillation. The following lossy integrator has a rapid rise time and longer fall time, so the measured output for a sequence of impulses is higher when the pulse repetition rate is higher. The quasi-peak detector is calibrated to produce
|
https://en.wikipedia.org/wiki/Nexans
|
Nexans S.A. is a global company in the cable and optical fibre industry headquartered in Paris, France.
The group is active in four main business areas: buildings and territories (construction, local infrastructure, smart cities/grids, e-mobility); high voltage and projects (offshore wind farms, subsea interconnections, land high voltage); data and telecoms (telecom networks, data transmission, FTTx, LAN cabling); and industry (renewable energies, petroleum, railways and rolling stock, aeronautics, and automation).
It is the world's second largest manufacturer of cables, after Prysmian S.p.A. In 2017, the Group had an industrial presence in 34 countries, with over 26,000 employees and sales of around €6.4 billion. Nexans was founded in 2000 as a business unit of the telecommunications firm Alcatel after its acquisition of a number of companies in the cable sector. It was spun out and listed on the Paris stock exchange the following year. It is currently listed on Euronext Paris, Compartment A. Nexans is to supply and install HVDC cables for the EuroAsia Interconnector, the longest and deepest HVDC subsea cable project to date, with bipole cables of 2×900 km.
History
1897 - Foundation of la Société Française des Câbles Électriques, Système Berthoud, Borel et Cie.
1912 - Acquisition of the company by Alcatel.
1917 - The company is renamed Compagnie générale des câbles de Lyon.
1986 - Câbles de Lyon becomes Alcatel Câbles.
1996 - Alcatel Câbles merges with Alcatel.
2000 - Alcatel Câbles and Components become Nexans.
2001 - Nexans is listed on Paris Stock Exchange.
2008 - Nexans acquires Madeco, in the cable industry in South America.
2012 - Nexans acquires AmerCable, an American company specialized in power cables based in El Dorado, Arkansas for 275 million dollars. The group also acquires a Chinese company Shandong Yanggu Cable Group.
2019 - Nexans shuts down the Hanover factory - it continues operation into 2020
2020 - Nexans shuts down the Chester NY factory and exits
|
https://en.wikipedia.org/wiki/Operation%20Europe%3A%20Path%20to%20Victory
|
Operation Europe: Path to Victory, released in Japan as , is a combat strategy video game for multiple platforms in which one or two players can compete in World War II action. The MS-DOS version of the game was released only in North America.
Gameplay
The object of the game is to fulfill any one of the military objectives for either the Axis or the Allied forces. Players engage in modern warfare around Western Europe, Eastern Europe, Central Europe, and North Africa. The game uses abstract numbers and figures in the map view, reserving concrete illustrations of soldiers for when they lock horns on the battlefield or in an urban setting. Urban settings give a traditional 1930s view of housing and office buildings that provide extra protection for units that are guarding them. However, there are massive numbers to crunch, and the lack of graphics helps enhance the number-crunching ability of the game's artificial intelligence.
As a way to utilize the Nobunaga's Ambition video game engine while simulating modern warfare, each general's statistics are completely randomized by a roulette system. 84 different characters are used for generals, including those from the American television show Combat!. Examples of non-fictional characters include Adolf Hitler, Josef Stalin and Walter Bedell Smith.
Weapons are automatically replenished at the end of each scenario. Units cannot be built from scratch; they must be requested from the head of the brigade instead.
The Japanese version of the game has four game modes: Campaign, demonstration, one player game and two-player game. In the Campaign mode, the player can only play the Germans. Starting this mode with the invasion of France, the German army continues fighting in Africa, and so on, ending with Berlin defense. Officers gain experience in every scenario, retaining it after winning the battle. The troops are often replaced by the same type (one of: Infantry, Artillery, Howitzers and Reactive Artillery, Tanks and Self-Prope
|
https://en.wikipedia.org/wiki/The%20Untouchables%20%28video%20game%29
|
The Untouchables is a video game released by Ocean Software in 1989 on ZX Spectrum, Amstrad CPC, Commodore 64, MSX, Atari ST, Amiga, DOS, NES, and SNES. It is based on the film The Untouchables.
Gameplay
A side-scroller based loosely on the movie, the game plays out some of the more significant parts of the film. Set in Chicago, the primary goal of the game is to take down Al Capone's henchmen and eventually detain Capone.
Reception
Electronic Gaming Monthly gave the Super NES version a 5.8 out of 10, commenting that "This title would have been better if it were Super Scope compatible, for it is a bit difficult to use the pad during the shooting sequences."
The reviewer from Crash called the game "Great stuff. Ocean have brought Chicago to life. Atmospheric title tune (128k), beautifully detailed graphics and challenging gameplay add up to one addictive mean game!"
Sinclair User commented that "The Untouchables is a cracking conversion. Easily one of the most successful and accurate movie licenses to date."
Paul Rand of Computer and Video Games stated that "The Untouchables is a well thought out package which will find a niche in most people's software collections [...] those who buy it won't be disappointed."
The Games Machine added that "The six levels are all trigger-pumping fun, with suitable graphics to give an authentic Twenties feel, and some nice touches [...] It all make The Untouchables a winner."
References
External links
The Untouchables at MobyGames
Review in Info
1989 video games
Amiga games
Atari ST games
Commodore 64 games
DOS games
Golden Joystick Award for Game of the Year winners
MSX games
Nintendo Entertainment System games
Ocean Software games
Organized crime video games
Super Nintendo Entertainment System games
The Untouchables
Video games about police officers
Video games based on films
Video games based on adaptations
Video games developed in the United Kingdom
Video games set in Chicago
ZX Spectrum games
|
https://en.wikipedia.org/wiki/Semi-log%20plot
|
In science and engineering, a semi-log plot/graph or semi-logarithmic plot/graph has one axis on a logarithmic scale, the other on a linear scale. It is useful for data with exponential relationships, where one variable covers a large range of values, or for showing that what appears at first to be a straight line is in fact the slow start of an exponential curve that is about to spike, so that changes are much larger than initially thought.
All equations of the form y = λa^(γx) form straight lines when plotted semi-logarithmically, since taking logs of both sides gives

log_a y = γx + log_a λ.

This is a line with slope γ and vertical intercept log_a λ. The logarithmic scale is usually labeled in base 10; occasionally in base 2.
A log–linear (sometimes log–lin) plot has the logarithmic scale on the y-axis, and a linear scale on the x-axis; a linear–log (sometimes lin–log) is the opposite. The naming is output–input (y–x), the opposite order from (x, y).
On a semi-log plot the spacing of the scale on the y-axis (or x-axis) is proportional to the logarithm of the number, not the number itself. It is equivalent to converting the y values (or x values) to their log, and plotting the data on linear scales. A log–log plot uses the logarithmic scale for both axes, and hence is not a semi-log plot.
Equations
The equation of a line on a linear–log plot, where the abscissa axis is scaled logarithmically (with a logarithmic base of n), would be

F(x) = m log_n(x) + b.

The equation for a line on a log–linear plot, with an ordinate axis logarithmically scaled (with a logarithmic base of n), would be

log_n(F(x)) = mx + b, that is, F(x) = n^(mx + b).
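A quick numerical check of these two line forms (a hedged sketch; the sample points, slope, and intercept below are assumed values, with base n = 10):

// Recover the linear-log line from two points and evaluate a log-linear line.
#include <cmath>
#include <iostream>

// Logarithm of x in base n.
double logn(double x, double n) { return std::log(x) / std::log(n); }

int main()
{
    const double n = 10.0;

    // Linear-log plot: F(x) = m * log_n(x) + b, recovered from two points.
    const double x0 = 10.0,   F0 = 3.0;   // assumed sample points
    const double x1 = 1000.0, F1 = 7.0;
    const double m = (F1 - F0) / logn(x1 / x0, n);
    const double b = F0 - m * logn(x0, n);
    std::cout << "linear-log: F(x) = " << m << " * log10(x) + " << b << "\n";

    // Log-linear plot: log_n(F(x)) = m*x + b, i.e. F(x) = n^(m*x + b).
    const double m2 = 0.5, b2 = 1.0;      // assumed slope and intercept
    const double x = 3.0;
    std::cout << "log-linear at x=3: F = " << std::pow(n, m2 * x + b2) << "\n";
    return 0;
}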
Finding the function from the semi–log plot
Linear–log plot
On a linear–log plot, pick some fixed point (x0, F0), where F0 is shorthand for F(x0), somewhere on the straight line in the above graph, and further some other arbitrary point (x1, F1) on the same graph. The slope formula of the plot is:

m = (F1 − F0) / (log_n(x1) − log_n(x0)) = (F1 − F0) / log_n(x1/x0)

which leads to

F1 − F0 = m log_n(x1/x0)

or

F1 = F0 + m log_n(x1/x0),

which means that

F(x) = F0 + m log_n(x/x0).
In other words, F is proportional to the logarithm of x times the slope of th
|
https://en.wikipedia.org/wiki/Wavefront%20coding
|
In optics and signal processing, wavefront coding refers to the use of a phase modulating element in conjunction with deconvolution to extend the depth of field of a digital imaging system such as a video camera.
Wavefront coding falls under the broad category of computational photography as a technique to enhance the depth of field.
Encoding
The wavefront of a light wave passing through the camera system is modulated using optical elements that introduce a spatially varying optical path length. The modulating elements must be placed at or near the plane of the aperture stop or pupil so that the same modulation is introduced for all field angles across the field-of-view. This modulation corresponds to a change in complex argument of the pupil function of such an imaging device, and it can be engineered with different goals in mind: e.g. extending the depth of focus.
Linear phase mask
Wavefront coding with linear phase masks works by creating an optical transfer function that encodes distance information.
Cubic phase mask
Wavefront Coding with cubic phase masks works to blur the image uniformly using a cubic shaped waveplate so that the intermediate image, the optical transfer function, is out of focus by a constant amount. Digital image processing then removes the blur and introduces noise depending upon the physical characteristics of the processor. Dynamic range is sacrificed to extend the depth of field depending upon the type of filter used. It can also correct optical aberration.
The mask was developed using the ambiguity function and the stationary phase method.
History
The technique was pioneered by radar engineer Edward Dowski and his thesis adviser Thomas Cathey at the University of Colorado in the United States in the 1990s. The University filed a patent on the invention. Cathey, Dowski and Merc Mercure founded a company to commercialize the method called CDM-Optics, and licensed the invention from the University. The company was acquired in
|
https://en.wikipedia.org/wiki/Leibniz%20operator
|
In abstract algebraic logic, a branch of mathematical logic, the Leibniz operator is a tool used to classify deductive systems, which have a precise technical definition and capture a large number of logics. The Leibniz operator was introduced by Wim Blok and Don Pigozzi, two of the founders of the field, as a means to abstract the well-known Lindenbaum–Tarski process, that leads to the association of Boolean algebras to classical propositional calculus, and make it applicable to as wide a variety of sentential logics as possible. It is an operator that assigns to a given theory of a given sentential logic, perceived as a term algebra with a consequence operation on its universe, the largest congruence on the algebra that is compatible with the theory.
Formulation
In this article, we introduce the Leibniz operator in the special case of classical propositional calculus, then we abstract it to the general notion applied to an arbitrary sentential logic and, finally, we summarize some of the most important consequences of its use in the theory of abstract algebraic logic.
Let CPC denote the classical propositional calculus. According to the classical Lindenbaum–Tarski process, given a theory T of CPC, if ≡_T denotes the binary relation on the set of formulas of CPC defined by

φ ≡_T ψ if and only if (φ ↔ ψ) ∈ T,

where ↔ denotes the usual classical propositional equivalence connective, then ≡_T turns out to be a congruence on the formula algebra. Furthermore, the quotient of the formula algebra by ≡_T is a Boolean algebra, and every Boolean algebra may be formed in this way.

Thus, the variety of Boolean algebras, which is, in algebraic logic terminology, the equivalent algebraic semantics (algebraic counterpart) of classical propositional calculus, is the class of all algebras formed by taking appropriate quotients of term algebras by those special kinds of congruences.

Notice that the condition (φ ↔ ψ) ∈ T that defines φ ≡_T ψ is equivalent to the condition

for every formula χ(p): χ(φ) ∈ T if and only if χ(ψ) ∈ T.
Passing now to an arbitrary sent
|
https://en.wikipedia.org/wiki/Knuth%20Prize
|
The Donald E. Knuth Prize is a prize for outstanding contributions to the foundations of computer science, named after the American computer scientist Donald E. Knuth.
History
The Knuth Prize has been awarded since 1996 and includes an award of US$5,000. The prize is awarded by ACM SIGACT and by IEEE Computer Society's Technical Committee on the Mathematical Foundations of Computing. Prizes are awarded in alternating years at the ACM Symposium on Theory of Computing and at the IEEE Symposium on Foundations of Computer Science, which are among the most prestigious conferences in theoretical computer science. The recipient of the Knuth Prize delivers a lecture at the conference.
For instance, David S. Johnson "used his Knuth Prize lecture to push for practical applications for algorithms."
In contrast with the Gödel Prize, which recognizes outstanding papers, the Knuth Prize is awarded to individuals for their overall impact in the field.
Winners
Since the prize was instituted in 1996, it has been awarded to the following individuals, with the citation for each award quoted (not always in full):
Selection Committees
See also
List of computer science awards
References
External links
Knuth Prize website
Awards established in 1996
Theoretical computer science
Computer science awards
Donald Knuth
IEEE society and council awards
Awards of the Association for Computing Machinery
|
https://en.wikipedia.org/wiki/JUICE%20%28software%29
|
JUICE is a widely used non-commercial software package for editing and analysing phytosociological data.
It was developed at Masaryk University in Brno, Czech Republic in 1998, and is fully described in an English manual. It makes use of the previously developed TURBOVEG software (for entering and storing such data) and offers a quite powerful tool for vegetation data analysis, including:
creation of synoptic tables
determination of diagnostic species according to their fidelity
calculation of Ellenberg indicator values for relevés, various indices of alpha and beta diversity
classification of relevés using TWINSPAN or cluster analysis
an expert system for vegetation classification based on the COCKTAIL method, etc.
See also
Phytosociology
Phytogeography
Biogeography
External links
Tichy, L. 2002. JUICE, software for vegetation classification. J. Veg. Sci. 13: 451-453. (Basic scientific article on the program).
Uses in scientific journals
Pyšek P., Jarošík V., Chytrý M., Kropáč Z., Tichý L. & Wild J. 2005. Alien plants in temperate weed communities: prehistoric and recent invaders occupy different habitats. Ecology 86: 772–785.
Ewald, J. (2003). A critique for phytosociology. Journal of Vegetation Science 14(2): 291–296.
Science software
Botany
Biogeography
Ecological data
|
https://en.wikipedia.org/wiki/Oh%20Mummy
|
Oh Mummy is a video game for the Amstrad CPC models of home computer. It was developed by Gem Software and published by Amsoft in 1984. It was often included in the free bundles of software that came with the computer. The gameplay is similar to that of the 1981 arcade game Amidar.
Gameplay
The object of the game is to unveil all of the treasure within each level (or pyramid) of the game whilst avoiding the mummies. Each level consists of a two-dimensional board. In contrast with Pac-Man, when the player's character walks around, footprints are left behind. By surrounding an area of the maze with footprints, its content is revealed, which is either a scroll, a mummy, a key, a tomb or nothing at all. In order to complete a level, it is necessary to unveil the key and a tombstone. The scroll enables the player to kill/eat one mummy on the level. If a mummy is unveiled, it follows the player to the next level. The difficulty and speed of the game increases as the player progresses through the levels.
The game is primarily for one player but has a limited multiplayer mode in which players can alternate taking a turn to play each level. Whilst, even at the time, it was considered simple in terms of gameplay, graphics and sound, it was for many people one of the better and more addictive early offerings for the Amstrad.
The music played during gameplay is based on the children's song "The Streets of Cairo, or the Poor Little Country Maid".
Ports
The game was also released for the MSX, ZX Spectrum, the Amstrad CPC 464, Tatung Einstein and Camputers Lynx. The ZX Spectrum version was given away in one of several introductory software packs for the computer, this particular pack also including Crazy Golf, Alien Destroyer, Punchy, Treasure Island and Disco Dan. The game was also unofficially ported to the Sega Genesis and Mattel Intellivision.
References
External links
Oh Mummy for Camputers Lynx at Universal Videogame List
Oh Mummy for Amstrad CPC at Time Extension
|
https://en.wikipedia.org/wiki/KCOP-TV
|
KCOP-TV (channel 13) is a television station in Los Angeles, California, United States, serving as the West Coast flagship of MyNetworkTV. It is owned and operated by Fox Television Stations alongside Fox outlet KTTV (channel 11). Both stations share studios at the Fox Television Center located in West Los Angeles, while KCOP-TV's transmitter is located atop Mount Wilson.
History
Early history
Channel 13 first signed on the air on September 17, 1948, as KLAC-TV (standing for Los Angeles, California), and adopted the moniker "Lucky 13". It was originally co-owned with local radio station KLAC (570 AM). Operating as an independent station early on, it began running some programming from the DuMont Television Network in 1949 after KTLA (channel 5) ended its affiliation with the network after a one-year tenure. One of KLAC-TV's earlier stars was veteran actress Betty White, who starred in Al Jarvis's Make-Believe Ballroom (later Hollywood on Television) from 1949 to 1952, and then her own sitcom, Life with Elizabeth from 1952 to 1956. Television personality Regis Philbin and actor/director Leonard Nimoy once worked behind the scenes at channel 13, and Oscar Levant had his own show on the station from 1958 to 1960.
On December 23, 1953, the now-defunct Copley Press (publishers of the San Diego Union-Tribune) purchased KLAC-TV and changed its call letters to the current KCOP, which reflected their ownership. A Bing Crosby-led group purchased the station in June 1957. In 1959, the NAFI Corporation, which would later merge with Chris-Craft Boats to become Chris-Craft Industries, bought channel 13. NAFI/Chris-Craft would be channel 13's longest-tenured owner, running it for over 40 years.
For most of its first 46 years on the air, channel 13 was a typical general entertainment independent station. It was usually the third or fourth highest-rated independent in Southern California, trading the #3 spot with KHJ-TV (channel 9, now KCAL-TV). The station carried Operation Pr
|
https://en.wikipedia.org/wiki/Molecular%20logic%20gate
|
A molecular logic gate is a molecule that performs a logical operation based on one or more physical or chemical inputs and a single output. The field has advanced from simple logic systems based on a single chemical or physical input to molecules capable of combinatorial and sequential operations such as arithmetic operations (i.e. moleculators and memory storage algorithms). Molecular logic gates work with input signals based on chemical processes and with output signals based on spectroscopic phenomena.
Logic gates are the fundamental building blocks of electrical circuits. They can be used to construct digital architectures with varying degrees of complexity by a cascade of a few to several million logic gates. Logic gates are essentially physical devices that produce a singular binary output after performing logical operations based on Boolean functions on one or more binary inputs. The concept of molecular logic gates, extending the applicability of logic gates to molecules, aims to convert chemical systems into computational units. Over the past three decades, the field has evolved to realize several practical applications in molecular electronics, biosensing, DNA computing, nanorobotics, and cell imaging, among others.
Working principle
For logic gates with a single input, there are four possible output patterns. When the input is 0, the output can be either 0 or 1. When the input is 1, the output can again be 0 or 1. The four output bit patterns that can arise correspond to four specific logic types: PASS 0, YES, NOT, and PASS 1. PASS 0 always outputs 0, whatever the input. PASS 1 always outputs 1, whatever the input. YES outputs a 1 when the input is 1, and NOT is the inverse of YES – it outputs a 0 when the input is 1.
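These four single-input patterns can be tabulated directly; the sketch below is purely illustrative (a software truth-table printout, not a model of any chemical system):

// Print the truth tables of the four possible single-input logic types.
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main()
{
    // The four possible single-input logic types described above.
    std::vector<std::pair<std::string, std::function<int(int)>>> gates = {
        {"PASS 0", [](int)   { return 0; }},      // always 0
        {"YES",    [](int a) { return a; }},      // output follows input
        {"NOT",    [](int a) { return 1 - a; }},  // inverse of YES
        {"PASS 1", [](int)   { return 1; }},      // always 1
    };

    for (const auto& g : gates) {
        std::cout << g.first << ": ";
        for (int input : {0, 1})
            std::cout << "in=" << input << " out=" << g.second(input) << "  ";
        std::cout << "\n";
    }
    return 0;
}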
AND, OR, XOR, NAND, NOR, XNOR, and INH are two-input logic gates. The AND, OR, and XOR gates are fundamental logic gates, and the NAND, NOR, and XNOR gates are complementary to AND, OR, and XOR gates, respectively. An INHIBIT (INH) gat
|
https://en.wikipedia.org/wiki/Ecological%20indicator
|
Ecological indicators are used to communicate information about ecosystems and the impact human activity has on ecosystems to groups such as the public or government policy makers. Ecosystems are complex and ecological indicators can help describe them in simpler terms that can be understood and used by non-scientists to make management decisions. For example, the number of different beetle taxa found in a field can be used as an indicator of biodiversity.
Many different types of indicators have been developed. They can be used to reflect a variety of aspects of ecosystems, including biological, chemical and physical. Due to this variety, the development and selection of ecological indicators is a complex process.
Using ecological indicators is a pragmatic approach since direct documentation of changes in ecosystems as related to management measures, is cost and time intensive. For example, it would be expensive and time-consuming to count every bird, plant and animal in a newly restored wetland to see if the restoration was a success. Instead, a few indicator species can be monitored to determine the success of the restoration.
"It is difficult and often even impossible to characterize the functioning of a complex system, such as an eco-agrosystem, by means of direct measurements. The size of the system, the complexity of the interactions involved, or the difficulty and cost of the measurements needed are often crippling"
The terms ecological indicator and environmental indicator are often used interchangeably. However, ecological indicators are actually a sub-set of environmental indicators. Generally, environmental indicators provide information on pressures on the environment, environmental conditions and societal responses. Ecological indicators refer only to ecological processes; however, sustainability indicators are seen as increasingly important for managing humanity's coupled human-environmental systems.
Ecological indicators play an important ro
|
https://en.wikipedia.org/wiki/IMP-16
|
The IMP-16, by National Semiconductor, was the first multi-chip 16-bit microprocessor, released in 1973. It consisted of five PMOS integrated circuits: four identical RALU chips, short for register and ALU, providing the data path, and one CROM, Control and ROM, providing control sequencing and microcode storage. The IMP-16 is a bit-slice processor; each RALU chip provides a 4-bit slice of the register and arithmetic that work in parallel to produce a 16-bit word length.
Each RALU chip stores its own 4 bits of the program counter, several registers, the ALU, a 16-word LIFO stack, and status flags. There were four 16-bit accumulators, two of which could be used as index registers. The instruction set architecture was similar to that of the Data General Nova. The chip set could be extended with the CROM chip (IMP-16A / 522D) that implemented 16-bit multiply and divide routines. The chipset was driven by a two-phase 715 kHz non-overlapping clock that had a +5 to -12 voltage swing. An integral part of the architecture was a 16-bit input mux that provided various condition bits from the ALUs such as zero, carry, overflow along with general purpose inputs.
The microprocessor was used in the IMP-16P microcomputer and Jacquard Systems' J100 but saw little other use. The IMP-16 was later superseded by the PACE and INS8900 single-chip 16-bit microprocessors, which had a similar architecture but were not binary compatible. It was also used in the Aston Martin Lagonda, thanks to National Semiconductor's chairman Peter Sprague being a major shareholder in Aston Martin at the time.
References
External links
IMP-16C board at the Selectric Typewriter Museum
IMP-16
16-bit microprocessors
Computers using bit-slice designs
|
https://en.wikipedia.org/wiki/Virtual%20leased%20line
|
Virtual leased lines (VLL), also referred to as virtual private wire service (VPWS) or EoMPLS (Ethernet over MPLS), is a way to provide Ethernet-based point-to-point communication over Multiprotocol Label Switching (MPLS) or Internet Protocol networks. VLL uses pseudo-wire encapsulation for transporting Ethernet traffic over an MPLS tunnel across an MPLS backbone. VLL also describes a point-to-point bonded connection using broadband bonding technology.
Types
There are 5 types of VLLs:
Epipes: Emulates a point-to-point Ethernet service. VLAN-tagged Ethernet frames are supported. Interworking with other Layer 2 technologies is also supported.
Apipes: Emulates a point-to-point ATM (Asynchronous Transfer Mode) service. Several subtypes are provided to support different ATM service types.
Fpipes: Emulates point-to-point Frame Relay circuit. Some features for interworking with ATM are also supported.
Ipipes: Provides IP interworking capabilities between different Layer 2 technologies.
Cpipes: Emulates a point-to-point time-division multiplexing (TDM) circuit.
See also
Virtual Extensible LAN
Virtual Private LAN Service
References
External links
Layer 2 Virtual Private Networks (l2vpn) working group homepage
Pseudo Wire Emulation Edge to Edge (pwe3) working group homepage
MPLS networking
Network protocols
|
https://en.wikipedia.org/wiki/Cartan%20subgroup
|
In the theory of algebraic groups, a Cartan subgroup of a connected linear algebraic group G over a (not necessarily algebraically closed) field k is the centralizer of a maximal torus. Cartan subgroups are smooth (equivalently reduced), connected and nilpotent. If k is algebraically closed, they are all conjugate to each other.

Notice that in the context of algebraic groups a torus is an algebraic group T such that the base extension of T to the algebraic closure of k is isomorphic to the product of a finite number of copies of the multiplicative group G_m. Maximal such subgroups have in the theory of algebraic groups a role that is similar to that of maximal tori in the theory of Lie groups.

If G is reductive (in particular, if it is semi-simple), then a torus is maximal if and only if it is its own centraliser, and thus the Cartan subgroups of G are precisely the maximal tori.
Example
The general linear groups GL_n are reductive. The diagonal subgroup is clearly a torus (indeed a split torus, since it is a product of n copies of the multiplicative group G_m already before any base extension), and it can be shown to be maximal. Since GL_n is reductive, the diagonal subgroup is a Cartan subgroup.
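Concretely (a standard restatement added for illustration, with G_m denoting the multiplicative group as above), the diagonal torus and its centralizer in GL_n can be written as:

T = \{\operatorname{diag}(t_1, \dots, t_n) : t_i \in \mathbb{G}_m\} \cong \mathbb{G}_m^{\,n} \subset \mathrm{GL}_n, \qquad Z_{\mathrm{GL}_n}(T) = T.

Since T equals its own centralizer, it is maximal among tori, and the corresponding Cartan subgroup is T itself.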
See also
Borel subgroup
Algebraic group
Algebraic torus
References
Algebraic geometry
Linear algebraic groups
|
https://en.wikipedia.org/wiki/Cooperation%20%28evolution%29
|
In evolution, cooperation is the process where groups of organisms work or act together for common or mutual benefits. It is commonly defined as any adaptation that has evolved, at least in part, to increase the reproductive success of the actor's social partners. For example, territorial choruses by male lions discourage intruders and are likely to benefit all contributors.
This process contrasts with intragroup competition where individuals work against each other for selfish reasons. Cooperation exists not only in humans but in other animals as well. The diversity of taxa that exhibits cooperation is quite large, ranging from zebra herds to pied babblers to African elephants. Many animal and plant species cooperate with both members of their own species and with members of other species.
In animals
Cooperation in animals appears to occur mostly for direct benefit or between relatives. Spending time and resources assisting a related individual may at first seem destructive to an organism's chances of survival but is actually beneficial over the long-term. Since relatives share part of the helper's genetic make-up, enhancing each individual's chance of survival may actually increase the likelihood that the helper's genetic traits will be passed on to future generations.
However, some researchers, such as ecology professor Tim Clutton-Brock, assert that cooperation is a more complex process. They state that helpers may receive more direct, and less indirect, gains from assisting others than is commonly reported. These gains include protection from predation and increased reproductive fitness. Furthermore, they insist that cooperation may not solely be an interaction between two individuals but may be part of the broader goal of unifying populations.
Prominent biologists, such as Charles Darwin, E. O. Wilson, and W. D. Hamilton, have found the evolution of cooperation fascinating because natural selection favors those who achieve the greatest reproductive succes
|
https://en.wikipedia.org/wiki/S3%20Chrome
|
S3 Graphics' Chrome series of graphics accelerators arrived in 2004 with the DeltaChrome line of chips. They were supplied as discrete, mobile, or integrated graphics.
Overview
In 2004 after the S3 Graphics company spun off their joint-venture with VIA, VIA attempted to re-launch the S3 Graphics brand with a new line of video cards under the name 'Chrome'. The Chrome range featured low power requirements and high-definition output, making it attractive for small-form-factor scenarios and OEM systems. Unfortunately, by the time Chrome was released, the rapid progression of 3D gaming performance between rivals NVIDIA and ATI Technologies had made S3's offerings uncompetitive in the lucrative high-end consumer market.
The Chrome series supports Direct3D 9 with full pixel shader 2.0 support, excluding the unreleased Savage XP/AlphaChrome and early UniChrome. Later GPUs in the series offer Direct3D 10, 10.1, and 11 support, depending on the GPU.
S3's AcceleRAM technology allowed system RAM to be used to supplement the video card's RAM, and is similar to ATI's HyperMemory and NVIDIA's TurboCache. Chrome also introduced MultiChrome technology, allowing multiple matched Chrome cards to be used simultaneously in a system to increase graphics performance, similar to ATI CrossFire and NVIDIA's SLI.
Product Families
AlphaChrome
Unreleased - the first of the 'Chrome' product line, previously titled Savage XP and codenamed Zoetrope.
DeltaChrome
DeltaChrome added support for Shader Model 2.0, making it S3's first released DirectX 9 product. Other features included the introduction of the Chromotion Video Engine, and dual 400 MHz DACs for multi monitor support.
GammaChrome
GammaChrome is the first native PCI Express product line by S3 Graphics. It was originally announced on 18 March 2004, but the product was not released until 9 March 2005. Marketed as a 3rd-generation DirectX 9 product competing against the GeForce 6600 and Radeon X600, there is little change between it and the previous ge
|
https://en.wikipedia.org/wiki/Tape-automated%20bonding
|
Tape-automated bonding (TAB) is a process that places bare semiconductor chips (dies) like integrated circuits onto a flexible circuit board (FPC) by attaching them to fine conductors in a polyamide or polyimide (like trade names Kapton or UPILEX) film carrier. This FPC with the die(s) (TAB inner lead bonding, ILB) can be mounted on the system or module board or assembled inside a package (TAB outer lead bonding, OLB). Typically the FPC includes from one to three conductive layers and all inputs and outputs of the semiconductor die are connected simultaneously during the TAB bonding. Tape automated bonding is one of the methods needed for achieving chip-on-flex (COF) assembly and it is one of the first roll-to-roll processing (also called R2R, reel-to-reel) type methods in the electronics manufacturing.
Process
The TAB mounting is done such that the bonding sites of the die, usually in the form of bumps or balls made of gold, solder or anisotropic conductive material, are connected to fine conductors on the tape, which provide the means of connecting the die to the package or directly to external circuits. The bumps or balls can be located either on the die or on the TAB tape. TAB-compliant metallization systems are:
Al pads on the die < - > gold plated Cu on tape areas (thermosonic bonding)
Al covered with Au on pads on the die < - > Au or Sn bumped tape areas (gang bonding)
Al pads with Au bumps on the die < - > Au or Sn plated tape areas (gang bonding)
Al pads with solder bumps on the die < - > Au, Sn or solder plated tape areas (gang bonding)
Sometimes the tape on which the die is bonded already contains the actual application circuit of the die. The film is moved to the target location, the leads are cut, and the chip is joined as necessary. There are several joining methods used with TAB: thermocompression bonding (with the help of pressure, sometimes called gang bonding), thermosonic bonding, etc. The bare chip may then be encapsulate
|
https://en.wikipedia.org/wiki/Reflex%20receiver
|
A reflex radio receiver, occasionally called a reflectional receiver, is a radio receiver design in which the same amplifier is used to amplify the high-frequency radio signal (RF) and low-frequency audio (sound) signal (AF). It was first invented in 1914 by German scientists Wilhelm Schloemilch and Otto von Bronk, and rediscovered and extended to multiple tubes in 1917 by Marius Latour and William H. Priess. The radio signal from the antenna and tuned circuit passes through an amplifier, is demodulated in a detector which extracts the audio signal from the radio carrier, and the resulting audio signal passes again through the same amplifier for audio amplification before being applied to the earphone or loudspeaker. The reason for using the amplifier for "double duty" was to reduce the number of active devices, vacuum tubes or transistors, required in the circuit, to reduce the cost. The economical reflex circuit was used in inexpensive vacuum tube radios in the 1920s, and was revived again in simple portable tube radios in the 1930s.
How it works
The block diagram shows the general form of a simple reflex receiver. The receiver functions as a tuned radio frequency (TRF) receiver. The radio frequency (RF) signal from the tuned circuit (bandpass filter) is amplified, then passes through the high pass filter to the demodulator, which extracts the audio frequency (AF) (modulation) signal from the carrier wave. The audio signal is added back into the input of the amplifier, and is amplified again. At the output of the amplifier the audio is separated from the RF signal by the low pass filter and is applied to the earphone. The amplifier could be a single stage or multiple stages. It can be seen that since each active device (tube or transistor) is used to amplify the signal twice, the reflex circuit is equivalent to an ordinary receiver with double the number of active devices.
The reflex receiver should not be confused with a regenerative receiver, in wh
|
https://en.wikipedia.org/wiki/Behavioral%20operations%20management
|
Behavioral operations management (often called behavioral operations) examines and takes into consideration human behaviors and emotions when facing complex decision problems. It relates to the behavioral aspects of the use of operations research and operations management. In particular, it focuses on understanding behavior in, with and beyond models. The general purpose is to make better use and improve the use of operations theories and practice, so that the benefits received from the potential improvements to operations approaches in practice, that arise from recent findings in behavioral sciences, are realized. Behavioral operations approaches have heavily influenced supply chain management research among others.
Overview
Operations management involves a wide range of problem-solving skills aiming to help individuals or organizations make more rational decisions and improve their efficiency. However, operations management often assumes that agents involved in the process or operating system, such as employees, consumers and suppliers, make fully rational decisions: that their decisions are not affected by their emotions or their surroundings, and that they are able to react to and distinguish between different types of information. In reality, this is not always true; human behavior has an important role in decision making and worker motivation, and therefore should be considered in the study of operations. This has led to the rise of behavioral operations management, which is defined as the study of the impacts that human behavior has on operations, design and business interactions in different organizations. Behavioral operations management aims to understand the decision making of managers and tries to make improvements to the supply chain using the insight obtained. Behavioral operations management draws on knowledge from a number of fields, such as economics, behavioral science, psychology and other social sciences. Traditional operations managemen
|
https://en.wikipedia.org/wiki/Network%20Professional%20Association
|
Established in 1991, the non-profit Network Professional Association (NPA) is a professional association for computer network professionals.
The NPA offers a Certified Network Professional CNP credential and provides advocacy for workers in the field. Members receive a certificate of membership, quarterly journal publications, chapters and programs, and opportunities to volunteer and publish.
Description
The NPA sponsors local chapters, a certification designation, an opportunity to publish, promotion of industry events and conferences and affinity programs to provide personal goods, opportunities and discounts to NPA professionals.
Each NPA chapter draws its members from a defined geographic area.
Certified Network Professional Program
The Network Professional Association introduced the Certified Network Professional (CNP) designation in 1994. Previously, IT networking practitioners had no professional designation. The NPA, through the volunteer efforts of its members, is involved in initiatives related to setting standards within the IT networking profession: the professional credentialing/certification of individual IT practitioners (the CNP) and maintaining the code of ethics and accountability for the profession. The CNP was updated and re-released to the community in October 2005.
Awards
The Network Professional Association announced Awards for Professionalism in 2002. The Distinguished Fellows membership class recognizes sustained lifelong excellence in the field. The NPA received support for the awards from many partners, including Network Computing magazine, Network World magazine, Interop, National Seminars, Pearson Technology Group, Microsoft, and Novell. Award recipients are recognized for valuable contributions, their continued focus on computer networking and professionalism, and the respect of their peers. An international industry panel of judges reviews submissions and makes recommendations for recognition. The awards are presented at the Interop Las Ve
|
https://en.wikipedia.org/wiki/Infinitism
|
Infinitism is the view that knowledge may be justified by an infinite chain of reasons. It belongs to epistemology, the branch of philosophy that considers the possibility, nature, and means of knowledge.
Epistemological infinitism
Since Gettier, "knowledge" is no longer widely accepted as meaning "justified true belief" only. However, some epistemologists still consider knowledge to have a justification condition. Traditional theories of justification (foundationalism and coherentism) and indeed some philosophers consider an infinite regress not to be a valid justification. In their view, if A is justified by B, B by C, and so forth, then either
The chain must end with a link that requires no independent justification (a foundation),
The chain must come around in a circle in some finite number of steps (the belief may be justified by its coherence), or
Our beliefs must not be justified after all (as is posited by philosophical skeptics).
Infinitism, the view, for example, of Peter D. Klein, challenges this consensus, referring back to work of Paul Moser (1984) and John Post (1987). In this view, the evidential ancestry of a justified belief must be infinite and non-repeating, which follows from the conjunction of two principles that Klein sees as having straightforward intuitive appeal: "The Principle of Avoiding Circularity" and "The Principle of Avoiding Arbitrariness."
The Principle of Avoiding Circularity (PAC) is stated as follows: "For all x, if a person, S, has a justification for x, then for all y, if y is in the evidential ancestry of x for S, then x is not in the evidential ancestry of y for S." PAC says that the proposition to be justified cannot be a member of its own evidential ancestry, which is violated by coherence theories of justification.
The Principle of Avoiding Arbitrariness (PAA) is stated as follows: "For all x, if a person, S, has a justification for x, then there is some reason, r1, available to S for x; and there is some reaso
|
https://en.wikipedia.org/wiki/Human%20Protein%20Reference%20Database
|
The Human Protein Reference Database (HPRD) is a protein database accessible through the Internet. It is closely associated with the premier Indian Non-Profit research organisation Institute of Bioinformatics (IOB), Bangalore, India. This database is a collaborative output of IOB and the Pandey Lab of Johns Hopkins University.
Overview
The HPRD is a result of an international collaborative effort between the Institute of Bioinformatics in Bangalore, India and the Pandey lab at Johns Hopkins University in Baltimore, USA. HPRD contains manually curated scientific information pertaining to the biology of most human proteins. Information regarding proteins involved in human diseases is annotated and linked to the Online Mendelian Inheritance in Man (OMIM) database. The National Center for Biotechnology Information links to HPRD through its databases pertaining to human genes and proteins (e.g. Entrez Gene and RefSeq).
This resource depicts information on human protein functions including protein–protein interactions, post-translational modifications, enzyme-substrate relationships and disease associations. Protein annotation information that is catalogued was derived through manual curation using published literature by expert biologists and through bioinformatics analyses of the protein sequence. The protein–protein interaction and subcellular localization data from HPRD have been used to develop a human protein interaction network.
Highlights of HPRD are as follows:
From 10,000 protein–protein interactions (PPIs) annotated for 3,000 proteins in 2003, HPRD has grown to over 36,500 unique PPIs annotated for 25,000 proteins including 6,360 isoforms by the end of 2007.
More than 50% of molecules annotated in HPRD have at least one PPI and 10% have more than 10 PPIs.
Experiments for PPIs are broadly grouped into three categories namely in vitro, in vivo and yeast two hybrid (Y2H). Sixty percent of PPIs annotated in HPRD are supported by a single expe
|
https://en.wikipedia.org/wiki/Gordon%20Pask
|
Andrew Gordon Speedie Pask (28 June 1928 – 29 March 1996) was a British cybernetician, inventor and polymath who during his lifetime made multiple contributions to cybernetics, educational psychology, educational technology, epistemology, chemical computing, architecture, and the performing arts. During his life he gained three doctorate degrees. He was an avid writer, with more than two hundred and fifty publications, including journal articles, books, periodicals, patents, and technical reports (many of which can be found at the main Pask archive at the University of Vienna). He also worked as an academic and researcher for a variety of educational settings, research institutes, and private stakeholders, including but not limited to the University of Illinois, Concordia University, the Open University, Brunel University and the Architectural Association School of Architecture. He is known for the development of conversation theory.
Biography
Early life and education: 1928-1958
Pask was born in Derby, England, on 28 June 1928, to his parents Percy and Mary Pask. His father was "a partner in Pask, Cornish and Smart, a wholesale fruit business in Covent Garden". He had two older siblings: Alfred, who trained as an engineer before becoming a Methodist minister, and Edgar, a professor of anesthetics. His family moved to the Isle of Wight shortly after his birth. He was educated at Rydal Penrhos. According to Andrew Pickering and G. M. Furtado Cardoso Lopes, school taught Pask to "be a gangster", and he was noted for having designed bombs during his time at Rydal Penrhos, designs which were delivered to a government ministry in relation to the war effort during the Second World War. He later went on to complete two diplomas, in Geology and Mining Engineering, from Liverpool Polytechnic and Bangor University respectively.
Pask later attended Cambridge University around 1949 to study for a bachelor's degree, where he met his future associate and business partner R
|
https://en.wikipedia.org/wiki/Cameleon%20%28protein%29
|
Cameleon is an engineered protein, based on variants of green fluorescent protein, used to visualize calcium levels in living cells. It is a genetically encoded calcium sensor created by Roger Y. Tsien and coworkers. The name is a conflation of CaM (the common abbreviation of calmodulin) and chameleon, indicating that the sensor protein undergoes a conformational change and radiates at an altered wavelength upon calcium binding to the calmodulin element of the Cameleon. Cameleon was the first genetically encoded calcium sensor that could be used for ratiometric measurements and the first to be used in a transgenic animal to record activity in neurons and muscle cells. Cameleon and other genetically encoded calcium indicators (GECIs) have found many applications in neuroscience and other fields of biology. It was created by fusing BFP, calmodulin, the calmodulin-binding peptide M13 and EGFP.
Mechanism
The DNA encoding cameleon fusion protein must be either stably or transiently introduced into the cell of interest. Protein made by the cell according to this DNA information then serves as a fluorescent indicator of calcium concentration. In the presence of calcium, Ca2+ binds to M13, which enables calmodulin to wrap around the M13 domain. This brings the two GFP-variant proteins closer to each other, which increases FRET efficiency between them.
References
Sensors
Engineered proteins
Fluorescent proteins
Cell imaging
Calcium signaling
|
https://en.wikipedia.org/wiki/Executable%20UML
|
Executable UML (xtUML or xUML) is both a software development method and a highly abstract software language. It was described for the first time in 2002 in the book "Executable UML: A Foundation for Model-Driven Architecture". The language "combines a subset of the UML (Unified Modeling Language) graphical notation with executable semantics and timing rules." The Executable UML method is the successor to the Shlaer–Mellor method.
Executable UML models "can be run, tested, debugged, and measured for performance.", and can be compiled into a less abstract programming language to target a specific implementation. Executable UML supports model-driven architecture (MDA) through specification of platform-independent models, and the compilation of the platform-independent models into platform-specific models.
Overview
Executable UML is a higher level of abstraction than third-generation programming languages. This allows developers to develop at the level of abstraction of the application. The Executable UML aims for separation of concerns. This is supposed to increase ease of reuse and to lower the cost of software development. This also enables Executable UML domains to be cross-platform. That means it is not tied to any specific programming language, platform or technology.
Executable UML also allows for translation of platform-independent models (PIM) into platform-specific models (PSM). The Executable UML method enables valuing the model as intellectual property, since the model is a fully executable solution for the problem space.
Actions are specified in action language. This means that the automatic generation of implementation code from Executable UML models can be output in an optimized form.
Executable UML is intended to serve as executable code as well as documentation. The models are a graphical, executable specification of the problem space that is compiled into a target implementation. They are also intended to be human-readable.
Executable UML buil
|
https://en.wikipedia.org/wiki/Pentagonal%20tiling
|
In geometry, a pentagonal tiling is a tiling of the plane where each individual piece is in the shape of a pentagon.
A regular pentagonal tiling on the Euclidean plane is impossible because the internal angle of a regular pentagon, 108°, is not a divisor of 360°, the angle measure of a whole turn. However, regular pentagons can tile the hyperbolic plane with four pentagons around each vertex (or more) and sphere with three pentagons; the latter produces a tiling that is topologically equivalent to the dodecahedron.
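For illustration, a short Python check of the divisibility claim, listing which regular polygons have an interior angle that divides 360° (among small polygons, only the triangle, square and hexagon do):

# Interior angle of a regular k-gon is 180*(k-2)/k degrees; copies of it can
# surround a point in the plane only if that angle divides 360 exactly.
for k in range(3, 13):
    interior = 180.0 * (k - 2) / k
    print(k, interior, (360.0 / interior).is_integer())
# k = 5 gives 108.0 and 360/108 = 3.33..., so regular pentagons cannot tile the Euclidean plane.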
Monohedral convex pentagonal tilings
Fifteen types of convex pentagons are known to tile the plane monohedrally (i.e. with one type of tile). The most recent one was discovered in 2015. This list has been shown to be complete by Michaël Rao (a result subject to peer review). Bagina showed that there are only eight edge-to-edge convex types, a result obtained independently by Sugimoto.
Michaël Rao of the École normale supérieure de Lyon claimed in May 2017 to have found the proof that there are in fact no convex pentagons that tile beyond these 15 types. As of 11 July 2017, the first half of Rao's proof had been independently verified (computer code available) by Thomas Hales, a professor of mathematics at the University of Pittsburgh. As of December 2017, the proof was not yet fully peer-reviewed.
Each enumerated tiling family contains pentagons that belong to no other type; however, some individual pentagons may belong to multiple types. In addition, some of the pentagons in the known tiling types also permit alternative tiling patterns beyond the standard tiling exhibited by all members of its type.
The sides of length a, b, c, d, e are directly clockwise from the angles at vertices A, B, C, D, E respectively. (Thus, A, B, C, D, E are opposite to d, e, a, b, c respectively.)
Many of these monohedral tile types have degrees of freedom. These freedoms include variations of internal angles and edge lengths. In the limit, edges may have lengths that approach zero or angles
|
https://en.wikipedia.org/wiki/Netsh
|
In computing, netsh, or network shell, is a command-line utility included in Microsoft's Windows NT line of operating systems beginning with Windows 2000. It allows local or remote configuration of network devices such as network interfaces.
Overview
A common use of netsh is to reset the TCP/IP stack to default, known-good parameters, a task that in Windows 98 required reinstallation of the TCP/IP adapter.
netsh, among many other things, also allows the user to change the IP address on their machine.
Starting from Windows Vista, one can also edit wireless settings (for example, SSID) using netsh.
netsh can also be used to read information from the IPv6 stack.
The command netsh winsock reset can be used to reset TCP/IP problems when communicating with a networked device.
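For illustration, typical invocations corresponding to the tasks above, run from an elevated Command Prompt (the interface name and the addresses below are placeholders):

netsh winsock reset
netsh int ip reset
netsh interface ip set address name="Ethernet" static 192.168.1.10 255.255.255.0 192.168.1.1
netsh wlan show profiles
netsh interface ipv6 show addresses

The first two commands generally require a restart of the machine before they take full effect.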
References
Further reading
External links
Using Netsh from Microsoft TechNet
Netsh Commands for Wireless Local Area Network (WLAN) in Windows Server 2008 R2 (includes Windows 7), from Microsoft TechNet. Topic not covered in "Using netsh".
online tool to build address bind commands
netsh commands supported by Windows Vista, 7 and Server 2008 (output of "netsh ?")
Windows Server 2008 R2 and Windows Server 2008 Netsh Technical Reference (chm)
Windows communication and services
Windows administration
Windows components
|
https://en.wikipedia.org/wiki/Pitchfork%20bifurcation
|
In bifurcation theory, a field within mathematics, a pitchfork bifurcation is a particular type of local bifurcation where the system transitions from one fixed point to three fixed points. Pitchfork bifurcations, like Hopf bifurcations, have two types – supercritical and subcritical.
In continuous dynamical systems described by ODEs—i.e. flows—pitchfork bifurcations occur generically in systems with symmetry.
Supercritical case
The normal form of the supercritical pitchfork bifurcation is dx/dt = rx − x³.
For r < 0, there is one stable equilibrium at x = 0. For r > 0 there is an unstable equilibrium at x = 0, and two stable equilibria at x = ±√r.
Subcritical case
The normal form for the subcritical case is dx/dt = rx + x³.
In this case, for r < 0 the equilibrium at x = 0 is stable, and there are two unstable equilibria at x = ±√(−r). For r > 0 the equilibrium at x = 0 is unstable.
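For illustration, a short Python sketch that finds the real equilibria of both normal forms and classifies their stability from the sign of df/dx at each fixed point:

import numpy as np

def equilibria(r, sign):
    # Fixed points of f(x) = r*x + sign*x**3 (sign = -1 supercritical, +1 subcritical),
    # with stability judged from the sign of f'(x) = r + 3*sign*x**2.
    f_prime = lambda x: r + 3 * sign * x**2
    points = [0.0]
    if -r / sign > 0:                      # nonzero fixed points exist when x**2 = -r/sign > 0
        points += [np.sqrt(-r / sign), -np.sqrt(-r / sign)]
    return [(round(float(x), 6), "stable" if f_prime(x) < 0 else "unstable") for x in points]

for r in (-1.0, 1.0):
    print("r =", r)
    print("  supercritical (x' = r*x - x**3):", equilibria(r, -1))
    print("  subcritical   (x' = r*x + x**3):", equilibria(r, +1))

For the supercritical form the outer pair of equilibria appears for r > 0 and is stable; for the subcritical form it appears for r < 0 and is unstable, matching the description above.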
Formal definition
An ODE dx/dt = f(x, r),
described by a one-parameter function f(x, r) with r ∈ ℝ satisfying:
−f(x, r) = f(−x, r) (f is an odd function),
∂f/∂x(0, r₀) = 0, ∂²f/∂x²(0, r₀) = 0, ∂³f/∂x³(0, r₀) ≠ 0, ∂f/∂r(0, r₀) = 0, ∂²f/∂x∂r(0, r₀) ≠ 0,
has a pitchfork bifurcation at (x, r) = (0, r₀). The form of the pitchfork is given
by the sign of the third derivative: it is supercritical if ∂³f/∂x³(0, r₀) < 0 and subcritical if ∂³f/∂x³(0, r₀) > 0.
Note that subcritical and supercritical describe the stability of the outer lines of the pitchfork (dashed or solid, respectively) and are not dependent on which direction the pitchfork faces. For example, the negative of the first ODE above, dx/dt = x³ − rx, faces the same direction as the first picture but reverses the stability.
See also
Bifurcation theory
Bifurcation diagram
References
Steven Strogatz, Non-linear Dynamics and Chaos: With applications to Physics, Biology, Chemistry and Engineering, Perseus Books, 2000.
S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer-Verlag, 1990.
Bifurcation theory
|
https://en.wikipedia.org/wiki/Centre%20%28geometry%29
|
In geometry, a centre (British English) or center (American English) of an object is a point in some sense in the middle of the object. According to the specific definition of centre taken into consideration, an object might have no centre. If geometry is regarded as the study of isometry groups, then a centre is a fixed point of all the isometries that move the object onto itself.
Circles, spheres, and segments
The centre of a circle is the point equidistant from the points on the edge. Similarly the centre of a sphere is the point equidistant from the points on the surface, and the centre of a line segment is the midpoint of the two ends.
Symmetric objects
For objects with several symmetries, the centre of symmetry is the point left unchanged by the symmetric actions. So the centre of a square, rectangle, rhombus or parallelogram is where the diagonals intersect; this is (among other properties) the fixed point of rotational symmetries. Similarly the centre of an ellipse or a hyperbola is where the axes intersect.
Triangles
Several special points of a triangle are often described as triangle centres:
the circumcentre, which is the centre of the circle that passes through all three vertices;
the centroid or centre of mass, the point on which the triangle would balance if it had uniform density;
the incentre, the centre of the circle that is internally tangent to all three sides of the triangle;
the orthocentre, the intersection of the triangle's three altitudes; and
the nine-point centre, the centre of the circle that passes through nine key points of the triangle.
For an equilateral triangle, these are the same point, which lies at the intersection of the three axes of symmetry of the triangle, one third of the distance from its base to its apex.
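For illustration, the following Python sketch computes four of these centres for an equilateral triangle (the coordinates are an arbitrary example) and shows that they coincide; the barycentric weight formulas used for the incentre and circumcentre are standard:

import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)

centroid = (A + B + C) / 3
incentre = (a * A + b * B + c * C) / (a + b + c)
wa = a**2 * (b**2 + c**2 - a**2)                      # circumcentre barycentric weights
wb = b**2 * (c**2 + a**2 - b**2)
wc = c**2 * (a**2 + b**2 - c**2)
circumcentre = (wa * A + wb * B + wc * C) / (wa + wb + wc)
orthocentre = A + B + C - 2 * circumcentre            # Euler-line relation H = A + B + C - 2O

print(centroid, incentre, circumcentre, orthocentre)  # all approximately (0.5, 0.2887)

For a non-equilateral triangle the same code returns four distinct points.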
A strict definition of a triangle centre is a point whose trilinear coordinates are f(a,b,c) : f(b,c,a) : f(c,a,b) where f is a function of the lengths of the three sides of the triangle, a, b, c such that:
f
|
https://en.wikipedia.org/wiki/Negroponte%20switch
|
The Negroponte Switch is an idea developed by Nicholas Negroponte in the 1980s, while at the Media Lab at MIT. He suggested that due to the accidents of engineering history we had ended up with static devices – such as televisions – receiving their content via signals travelling over the airwaves, while devices that could have been mobile and personal – such as telephones – were receiving their content over static cables. It was his idea that a better use of the available communication resources would result if the information going through the cables (such as phone calls) was to go through the air, and the information going through the air (such as TV programmes) was delivered via cables. Negroponte called this process “trading places”.
At an event organized by Northern Telecom, his co-presenter George Gilder called it the “Negroponte Switch”, and that name stuck from then on. As mobile devices came about, they needed connections to the data network, while the required bandwidths were increasingly deliverable over wired or fibre-optic systems. It became less sensible to use wireless broadcast to communicate with static installations. At some point the switch took place, as limited radio bandwidth was reallocated to data services for mobile equipment, and television and other media moved to cable.
Influence on internet advocacy
Cory Doctorow, author and Electronic Frontier Foundation activist, described the process of the switch as unwiring. He framed this as a move away from a global internetwork, which passes through many chokepoints where data may be controlled and inspected, toward one which uses available bandwidth frugally by passing communications in a mesh and avoiding chokepoints. He and Charles Stross wrote a short story on the process, called Unwirer.
The description of the switch in terms of a blend of civil liberty and technology was part of an effort to reimplement the Internet in the interests of the users, freedom and democracy.
Influences for change to dig
|
https://en.wikipedia.org/wiki/Hamilton%27s%20principle
|
In physics, Hamilton's principle is William Rowan Hamilton's formulation of the principle of stationary action. It states that the dynamics of a physical system are determined by a variational problem for a functional based on a single function, the Lagrangian, which may contain all physical information concerning the system and the forces acting on it. The variational problem is equivalent to and allows for the derivation of the differential equations of motion of the physical system. Although formulated originally for classical mechanics, Hamilton's principle also applies to classical fields such as the electromagnetic and gravitational fields, and plays an important role in quantum mechanics, quantum field theory and criticality theories.
Mathematical formulation
Hamilton's principle states that the true evolution q(t) of a system described by generalized coordinates q between two specified states q(t₁) and q(t₂) at two specified times t₁ and t₂ is a stationary point (a point where the variation is zero) of the action functional S[q] = ∫ from t₁ to t₂ of L(q(t), q̇(t), t) dt,
where L is the Lagrangian function for the system. In other words, any first-order perturbation of the true evolution results in (at most) second-order changes in S. The action S is a functional, i.e., something that takes as its input a function and returns a single number, a scalar. In terms of functional analysis, Hamilton's principle states that the true evolution of a physical system is a solution of the functional equation δS/δq(t) = 0.
That is, the system takes a path in configuration space for which the action is stationary, with fixed boundary conditions at the beginning and the end of the path.
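In standard textbook notation, the principle and the equations it leads to can be summarised as
\[ \mathcal{S}[\mathbf{q}] = \int_{t_1}^{t_2} L\bigl(\mathbf{q}(t), \dot{\mathbf{q}}(t), t\bigr)\, dt, \qquad \delta \mathcal{S} = 0 \]
for variations that vanish at the endpoints, the stationarity condition being equivalent to the Euler–Lagrange equations
\[ \frac{\partial L}{\partial q_i} - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_i} = 0 . \]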
Euler–Lagrange equations derived from the action integral
See also more rigorous derivation Euler–Lagrange equation
Requiring that the true trajectory be a stationary point of the action functional is equivalent to a set of differential equations for (the Euler–Lagrange equations), which may be derived as follows.
Let represent the true evolution of the syst
|
https://en.wikipedia.org/wiki/Explosive%20lens
|
An explosive lens—as used, for example, in nuclear weapons—is a highly specialized shaped charge. In general, it is a device composed of several explosive charges. These charges are arranged and formed with the intent to control the shape of the detonation wave passing through them. The explosive lens is conceptually similar to an optical lens, which focuses light waves. The charges that make up the explosive lens are chosen to have different rates of detonation. In order to convert a spherically expanding wavefront into a spherically converging one using only a single boundary between the constituent explosives, the boundary shape must be a paraboloid; similarly, to convert a spherically diverging front into a flat one, the boundary shape must be a hyperboloid, and so on. Several boundaries can be used to reduce aberrations (deviations from intended shape) of the final wavefront.
Invention
As mentioned by Hans Bethe, the explosive lens device was invented and designed by John von Neumann.
Use in nuclear weapons
In a nuclear weapon, an array of explosive lenses is used to change the several approximately spherical diverging detonation waves into a single spherical converging one. The converging wave is then used to collapse the various shells (tamper, reflector, pusher, etc.) and finally compresses the core (pit) of fissionable material to a prompt critical state. They are usually machined from a plastic bonded explosive and an inert insert, called a wave-shaper, which is often a dense foam or plastic, though many other materials can be used. Other, mainly older explosive lenses do not include a wave shaper, but employ two explosive types that have significantly different velocities of detonation (VoD), which are in the range from 5 to 9 km/s. The use of the low- and high-speed explosives again results in a spherical converging detonation wave to compress the physics package. The original Gadget device used in the Trinity test and Fat Man dro
|
https://en.wikipedia.org/wiki/KM3
|
KM3 or Kernel Meta Meta Model is a neutral computer language to write metamodels and to define Domain Specific Languages. KM3 has been defined at INRIA and is available under the Eclipse platform.
References
KM3: a DSL for Metamodel Specification Jouault, F, and Bézivin, J (2006). In: Proceedings of 8th IFIP International Conference on Formal Methods for Open Object-Based Distributed Systems, LNCS 4037, Bologna, Italy, pages 171-185.
ADT Download
Eclipse GMT site
softwarefactories.com article
Softmetaware.com article
uio.no article
softmetaware.com article
trese.cs.utwente.nl presentation
bis.uni-leipzig.de presentation
Related Concepts
Model-driven architecture (MDA is an OMG Trademark),
Model Driven Engineering (MDE is not an OMG Trademark)
Domain Specific Language (DSL)
Domain-specific modelling (DSM)
Model-based testing (MBT)
Meta-modeling
ATL
XMI
OCL
MTL
MOF
Object-oriented analysis and design (OOAD)
Kermeta
External links
KM3 @ Eclipse.
Specification languages
|
https://en.wikipedia.org/wiki/Tire%20Science%20and%20Technology
|
Tire Science and Technology is a quarterly peer-reviewed scientific journal that publishes original research and reviews on experimental, analytical, and computational aspects of tires. Since 1978, the Tire Society has published the journal. The current editor-in-chief is Michael Kaliske (Dresden University of Technology).
History
The journal was founded in 1973 and was originally published by a committee of the American Society for Testing and Materials until 1977, when the Tire Society was incorporated for the purpose of continuing the journal.
Content
Topics of interest to journal readers include adhesion, aerospace, aging, agriculture, automotive, composite materials, constitutive modeling, contact mechanics, cord mechanics, curing, design theories, durability, elastomers, finite element analysis, force and moment behavior, groove wander, heat build up, hydroplaning, impact, manufacturing, mechanics, military, noise, pavement, performance evaluation, racing, rolling resistance, snow and ice, soil, standing waves, stiffness, strength, traction, vehicle dynamics, vibration, and wear.
Past Editors
1977 – 1982: Dan Livingston (Goodyear Tire and Rubber Company)
1983 – 1994: Raouf Ridha (Goodyear Tire and Rubber Company)
1995 – 1999: Jozef DeEskinazi (Continental)
2000 – 2007: Farhad Tabaddor (Michelin)
2008 – 2009: William V. Mars (Cooper Tire)
2010 – present: Michael Kaliske (TU Dresden)
External links
References
Tires
Engineering journals
Materials science journals
Academic journals published by learned and professional societies
Academic journals established in 1973
English-language journals
Quarterly journals
|
https://en.wikipedia.org/wiki/Wolfram%20code
|
Wolfram code is a widely used numbering system for one-dimensional cellular automaton rules, introduced by Stephen Wolfram in a 1983 paper and popularized in his book A New Kind of Science.
The code is based on the observation that a table specifying the new state of each cell in the automaton, as a function of the states in its neighborhood, may be interpreted as a k-digit number in the S-ary positional number system, where S is the number of states that each cell in the automaton may have, k = S^(2n + 1) is the number of neighborhood configurations, and n is the radius of the neighborhood. Thus, the Wolfram code for a particular rule is a number in the range from 0 to S^(S^(2n + 1)) − 1, converted from S-ary to decimal notation. It may be calculated as follows:
List all the S^(2n + 1) possible state configurations of the neighbourhood of a given cell.
Interpreting each configuration as a number as described above, sort them in descending numerical order.
For each configuration, list the state which the given cell will have, according to this rule, on the next iteration.
Interpret the resulting list of states again as an S-ary number, and convert this number to decimal. The resulting decimal number is the Wolfram code.
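For illustration, the following Python sketch implements the procedure just listed for arbitrary S and n, together with its inverse, and checks it against the well-known elementary rule 30 (the function names are arbitrary):

from itertools import product

def wolfram_code(rule, S=2, n=1):
    # `rule` maps a neighbourhood tuple of 2n+1 cell states (each 0..S-1) to the cell's next state.
    configs = sorted(product(range(S), repeat=2 * n + 1), reverse=True)  # descending numerical order
    code = 0
    for config in configs:                 # read the successor states as digits of an S-ary number
        code = code * S + rule(config)
    return code

def rule_table(code, S=2, n=1):
    # Invert the encoding: recover the lookup table from a Wolfram code.
    configs = sorted(product(range(S), repeat=2 * n + 1), reverse=True)
    digits = []
    for _ in configs:
        code, d = divmod(code, S)
        digits.append(d)
    return dict(zip(configs, reversed(digits)))

rule30 = lambda c: c[0] ^ (c[1] | c[2])    # rule 30: left XOR (centre OR right)
assert wolfram_code(rule30) == 30
print(rule_table(30))                      # {(1, 1, 1): 0, (1, 1, 0): 0, ..., (0, 0, 0): 0}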
The Wolfram code does not specify the size (nor shape) of the neighbourhood, nor the number of states — these are assumed to be known from context. When used on their own without such context, the codes are often assumed to refer to the class of elementary cellular automata, two-state one-dimensional cellular automata with a (contiguous) three-cell neighbourhood, which Wolfram extensively investigates in his book. Notable rules in this class include rule 30, rule 110, and rule 184. Rule 90 is also interesting because it creates Pascal's triangle modulo 2. A code of this type suffixed by an R, such as "Rule 37R", indicates a second-order cellular automaton with the same neighborhood structure.
While in a strict sense every Wolfram code in the valid range def
|
https://en.wikipedia.org/wiki/Replay%20Professional
|
Replay Professional was a sound sampling product for the Atari ST, released in 1988.
It consisted of a cartridge which interfaced an analog to digital converter (with 10, 12 and 14 bit variants) and software.
It included a suite of offline DSP functions (Fast Fourier transform and a range of filters, the so-called fast (IIR) and slow (FIR) filters), MIDI sequencing and a drum machine.
Compact discs were a relatively new consumer product at that time, and the front cover used CD-like artwork, although no CD media was included and the programs themselves came on three 3.5 inch floppy disks.
External links
- the file format used for the samples
Atari ST Replay 16: Atari Mania
Gallery
Atari ST software
|
https://en.wikipedia.org/wiki/Dannie%20Heineman%20Prize%20for%20Mathematical%20Physics
|
Dannie Heineman Prize for Mathematical Physics is an award given each year since 1959 jointly by the American Physical Society and American Institute of Physics. It was established by the Heineman Foundation in honour of Dannie Heineman. As of 2010, the prize consists of US$10,000 and a certificate citing the contributions made by the recipient, plus travel expenses to attend the meeting at which the prize is bestowed.
Past Recipients
Source: American Physical Society
2023 Nikita Nekrasov
2022 Antti Kupiainen and Krzysztof Gawędzki
2021 Joel Lebowitz
2020 Svetlana Jitomirskaya
2019 T. Bill Sutherland, Francesco Calogero and Michel Gaudin
2018 Barry Simon
2017 Carl M. Bender
2016 Andrew Strominger and Cumrun Vafa
2015 Pierre Ramond
2014 Gregory W. Moore
2013 Michio Jimbo and Tetsuji Miwa
2012 Giovanni Jona-Lasinio
2011 Herbert Spohn
2010 Michael Aizenman
2009 Carlo Becchi, Alain Rouet, Raymond Stora and Igor Tyutin
2008 Mitchell Feigenbaum
2007 Juan Maldacena and Joseph Polchinski
2006 Sergio Ferrara, Daniel Z. Freedman and Peter van Nieuwenhuizen
2005 Giorgio Parisi
2004 Gabriele Veneziano
2003 Yvonne Choquet-Bruhat and James W. York
2002 Michael B. Green and John Henry Schwarz
2001 Vladimir Igorevich Arnold
2000 Sidney R. Coleman
1999 Barry M. McCoy, Tai Tsun Wu and Alexander B. Zamolodchikov
1998 Nathan Seiberg and Edward Witten
1997 Harry W. Lehmann
1996 Roy J. Glauber
1995 Roman W. Jackiw
1994 Richard Arnowitt, Stanley Deser and Charles W. Misner
1993 Martin C. Gutzwiller
1992 Stanley Mandelstam
1991 Thomas C. Spencer and Jürg Fröhlich
1990 Yakov Sinai
1989 John S. Bell
1988 Julius Wess and Bruno Zumino
1987 Rodney Baxter
1986 Alexander M. Polyakov
1985 David P. Ruelle
1984 Robert B. Griffiths
1983 Martin D. Kruskal
1982 John Clive Ward
1981 Jeffrey Goldstone
1980 James Glimm and Arthur Jaffe
1979 Gerard 't Hooft
1978 Elliott Lieb
1977 Steven Weinberg
1976 Stephen Hawking
1975 Ludwig D. Faddeev
1974 Subrahmanyan Chandrasekhar
1973 Kenneth G. Wilson
1972 James D.
|
https://en.wikipedia.org/wiki/Comparison%20of%20geographic%20information%20systems%20software
|
This is a comparison of notable GIS software. To be included on this list, the software must have a linked existing article.
License, source, & operating system support
Pure server
Map servers
Map caches
Pure web client
Libraries
See also
Open Source Geospatial Foundation (OSGeo)
Geographic information system software
GIS Live DVD
References
GIS software
GIS
|
https://en.wikipedia.org/wiki/Rest%20frame
|
In special relativity, the rest frame of a particle is the frame of reference (a coordinate system attached to physical markers) in which the particle is at rest.
The rest frame of compound objects (such as a fluid, or a solid made of many vibrating atoms) is taken to be the frame of reference in which the average momentum of the particles which make up the substance is zero (the particles may individually have momentum, but collectively have no net momentum). The rest frame of a container of gas, for example, would be the rest frame of the container itself, in which the gas molecules are not at rest, but are no more likely to be traveling in one direction than another. The rest frame of a river would be the frame of an unpowered boat, in which the mean velocity of the water is zero. This frame is also called the center-of-mass frame, or center-of-momentum frame.
The center-of-momentum frame is notable for being the reference frame in which the total energy (total relativistic energy) of a particle or compound object is also the invariant mass (times the scale factor c², the speed of light squared). It is also the reference frame in which the object or system has minimum total energy.
In both special relativity and general relativity it is essential to specify the rest frame of any time measurements, as the time that an event occurred is dependent on the rest frame of the observer. For this reason the timings of astronomical events such as supernovae are usually recorded in terms of when the light from the event reached Earth, as the "real time" that the event occurred depends on the rest frame chosen. For example, in the rest frame of a neutrino particle travelling from the Crab Nebula supernova to Earth, the supernova occurred in the 11th Century AD only a short while before the light reached Earth, but in Earth's rest frame the event occurred about 6300 years earlier.
References
See p. 139-140 for discussion of the stress-energy tensor for a perfect fluid such as a
|
https://en.wikipedia.org/wiki/Milnor%27s%20sphere
|
In mathematics, specifically differential and algebraic topology, during the mid-1950s John Milnor was trying to understand the structure of -connected manifolds of dimension (since -connected -manifolds are homeomorphic to spheres, this is the first non-trivial case after) and found an example of a space which is homotopy equivalent to a sphere, but was not explicitly diffeomorphic. He did this by looking at real vector bundles over a sphere and studying the properties of the associated disk bundle. It turns out that the boundary of this bundle is homotopically equivalent to a sphere, but in certain cases it is not diffeomorphic. This lack of diffeomorphism comes from studying a hypothetical cobordism between this boundary and a sphere, and showing that this hypothetical cobordism would violate certain properties of the Hirzebruch signature theorem.
See also
Exotic sphere
Oriented cobordism
References
Differential topology
Algebraic topology
Topology
|
https://en.wikipedia.org/wiki/Crack%20%28password%20software%29
|
Crack is a Unix password cracking program designed to allow system administrators to locate users who may have weak passwords vulnerable to a dictionary attack. Crack was the first standalone password cracker for Unix systems and the first to introduce programmable dictionary generation as well.
Crack began in 1990 when Alec Muffett, a Unix system administrator at the University of Wales Aberystwyth, was trying to improve Dan Farmer's 'pwc' cracker in COPS. Muffett found that by re-engineering the memory management, he got a noticeable performance increase. This led to a total rewrite which became "Crack v2.0" and further development to improve usability.
Public Releases
The first public release of Crack was version 2.7a, which was posted to the Usenet newsgroups alt.sources and alt.security on 15 July 1991. Crack v3.2a+fcrypt, posted to comp.sources.misc on 23 August 1991, introduced an optimised version of the Unix crypt() function but was still only really a faster version of what was already available in other packages.
The release of Crack v4.0a on 3 November 1991, however, introduced several new features that made it a formidable tool in the system administrator's arsenal.
Programmable dictionary generator
Network distributed password cracking
Crack v5.0a released in 2000 did not introduce any new features, but instead concentrated on improving the code and introducing more flexibility, such as the ability to integrate other crypt() variants such as those needed to attack the MD5 password hashes used on more modern Unix, Linux and Windows NT systems. It also bundled Crack v6 - a minimalist password cracker and Crack v7 - a brute force password cracker.
Legal issues arising from using Crack
Randal L. Schwartz, a notable Perl programming expert, was prosecuted in 1995 for using Crack on the password file of a system at Intel; the verdict in the case was eventually expunged.
Crack was also used by Kevin Mitnick when hacking into Sun Microsystems
|
https://en.wikipedia.org/wiki/Professor%20of%20Mathematics%20%28Glasgow%29
|
The Chair of Mathematics in the University of Glasgow in Scotland was established in 1691. Previously, under James VI's Nova Erectio, the teaching of Mathematics had been the responsibility of the Regents.
List of Mathematics Professors
George Sinclair MA (1691-1696)
Robert Sinclair MA MD (1699)
Robert Simson MA MD (1711)
Rev Prof James Williamson FRSE MA DD (1761)
James Millar MA (1796)
James Thomson MA LLD (1832)
Hugh Blackburn MA (1849)
William Jack MA LLD (1879)
George Alexander Gibson MA LLD (1909)
Thomas Murray MacRobert MA DSc LLD (1927)
Robert Alexander Rankin MA PhD DSc FRSE (1954-1982)
Robert Winston Keith Odoni BSc PhD FRSE (1989-2001)
Peter Kropholler (2003-2013)
Michael Wemyss (2016-)
References
Who, What and Where: The History and Constitution of the University of Glasgow (compiled by Michael Moss, Moira Rankin and Lesley Richmond)
https://www.universitystory.gla.ac.uk/biography/?id=WH1773&type=P
https://www.maths.gla.ac.uk/~mwemyss/
See also
List of Professorships at the University of Glasgow
Mathematics
Glasgow
1691 establishments in Scotland
Mathematics education in the United Kingdom
|
https://en.wikipedia.org/wiki/Yamaha%20YM2151
|
The Yamaha YM2151, also known as OPM (FM Operator Type-M), is an eight-channel, four-operator sound chip. It was Yamaha's first single-chip FM synthesis implementation, created originally for some of the Yamaha DX series of keyboards (DX21, DX27, and DX100). Yamaha also used it in some of their budget-priced electric pianos, such as the YPR-7, -8, and -9.
Uses
The YM2151 was used in many arcade game system boards, starting with Atari's Marble Madness in 1984, then Sega arcade system boards from 1985, and then arcade games from Konami, Capcom, Data East, Irem, and Namco, as well as Williams pinball machines, with its heaviest use in the mid-to-late 1980s. It was also used in Sharp's X1 and X68000 home computers, as well as the modern hobbyist Commander X16 8-bit computer.
The chip was used in the Yamaha SFG-01 and SFG-05 FM Sound Synthesizer units. These are expansion units for Yamaha MSX computers and were already built into some machines such as the Yamaha CX5M. Later SFG-05 modules contain the YM2164 (OPP), an almost identical chip with only minor changes to control registers. The SFGs were followed by the Yamaha FB-01, a standalone version powered exclusively by the YM2164.
Technical details
The YM2151 was paired with either a YM3012 stereo DAC or a YM3014 monophonic DAC so that the output of its FM tone generator could be supplied to speakers as analog audio.
See also
Yamaha YM2164
Yamaha YM2612
References
External links
Yamaha YM2151 OPM Application Manual
YM2151
Video game music technology
|
https://en.wikipedia.org/wiki/Binomial%20number
|
In mathematics, specifically in number theory, a binomial number is an integer which can be obtained by evaluating a homogeneous polynomial containing two terms. It is a generalization of a Cunningham number.
Definition
A binomial number is an integer obtained by evaluating a homogeneous polynomial containing two terms, also called a binomial. The form of this binomial is xⁿ ± yⁿ, with x > y. However, since xⁿ − yⁿ is always divisible by x − y, when studying the numbers generated from the version with the negative sign, they are usually divided by x − y first. Binomial numbers formed this way form Lucas sequences. Specifically, the two families are
xⁿ + yⁿ and (xⁿ − yⁿ) / (x − y).
Binomial numbers are a generalization of Cunningham numbers, and it will be seen that the Cunningham numbers are binomial numbers where y = 1. Other subsets of the binomial numbers are the Mersenne numbers and the repunits.
Factorization
The main reason for studying these numbers is to obtain their factorizations. Aside from algebraic factors, which are obtained by factoring the underlying polynomial (binomial) that was used to define the number, such as difference of two squares and sum of two cubes, there are other prime factors (called primitive prime factors, because for a given they do not factorize with ) which occur seemingly at random, and it is these which the number theorist is looking for.
Some binomial numbers' underlying binomials have Aurifeuillian factorizations, which can assist in finding prime factors. Cyclotomic polynomials are also helpful in finding factorizations.
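For illustration, the algebraic factors can be inspected with a computer algebra system; the short SymPy sketch below factors the binomial x⁶ − y⁶ into its algebraic factors and numerically factors the Mersenne-type binomial number 2¹⁵ − 1:

from sympy import symbols, factor, factorint

x, y = symbols('x y')
print(factor(x**6 - y**6))    # (x - y)*(x + y)*(x**2 - x*y + y**2)*(x**2 + x*y + y**2)
print(factorint(2**15 - 1))   # {7: 1, 31: 1, 151: 1}

The quadratic factors here are cyclotomic polynomials evaluated homogeneously, which is why cyclotomic polynomials are useful in this setting.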
The amount of work required in searching for a factor is considerably reduced by applying Legendre's theorem. This theorem states that all factors of a binomial number are of the form if is even or if it is odd.
Observation
Some people write "binomial number" when they mean binomial coefficient, but this usage is not standard and is deprecated.
See also
Cunningham project
Notes
References
External links
Binomial Number at MathWorld
Number theory
|
https://en.wikipedia.org/wiki/ALCOR
|
ALCOR (an acronym for ALGOL Converter) is an early computer language definition created by the ALCOR Group, a consortium of universities, research institutions and manufacturers in Europe and the United States which was founded in 1959 and had 60 members in 1966. The group had the aim of a common compiler specification for a subset of ALGOL 60 after the ALGOL meeting in Copenhagen in 1958.
In addition to its programming application, just as the name Algol is also an astronomical reference, to the star Algol, so too Alcor is a reference to the star Alcor, the fainter companion of the 2nd-magnitude star Zeta Ursae Majoris. This was sometimes noted ironically as a bad omen for the future of the language.
In Europe, a high level machine architecture for ALGOL 60 was devised which was emulated on various real computers, among them the Siemens 2002 and the IBM 7090. An ALGOL manual was published which provided a detailed introduction of all features of the language with many program snippets, and four appendixes:
Revised Report on the Algorithmic Language ALGOL 60
Report on Subset ALGOL 60 (IFIP)
Report on Input-Output Procedures for ALGOL 60
An early "standard" character set for representing ALGOL 60 code on paper and paper tape. This character set introduced the characters "×", ";", "[", "]", and "⏨" into the CCITT-2 code, the first two replacing "?" and the BEL control character, the others taking unused code points.
References
Baumann, R. (1961). "ALGOL Manual of the ALCOR Group, Pts. 1, 2 & 3", Elektronische Rechenanlagen No. 5 (Oct. 1961), 206–212; No. 6 (Dec. 1961), 259–265; No. 2 (Apr. 1962) (in German)
Papertape, punched card, magnetic tape coding schemes Computer Museum, University of Amsterdam, the Netherlands
External links
ALCOR in The Encyclopedia of Computer Languages
The ALCOR Project, Klaus Samelson, Friedrich L. Bauer, 1962.
Algol programming language family
Systems programming languages
Procedural programming languages
C
|
https://en.wikipedia.org/wiki/Unit%20process
|
A unit process is one or more grouped operations in a manufacturing system that can be defined and separated from others.
In life-cycle assessment (LCA) and ISO 14040, a unit process is defined as "smallest element considered in the life cycle inventory analysis for which input and output data are quantified".
See also
Unit operation
References
|
https://en.wikipedia.org/wiki/Hermite%27s%20identity
|
In mathematics, Hermite's identity, named after Charles Hermite, gives the value of a summation involving the floor function. It states that for every real number x and for every positive integer n the following identity holds: ⌊x⌋ + ⌊x + 1/n⌋ + ⌊x + 2/n⌋ + ⋯ + ⌊x + (n−1)/n⌋ = ⌊nx⌋.
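The identity is easy to check numerically; for illustration, the following Python sketch verifies it on random rational inputs using exact arithmetic:

from fractions import Fraction
from math import floor
import random

for _ in range(1000):
    x = Fraction(random.randint(-10000, 10000), random.randint(1, 100))
    n = random.randint(1, 50)
    lhs = sum(floor(x + Fraction(k, n)) for k in range(n))
    assert lhs == floor(n * x)
print("Hermite's identity holds on all sampled inputs")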
Proofs
Proof by algebraic manipulation
Split x into its integer part and fractional part, x = ⌊x⌋ + {x}. There is exactly one k′ with 1 ≤ k′ ≤ n such that ⌊x⌋ = ⌊x + (k′−1)/n⌋ ≤ x < ⌊x + k′/n⌋ = ⌊x⌋ + 1.
By subtracting the same integer ⌊x⌋ from inside the floor operations on the left and right sides of this inequality, it may be rewritten as 0 = ⌊{x} + (k′−1)/n⌋ ≤ {x} < ⌊{x} + k′/n⌋ = 1.
Therefore, 1 − k′/n ≤ {x} < 1 − (k′−1)/n,
and multiplying both sides by n gives n − k′ ≤ n{x} < n − k′ + 1, so that ⌊n{x}⌋ = n − k′.
Now if the summation from Hermite's identity is split into two parts at index k′, it becomes ⌊x⌋ + ⌊x + 1/n⌋ + ⋯ + ⌊x + (n−1)/n⌋ = k′⌊x⌋ + (n − k′)(⌊x⌋ + 1) = n⌊x⌋ + n − k′ = n⌊x⌋ + ⌊n{x}⌋ = ⌊n⌊x⌋ + n{x}⌋ = ⌊nx⌋.
Proof using functions
Consider the function f(x) = ⌊nx⌋ − (⌊x⌋ + ⌊x + 1/n⌋ + ⋯ + ⌊x + (n−1)/n⌋).
Then the identity is clearly equivalent to the statement f(x) = 0 for all real x. But then we find f(x + 1/n) = ⌊nx + 1⌋ − (⌊x + 1/n⌋ + ⋯ + ⌊x + (n−1)/n⌋ + ⌊x + 1⌋) = f(x),
where in the last equality we use the fact that ⌊y + m⌋ = ⌊y⌋ + m for all integers m. But then f has period 1/n. It then suffices to prove that f(x) = 0 for all 0 ≤ x < 1/n. But in this case, the integral part of each summand in f is equal to 0. We deduce that the function f is indeed 0 for all real inputs x.
References
Mathematical identities
Articles containing proofs
|
https://en.wikipedia.org/wiki/Nanjing%20Metro
|
The Nanjing Metro is a rapid transit system serving the urban and suburban districts of Nanjing, the capital city of Jiangsu Province in the People's Republic of China.
Proposals for a metro system serving Nanjing first began in 1984, with approval by the State Planning Commission granted in 1994. Construction of the initial 16-station Line 1 began in 1999, and the line opened in 2005. The system has 13 lines and 218 stations running on of track. It is operated and maintained by the Nanjing Metro Group Company. Future expansion plans include 30 lines set to open within the next few years, with several more awaiting approval to begin construction.
History
Early proposals
In 1984 the first serious proposal for construction of a subway appeared in the Municipal People's Congress. In April 1986, the Nanjing Integrated Transport Planning group was established to research how to implement a subway system in Nanjing. In December 1986 the team published the "Nanjing Metro Initial Phase". The phase consisted of a north–south line, an east–west line and a diagonal northwest-to-southeast line. The three lines meet in the city center, forming a triangle. A revision of the "Nanjing City Master Plan" in 1993 added another line through the urban core, and three light metro lines connecting Nanjing's suburbs in Pukou and the then-proposed new airport. In addition a suburban railway to Longtan was proposed. A 1999 report on "Nanjing city rapid rail transit network planning" further proposed six subway lines, two subway extensions and three light metro lines.
In 1994, the State Planning Commission approved the preparatory work for the subway only to have the entire metro project postponed in 1995 amid a national freeze on new metro projects.
Major changes were made to "Nanjing Urban Rail Transit Network Planning" in 2003. The new master plan consisted of 13 lines, of which nine are subway lines and four are light metro lines. The new Line 6 will be a loop line connecting all the urban
|
https://en.wikipedia.org/wiki/Complex%20logarithm
|
In mathematics, a complex logarithm is a generalization of the natural logarithm to nonzero complex numbers. The term refers to one of the following, which are strongly related:
A complex logarithm of a nonzero complex number z, defined to be any complex number w for which e^w = z. Such a number w is denoted by log z. If z is given in polar form as z = re^(iθ), where r and θ are real numbers with r > 0, then ln r + iθ is one logarithm of z, and all the complex logarithms of z are exactly the numbers of the form ln r + i(θ + 2πk) for integers k. These logarithms are equally spaced along a vertical line in the complex plane.
A complex-valued function log : U → C, defined on some subset U of the set of nonzero complex numbers, satisfying e^(log z) = z for all z in U. Such complex logarithm functions are analogous to the real logarithm function ln : (0, ∞) → R, which is the inverse of the real exponential function and hence satisfies e^(ln x) = x for all positive real numbers x. Complex logarithm functions can be constructed by explicit formulas involving real-valued functions, by integration of 1/z, or by the process of analytic continuation.
There is no continuous complex logarithm function defined on all of the nonzero complex numbers. Ways of dealing with this include branches, the associated Riemann surface, and partial inverses of the complex exponential function. The principal value defines a particular complex logarithm function Log z that is continuous except along the negative real axis; on the complex plane with the negative real numbers and 0 removed, it is the analytic continuation of the (real) natural logarithm.
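For illustration, Python's cmath module computes the principal value, and adding integer multiples of 2πi produces the other logarithms of the same number:

import cmath

z = -1 + 1j
w = cmath.log(z)                      # principal value: ln|z| + i*Arg(z), with Arg(z) in (-pi, pi]
print(w)                              # approximately 0.3466 + 2.3562j
for k in (-1, 0, 1):
    print(cmath.exp(w + 2j * cmath.pi * k))   # each exponentiates back to (approximately) -1+1j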
Problems with inverting the complex exponential function
For a function to have an inverse, it must map distinct values to distinct values; that is, it must be injective. But the complex exponential function is not injective, because e^(w + 2πik) = e^w for any complex number w and integer k, since adding iθ to w has the effect of rotating e^w counterclockwise by θ radians. So the points w, w ± 2πi, w ± 4πi, …,
equally spaced along a vertical line, are all mapped to the same number by the exponential function. This means that the ex
|
https://en.wikipedia.org/wiki/Gnits%20standards
|
The Gnits standards are a collection of standards and recommendations for programming, maintaining, and distributing software. They are published by a group of GNU project maintainers who call themselves "Gnits", which is short for "GNU nit-pickers". As such, they represent advice, not Free Software Foundation or GNU policy, but parts of the Gnits' standards have seen widespread adoption among free software programmers in general.
The Gnits standards are extensions to, refinements of, and annotations for the GNU Standards. However, they are in no way normative in GNU; GNU maintainers are not required to follow them. Nevertheless, maintainers and programmers often find in Gnits standards good ideas on the way to follow GNU Standards themselves, as well as tentative, non-official explanations about why some GNU standards were decided the way they are. There are very few discrepancies between Gnits and GNU standards, and they are always well noted as such.
The standards address aspects of software architecture, program behaviour, human–computer interaction, C programming, documentation, and software releases.
As of 2008, the Gnits standards carry a notice that they are moribund and no longer actively maintained, and point readers to the manuals of Gnulib, Autoconf, and Automake, which are said to cover many of the same topics.
See also
GNU Autotools
GNU coding standards
External links
Gnits Standards
Gnits Standards (mirror)
Effect of Gnits on automake options
Computer standards
GNU Project
Computer programming
Free software culture and documents
|
https://en.wikipedia.org/wiki/Polycide
|
Polycide is a silicide formed over polysilicon. It is widely used in DRAMs. In a polycide MOSFET transistor process, the silicide is formed only over the polysilicon film, as formation occurs prior to any polysilicon etch. Polycide processes contrast with salicide processes, in which silicide is formed after the polysilicon etch. Thus, with a salicide process, silicide is formed over both the polysilicon gate and the exposed monocrystalline terminal regions of the transistor in a self-aligned fashion.
Semiconductor device fabrication
Silicon
|
https://en.wikipedia.org/wiki/Acid%20phosphatase
|
Acid phosphatase (EC 3.1.3.2, systematic name phosphate-monoester phosphohydrolase (acid optimum)) is an enzyme that frees attached phosphoryl groups from other molecules during digestion. It can be further classified as a phosphomonoesterase. It is stored in lysosomes and functions when these fuse with endosomes, which are acidified while they function; therefore, it has an acid pH optimum. This enzyme is present in many animal and plant species.
Different forms of acid phosphatase are found in different organs, and their serum levels are used to evaluate the success of the surgical treatment of prostate cancer. In the past, they were also used to diagnose this type of cancer.
It is also used as a cytogenetic marker to distinguish the two different lineages of acute lymphoblastic leukemia (ALL): B-ALL (a leukemia of B lymphocytes) is acid-phosphatase negative, while T-ALL (originating instead from T lymphocytes) is acid-phosphatase positive.
Acid phosphatase catalyzes the following reaction at an optimal acidic pH (below 7):
a phosphate monoester + H2O = an alcohol + phosphate
Phosphatase enzymes are also used by soil microorganisms to access organically bound phosphate nutrients. An assay on the rates of activity of these enzymes may be used to ascertain biological demand for phosphates in the soil.
Some plant roots, especially cluster roots, exude carboxylates that perform acid phosphatase activity, helping to mobilise phosphorus in nutrient-deficient soils.
Certain bacteria, such as Nocardia, can degrade this enzyme and utilize it as a carbon source.
Bone acid phosphatase
Tartrate-resistant acid phosphatase may be used as a biochemical marker of osteoclast function during the process of bone resorption.
Genes
The following genes encode the polypeptide components for various acid phosphatase isoenzymes:
ACP1
ACP2
ACPP (ACP3), prostatic acid phosphatase
ACP5, tartrate-resistant acid phosphatase
ACP6
ACPT, testicular acid phosphatase
Tissue acid phosphatase,
|
https://en.wikipedia.org/wiki/UniVBE
|
UniVBE (short for Universal VESA BIOS Extensions) is a software driver that allows DOS applications written to the VESA BIOS standard to run on almost any display device made in the last 15 years or so.
The UniVBE driver was written by SciTech Software and is also available in their product called SciTech Display Doctor.
The primary benefit is increased compatibility and performance with DOS games. Many video cards have sub-par implementations of the VESA standards, or no support at all. UNIVBE replaces the card's built-in support. Many DOS games include a version of UNIVBE because VESA issues were so widespread.
According to SciTech Software Inc, SciTech Display Doctor is licensed by IBM as the native graphics driver solution for OS/2.
History
The software started out as The Universal VESA TSR (UNIVESA), written by Kendall Bennett. It was renamed to Universal VESA BIOS Extensions (UniVBE) in version 4.2 at the request of the VESA organisation, and from that point was no longer freeware.
In version 5.1, VBE/Core 2.0 support was added.
In version 5.2, it was renamed to Scitech Display Doctor. However, UniVBE continued to be the name used for the actual driver.
Version 6 included support of VBE/Core 3.0, VBE/SCI.
Version 6.5 introduced the ability to use Scitech Display Doctor as wrapper video driver.
Version 7 supports VESA/MCCS and included Scitech GLDirect, an OpenGL emulator. This version was also ported to OS/2 and Linux (as version 1.0). However, the proposed product was never widely available. Only pre-releases were available to the public. In the Windows SDD prerelease, it included DOS UniVBE driver 7.20 beta, the Scitech Nucleus Graphics driver, GLDirect 2.0 and 3.0 beta. SDD 7 was first released on OS/2 on February 28, 2002, followed by a Windows beta on March 1, 2002.
SciTech Display Doctor 7.1 marked the final release of SDD, which was available on OS/2, among other operating systems. However, the Scitech Nucleus Graphics engine lived on as SciTech SNAP (System Neu
|
https://en.wikipedia.org/wiki/Algorithmic%20art
|
Algorithmic art or algorithm art is art, mostly visual art, in which the design is generated by an algorithm. Algorithmic artists are sometimes called algorists.
Overview
Algorithmic art, also known as computer-generated art, is a subset of generative art (generated by an autonomous system) and is related to systems art (influenced by systems theory). Fractal art is an example of algorithmic art.
For an image of reasonable size, even the simplest algorithms require too much calculation for manual execution to be practical, and they are thus executed on either a single computer or on a cluster of computers. The final output is typically displayed on a computer monitor, printed with a raster-type printer, or drawn using a plotter. Variability can be introduced by using pseudo-random numbers. There is no consensus as to whether the product of an algorithm that operates on an existing image (or on any input other than pseudo-random numbers) can still be considered computer-generated art, as opposed to computer-assisted art.
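For illustration, the sketch below (Python with the Pillow imaging library; the formula and output file name are arbitrary choices) generates a small image entirely from an algorithm, with pseudo-random jitter supplying the variability mentioned above:

import random
from PIL import Image

random.seed(42)                               # fix the seed so the result is reproducible
size = 256
img = Image.new("L", (size, size))            # 8-bit greyscale image
for yy in range(size):
    for xx in range(size):
        value = (xx ^ yy) % 256               # deterministic rule: XOR pattern
        value ^= random.randrange(8)          # small pseudo-random variation
        img.putpixel((xx, yy), value)
img.save("algorithmic_pattern.png")

Changing the formula, the seed, or the amount of jitter changes the resulting image, which is where the artistic choices lie.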
History
Roman Verostko argues that Islamic geometric patterns are constructed using algorithms, as are Italian Renaissance paintings which make use of mathematical techniques, in particular linear perspective and proportion.
Some of the earliest known examples of computer-generated algorithmic art were created by Georg Nees, Frieder Nake, A. Michael Noll, Manfred Mohr and Vera Molnár in the early 1960s. These artworks were executed by a plotter controlled by a computer, and were therefore computer-generated art but not digital art. The act of creation lay in writing the program, which specified the sequence of actions to be performed by the plotter. Sonia Landy Sheridan established Generative Systems as a program at the School of the Art Institute of Chicago in 1970 in response to social change brought about in part by the computer-robot communications revolution. Her early work with copier and telematic art focused on the differences betwee
|
https://en.wikipedia.org/wiki/Sergeant%20Stubby
|
Sergeant Stubby (1916 – March 16, 1926) was a dog and the unofficial mascot of the 102nd Infantry Regiment and was assigned to the 26th (Yankee) Division in World War I. He served for 18 months and participated in 17 battles and four offensives on the Western Front. He saved his regiment from surprise mustard gas attacks, found and comforted the wounded, and allegedly once caught a German soldier by the seat of his pants, holding him there until American soldiers found him. His actions were well-documented in contemporary American newspapers.
Stubby has been called the most decorated war dog of the Great War and the only dog to be nominated and promoted to sergeant through combat. Stubby's remains are in the Smithsonian Institution.
Stubby is the subject of the 2018 animated film Sgt. Stubby: An American Hero.
Early life
Stubby was described in contemporaneous news items as a Boston Terrier or "American bull terrier" mutt. Describing him as a dog of "uncertain breed," Ann Bausum wrote that: "The brindle-patterned pup probably owed at least some of his parentage to the evolving family of Boston Terriers, a breed so new that even its name was in flux: Boston Round Heads, American... and Boston Bull Terriers." Stubby was found wandering the grounds of the Yale University campus in New Haven, Connecticut, in July 1917, while members of the 102nd Infantry were training. He hung around as the men drilled and one soldier in particular, Corporal James Robert Conroy (1892–1987), developed a fondness for him. When it came time for the outfit to ship out, Conroy hid Stubby on board the troop ship. As they were getting off the ship in France, he hid Stubby under his overcoat without detection. Upon discovery by Conroy's commanding officer, Stubby saluted him as he had been trained to in camp, and the commanding officer allowed the dog to stay on board.
Military service
Stubby served with the 102nd Infantry Regiment in the trenches in France for 18 months and participated
|
https://en.wikipedia.org/wiki/DISCUS
|
DISCUS, or distributed source coding using syndromes, is a method for distributed source coding. It is a compression algorithm used to compress correlated data sources. The method is designed to achieve the Slepian–Wolf bound by using channel codes.
History
DISCUS was invented by researchers S. S. Pradhan and K. Ramchandran, and first published in their paper "Distributed source coding using syndromes (DISCUS): design and construction", published in the IEEE Transactions on Information Theory in 2003.
Variations
Many variations of DISCUS are presented in related literature. One such popular scheme is the channel code partitioning scheme, an a priori scheme for reaching the Slepian–Wolf bound. Many papers illustrate simulations and experiments on channel code partitioning using turbo codes, Hamming codes and irregular repeat-accumulate codes.
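For illustration, the toy Python sketch below (a simplified example in the spirit of syndrome-based coding, not the construction from the paper) uses the (7,4) Hamming code: the encoder transmits only the 3-bit syndrome of a 7-bit source word X, and a decoder holding side information Y that differs from X in at most one bit recovers X exactly:

import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i is the binary expansion of i+1.
H = np.array([[(i + 1) >> b & 1 for i in range(7)] for b in range(3)])

def encode(x):
    # Compress X (7 bits) to its syndrome (3 bits).
    return H @ x % 2

def decode(syndrome, y):
    # Recover X from its syndrome and correlated side information Y (at most 1 bit apart).
    diff = (H @ y + syndrome) % 2              # syndrome of the unknown error pattern e = X xor Y
    if not diff.any():
        return y.copy()
    pos = int(''.join(map(str, diff[::-1])), 2) - 1   # index of the matching parity-check column
    e = np.zeros(7, dtype=int)
    e[pos] = 1
    return (y + e) % 2

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy()
y[4] ^= 1                                      # side information: one bit flipped
assert np.array_equal(decode(encode(x), y), x)

If the difference pattern is uniform over its eight possibilities (no flip or a single flipped bit), the conditional entropy H(X|Y) is exactly 3 bits, so in this idealised toy model the 7-to-3-bit compression meets the Slepian–Wolf limit.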
See also
Modulo-N code is a simpler technique for compressing correlated data sources.
Distributed source coding
External links
"Distributed source coding using syndromes (DISCUS): design and construction" by Pradhan, S.S. and Ramchandran, K.
"DISCUS: Distributed Compression for Sensor Networks"
Distributed Source Coding can also be implemented using Convolutional Codes or using Turbo Codes
Information theory
Wireless sensor network
|
https://en.wikipedia.org/wiki/Flow%20stress
|
In materials science the flow stress, typically denoted as Yf (or σf), is defined as the instantaneous value of stress required to continue plastically deforming a material, that is, to keep it flowing. It is most commonly, though not exclusively, used in reference to metals. On a stress-strain curve, the flow stress can be found anywhere within the plastic regime; more explicitly, a flow stress can be found for any value of strain between and including the yield point (εy) and excluding fracture (εf): εy ≤ ε < εf.
The flow stress changes as deformation proceeds and usually increases as strain accumulates due to work hardening, although the flow stress could decrease due to any recovery process. In continuum mechanics, the flow stress for a given material will vary with changes in temperature, T, strain, ε, and strain rate, ε̇; therefore it can be written as some function of those properties: Yf = f(ε, ε̇, T).
The exact equation to represent flow stress depends on the particular material and plasticity model being used. Hollomon's equation is commonly used to represent the behavior seen in a stress-strain plot during work hardening:
Yf = K εp^n
where
Yf is the flow stress,
K is a strength coefficient,
εp is the plastic strain, and
n is the strain hardening exponent. Note that this is an empirical relation and does not model the relation at other temperatures or strain-rates (though the behavior may be similar).
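Hollomon's equation is straightforward to evaluate numerically. The sketch below uses illustrative, roughly textbook-style parameters in the range often quoted for annealed copper; they are not measured material data.

def hollomon_flow_stress(plastic_strain, K, n):
    """Flow stress from Hollomon's equation: Yf = K * strain**n."""
    return K * plastic_strain ** n

# Illustrative (not measured) parameters, roughly annealed-copper-like:
K = 315.0   # strength coefficient, MPa
n = 0.54    # strain hardening exponent

for eps in (0.05, 0.10, 0.20, 0.40):
    print(f"plastic strain = {eps:.2f}  ->  flow stress = {hollomon_flow_stress(eps, K, n):.1f} MPa")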
Generally, raising the temperature of an alloy above 0.5 Tm (half its absolute melting temperature) results in the plastic deformation mechanisms being controlled by strain-rate sensitivity, whereas at room temperature metals are generally strain-dependent. Other models may also include the effects of strain gradients. Independent of test conditions, the flow stress is also affected by: chemical composition, purity, crystal structure, phase constitution, microstructure, grain size, and prior strain.
The flow stress is an important parameter in the fatigue failure of ductile materials. Fatigue failure is caused by crack propagation in materials under a varying
|
https://en.wikipedia.org/wiki/Thomas%20Larcom
|
Major-General Sir Thomas Aiskew Larcom, Bart, PC FRS (22 April 1801 – 15 June 1879) was a leading official in the early Irish Ordnance Survey. He later became a poor law commissioner, census commissioner and finally executive head of the British administration in Ireland as under-secretary to the Lord-Lieutenant of Ireland, a position the government of the day was eager for him to take.
Born in Gosport, Hampshire, Larcom received his education at the Royal Military Academy and was commissioned in the Royal Engineers in 1820. He began his career with the Ordnance Survey of England in 1824 before being transferred to Ireland. With the rank of lieutenant he led the day-to-day operations of Survey headquarters by 1828 under Lt-Colonel Thomas Colby and established a meteorological observatory in Dublin. At the completion of the Survey's six-inch maps in 1846, Larcom joined the Irish Board of Works. In this role he was involved in the establishment of the Queen's University of Ireland.
The longest-serving under-secretary (1853–1868), Larcom had a distinguished career in his adopted country and acted with an impartiality that won him respect from all parties. In 1868 he was admitted to the Irish Privy Council and created a baronet.
Arms
Bibliography
Thomas Colby (1837), Ordnance Survey of the County of Londonderry (Dublin)
J.A. Lawson, "Manuscript life of Sir Thomas Larcom" (undated)
Montagu Burrows (1892), "Larcom, Thomas Aiskew", Dictionary of National Biography, 1885-1900, vol. 32
"A century of Irish Government", Edinburgh Review, no. 336 (1879)
"Obituary memoir of Sir T. A. Larcom", Proceedings of the Royal Society, no. 198 (1879)
Footnotes
References
Kidd, Charles, Williamson, David (editors). Debrett's Peerage and Baronetage (1990 edition). New York: St Martin's Press, 1990.
1801 births
1879 deaths
Baronets in the Baronetage of the United Kingdom
Royal Engineers officers
Ordnance Survey
Surveying
Members of the Privy Council of Ireland
Under-Secretaries f
|
https://en.wikipedia.org/wiki/Throw%20%28projector%29
|
In video projection terminology, throw is the distance between a video projector lens and the screen on which it shines. It is given as a ratio (called throw ratio), which describes the relationship between the distance to the screen and the width of the screen (assuming the image is to fill the screen fully). Throw ratio is a characteristic of the lens of the projector (although "projector throw" and "lens throw" are often used synonymously). Some projectors (typically larger, more expensive ones) are able to accept a variety of lenses, while lower cost projectors tend to have a permanent lens that is not designed to be changed. Some lenses are fixed at a specific throw ratio, while most are adjustable (zoomable) to a range of throw ratios. Distance to the screen is measured from the front of the lens.
Formula
Distance (D), Width (W), Throw Ratio (R)
If the distance and width are known, calculate the throw ratio using the formula: R = D / W
If the screen width and throw ratio are known, calculate the distance using the equivalent formula: D = W × R
Although it is often stated as a single value (or range of values), throw ratio is a comparison of D : W. To reduce this to a single number (as is typically seen in projector/lens specifications), start by dividing both sides by W, leaving us with D / W : 1. The 1 on the right means "for each one unit of width of the screen, how many units away should the projector be?" In practice the 1 is often assumed (omitted) when listing this specification.
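Both relations are simple enough to compute directly. The following sketch reproduces the 60-inch worked example from the next section; the function names and numbers are illustrative only.

def throw_ratio(distance, width):
    """Throw ratio R = D / W (distance and width in the same units)."""
    return distance / width

def throw_distance(width, ratio):
    """Required distance D = W * R for a given screen width and throw ratio."""
    return width * ratio

# A 60-inch-wide screen with a 2.0:1 lens needs the projector 120 inches away.
print(throw_distance(60, 2.0))   # 120.0
# A projector 48 inches from a 120-inch-wide screen has a 0.4 throw ratio.
print(throw_ratio(48, 120))      # 0.4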
Examples (fixed lenses)
A video projector (lens) with a throw ratio of 2.0 (or "2.0 : 1") would need to be positioned at a distance that is twice the width of the screen. So if the screen is 60" wide, the projector needs to be 120" from the screen.
A video projector (lens) with a throw ratio of 0.4 or less would be positioned relatively close to the screen, and would be considered a "short throw projector".
A video projector that must be positioned very far from the sc
|
https://en.wikipedia.org/wiki/Kaoani
|
Kaoani comes from the Japanese kao ("face") and ani (short for "animation"). Kaoanis are small animated smilies that usually bounce up and down so that they appear to float. Kaoani originated in Japan and are also known as puffs, anime blobs, anikaos or anime emoticons.
Kaoani can take the form of animals, foodstuffs such as rice balls, colorful blobs, cartoon characters, etc. Many are animated to be performing a certain task, such as dancing, laughing, or cheering.
The file format for kaoanis is usually GIF, since it supports animations. However, it is also possible to make them in the APNG format, which is an animated PNG image. Kaoanis are mostly used on internet forums, MySpace profiles, blogs and instant messaging software to show moods or as avatars.
See also
Kaomoji
Emoticon
Smiley
Anime and manga terminology
Internet culture
|