https://en.wikipedia.org/wiki/Faddeev%20equations
|
The Faddeev equations, named after their inventor Ludvig Faddeev, describe all possible exchanges and interactions in a system of three particles in a fully quantum mechanical formulation. They can be solved iteratively.
In general, the Faddeev equations need as input a potential that describes the interaction between two individual particles. It is also possible to introduce a term into the equation to take three-body forces into account as well.
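As a sketch of the standard structure (notation varies between texts, and the labels here are illustrative), the equations decompose the full three-body transition operator $T$ into three components, each built from the two-body transition operator $t_i$ of one pair and the free three-body Green's function $G_0$:
$$T = T^{(1)} + T^{(2)} + T^{(3)}, \qquad T^{(i)} = t_i + t_i \, G_0 \left( T^{(j)} + T^{(k)} \right),$$
where $(i, j, k)$ runs over cyclic permutations of the three pairs. Iterating this coupled system generates the multiple-scattering series in which the particles interact pairwise in turn.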
The Faddeev equations are the most often used non-perturbative formulations of the quantum-mechanical three-body problem.
Unlike the three body problem in classical mechanics, the quantum three body problem is uniformly soluble.
In nuclear physics, the off-the-energy-shell nucleon-nucleon interaction has been studied by analyzing (n,2n) and (p,2p) reactions on deuterium targets, using the Faddeev equations. The nucleon-nucleon interaction is expanded (approximated) as a series of separable potentials. The Coulomb interaction between two protons is a special problem, in that its expansion in separable potentials does not converge; this is handled by matching the Faddeev solutions to long-range Coulomb solutions instead of to plane waves.
Separable potentials are interactions that do not preserve a particle's location. Ordinary local potentials can be expressed as sums of separable potentials. The physical nucleon-nucleon interaction, which involves exchange of mesons, is not expected to be either local or separable.
References
L.D. Faddeev, S.P. Merkuriev, Quantum Scattering Theory for Several Particle Systems, Springer, 1993.
Quantum mechanics
Nuclear physics
Equations
|
https://en.wikipedia.org/wiki/Airhitch
|
Airhitch was a user-run system for hitchhiking on commercial airliners. It was started by Robert Segelbaum in 1969. People who travel in this way generally refer to themselves as Airhitchers. Most airhitchers fly between the United States and Western Europe. Before Airhitch migrated to internet-based communication in the 1990s, it operated out of North American offices including New York, Los Angeles (Santa Monica), San Francisco, Seattle, Montreal, and Vancouver, and numerous European cities including Paris, Amsterdam, Berlin, Bonn, and Rome. Airhitch is a non-commercial system/method/process. Today, virtually all exchange of Airhitch information takes place in an AIM chat room. In the chat room, Airhitchers use their real names, while the staff use screen names such as Airhitch01 and Airhitch08.
Since 2006, the cost and availability of "Airhitchable" flights have been questionable. Before the website (airhitch.org) and business went defunct in 2009, the majority of "AHers" opted to help each other find conventional airfare to their destinations.
The Airhitching process
Registration: Airhitchers formally commit to participation in the Airhitch system by sending an online registration to the staff.
Flight-Briefing: With the guidance of the staff, Airhitchers explore the best possibilities for catching rides on flights and the procedural mechanics of how to take advantage of them.
Decision: Airhitchers decide which flight they will attempt to board.
Boarding: Airhitchers go to the airport and attempt to board the aircraft.
Controversy
Around the year 2000, there were two websites with very similar names, both claiming the name Air Hitch and each denying any affiliation with the other. At the time, the system used prepaid vouchers. Because of the similar websites and the general nature of online reviews, it was not clear which website was part of the legitimate Airhitch system.
Some online user reviews talked of invalid vouchers and scam tactics, including invalid mailin
|
https://en.wikipedia.org/wiki/Precision%20rectifier
|
The precision rectifier is a configuration obtained with an operational amplifier that makes a circuit behave like an ideal diode and rectifier, which makes it very useful for high-precision signal processing.
The op-amp-based precision rectifier should not be confused with the power MOSFET-based active rectification ideal diode.
Basic circuit
The basic circuit implementing such a feature is shown on the right, where $R_\text{L}$ can be any load. When the input voltage is negative, there is a negative voltage on the diode, so it works like an open circuit: no current flows through the load, and the output voltage is zero. When the input is positive, it is amplified by the operational amplifier, which switches the diode on. Current flows through the load and, because of the feedback, the output voltage is equal to the input voltage.
The actual threshold of the super diode is very close to zero, but is not zero. It equals the actual threshold of the diode, divided by the gain of the operational amplifier.
This basic configuration has a problem, so it is not commonly used. When the input becomes (even slightly) negative, the operational amplifier runs open-loop, as there is no feedback signal through the diode. For a typical operational amplifier with high open-loop gain, the output saturates. If the input then becomes positive again, the op-amp has to get out of the saturated state before positive amplification can take place again. This change generates some ringing and takes some time, greatly reducing the frequency response of the circuit.
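A minimal numerical sketch of this behaviour, assuming an idealized op-amp model (the gain and diode-drop values below are illustrative, not from the source):

    import numpy as np

    def super_diode(v_in, open_loop_gain=1e5, diode_drop=0.6):
        """Half-wave precision rectifier: the op-amp feedback divides
        the diode's threshold by the open-loop gain."""
        effective_threshold = diode_drop / open_loop_gain  # ~6 microvolts
        return np.where(v_in > effective_threshold, v_in, 0.0)

    v = np.linspace(-1.0, 1.0, 5)
    print(super_diode(v))  # negative inputs -> 0, positive inputs pass through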
Improved circuit
An alternative version is given on the right. In this case, when the input is greater than zero, D1 is off and D2 is on, so the output is zero because the other end of $R_2$ is connected to the virtual ground and there is no current through it. When the input is less than zero, D1 is on and D2 is off, so the output is like th
|
https://en.wikipedia.org/wiki/Negative%20impedance%20converter
|
The negative impedance converter (NIC) is an active circuit which injects energy into circuits in contrast to an ordinary load that consumes energy from them. This is achieved by adding or subtracting excessive varying voltage in series to the voltage drop across an equivalent positive impedance. This reverses the voltage polarity or the current direction of the port and introduces a phase shift of 180° (inversion) between the voltage and the current for any signal generator. The two versions obtained are accordingly a negative impedance converter with voltage inversion (VNIC) and a negative impedance converter with current inversion (INIC). The basic circuit of an INIC and its analysis is shown below.
Basic circuit and analysis
INIC is a non-inverting amplifier (the op-amp together with the voltage divider $R_1$, $R_2$ in the figure) with a resistor ($R_3$) connected between its output and input. The op-amp output voltage is
$$v_\text{out} = v_\text{in} \left( 1 + \frac{R_2}{R_1} \right).$$
The current going from the operational amplifier output through resistor $R_3$ toward the source is
$$i_{R_3} = \frac{v_\text{in} - v_\text{out}}{R_3} = -\frac{R_2}{R_1 R_3}\, v_\text{in}.$$
So the input experiences an opposing current that is proportional to $v_\text{in}$, and the circuit acts like a resistor with negative resistance
$$R_\text{in} = \frac{v_\text{in}}{i_{R_3}} = -\frac{R_1 R_3}{R_2}.$$
In general, elements $R_1$, $R_2$, and $R_3$ need not be pure resistances (i.e., they may be capacitors, inductors, or impedance networks).
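A small numerical sketch of this analysis (the resistor labels follow the reconstruction above, and the component values are illustrative only):

    def inic_input_resistance(r1, r2, r3):
        """Ideal-op-amp INIC: the input looks like a negative resistance -R1*R3/R2."""
        return -(r1 * r3) / r2

    def parallel(ra, rb):
        """Equivalent resistance of two resistances in parallel."""
        return (ra * rb) / (ra + rb)

    r_neg = inic_input_resistance(r1=1e3, r2=1e3, r3=50.0)  # -50 ohms
    print(r_neg)
    # Cancelling a 50-ohm source resistance: the parallel combination diverges,
    # so the real source approaches an ideal one.
    # print(parallel(50.0, r_neg))  # would divide by zero: equivalent R -> infinity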
Application
By using an NIC as a negative resistor, it is possible to make a real generator behave (almost) like an ideal generator (i.e., the magnitude of the current or of the voltage generated does not depend on the load).
An example for a current source is shown in the figure on the right. The current generator and the resistor $R_\text{s}$ within the dotted line form the Norton representation of a circuit comprising a real generator, and $R_\text{s}$ is its internal resistance. If an INIC is placed in parallel with that internal resistance, and the INIC has the same magnitude but inverted resistance value $-R_\text{s}$, there will be $R_\text{s}$ and $-R_\text{s}$ in parallel. Hence, the equivalent resistance is
$$R_\text{eq} = \frac{R_\text{s}\,(-R_\text{s})}{R_\text{s} + (-R_\text{s})} \to \infty.$$
That is, the combination of the real generator and the I
|
https://en.wikipedia.org/wiki/Ingredient
|
In a general sense, an ingredient is a substance which forms part of a mixture. In cooking, recipes specify which ingredients are used to prepare a dish. Many commercial products contain secret ingredients purported to make them better than competing products. In the pharmaceutical industry, an active ingredient is the ingredient in a formulation which invokes biological activity.
National laws usually require prepared food products to display a list of ingredients and specifically require that certain additives be listed. Law typically requires that ingredients be listed according to their relative weight within the product.
Artificial ingredient
An artificial ingredient usually refers to an ingredient which is human-made rather than naturally occurring, such as:
Artificial flavour
Food additive
Food colouring
Preservative
Sugar substitute, artificial sweetener
See also
Fake food
Bill of materials
Software Bill of Materials
Active ingredient
Secret ingredient
References
|
https://en.wikipedia.org/wiki/Polysorbate
|
Polysorbates are a class of emulsifiers used in some pharmaceuticals and food preparation. They are commonly used in oral and topical pharmaceutical dosage forms. They are also often used in cosmetics to solubilize essential oils into water-based products. Polysorbates are oily liquids derived from ethoxylated sorbitan (a derivative of sorbitol) esterified with fatty acids. Common brand names for polysorbates include Kolliphor, Scattics, Alkest, Canarcel, Tween, and Kotilen.
Examples
Polysorbate 20 (polyoxyethylene (20) sorbitan monolaurate)
Polysorbate 40 (polyoxyethylene (20) sorbitan monopalmitate)
Polysorbate 60 (polyoxyethylene (20) sorbitan monostearate)
Polysorbate 80 (polyoxyethylene (20) sorbitan monooleate)
The number following the 'polysorbate' part is related to the type of major fatty acid associated with the molecule. Monolaurate is indicated by 20, monopalmitate is indicated by 40, monostearate by 60, and monooleate by 80. The number 20 following the 'polyoxyethylene' part refers to the total number of oxyethylene (–CH2CH2O–) groups found in the molecule.
See also
Sorbitan monolaurate
Sorbitan monostearate
Sorbitan tristearate
Sorbitan monooleate
References
External links
Non-ionic surfactants
Food additives
Colloidal chemistry
E-number additives
|
https://en.wikipedia.org/wiki/Gr%C3%A9goire%20de%20Saint-Vincent
|
Grégoire de Saint-Vincent (in Latin: Gregorius a Sancto Vincentio; in Dutch: Gregorius van St-Vincent; 8 September 1584, Bruges – 5 June 1667, Ghent) was a Flemish Jesuit and mathematician. He is remembered for his work on the quadrature of the hyperbola.
Grégoire gave the "clearest early account of the summation of geometric series." He also resolved Zeno's paradox by showing that the time intervals involved formed a geometric progression and thus had a finite sum.
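To illustrate the result credited to him (in a standard modern statement, not Grégoire's own notation): a geometric series with first term $a$ and ratio $|r| < 1$ has the finite sum
$$a + ar + ar^2 + \cdots = \frac{a}{1 - r}.$$
In Zeno's dichotomy paradox, the successive time intervals form such a progression with $r = \tfrac{1}{2}$, so the total time is $\frac{a}{1 - 1/2} = 2a$: a finite value, which resolves the paradox.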
Life
Gregoire was born in Bruges on 8 September 1584. After reading philosophy in Douai, he entered the Society of Jesus on 21 October 1605. His talent was recognized by Christopher Clavius in Rome. Gregoire was sent to Louvain in 1612, and was ordained a priest on 23 March 1613. Gregoire began teaching in association with François d'Aguilon in Antwerp from 1617 to 1620. Moving to Louvain in 1621, he taught mathematics there until 1625. That year he became obsessed with squaring the circle and requested permission from Mutio Vitelleschi to publish his method. But Vitelleschi deferred to Christoph Grienberger, the mathematician in Rome.
On 9 September 1625, Gregoire set out for Rome to confer with Grienberger, but to no avail. He returned to the Netherlands in 1627, and the following year was sent to Prague to serve in the house of Emperor Ferdinand II. After an attack of apoplexy, he was assisted there by Theodorus Moretus. When the Saxons raided Prague in 1631, Gregoire left, and some of his manuscripts were lost in the mayhem. Others were returned to him in 1641 through Rodericus de Arriaga.
From 1632 Gregoire resided with the Society in Ghent and served as a mathematics teacher.
The mathematical thinking of Sancto Vincentio underwent a clear evolution during his stay in Antwerp. Starting from the problem of trisecting the angle and determining the two mean proportionals, he made use of infinite series, the logarithmic property of the hyperbola, limits, and the related method of exhaustion. Sancto
|
https://en.wikipedia.org/wiki/The%20New%20Dinosaurs
|
The New Dinosaurs: An Alternative Evolution is a 1988 speculative evolution book written by Scottish geologist and palaeontologist Dougal Dixon and illustrated by several illustrators including Amanda Barlow, Peter Barrett, John Butler, Jeane Colville, Anthony Duke, Andy Farmer, Lee Gibbons, Steve Holden, Philip Hood, Martin Knowelden, Sean Milne, Denys Ovenden and Joyce Tuhill. The book also features a foreword by Desmond Morris. The New Dinosaurs explores a hypothetical alternate Earth, complete with animals and ecosystems, where the Cretaceous-Paleogene extinction event never occurred, leaving non-avian dinosaurs and other Mesozoic animals an additional 65 million years to evolve and adapt over the course of the Cenozoic to the present day.
The New Dinosaurs is Dixon's second work on speculative evolution, following After Man (1981). Like After Man, The New Dinosaurs uses its own fictional setting and hypothetical wildlife to explain natural processes with fictitious examples, in this case the concept of zoogeography and biogeographic realms. It was followed by another speculative evolution work by Dixon in 1990, Man After Man.
Although criticised by some palaeontologists upon its release, several of Dixon's hypothetical dinosaurs bear a coincidental resemblance in both appearance and behaviour to dinosaurs that were discovered after the book's publication. As a general example, many of the fictional dinosaurs are depicted with feathers, something that was not yet widely accepted when the book was written.
Summary
The New Dinosaurs explores an imagined alternate version of the present-day Earth as Dixon imagines it would have been if the Cretaceous-Paleogene extinction event had never occurred. As in Dixon's previous work, After Man, ecology and evolutionary theory are applied to create believable creatures, all of which have their own binomial names and text describing their behaviour and interactions with other contemporary animals. Most of these animals r
|
https://en.wikipedia.org/wiki/Wireless%20security
|
Wireless security is the prevention of unauthorized access or damage to computers or data using wireless networks, which include Wi-Fi networks. The term may also refer to the protection of the wireless network itself from adversaries seeking to damage the confidentiality, integrity, or availability of the network. The most common type is Wi-Fi security, which includes Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA). WEP is an old IEEE 802.11 standard from 1997. It is a notoriously weak security standard: the password it uses can often be cracked in a few minutes with a basic laptop computer and widely available software tools. WEP was superseded in 2003 by WPA, a quick alternative at the time to improve security over WEP. The current standard is WPA2; some hardware cannot support WPA2 without firmware upgrade or replacement. WPA2 uses an encryption device that encrypts the network with a 256-bit key; the longer key length improves security over WEP. Enterprises often enforce security using a certificate-based system to authenticate the connecting device, following the standard 802.1X.
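For illustration, the 256-bit key mentioned above can be seen in WPA2-Personal, where the pre-shared key is derived from the passphrase and SSID with PBKDF2, as specified in IEEE 802.11i (a sketch using Python's standard library; the passphrase and SSID are made-up examples):

    import hashlib

    def wpa2_psk(passphrase: str, ssid: str) -> bytes:
        """Derive the 256-bit WPA2-Personal pre-shared key:
        PBKDF2-HMAC-SHA1 with the SSID as salt and 4096 iterations."""
        return hashlib.pbkdf2_hmac(
            "sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32
        )

    psk = wpa2_psk("correct horse battery staple", "ExampleNetwork")
    print(psk.hex())  # 64 hex digits = 256 bits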
In January 2018, the Wi-Fi Alliance announced WPA3 as a replacement to WPA2. Certification began in June 2018, and WPA3 support has been mandatory for devices which bear the "Wi-Fi CERTIFIED™" logo since July 2020.
Many laptop computers have wireless cards pre-installed. The ability to enter a network while mobile has great benefits. However, wireless networking is prone to some security issues. Hackers have found wireless networks relatively easy to break into, and even use wireless technology to hack into wired networks. As a result, it is very important that enterprises define effective wireless security policies that guard against unauthorized access to important resources. Wireless Intrusion Prevention Systems (WIPS) or Wireless Intrusion Detection Systems (WIDS) are commonly used to enforce wireless security policies.
The risks to users of wireless technology
|
https://en.wikipedia.org/wiki/Thexder
|
Thexder is a run and gun video game from Game Arts, originally released for the NEC PC-8801 in 1985. It was ported to many systems, including the Famicom.
Gameplay
In Thexder, the player controls a fighter robot that is able to transform into a jet and shoot lasers.
Release
The game was originally released in 1985 for the NEC PC-8801 platform in Japan. Game Arts licensed Thexder to Square in order to develop a conversion for the Nintendo Entertainment System (NES) game console. In 1987, Game Arts also developed a Thexder conversion for the MSX platform. The game was licensed to Sierra Entertainment for release in the United States. Sierra ported the game to multiple platforms, including the IBM PC, Tandy Color Computer 3, Apple II, Apple IIGS, Apple Macintosh, and Tandy 1000. In 1988, Activision released the game in Europe on the Commodore Amiga.
Reception
Thexder quickly became a best-selling hit, selling over 500,000 copies in Japan by 1987. The PC-8801 platform was only popular in Japan and, despite home market success, Thexder garnered little attention abroad initially. With the conversion for the MSX (the best-selling platform in Brazil and many Eastern European countries), it became an international hit. It became the company's best-selling title of 1987. By 1990, the game had sold over one million copies worldwide.
Compute! praised the Apple IIGS version of Thexder as the computer's "first true arcade game" with "excellent play value for your dollar". In 1988, The Games Machine gave the Amiga version a 74% score. In 1991, Dragon gave the Macintosh and PC/MS-DOS versions of the game 4 out of 5 stars each. Thexder is considered an important breakthrough title for the run-and-gun shooter game genre, paving the way for titles such as Contra and Metal Slug.
Other games in the series
References
External links
Thexder
Official website (D4Enterprise
|
https://en.wikipedia.org/wiki/Via%20%28electronics%29
|
A via (Latin, 'path' or 'way') is an electrical connection between two or more metal layers, commonly used in printed circuit boards (PCBs). Essentially, a via is a small drilled hole that goes through two or more adjacent layers; the hole is plated with metal (often copper) that forms an electrical connection through the insulating layers.
Vias are important for PCB manufacturing because they are drilled with certain tolerances and may be fabricated off their designated locations, so some allowance for errors in drill position must be made prior to manufacturing; otherwise the manufacturing yield can decrease due to non-conforming boards (according to some reference standard) or even failing boards. In addition, regular through-hole vias are considered fragile structures because they are long and narrow; the manufacturer must ensure that the vias are plated properly throughout the barrel, which in turn requires several processing steps.
In printed circuit boards
In printed circuit board (PCB) design, a via consists of two pads in corresponding positions on different copper layers of the board, that are electrically connected by a hole through the board. The hole is made conductive by electroplating, or is lined with a tube or a rivet. High-density multilayer PCBs may have microvias: blind vias are exposed only on one side of the board, while buried vias connect internal layers without being exposed on either surface. Thermal vias carry heat away from power devices and are typically used in arrays of about a dozen.
A via consists of:
Barrel — conductive tube filling the drilled hole
Pad — connects each end of the barrel to the component, plane, or trace
Antipad — clearance hole between barrel and metal layer to which it is not connected
A via, sometimes called a PTV or plated-through via, should not be confused with a plated through hole (PTH). A via is used as an interconnection between copper layers on a PCB, while the PTH is generally made l
|
https://en.wikipedia.org/wiki/Robert%20Brunner
|
Robert Brunner (born 1958) is an American industrial designer. Brunner was the Director of Industrial Design for Apple Computer from 1989 to 1996, and is a founder and current partner at Ammunition Design Group.
Biography
Brunner received a Bachelor of Science degree in Industrial Design from San José State University in 1981.
After working as a designer and project manager at several high technology companies, Brunner went on to co-found Lunar Design in 1984. In 1989, Brunner accepted the position of Director of Industrial Design at Apple Computer, where he provided design and direction for all Apple product lines, including the PowerBook. He was succeeded by Jonathan Ive in 1997. Brunner claims that while with Apple, he hired Ive three times.
In January 1996, he became a partner in the San Francisco office of Pentagram. In 2006, Brunner partnered with Alex Siow, founder of San Francisco-based Zephyr Ventilation, to launch the outdoor grill design firm Fuego. Emblematic of his relationship with Siow, he designed the Arc Collection of modern range hoods for Zephyr Ventilation.
By mid-2007 Brunner had left Pentagram to start Ammunition Design Group. In 2008, former MetaDesign leaders Brett Wickens and Matt Rolandson joined Ammunition as partners. The same year, Brunner collaborated with Jimmy Iovine and Dr. Dre to launch Beats by Dre, and he is responsible for the design of the company's lines of headphones and speakers, including Beats Studio, Powerbeats, Mixr, Solo and Solo Pro, as well as the Pill wireless speaker, among others.
Brunner's work has been widely published in North America, Europe, Asia and Australia. His product designs have won 23 IDSA Awards from the Industrial Designers Society of America and Business Week, including 6 best of category awards. His work is included in the permanent collections of the Museum of Modern Art (MoMA), Cooper Hewitt, Smithsonian Design Museum, Indianapolis Museum of Art (IMA), and the San Francisco Museum of Modern Art (SFMoMA).
See
|
https://en.wikipedia.org/wiki/Normal%20height
|
Normal heights (symbol $H^*$ or $H^N$; SI unit metre, m) are a type of height above sea level introduced by Mikhail Molodenskii.
The normal height of a point is computed as the quotient of the point's geopotential number $C$ (i.e. its geopotential difference with that of sea level) by the average normal gravity $\bar{\gamma}$ computed along the plumb line of the point:
$$H^* = \frac{C}{\bar{\gamma}}.$$
(More precisely, the average is taken along the ellipsoidal normal, over the height range from 0 — on the reference ellipsoid — to $H^*$; the procedure is thus recursive.)
Normal heights are thus dependent upon the reference ellipsoid chosen. The Soviet Union and many other Eastern European countries have chosen a height system based on normal heights, determined by precise geodetic levelling. Normal gravity values are easier to compute compared to actual gravity, as one does not have to know the Earth's crust density. This is an advantage of normal heights compared to orthometric heights.
The reference surface that normal heights are measured from is called the quasi-geoid (or quasigeoid), a representation of mean sea level similar to the geoid and close to it, but lacking the physical interpretation of an equipotential surface. The geoid undulation $N$ with respect to the reference ellipsoid,
$$N = h - H$$
(where $h$ is the ellipsoidal height and $H$ the orthometric height), finds an analogue in the so-called height anomaly $\zeta$:
$$\zeta = h - H^*.$$
The maximum geoid–quasigeoid separation (GQS), $N - \zeta$, is on the order of 5 meters in the Himalayas.
Alternatives to normal heights include orthometric heights (geoid-based) and dynamic heights.
See also
Physical geodesy
References
Vertical position
Geodesy
|
https://en.wikipedia.org/wiki/Rvachev%20function
|
In mathematics, an R-function, or Rvachev function, is a real-valued function whose sign does not change if none of the signs of its arguments change; that is, its sign is determined solely by the signs of its arguments.
Interpreting positive values as true and negative values as false, an R-function is transformed into a "companion" Boolean function (the two functions are called friends). For instance, the R-function ƒ(x, y) = min(x, y) is one possible friend of the logical conjunction (AND). R-functions are used in computer graphics and geometric modeling in the context of implicit surfaces and the function representation. They also appear in certain boundary-value problems, and are also popular in certain artificial intelligence applications, where they are used in pattern recognition.
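A minimal sketch of the sign property (the function choices follow the min/max example above; the test points are arbitrary):

    # R-function friends of the basic Boolean connectives:
    # the sign of the output depends only on the signs of the inputs.
    def r_and(x, y):  # friend of logical AND
        return min(x, y)

    def r_or(x, y):   # friend of logical OR
        return max(x, y)

    def r_not(x):     # friend of logical NOT
        return -x

    # r_and(x, y) > 0 exactly when x > 0 and y > 0, whatever the magnitudes:
    print(r_and(0.3, 2.0) > 0)   # True  (both positive)
    print(r_and(-0.1, 5.0) > 0)  # False (one negative)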
R-functions were first proposed by Vladimir Logvinovich Rvachev in 1963, though the name "R-functions" was given later by Ekaterina L. Rvacheva-Yushchenko, in memory of their father, Logvin Fedorovich Rvachev.
See also
Function representation
Slesarenko function (S-function)
Notes
References
Meshfree Modeling and Analysis, R-Functions (University of Wisconsin)
Pattern Recognition Methods Based on Rvachev Functions (Purdue University)
Shape Modeling and Computer Graphics with Real Functions
Non-classical logic
Real analysis
Types of functions
|
https://en.wikipedia.org/wiki/Corpus%20Agrimensorum%20Romanorum
|
The Corpus Agrimensorum Romanorum (Corpus of Roman Land Surveyors) is a Roman book on land surveying which collects works by Siculus Flaccus, Frontinus, Agennius Urbicus, Hyginus Gromaticus and other writers, known as the Gromatici or Agrimensores ("land surveyors"). The work is preserved in various manuscripts, of which the oldest is the 6th or 7th-century Codex Arcerianus.
Contents and authors
The Corpus consists of a number of texts with different contents, composed at different dates. The Codex Arcerianus alone contains 33 separate works, most of which are the writings of the Agrimensores. These writings were clearly written as textbooks or manuals for working land surveyors. The most important authors in the collection are Frontinus (1st century AD), Agennius Urbicus (5th or 6th century), Hyginus Gromaticus, Siculus Flaccus (2nd century), and Marcus Junius Nipsus (2nd century).
Another important component of the work is the Libri Coloniarum ("Books of Colonies"), lists of surveyed areas of countryside and cities in Italy between Etruria and Sicily, mostly in southern Italy. Possibly, these were areas that were subject to land surveys, although they had already been occupied under the arcifinalis law (i.e. land survey and distribution at the point of conquest). The process is much debated among historians.
A third subset of works in the corpus consists of writings which deal with the mathematical and geometric aspects of land surveying. The most important of these are the Expositio et ratio omnium formarum (Explanation and Calculation of All Shapes) by Balbus and a mathematical work by Epaphrodites and Vitruvius Rufus.
Various other texts are also bundled into the Corpus, including:
Extracts from Euclid's Elements
Extracts from Columella's De re rustica
The Lex Mamila Roscia Peducaea Alliena Fabia, part of a Roman law on setting and protecting land boundaries. In particular, the law imposes a fine of 5,000 sesterces for moving a boundary stone. The date of the law i
|
https://en.wikipedia.org/wiki/Drag%20count
|
A drag count is a dimensionless unit used by aerospace engineers: 1 drag count is equal to a drag coefficient $C_d$ of 0.0001.
As the drag forces present on automotive vehicles are smaller than for aircraft, 1 drag count is commonly referred to as 0.001 of $C_d$.
Definition
A drag count $\Delta C_d$ is defined as:
$$\Delta C_d = 10^4 \, C_d = 10^4 \, \frac{F_d}{\tfrac{1}{2} \rho v^2 A},$$
where:
$F_d$ is the drag force, which is by definition the force component in the direction of the flow velocity,
$\rho$ is the mass density of the fluid,
$v$ is the speed of the object relative to the fluid, and
$A$ is the reference area.
The drag coefficient is used to compare the solutions of different geometries by means of a dimensionless number. A drag count is more user-friendly than the drag coefficient, as the latter is usually much less than 1. A drag count of 200 to 400 is typical for an airplane at cruise. A reduction of one drag count on a subsonic civil transport airplane means about more in payload.
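A small worked example (the flight-condition numbers are illustrative, not from the source):

    def drag_counts(drag_force, rho, v, area):
        """Drag coefficient C_d = F_d / (0.5 * rho * v^2 * A), in drag counts (1e-4)."""
        c_d = drag_force / (0.5 * rho * v**2 * area)
        return 1e4 * c_d

    # Hypothetical cruise condition: 90 kN of drag, rho = 0.38 kg/m^3,
    # v = 230 m/s, wing reference area = 360 m^2.
    print(round(drag_counts(90e3, 0.38, 230.0, 360.0)))  # ~249 drag counts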
Notes
References
See also
Drag coefficient
Zero-lift drag coefficient
Drag (physics)
Equations
Force
|
https://en.wikipedia.org/wiki/Sieve%20%28mail%20filtering%20language%29
|
Sieve is a programming language that can be used for email filtering. It owes its creation to the CMU Cyrus Project, creators of Cyrus IMAP server.
The language is not tied to any particular operating system or mail architecture. It requires the use of RFC-2822–compliant messages, but otherwise generalizes to other systems that meet these criteria. The current version of Sieve's base specification is outlined in RFC 5228, published in January 2008.
Language
Sieve is a data-driven programming language, similar to earlier email filtering languages such as procmail and maildrop, and earlier line-oriented languages such as sed and AWK: it specifies conditions to match and actions to take on matching.
It differs from general-purpose programming languages in that it is highly limited: the base standard has no variables and no loops (but does allow conditional branching), preventing runaway programs and limiting the language to simple filtering operations. Although extensions have been devised to extend the language to include variables and limited loops, the language is still highly restricted, and thus suitable for running user-devised programs as part of the mail system.
There are also a significant number of restrictions on the grammar of the language, in order to reduce the complexity of parsing the language, but the language also supports the use of multiple methods for comparing localized strings, and is fully Unicode-aware.
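A minimal script in the style of the RFC 5228 examples (the folder name is arbitrary), showing the condition/action structure described above:

    require ["fileinto"];

    # File messages whose Subject header mentions "money" into a "spam" folder;
    # everything else falls through to the implicit "keep" action.
    if header :contains "subject" "money" {
        fileinto "spam";
    }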
While Sieve was originally conceived as a tool external to SMTP, a later extension extends it to allow rejection at the SMTP protocol level.
Use
The Sieve scripts may be generated by a GUI-based rules editor or they may be entered directly using a text editor.
The scripts are transferred to the mail server in a server-dependent way. The ManageSieve protocol (defined in RFC 5804) allows users to manage their Sieve scripts on a remote server. Mail servers with local users may allow the scripts to be stored in, e.g., a file in the users' h
|
https://en.wikipedia.org/wiki/Blau%20space
|
Blau space is a multidimensional coordinate system created by treating socio-demographic variables as dimensions. All socio-demographic characteristics are potential elements of Blau space, including continuous characteristics such as age, years of education, income, occupational prestige, geographic location, and so forth. In addition, categorical measures of socio-demographic characteristics such as race, sex, religion, and birthplace are Blau dimensions. "Blau space" is a theoretical construct which was developed by Miller McPherson and named after Peter Blau. It was later elaborated by McPherson and Ranger-Moore.
The organizing force in Blau space is the homophily principle, which argues that the flow of information from person to person is a declining function of distance in Blau space. Persons located at great distance in Blau space are very unlikely to interact, which creates the conditions for social differences in any characteristic that is transmitted through social communication. The homophily principle thus localizes communication in Blau space, leading to the development of social niches for human activity and social organization.
References
Coordinate systems
|
https://en.wikipedia.org/wiki/Fluid%20mechanics
|
Fluid mechanics is the branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them.
It has applications in a wide range of disciplines, including mechanical, aerospace, civil, chemical, and biomedical engineering, as well as geophysics, oceanography, meteorology, astrophysics, and biology.
It can be divided into fluid statics, the study of fluids at rest; and fluid dynamics, the study of the effect of forces on fluid motion.
It is a branch of continuum mechanics, a subject which models matter without using the information that it is made out of atoms; that is, it models matter from a macroscopic viewpoint rather than a microscopic one. Fluid mechanics, especially fluid dynamics, is an active field of research, typically mathematically complex. Many problems are partly or wholly unsolved and are best addressed by numerical methods, typically using computers. A modern discipline, called computational fluid dynamics (CFD), is devoted to this approach. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow.
Brief history
The study of fluid mechanics goes back at least to the days of ancient Greece, when Archimedes investigated fluid statics and buoyancy and formulated his famous law known now as Archimedes' principle, which was published in his work On Floating Bodies—generally considered to be the first major work on fluid mechanics. The Iranian scholar Abu Rayhan Biruni and later Al-Khazini applied experimental scientific methods to fluid mechanics. Rapid advancement in fluid mechanics began with Leonardo da Vinci (observations and experiments), Evangelista Torricelli (invented the barometer), Isaac Newton (investigated viscosity) and Blaise Pascal (researched hydrostatics, formulated Pascal's law), and was continued by Daniel Bernoulli with the introduction of mathematical fluid dynamics in Hydrodynamica (1738).
|
https://en.wikipedia.org/wiki/Level-spacing%20distribution
|
In mathematical physics, level spacing is the difference between consecutive elements in some set of real numbers. In particular, it is the difference between consecutive energy levels or eigenvalues of a matrix or linear operator.
Mathematical physics
|
https://en.wikipedia.org/wiki/Z-Wave
|
Z-Wave is a wireless communications protocol used primarily for residential and commercial building automation. It is a mesh network using low-energy radio waves to communicate from device to device, allowing for wireless control of smart home devices, such as smart lights, security systems, thermostats, sensors, smart door locks, and garage door openers. The Z-Wave brand and technology are owned by Silicon Labs. Over 300 companies involved in this technology are gathered within the Z-Wave Alliance.
Like other protocols and systems aimed at the residential, commercial, MDU and building markets, a Z-Wave system can be controlled from a smart phone, tablet, or computer, and locally through a smart speaker, wireless keyfob, or wall-mounted panel, with a Z-Wave gateway or central control device serving as the hub or controller. Z-Wave provides the application-layer interoperability between home control systems of different manufacturers that are a part of its alliance. There is a growing number of interoperable Z-Wave products; over 1,700 in 2017, over 2,600 by 2019, and over 4,000 by 2022.
History
The Z-Wave protocol was developed by Zensys, a Danish company based in Copenhagen, in 1999. That year, Zensys introduced a consumer light-control system, which evolved into Z-Wave as a proprietary system on a chip (SoC) home automation protocol on an unlicensed frequency band in the 900 MHz range. Its 100 series chip set was released in 2003, and its 200 series was released in May 2005, with the ZW0201 chip offering high performance at a low cost. Its 500 series chip, also known as Z-Wave Plus, was released in March 2013, with four times the memory, improved wireless range, improved battery life, an enhanced S2 security framework, and the SmartStart setup feature. Its 700 series chip was released in 2019, with the ability to communicate up to 100 meters directly from point-to-point, or 800 meters across an entire Z-Wave network, an extended battery life of up to 10 year
|
https://en.wikipedia.org/wiki/Physical%20symbol%20system
|
A physical symbol system (also called a formal system) takes physical patterns (symbols), combining them into structures (expressions) and manipulating them (using processes) to produce new expressions.
The physical symbol system hypothesis (PSSH) is a position in the philosophy of artificial intelligence formulated by Allen Newell and Herbert A. Simon. They wrote: "A physical symbol system has the necessary and sufficient means for general intelligent action."
This claim implies both that human thinking is a kind of symbol manipulation (because a symbol system is necessary for intelligence) and that machines can be intelligent (because a symbol system is sufficient for intelligence).
The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing more than reckoning"), Leibniz (who attempted to create a logical calculus of all human ideas), Hume (who thought perception could be reduced to "atomic impressions") and even Kant (who analyzed all experience as controlled by formal rules). The latest version is called the computational theory of mind, associated with philosophers Hilary Putnam and Jerry Fodor.
Examples
Examples of physical symbol systems include:
Formal logic: the symbols are words like "and", "or", "not", "for all x" and so on. The expressions are statements in formal logic which can be true or false. The processes are the rules of logical deduction.
Algebra: the symbols are "+", "×", "x", "y", "1", "2", "3", etc. The expressions are equations. The processes are the rules of algebra, that allow one to manipulate a mathematical expression and retain its truth.
Chess: the symbols are the pieces, the processes are the legal chess moves, the expressions are the positions of all the pieces on the board.
A computer running a program: the symbols and expressions are data structures, the process is the program that changes the data structures.
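As a toy illustration of the last example (a sketch; the rule format and facts are invented for the demonstration), symbols can be stored in data structures and manipulated by a process such as modus ponens:

    # Expressions are symbols and (premise, conclusion) pairs; the process
    # repeatedly applies modus ponens: from P and "P implies Q", derive Q.
    facts = {"socrates_is_human"}
    rules = [("socrates_is_human", "socrates_is_mortal"),
             ("socrates_is_mortal", "socrates_will_die")]

    changed = True
    while changed:  # keep deriving until no new expression appears
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))
    # ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']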
The physical symbol system hypothesis claims that the following are also examples of physical symbol systems:
Intelligent human thought: the symbols are encoded in our brains. The expressi
|
https://en.wikipedia.org/wiki/Piezophile
|
A piezophile (from Greek "piezo-" for pressure and "-phile" for loving) is an organism with optimal growth under high hydrostatic pressure i.e. an organism that has its maximum rate of growth at a hydrostatic pressure equal to or above 10 MPa (= 99 atm = 1,450 psi), when tested over all permissible temperatures. Originally, the term barophile was used for these organisms, but since the prefix "baro-" stands for weight, the term piezophile was given preference. Like all definitions of extremophiles, the definition of piezophiles is anthropocentric, and humans consider that moderate values for hydrostatic pressure are those around 1 atm (= 0.1 MPa = 14.7 psi), whereas those "extreme" pressures are the normal living conditions for those organisms. Hyperpiezophiles are organisms that have their maximum growth rate above 50 MPa (= 493 atm = 7,252 psi).
Though high hydrostatic pressure has deleterious effects on organisms that grow at atmospheric pressure, piezophiles, which are found solely in high-pressure habitats such as the deep sea, in fact need high pressure for their optimum growth. Their growth can often continue at much higher pressures still (such as 100 MPa) compared with organisms that normally grow at low pressure.
The first obligate piezophile found was a psychrophilic bacterium, Colwellia marinimaniae strain MT-41, isolated from a decaying amphipod, Hirondellea gigas, from the bottom of the Mariana Trench. The first thermophilic piezophilic archaeon, Pyrococcus yayanosii strain CH1, was isolated from the Ashadze site, a deep-sea hydrothermal vent. Strain MT-41 has an optimal growth pressure of 70 MPa at 2 °C, and strain CH1 has an optimal growth pressure of 52 MPa at 98 °C. They are unable to grow at pressures lower than or equal to 20 MPa, and both can grow at pressures above 100 MPa. The current record for the highest hydrostatic pressure at which growth has been observed is 140 MPa, shown by Colwellia marinimaniae MTCD1. The term "obligate piezophile" refers to organ
|
https://en.wikipedia.org/wiki/Aerospace%20physiology
|
Aerospace physiology is the study of the effects of high altitudes on the body, such as different pressures and levels of oxygen. At different altitudes the body may react in different ways, for example with increased cardiac output and production of more erythrocytes. These changes cost the body extra energy and cause muscle fatigue, to a degree that varies with the altitude.
Effects of altitude
The physics that affect the body in the sky or in space are different from those on the ground. For example, barometric pressure is different at different heights. At sea level, barometric pressure is 760 mmHg; at 3,048 m above sea level it is 523 mmHg, and at 15,240 m it is 87 mmHg. As the barometric pressure decreases, the atmospheric partial pressure of oxygen decreases with it, remaining roughly 20% of the total barometric pressure. At sea level, the alveolar partial pressure of oxygen is 104 mmHg. At 6,000 meters above sea level, it falls to about 40 mmHg in a non-acclimated person, but only to about 52 mmHg in an acclimated person, because alveolar ventilation increases more in the acclimated person. Aviation physiology also includes the effects on humans and animals exposed for long periods of time inside pressurized cabins.
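The pressures quoted above roughly follow an exponential decay with altitude; here is a sketch using the simple isothermal barometric formula (the 7,600 m scale height is a rough textbook value, so the outputs only approximate the figures in the text):

    import math

    def barometric_pressure(h_m, p0_mmhg=760.0, scale_height_m=7600.0):
        """Isothermal-atmosphere approximation: P(h) = P0 * exp(-h / H)."""
        return p0_mmhg * math.exp(-h_m / scale_height_m)

    for h in (0, 3048, 15240):
        print(h, round(barometric_pressure(h)))  # ~760, ~509, ~102 mmHg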
The other main issue with altitude is hypoxia, caused by the falling barometric pressure and the corresponding decrease in oxygen as the body ascends. With exposure at higher altitudes, the alveolar carbon dioxide partial pressure (PCO2) decreases from 40 mmHg (sea level) to lower levels. In an acclimated person, ventilation increases about five-fold and the carbon dioxide partial pressure decreases to as little as 6 mmHg. At an altitude of 3,040 meters, arterial oxygen saturation stays as high as 90%, but above this altitude it falls rapidly, to as low as 70% at 6,000 m, and it decreases further at higher altitudes.
g-forces
g-forces are mostly experience
|
https://en.wikipedia.org/wiki/Ring%20of%20sets
|
In mathematics, there are two different notions of a ring of sets, both referring to certain families of sets.
In order theory, a nonempty family of sets $\mathcal{R}$ is called a ring (of sets) if it is closed under union and intersection. That is, the following two statements are true for all sets $A$ and $B$:
$A, B \in \mathcal{R}$ implies $A \cup B \in \mathcal{R}$, and
$A, B \in \mathcal{R}$ implies $A \cap B \in \mathcal{R}$.
In measure theory, a nonempty family of sets $\mathcal{R}$ is called a ring (of sets) if it is closed under union and relative complement (set-theoretic difference). That is, the following two statements are true for all sets $A$ and $B$:
$A, B \in \mathcal{R}$ implies $A \cup B \in \mathcal{R}$, and
$A, B \in \mathcal{R}$ implies $A \setminus B \in \mathcal{R}$.
This implies that a ring in the measure-theoretic sense always contains the empty set, since $A \setminus A = \emptyset$. Furthermore, for all sets $A$ and $B$,
$$A \cap B = (A \cup B) \setminus \big( (A \setminus B) \cup (B \setminus A) \big),$$
which shows that a family of sets closed under relative complement is also closed under intersection, so that a ring in the measure-theoretic sense is also a ring in the order-theoretic sense.
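A small sketch verifying these closure properties on a concrete family (the family below is a hypothetical example, chosen to be a measure-theoretic ring):

    from itertools import product

    # A measure-theoretic ring on {1, 2, 3}: closed under union and difference.
    ring = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})}

    assert all(a | b in ring for a, b in product(ring, repeat=2))  # unions
    assert all(a - b in ring for a, b in product(ring, repeat=2))  # differences
    # As the identity above predicts, intersections come for free:
    assert all(a & b in ring for a, b in product(ring, repeat=2))
    print("ring axioms hold")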
Examples
If $X$ is any set, then the power set of $X$ (the family of all subsets of $X$) forms a ring of sets in either sense.
If $(X, \leq)$ is a partially ordered set, then its upper sets (the subsets of $X$ with the additional property that if $x$ belongs to an upper set $U$ and $x \leq y$, then $y$ must also belong to $U$) are closed under both intersections and unions. However, in general it will not be closed under differences of sets.
The open sets and closed sets of any topological space are closed under both unions and intersections.
On the real line $\mathbb{R}$, the family of sets consisting of the empty set and all finite unions of half-open intervals of the form $[a, b)$, with $a \leq b$, is a ring in the measure-theoretic sense.
If $T$ is any transformation defined on a space, then the sets that are mapped into themselves by $T$ are closed under both unions and intersections.
If two rings of sets are both defined on the same elements, then the sets that belong to both rings themselves form a ring of sets.
Related structures
A ring of sets in the order-theoretic sense forms a distributive lattice in which the intersection and union operations correspond to the
|
https://en.wikipedia.org/wiki/Polyphenism
|
A polyphenic trait is a trait for which multiple, discrete phenotypes can arise from a single genotype as a result of differing environmental conditions. It is therefore a special case of phenotypic plasticity.
There are several types of polyphenism in animals, from having sex determined by the environment to the castes of honey bees and other social insects. Some polyphenisms are seasonal, as in some butterflies which have different patterns during the year, and some Arctic animals like the snowshoe hare and Arctic fox, which are white in winter. Other animals have predator-induced or resource polyphenisms, allowing them to exploit variations in their environment. Some nematode worms can develop either into adults or into resting dauer larvae according to resource availability.
Definition
A polyphenism is the occurrence of several phenotypes in a population, the differences between which are not the result of genetic differences. For example, crocodiles possess a temperature-dependent sex determining polyphenism, where sex is the trait influenced by variations in nest temperature.
When polyphenic forms exist at the same time in the same panmictic (interbreeding) population they can be compared to genetic polymorphism. With polyphenism, the switch between morphs is environmental, but with genetic polymorphism the determination of morph is genetic. These two cases have in common that more than one morph is part of the population at any one time. This is rather different from cases where one morph predictably follows another during, for instance, the course of a year. In essence the latter is normal ontogeny where young forms can and do have different forms, colours and habits to adults.
The discrete nature of polyphenic traits differentiates them from traits like weight and height, which are also dependent on environmental conditions but vary continuously across a spectrum. When a polyphenism is present, an environmental cue causes the organism to develop along
|
https://en.wikipedia.org/wiki/Microsoft%20Analysis%20Services
|
Microsoft SQL Server Analysis Services (SSAS) is an online analytical processing (OLAP) and data mining tool in Microsoft SQL Server. SSAS is used as a tool by organizations to analyze and make sense of information possibly spread out across multiple databases, or in disparate tables or files. Microsoft has included a number of services in SQL Server related to business intelligence and data warehousing. These services include Integration Services, Reporting Services and Analysis Services. Analysis Services includes a group of OLAP and data mining capabilities and comes in two flavors, multidimensional and tabular, where the difference between the two is how the data is presented. In a tabular model, the information is arranged in two-dimensional tables, which can thus be more readable for a human. A multidimensional model can contain information with many degrees of freedom, and must be unfolded to increase readability by a human.
History
In 1996, Microsoft began its foray into the OLAP Server business by acquiring the OLAP software technology from Canada-based Panorama Software.
Just over two years later, in 1998, Microsoft released OLAP Services as part of SQL Server 7. OLAP Services supported MOLAP, ROLAP, and HOLAP architectures, and it used OLE DB for OLAP as the client access API and MDX as a query language. It could work in client-server mode or offline mode with local cube files.
In 2000, Microsoft released Analysis Services 2000. It was renamed from "OLAP Services" due to the inclusion of data mining services. Analysis Services 2000 was considered an evolutionary release, since it was built on the same architecture as OLAP Services and was therefore backward compatible with it. Major improvements included more flexibility in dimension design through support of parent-child dimensions, changing dimensions, and virtual dimensions. Another feature was a greatly enhanced calculation engine with support for unary operators, custom rollups, and cell calculation
|
https://en.wikipedia.org/wiki/SHEEP%20%28symbolic%20computation%20system%29
|
SHEEP is one of the earliest interactive symbolic computation systems. It is specialized for computations with tensors, and was designed for the needs of researchers working with general relativity and other theories involving extensive tensor calculus computations.
SHEEP is a freeware package (copyrighted, but free for educational and research use).
The name "SHEEP" is pun on the Lisp Algebraic Manipulator or LAM on which SHEEP is based. The package was written by Inge Frick, using earlier work by Ian Cohen and Ray d'Inverno, who had written ALAM - Atlas LISP Algebraic Manipulation in earlier (designed in 1970). SHEEP was an interactive computer package whereas LAM and ALAM were batch processing languages.
Jan E. Åman wrote an important package in SHEEP to carry out the Cartan-Karlhede algorithm. A more recent version of SHEEP, written by Jim Skea, runs under Cambridge Lisp, which is also used for REDUCE.
See also
GRTensorII
Notes
External links
SHEEP download directory at Queen Mary, University of London
Some sources of info on Sheep
Review article by M. A. H. MacCallum in "Workshop on Dynamical Spacetimes and Numerical Relativity", edited by Joan Centrella
Tensors
|
https://en.wikipedia.org/wiki/Camouflet
|
A camouflet, in military science, is an artificial cavern created by an explosion. If the explosion reaches the surface then it is called a crater.
The term was originally defined as a countermine dug by defenders to prevent the undermining of a fortress's walls during a siege. The defenders would dig a tunnel under the attackers' tunnel. An explosive charge would be detonated to create a camouflet that would collapse the attackers' tunnel.
More recently, the term has been used to describe the effects of very large bombs like the Grand Slam bomb, which are designed to penetrate next to a large target structure and create a camouflet to undermine the foundations of the structure. It has been observed that it is more efficient to penetrate ground next to the target than to hit the target directly.
A camouflet set describes a system used in the British Army for cratering tracks and other routes. A tube is driven into the ground using a manual post driver. The end of the tube is a disposable steel point. A small charge connected to a detonator is lowered down the tube. The tube is then removed, and the hole tamped. The charge is then blown, leaving a void and a hole to the surface. This void is then filled with a much larger charge, which is also tamped, and then blown when required to create a crater as an obstacle. A refinement was introduced in the 1980s, with the use of a shaped charge to create the initial hole.
Because of the presence of high levels of toxic fumes from the explosive, including carbon monoxide, and the weakness of the soft earth overlying the cavern, camouflets are extremely hazardous to bomb disposal personnel.
See also
Bangalore torpedo
Canadian pipe mine
Flame fougasse
Special Atomic Demolition Munition
References
Siege tactics
Military engineering
Military strategy
Bomb disposal
Strategic bombing
|
https://en.wikipedia.org/wiki/Superparasitism
|
Superparasitism is a form of parasitism in which the host (typically an insect larva such as a caterpillar) is attacked more than once by a single species of parasitoid. Multiparasitism or coinfection, on the other hand, occurs when the host has been parasitized by more than one species. Host discrimination, whereby parasitoids can distinguish an already-parasitized host from an unparasitized one, is present in certain species of parasitoids and is used to avoid superparasitism and thus competition from other parasites.
Superparasitism can result in transmission of viruses, and viruses may influence a parasitoid's behavior in favor of infecting already infected hosts, as is the case with Leptopilina boulardi.
Examples
One example of superparasitism is seen in Rhagoletis juglandis, also known as the walnut husk fly. During oviposition, female flies lacerate the tissue of the inner husk of the walnut and create a cavity for their eggs. The female flies oviposit in and reinfest the same walnuts, and even the same oviposition sites, created by conspecifics.
References
Parasitology
|
https://en.wikipedia.org/wiki/DJ%20mixer
|
A DJ mixer is a type of audio mixing console used by disc jockeys (DJs) to control and manipulate multiple audio signals. Some DJs use the mixer to make seamless transitions from one song to another when they are playing records at a dance club. Hip hop DJs and turntablists use the DJ mixer to play record players like a musical instrument and create new sounds. DJs in the disco, house music, electronic dance music and other dance-oriented genres use the mixer to make smooth transitions between different sound recordings as they are playing. The sources are typically record turntables, compact cassettes, CDJs, or DJ software on a laptop. DJ mixers allow the DJ to use headphones to preview the next song before playing it to the audience. Most low- to mid-priced DJ mixers can only accommodate two turntables or CD players, but some mixers (such as the ones used in larger nightclubs) can accommodate up to four turntables or CD players. DJs and turntablists in hip hop music and nu metal use DJ mixers to create beats, loops and so-called scratching sound effects.
Description
DJ mixers are usually much smaller than other mixing consoles used in sound reinforcement systems and sound recording. Whereas a typical nightclub mixer will have 24 inputs and a professional recording studio's huge mixer may have 48, 72 or even 96 inputs, a typical DJ mixer may have only two to four inputs. The key feature that differentiates a DJ mixer from other types of larger audio mixers is the ability to redirect (cue) the sounds of a non-playing source to headphones, so the DJ can find the desired part of a song or track.
A crossfader has the same engineering design as a fader, in that it is a sliding control, but unlike faders, which are usually vertical, crossfaders are usually horizontal. To understand the function of a crossfader, one can think of it in three key positions. For a DJ mixer that has two sound sources connected, such as two record turntables, when the crossfader is
|
https://en.wikipedia.org/wiki/Topological%20algebra
|
In mathematics, a topological algebra is an algebra and at the same time a topological space, where the algebraic and the topological structures are coherent in a specified sense.
Definition
A topological algebra $A$ over a topological field $K$ is a topological vector space together with a bilinear multiplication
$$\cdot : A \times A \to A,$$
that turns $A$ into an algebra over $K$ and is continuous in some definite sense. Usually the continuity of the multiplication is expressed by one of the following (non-equivalent) requirements:
joint continuity: for each neighbourhood of zero $U \subseteq A$ there are neighbourhoods of zero $V \subseteq A$ and $W \subseteq A$ such that $V \cdot W \subseteq U$ (in other words, this condition means that the multiplication is continuous as a map between topological spaces $A \times A \to A$), or
stereotype continuity: for each totally bounded set $S \subseteq A$ and for each neighbourhood of zero $U \subseteq A$ there is a neighbourhood of zero $V \subseteq A$ such that $S \cdot V \subseteq U$ and $V \cdot S \subseteq U$, or
separate continuity: for each element $a \in A$ and for each neighbourhood of zero $U \subseteq A$ there is a neighbourhood of zero $V \subseteq A$ such that $a \cdot V \subseteq U$ and $V \cdot a \subseteq U$.
(Certainly, joint continuity implies stereotype continuity, and stereotype continuity implies separate continuity.) In the first case $A$ is called a "topological algebra with jointly continuous multiplication", and in the last, "with separately continuous multiplication".
A unital associative topological algebra is (sometimes) called a topological ring.
History
The term was coined by David van Dantzig; it appears in the title of his doctoral dissertation (1931).
Examples
1. Fréchet algebras are examples of associative topological algebras with jointly continuous multiplication.
2. Banach algebras are special cases of Fréchet algebras.
3. Stereotype algebras are examples of associative topological algebras with stereotype continuous multiplication.
Notes
External links
References
Topological vector spaces
Algebras
|
https://en.wikipedia.org/wiki/Data%20farming
|
Data farming is the process of using designed computational experiments to “grow” data, which can then be analyzed using statistical and visualization techniques to obtain insight into complex systems. These methods can be applied to any computational model.
Data farming differs from data mining, as the following metaphors indicate:
Miners seek valuable nuggets of ore buried in the earth, but have no control over what is out there or how hard it is to extract the nuggets from their surroundings. ... Similarly, data miners seek to uncover valuable nuggets of information buried within massive amounts of data. Data-mining techniques use statistical and graphical measures to try to identify interesting correlations or clusters in the data set.
Farmers cultivate the land to maximize their yield. They manipulate the environment to their advantage using irrigation, pest control, crop rotation, fertilizer, and more. Small-scale designed experiments let them determine whether these treatments are effective. Similarly, data farmers manipulate simulation models to their advantage, using large-scale designed experimentation to grow data from their models in a manner that easily lets them extract useful information. ...the results can reveal root cause-and-effect relationships between the model input factors and the model responses, in addition to rich graphical and statistical views of these relationships.
A NATO modeling and simulation task group has documented the data farming process in the Final Report of MSG-088.
Here, data farming uses collaborative processes combining rapid scenario prototyping, simulation modeling, design of experiments, high performance computing, and analysis and visualization in an iterative loop-of-loops.
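As a toy sketch of the idea (illustrative only; the model and factor levels below are invented for this example and are not taken from MSG-088), a data-farming run sweeps a designed grid of input factors across a simulation model and records the response at each design point:

#include <stdio.h>

/* Stand-in simulation model: two input factors, one response.
 * In real data farming each call would be a full simulation run. */
static double model(double a, double b) {
    return a * a + 0.5 * a * b - b;   /* arbitrary toy response surface */
}

int main(void) {
    /* Full-factorial design: every combination of the chosen factor levels */
    double levels_a[] = {0.0, 1.0, 2.0};
    double levels_b[] = {10.0, 20.0};
    printf("a,b,response\n");
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 2; j++) {
            /* "grow" one data point per design point */
            printf("%g,%g,%g\n", levels_a[i], levels_b[j],
                   model(levels_a[i], levels_b[j]));
        }
    }
    return 0;
}

The resulting table of (factors, response) rows is the "grown" data set that statistical and visualization techniques are then applied to.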
History
The science of Design of Experiments (DOE) has been around for over a century, pioneered by R.A. Fisher for agricultural studies. Many of the classic experiment designs can be used in simulation studies. However, computation
|
https://en.wikipedia.org/wiki/Ground%20bounce
|
In electronic engineering, ground bounce is a phenomenon associated with transistor switching where the gate voltage can appear to be less than the local ground potential, causing the unstable operation of a logic gate.
Description
Ground bounce is usually seen on high density VLSI where insufficient precautions have been taken to supply a logic gate with a sufficiently low impedance connection (or sufficiently high capacitance) to ground. In this phenomenon, when the base of an NPN transistor is turned on, enough current flows through the emitter-collector circuit that the silicon in the immediate vicinity of the emitter-ground connection is pulled partially high, sometimes by several volts, thus raising the local ground, as perceived at the gate, to a value significantly above true ground. Relative to this local ground, the base voltage can go negative, thus shutting off the transistor. As the excess local charge dissipates, the transistor turns back on, possibly causing a repeat of the phenomenon, sometimes up to a half-dozen bounces.
Ground bounce is one of the leading causes of "hung" or metastable gates in modern digital circuit design. This happens because the ground bounce effectively puts the input of a flip-flop at a voltage level that is neither a one nor a zero at clock time, or causes untoward effects in the clock itself. A similar voltage sag phenomenon may be seen on the collector side, called supply voltage sag (or VCC sag), where VCC is pulled unnaturally low. As a whole, ground bounce is a major issue in nanometer-range technologies in VLSI.
Ground bounce can also occur when the circuit board has poorly designed ground paths. Improper ground or VCC can lead to local variations in the ground level between various components. This is most commonly seen in circuit boards that have ground and VCC paths on the surfaces of the board.
Reduction
Ground bounce may be reduced by placing a 10–30-ohm resistor in series with each of the switching outputs
|
https://en.wikipedia.org/wiki/Starlink%20Project
|
The Starlink Project, referred to by users as Starlink and by developers as simply The Project, was a UK astronomical computing project which supplied general-purpose data reduction software. Until the late 1990s, it also supplied computing hardware and system administration personnel to UK astronomical institutes. In the former respect, it was analogous to the US IRAF project.
The project was formally started in 1980, though the funding had been agreed, and some work begun, a year earlier. It was closed down when its funding was withdrawn by the Particle Physics and Astronomy Research Council in 2005. In 2006, the Joint Astronomy Centre released its own updated version of Starlink and took over maintenance; the task was passed again in mid-2015 to the East Asian Observatory. The latest version was released on 19 July 2018.
Part of the software is relicensed under the GNU GPL, while some of it remains under the original custom licence.
History
From its beginning, the project aimed to cope with the ever-increasing data volumes which astronomers had to handle. A 1982 paper exclaimed that astronomers were returning from observing runs (a week or so of observations at a remote telescope) with more than 10 gigabits of data on tape; at the end of its life the project was rolling out libraries to handle data of more than 4 gigabytes per single image.
The project provided centrally-purchased (and thus discounted) hardware, professional system administrators, and the developers to write astronomical data-reduction applications for the UK astronomy community and beyond. At its peak size in the late 1980s and early 1990s, the project had a presence at around 30 sites, located at most of the UK universities with an astronomy department, plus facilities at the Joint Astronomy Centre, the home of UKIRT and the James Clerk Maxwell Telescope in Hawaii. The number of active developers fluctuated between five and more than a dozen.
By 1982, the project had a staff of 17, servin
|
https://en.wikipedia.org/wiki/DRTE%20Computer
|
The DRTE Computer was a transistorized computer built at the Defence Research Telecommunications Establishment (DRTE), part of the Canadian Defence Research Board. It was one of the earlier fully transistorized machines, running in prototype form in 1957, and in fully developed form in 1960. Although the performance was quite good, equal to that of contemporary machines like the PDP-1, no commercial vendors ever took up the design, and the only potential sale, to the Canadian Navy's Pacific Naval Laboratories, fell through. The machine is currently part of the Canadian national science and technology collection housed at the Canada Science and Technology Museum.
Transistor research
In the early 1950s transistors had not yet replaced vacuum tubes in most electronics. Tubes varied widely in their actual characteristics, even from tube to tube of the same model. Engineers had developed techniques to ensure that the overall circuit was not overly sensitive to these variations, so that tubes could be replaced without causing trouble. The same techniques had not yet been developed for transistor-based systems, which were simply too new. While smaller circuits could be "hand tuned" to work, larger systems using many transistors were not well understood. At the same time transistors were still expensive; a tube cost about $0.75 while a similar transistor cost about $8. This limited the amount of experimentation most companies were able to perform.
DRTE was originally formed to improve communications systems, and to this end, they started a research program into using transistors in complex circuits in a new Electronics Lab under the direction of Norman Moody. Between 1950 and 1960, the Electronics Lab became a major center of excellence in the field of transistors, and through an outreach program, the Electronic Component Research and Development Committee, were able to pass on their knowledge to visiting engineers from major Canadian electronics firms who were entering the transistor fi
|
https://en.wikipedia.org/wiki/Static%20library
|
In computer science, a static library or statically-linked library is a set of routines, external functions and variables which are resolved in a caller at compile-time and copied into a target application by a compiler, linker, or binder, producing an object file and a stand-alone executable. This executable and the process of compiling it are both known as a static build of the program. Historically, libraries could only be static. Static libraries are either merged with other static libraries and object files during building/linking to form a single executable or loaded at run-time into the address space of their corresponding executable at a static memory offset determined at compile-time/link-time.
Advantages and disadvantages
There are several advantages to statically linking libraries with an executable instead of dynamically linking them. The most significant advantage is that the application can be certain that all its libraries are present and that they are the correct version. This avoids dependency problems, known colloquially as DLL Hell or more generally dependency hell. Static linking can also allow the application to be contained in a single executable file, simplifying distribution and installation.
With static linking, it is enough to include those parts of the library that are directly and indirectly referenced by the target executable (or target library). With dynamic libraries, the entire library is loaded, as it is not known in advance which functions will be invoked by applications. Whether this advantage is significant in practice depends on the structure of the library.
In static linking, the size of the executable becomes greater than in dynamic linking, as the library code is stored within the executable rather than in separate files. But if library files are counted as part of the application then the total size will be similar, or even smaller if the compiler eliminates the unused symbols.
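As an illustration, the following minimal sketch (assuming a Unix-like C toolchain; the file and function names are hypothetical) shows a routine being compiled into an object file, archived into a static library, and copied into the executable at link time:

/* greet.c -- a routine destined for the static library */
#include <stdio.h>

void greet(void) {
    printf("Hello from a static library!\n");
}

/* main.c -- the target application */
void greet(void);   /* normally declared in a header such as greet.h */

int main(void) {
    greet();        /* resolved at link time; the code is copied into the executable */
    return 0;
}

/*
 * Typical build steps:
 *   cc -c greet.c                  # compile to the object file greet.o
 *   ar rcs libgreet.a greet.o      # archive object files into a static library
 *   cc main.c -L. -lgreet -o app   # static build: greet() is embedded in 'app'
 */

After the final step, the executable 'app' contains its own copy of greet() and no longer depends on libgreet.a being present at run time.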
Environment specific
On Microsoft Windows
|
https://en.wikipedia.org/wiki/Operating%20system%20abstraction%20layer
|
An operating system abstraction layer (OSAL) provides an application programming interface (API) to an abstract operating system, making it easier and quicker to develop code for multiple software or hardware platforms.
OS abstraction layers present an abstraction of the common system functionality that is offered by any operating system, by providing meaningful and easy-to-use wrapper functions that encapsulate the system functions of the OS to which the code is being ported. A well-designed OSAL provides implementations of an API for several real-time operating systems (such as VxWorks, eCos, RTLinux, RTEMS). Implementations may also be provided for non-real-time operating systems, allowing the abstracted software to be developed and tested in a developer-friendly desktop environment.
In addition to the OS APIs, the OS Abstraction Layer project may also provide a hardware abstraction layer, designed to provide a portable interface to hardware devices such as memory, I/O ports, and non-volatile memory. To facilitate the use of these APIs, OSALs generally include a directory structure and build automation (e.g., set of makefiles) to facilitate building a project for a particular OS and hardware platform.
Implementing projects using OSALs allows for development of portable embedded system software that is independent of a particular real-time operating system. It also allows for embedded system software to be developed and tested on desktop workstations, providing a shorter development and debug time.
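The pattern can be sketched as follows (a minimal example with hypothetical names; it does not reproduce the API of any particular OSAL): each OS primitive is wrapped in a neutral function, and a backend is selected at compile time.

/* osal_delay.h -- portable API used by application code */
void osal_delay_ms(unsigned int ms);

/* osal_delay.c -- one backend per target OS, chosen at build time */
#if defined(OSAL_TARGET_POSIX)
#include <time.h>
void osal_delay_ms(unsigned int ms) {
    /* POSIX backend: sleep using nanosleep() */
    struct timespec ts = { ms / 1000u, (long)(ms % 1000u) * 1000000L };
    nanosleep(&ts, NULL);
}
#elif defined(OSAL_TARGET_MY_RTOS)
void osal_delay_ms(unsigned int ms) {
    /* RTOS backend: call the target RTOS's own delay primitive here,
     * converting milliseconds to system clock ticks as required. */
}
#endif

Application code calls only osal_delay_ms(), so it can be rebuilt for another platform by supplying a different backend rather than by editing the application itself.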
Implementations
TnFOX
MapuSoft Technologies - provides a commercial OS Abstraction implementation allowing software to support multiple RTOS operating systems.
ClarinoxSoftFrame – middleware which provides OS abstraction targeting wireless embedded device and system development. It comprises wireless protocol stacks, development tools and memory management techniques in addition to the support of desktop and a range of r
|
https://en.wikipedia.org/wiki/Point%20groups%20in%20three%20dimensions
|
In geometry, a point group in three dimensions is an isometry group in three dimensions that leaves the origin fixed, or correspondingly, an isometry group of a sphere. It is a subgroup of the orthogonal group O(3), the group of all isometries that leave the origin fixed, or correspondingly, the group of orthogonal matrices. O(3) itself is a subgroup of the Euclidean group E(3) of all isometries.
Symmetry groups of geometric objects are isometry groups. Accordingly, analysis of isometry groups is analysis of possible symmetries. All isometries of a bounded (finite) 3D object have one or more common fixed points. We follow the usual convention by choosing the origin as one of them.
The symmetry group of an object is sometimes also called its full symmetry group, as opposed to its proper symmetry group, the intersection of its full symmetry group with E+(3), which consists of all direct isometries, i.e., isometries preserving orientation. For a bounded object, the proper symmetry group is called its rotation group. It is the intersection of its full symmetry group with SO(3), the full rotation group of the 3D space. The rotation group of a bounded object is equal to its full symmetry group if and only if the object is chiral.
The point groups that are generated purely by a finite set of reflection mirror planes passing through the same point are the finite Coxeter groups, represented by Coxeter notation.
The point groups in three dimensions are heavily used in chemistry, especially to describe the symmetries of a molecule and of molecular orbitals forming covalent bonds, and in this context they are also called molecular point groups.
3D isometries that leave origin fixed
The symmetry group operations (symmetry operations) are the isometries of three-dimensional space R3 that leave the origin fixed, forming the group O(3). These operations can be categorized as:
The direct (orientation-preserving) symmetry operations, which form the group SO(3):
The identity op
|
https://en.wikipedia.org/wiki/Virtual%20museum
|
A virtual museum is a digital entity that draws on the characteristics of a museum, in order to complement, enhance, or augment the museum experience through personalization, interactivity, and richness of content. Virtual museums can perform as the digital footprint of a physical museum, or can act independently, while maintaining the authoritative status as bestowed by the International Council of Museums (ICOM) in its definition of a museum. In tandem with the ICOM mission of a physical museum, the virtual museum is also committed to public access; to both the knowledge systems embedded in the collections and the systematic, and coherent organization of their display, as well as to their long-term preservation.
As with a traditional museum, a virtual museum can be designed around specific objects (such as an art museum or a natural history museum), or can consist of online exhibitions created from primary or secondary resources (as, for example, in a science museum). Moreover, a virtual museum can refer to the mobile or World Wide Web offerings of traditional museums (e.g., displaying digital representations of its collections or exhibits), or can be born-digital content such as 3D environments, net art, virtual reality and digital art. Although often discussed in conjunction with other cultural institutions, a museum, by definition, is essentially separate from its sister institutions such as a library or an archive. Virtual museums are usually, but not exclusively, delivered electronically, when they are denoted as online museums, hypermuseums, digital museums, cybermuseums or web museums.
Off-line pioneers (CD-ROM and digital media before 2000)
The following museums were created with digital technology before the web gained any form of popularity or mass usability. CD-ROM and postal mail distribution made these museums available world-wide, before web browsers, fast connections and ubiquitous web usage.
The Australian new media artist Jeffrey Shaw created the world’s
|
https://en.wikipedia.org/wiki/Bank%20Panic
|
Bank Panic is an arcade shooter game developed by Sanritsu Denki and released by Sega in 1984. Bally-Midway manufactured the game in the US. The player assumes the part of an Old West sheriff who must protect a bank and its customers from masked robbers.
Gameplay
Controls consist of a two-position joystick and three buttons to fire at the left, center, and right positions.
The layout of the bank is implicitly a circle with twelve numbered doors and the player in the center. The player can rotate to the left or right using the joystick, viewing three doors at a time, and shoot at a door by pressing the button corresponding to its position on the screen. The doors will open to reveal one of the following:
A customer, who will make a deposit by dropping a bag of money onto the counter.
A robber, who will attempt to shoot the player.
A young boy wearing a stack of hats, which the player can rapidly shoot to gain a deposit or bonus time.
The level ends once every door has received at least one deposit. If a customer makes a deposit at a door where a bank teller is sitting, the player earns bonus points.
The status of each door is indicated by a row of numbered boxes across the top of the screen, with a red dollar sign representing a door with a completed deposit. A bar gauge above each box shows how close a person is to reaching that door. The disappearance of a dollar sign indicates that a robber has just stolen a deposit; the player must then turn to that door and shoot the robber to recover it.
At random intervals, a bomb will be placed on one of the doors and a rapid timer will count down from 99. The player must move to that door and destroy the bomb with gunfire. Shooting a customer, being shot by a robber, failing to destroy a bomb, or failing to complete the level before the overall timer runs out (shown by a bar at the bottom of the screen) costs the player one life.
Some robbers will wear white boots; these robbers need to be shot twice to be eliminated. At tim
|
https://en.wikipedia.org/wiki/Browser%20sniffing
|
Browser sniffing (also known as browser detection) is a set of techniques used in websites and web applications in order to determine the web browser a visitor is using, and to serve browser-appropriate content to the visitor. It is also used to detect mobile browsers and send them mobile-optimized websites. This practice is sometimes used to circumvent incompatibilities between browsers due to misinterpretation of HTML, Cascading Style Sheets (CSS), or the Document Object Model (DOM). While the World Wide Web Consortium maintains up-to-date central versions of some of the most important Web standards in the form of recommendations, in practice no software developer has designed a browser which adheres exactly to these standards; implementation of other standards and protocols, such as SVG and XMLHttpRequest, varies as well. As a result, different browsers display the same page differently, and so browser sniffing was developed to detect the web browser in order to help ensure consistent display of content.
Sniffer methods
Client-side sniffing
Web pages can use programming languages such as JavaScript which are interpreted by the user agent, with results sent to the web server. For example:
var isIEBrowser = false;
if (window.ActiveXObject) {
isIEBrowser = true;
}
// Or, shorter:
var isIE = (window.ActiveXObject !== undefined);
This code is run by the client computer, and the results are used by other code to make necessary adjustments on the client side. In this example, the client computer is asked to determine whether the browser can use a feature called ActiveX. Since this feature was proprietary to Microsoft, a positive result indicates that the client may be running Microsoft's Internet Explorer. However, this is no longer a reliable indicator: since Microsoft's open-source release of the ActiveX code, the feature can be used by any browser.
Standard Browser detection method
The web server communicates with the client using a communication protoco
|
https://en.wikipedia.org/wiki/158%20%28number%29
|
158 (one hundred [and] fifty-eight) is the natural number following 157 and preceding 159.
In mathematics
158 is a nontotient, since there is no integer with 158 coprimes below it. 158 is a Perrin number, appearing after 68, 90, 119.
158 is the number of digits in the decimal expansion of 100!, the product of all the natural numbers up to and including 100.
In the military
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy following World War II
was a United States Navy during World War II
was a United States Navy Trefoil-class concrete barge during World War II
was a United States Navy during World War II
was a United States Navy converted yacht patrol vessel during World War I
In music
The song "158" by the indie rock band Blackbud
The song "Here We Go" (1998) from The Bouncing Souls’ Tie One On CD includes the lyrics "Me, Shal Pete and Lamar thumbed down the ramp of Exit 158"
In transportation
The Alfa Romeo 158 racecar
The Ferrari 158 racecar produced between 1964 and 1965
The British Rail Class 158 Express Sprinter is a diesel multiple unit (DMU) train, built for British Rail between 1989 and 1992
In other fields
158 is also:
The year AD 158 or 158 BC
One of a number of highways
The atomic number of an element temporarily called unpentoctium.
158 Koronis is a Main belt asteroid
In the Israeli satirical comedy Operation Grandma ("Mivtza Safta", מבצע סבתא), the number 158 is implied to be a classified high-rank officer position (Alon says: "Since you've became 158, you became all that?")
Township 158-30 is a small township in Lake of the Woods County, Minnesota
Edenwold No. 158, Saskatchewan is a rural municipality in Saskatchewan, Canada
John Irving's third novel, The 158-Pound Marriage
Financial Accounting Standards Board summary of statement No. 158 requires an employer to recognize the overfunded or underfunded s
|
https://en.wikipedia.org/wiki/PC-MOS/386
|
PC-MOS/386 is a multi-user, multitasking computer operating system produced by The Software Link (TSL), announced at COMDEX in November 1986 for February 1987 release. PC-MOS/386, a successor to PC-MOS, can run many MS-DOS programs on the host machine or a terminal connected to it. Unlike MS-DOS, PC-MOS/386 is optimized for the Intel 80386 processor; however early versions will run on any x86 computer. PC-MOS/386 used to be proprietary, but it was released as open-source software in 2017.
History
The last commercial version produced was v5.01, compatible with MS-DOS 5. It required a memory management unit (MMU) to support memory protection, so was not compatible with 8086 and 8088 processors.
MMU support for 286 class machines was provided using a proprietary hardware shim inserted between the processor and its socket. 386 machines did not require any special hardware.
Multi-user operation suffered from the limitations of the day, including the inability of the processor to schedule and partition running processes. Typically, swapping from a foreground to a background process on the same terminal involved using the keyboard to generate an interrupt and then swap the processes. The cost of RAM (over US$500/MB in 1987) and the slow and expensive hard disks of the day limited performance.
PC-MOS terminals could be x86 computers running terminal emulation software communicating at 9600 or 19200 baud, connected via serial cables. However, the greatest benefit was reached when using standard "dumb" terminals, which shared the resources of the central 386-based processor. Speeds above this required specialized hardware boards, which increased cost, but the speed was not a serious limitation for interacting with text-based programs.
PC-MOS also figured prominently in the lawsuit Arizona Retail Systems, Inc. v. The Software Link, Inc., where Arizona Retail Systems claimed The Software Link violated implied warranties on PC-MOS. The case is notable because The Software Link argu
|
https://en.wikipedia.org/wiki/Index%20of%20electrical%20engineering%20articles
|
This is an alphabetical list of articles pertaining specifically to electrical and electronics engineering. For a thematic list, please see List of electrical engineering topics. For a broad overview of engineering, see List of engineering topics. For biographies, see List of engineers.
#
866A –
15 kV AC –
2D computer graphics –
3Com –
A
Abrasion (mechanical) –
AC adapter –
AC power plugs and sockets –
AC power –
AC/AC converter –
AC/DC receiver design –
AC/DC conversion –
Active rectification –
Actuator –
Adaptive control –
Adjustable-speed drive –
Advanced Z-transform –
Affinity law –
Agbioeletric –
AIEE –
All American Five –
Alloy –
ALOHAnet –
Alpha–beta transformation –
Altair 8800 –
Alternating current –
Alternator (auto) –
Alternator synchronization –
Alternator –
Altitude –
Aluminium smelting –
Ammeter –
Amorphous metal transformer –
Ampacity –
Ampere –
Ampère's circuital law –
Ampère's force law –
Ampère's law –
Amplidyne –
Amplifier –
Amplitude modulation –
Analog circuit –
Analog filter –
Analog signal processing –
Analog signal –
Analog-to-digital converter –
Annealing (metallurgy) –
Anode –
Antenna (radio) –
Apollo program –
Apparent power –
Apple Computer –
Arc converter –
Arc furnace –
Arc lamp –
Arc welder –
Argon –
Arithmetic mean –
Armature (electrical engineering) –
Artificial heart –
Artificial intelligence –
Artificial neural networks –
Artificial pacemaker –
ASTM –
Asymptotic stability –
Asynchronous circuit –
Audio and video connector –
Audio equipment –
Audio filter –
Audio frequency –
Audio noise reduction –
Audio signal processing –
Audion tube –
Austin transformer –
Automatic gain control –
Automatic transfer switch –
Automation –
Autorecloser –
Autotransformer –
Availability factor –
Avalanche diode –
Average rectified value –
B
Backward-wave oscillator –
Balanced line –
Ball bearing motor –
Balun –
Band-pass filter –
Band-stop filter –
|
https://en.wikipedia.org/wiki/Shotgun%20email
|
Shotgun email refers to an email requesting information or action that only requires the efforts of one person but is sent to multiple people in an effort to guarantee that at least one person will respond. The shotgun email often results in multiple people responding to something already accomplished, and therefore results in a loss of overall productivity. Shotgun emailing is considered poor internet etiquette.
An example would be a person of authority in a business organization sending an email to five technicians in the information technology department of his company to let them know his printer is broken. One technician responds with an on-site call and fixes the problem. Later in the day, other technicians follow up to fix a printer that is already back in order. Shotgun emails can also be requests for information or other tasks.
The blind shotgun email occurs when the sender uses the blind carbon copy (BCC) feature of an email program to hide the fact that a shotgun email is in use. This is considered particularly deceitful.
The term also describes shotgun email marketing, in which companies send newsletters, promotional offers, or charitable appeals to large, untargeted lists of recipients.
Scam emails also commonly use the shotgun method: messages such as lottery-win notifications or offers of free trips the recipient never signed up for are sent to large numbers of addresses in the hope that a few recipients will respond.
References
See also
Email spam
Netiquette
Email
Internet terminology
Etiquette
|
https://en.wikipedia.org/wiki/Divine%20Proportions%3A%20Rational%20Trigonometry%20to%20Universal%20Geometry
|
Divine Proportions: Rational Trigonometry to Universal Geometry is a 2005 book by the mathematician Norman J. Wildberger on a proposed alternative approach to Euclidean geometry and trigonometry, called rational trigonometry. The book advocates replacing the usual basic quantities of trigonometry, Euclidean distance and angle measure, by squared distance and the square of the sine of the angle, respectively. This is logically equivalent to the standard development (as the replacement quantities can be expressed in terms of the standard ones and vice versa). The author claims his approach holds some advantages, such as avoiding the need for irrational numbers.
The book was "essentially self-published" by Wildberger through his publishing company Wild Egg. The formulas and theorems in the book are regarded as correct mathematics but the claims about practical or pedagogical superiority are primarily promoted by Wildberger himself and have received mixed reviews.
Overview
The main idea of Divine Proportions is to replace distances by the squared Euclidean distance, which Wildberger calls the quadrance, and to replace angle measures by the squares of their sines, which Wildberger calls the spread between two lines. Divine Proportions defines both of these concepts directly from the Cartesian coordinates of points that determine a line segment or a pair of crossing lines. Defined in this way, they are rational functions of those coordinates, and can be calculated directly without the need to take the square roots or inverse trigonometric functions required when computing distances or angle measures.
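In coordinates, the two definitions take the following form (the standard formulas of rational trigonometry, with lines written as $\ell_i : a_i x + b_i y + c_i = 0$):

$$Q(A_1, A_2) = (x_2 - x_1)^2 + (y_2 - y_1)^2$$

$$s(\ell_1, \ell_2) = \frac{(a_1 b_2 - a_2 b_1)^2}{(a_1^2 + b_1^2)(a_2^2 + b_2^2)}$$

Both expressions are rational functions of the coordinates, which is the sense in which no square roots or inverse trigonometric functions are needed.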
For Wildberger, a finitist, this replacement has the purported advantage of avoiding the concepts of limits and actual infinity used in defining the real numbers, which Wildberger claims to be unfounded. It also allows analogous concepts to be extended directly from the rational numbers to other number systems such as finite fields using the same formulas for quadrance
|
https://en.wikipedia.org/wiki/Winlogon
|
Winlogon (Windows Logon) is the component of Microsoft Windows operating systems that is responsible for handling the secure attention sequence, loading the user profile on logon, creating the desktops for the window station, and optionally locking the computer when a screensaver is running (requiring another authentication step). In Windows Vista and later operating systems, the roles and responsibilities of Winlogon have changed significantly.
Overview
Winlogon is launched by the Session Manager Subsystem as a part of the booting process of Windows NT.
Before Windows Vista, Winlogon was responsible for starting the Service Control Manager and the Local Security Authority Subsystem Service, but since Vista these have been launched by the Windows Startup Application (wininit.exe).
The first part of the logon process Winlogon conducts is starting the process that shows the user the logon screen. Before Windows Vista this was done by GINA, but starting with Vista this is done by LogonUI. These programs are responsible for getting user credentials and passing them to the Local Security Authority Subsystem Service, which authenticates the user.
After control is given back to Winlogon, it creates and opens an interactive window station, WinSta0, and creates three desktops, Winlogon, Default and ScreenSaver. Winlogon switches from the Winlogon desktop to the Default desktop when the shell indicates that it is ready to display something for the user, or after thirty seconds, whichever comes first.
The system switches back to the Winlogon desktop if the user presses Control-Alt-Delete or when a User Account Control prompt is shown. Winlogon now starts the program specified in the Userinit value which defaults to userinit.exe. This value supports multiple executables.
Responsibilities
Window station and desktop protection
Winlogon sets the protection of the window station and corresponding desktops to ensure that each is properly accessible. In general, this means t
|
https://en.wikipedia.org/wiki/Substring
|
In formal language theory and computer science, a substring is a contiguous sequence of characters within a string. For instance, "the best of" is a substring of "It was the best of times". In contrast, "Itwastimes" is a subsequence of "It was the best of times", but not a substring.
Prefixes and suffixes are special cases of substrings. A prefix of a string t is a substring of t that occurs at the beginning of t; likewise, a suffix of a string t is a substring that occurs at the end of t.
The substrings of the string "apple" would be:
"a", "ap", "app", "appl", "apple",
"p", "pp", "ppl", "pple",
"pl", "ple",
"l", "le"
"e", ""
(note the empty string at the end).
Substring
A string u is a substring (or factor) of a string t if there exist two strings p and s such that t = pus. In particular, the empty string is a substring of every string.
Example: The string ana is equal to substrings (and subsequences) of banana at two different offsets:
banana
 |||
 ana
banana
   |||
   ana
The first occurrence is obtained with p = b and s = na, while the second occurrence is obtained with p = ban and s being the empty string.
A substring of a string is a prefix of a suffix of the string, and equivalently a suffix of a prefix; for example, nan is a prefix of nana, which is in turn a suffix of banana. If is a substring of , it is also a subsequence, which is a more general concept. The occurrences of a given pattern in a given string can be found with a string searching algorithm. Finding the longest string which is equal to a substring of two or more strings is known as the longest common substring problem.
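As a brief illustration (a minimal C sketch; the helper functions are written for this example and are not from any particular library), substring search is provided by the standard library's strstr, while prefix and suffix tests reduce to simple comparisons:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* u is a prefix of t if t begins with the characters of u */
static bool is_prefix(const char *u, const char *t) {
    return strncmp(t, u, strlen(u)) == 0;
}

/* u is a suffix of t if t ends with the characters of u */
static bool is_suffix(const char *u, const char *t) {
    size_t lu = strlen(u), lt = strlen(t);
    return lu <= lt && strcmp(t + (lt - lu), u) == 0;
}

int main(void) {
    const char *t = "banana";
    printf("%s\n", strstr(t, "ana"));      /* substring search: prints "anana" */
    printf("%d\n", is_prefix("ban", t));   /* 1: "ban" is a prefix of "banana" */
    printf("%d\n", is_suffix("nana", t));  /* 1: "nana" is a suffix of "banana" */
    return 0;
}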
In the mathematical literature, substrings are also called subwords (in America) or factors (in Europe).
Prefix
A string p is a prefix of a string t if there exists a string s such that t = ps. A proper prefix of a string is not equal to the string itself; some sources in addition restrict a proper prefix to be non-empty. A prefix can be seen as a special case of a substring.
Example: The string b
|
https://en.wikipedia.org/wiki/Brucella%20suis
|
Brucella suis is a bacterium that causes swine brucellosis, a zoonosis that affects pigs. The disease typically causes chronic inflammatory lesions in the reproductive organs of susceptible animals or orchitis, and may even affect joints and other organs. The most common symptom is abortion in pregnant susceptible sows at any stage of gestation. Other manifestations are temporary or permanent sterility, lameness, posterior paralysis, spondylitis, and abscess formation. It is transmitted mainly by ingestion of infected tissues or fluids, semen during breeding, and suckling infected animals.
Since brucellosis threatens the food supply and causes undulant fever, Brucella suis and other Brucella species (B. melitensis, B. abortus, B. ovis, B. canis) are recognized as potential agricultural, civilian, and military bioterrorism agents.
Symptoms and signs
The most frequent clinical signs following B. suis infection are abortion in pregnant females, reduced milk production, and infertility. Cattle can also be transiently infected when they share pasture or facilities with infected pigs, and B. suis can be transmitted by cow's milk.
Swine also develop orchitis (swelling of the testicles), lameness (movement disability), hind limb paralysis, or spondylitis (inflammation in joints).
Cause
Brucella suis is a Gram-negative, facultative, intracellular coccobacillus, capable of growing and reproducing inside of host cells, specifically phagocytic cells. They are also not spore-forming, capsulated, or motile. Flagellar genes, however, are present in the B. suis genome, but are thought to be cryptic remnants because some were truncated and others were missing crucial components of the flagellar apparatus. In mouse models, the flagellum is essential for a normal infectious cycle, where the inability to assemble a complete flagellum leads to severe attenuation of the bacteria.
Brucella suis is differentiated into five biovars (strains), where biovars 1–3 infect wild boar and dome
|
https://en.wikipedia.org/wiki/Guitar%20Pro
|
Guitar Pro is a multitrack editor of guitar and bass tablature and musical scores, possessing a built-in MIDI-editor, a plotter of chords, a player, a metronome and other tools for musicians. It has versions for Windows and Mac OS X (Intel processors only) and is written by the French company Arobas Music.
History
There have been six popular public major releases of the software: versions 3–8. Guitar Pro was initially designed as a tablature editor, but has since evolved into a full-fledged score writer including support for many musical instruments other than guitar.
Until it reached version 4, the software was only available for Microsoft Windows. Guitar Pro 5 (released in November 2005) then underwent a year-long porting effort, and Guitar Pro 5 for Mac OS X was released in July 2006. On April 5, 2010, Guitar Pro 6, a completely redesigned version, was released. This version also supports Linux, with 32-bit Ubuntu being the officially supported distribution.
On February 6, 2011, the first portable release of Guitar Pro (version 6) was made available on the App Store for the iPhone, iPod Touch, and iPad running iOS 3.0 or later. An Android version was released on December 17, 2014.
In 2011, a version was made to work with the Fretlight guitar, called Guitar Pro 6 Fretlight Ready. The tablature notes being played in Guitar Pro 6 Fretlight Ready show up on the LEDs encased within the Fretlight guitar's fretboard, teaching the player the song.
In April 2017, Guitar Pro 7 was officially released with new features and dropped Linux support.
Guitar Pro 8 was released in May 2022 with a range of new features, most notably support for Apple Silicon processors.
Background
The software makes use of multiple instrument tracks which follow standard staff notation, but also shows the notes on tablature notation. It gives the musician visual access to keys (banjos, drumkits, etc.) for the song to be composed, and allows live previews of
|
https://en.wikipedia.org/wiki/Audio%20leveler
|
An audio leveler performs an audio process similar to compression, which is used to reduce the dynamic range of a signal, so that the quietest portion of the signal is loud enough to hear and the loudest portion is not too loud.
Levelers work especially well with vocals, as there are huge dynamic differences in the human voice and levelers work in such a way as to sound very natural, letting the character of the sound change with the different levels but still maintaining a predictable and usable dynamic range.
A leveler is different from a compressor in that the ratio and threshold are controlled with a single control.
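A rough sketch of this idea in code (illustrative only; the structure and constants below are assumptions, not taken from any particular product) tracks the input with a smoothed envelope and continuously steers the gain toward a single target level:

#include <math.h>

/* One-knob leveler sketch: track the input level with a smoothed
 * envelope and continuously adjust gain toward a target level. */
typedef struct {
    float envelope;  /* smoothed estimate of the input magnitude */
    float target;    /* desired output level (linear scale) */
    float smooth;    /* envelope smoothing coefficient, 0 < smooth < 1 */
} Leveler;

float leveler_process(Leveler *lv, float in) {
    /* one-pole envelope follower */
    lv->envelope += lv->smooth * (fabsf(in) - lv->envelope);
    /* gain that would bring the current envelope to the target level */
    float gain = lv->target / (lv->envelope + 1e-9f);
    /* cap the gain so silence is not boosted without bound */
    if (gain > 4.0f) gain = 4.0f;
    return in * gain;
}

Because the gain changes slowly relative to the signal, the character of the sound follows the level changes while the overall dynamic range stays predictable.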
External links
TLA-100 Tube Levelling Amplifier by Summit Audio
Signal processing
|
https://en.wikipedia.org/wiki/Harry%20Huskey
|
Harry Douglas Huskey (January 19, 1916 – April 9, 2017) was an American computer design pioneer.
Early life and career
Huskey was born in Whittier, in the Smoky Mountains region of North Carolina and grew up in Idaho. He received his bachelor's degree in mathematics and physics at the University of Idaho. He was the first member of his family to attend college. He gained his Master's and then his PhD in 1943 from the Ohio State University on Contributions to the Problem of Geöcze. Huskey taught mathematics to U.S. Navy students at the University of Pennsylvania and then worked part-time on the early ENIAC and EDVAC computers in 1945. This work represented his first formal introduction to computers, according to his obituary in The New York Times.
He visited the National Physical Laboratory (NPL) in the United Kingdom for a year and worked on the Pilot ACE computer with Alan Turing and others. He was also involved with the EDVAC and SEAC computer projects.
Huskey designed and managed the construction of the Standards Western Automatic Computer (SWAC) at the National Bureau of Standards in Los Angeles (1949–1953). He also designed the G-15 computer for Bendix Aviation Corporation, a machine operable by one person. He had one at his home that is now in the Smithsonian Institution in Washington, D.C.
After five years at the National Bureau of Standards, Huskey joined the faculty of the University of California, Berkeley in 1954 and then the University of California, Santa Cruz from 1966. He cofounded the computer and information science program at UC Santa Cruz in 1967. He became director of its computer center. In 1986, UC Santa Cruz named him professor emeritus. While at Berkeley, he supervised the research of pioneering programming language designer Niklaus Wirth, who gained his PhD in 1963. During 1963–1964, Huskey participated in establishing the Computer Center at IIT Kanpur and convened a meeting there with many pioneers of computing technology. Parti
|
https://en.wikipedia.org/wiki/GParted
|
GParted (acronym of GNOME Partition Editor) is a GTK front-end to GNU Parted and an official GNOME partition-editing application (alongside Disks). GParted is used for creating, deleting, resizing, moving, checking, and copying disk partitions and their file systems. This is useful for creating space for new operating systems, reorganizing disk usage, copying data residing on hard disks, and mirroring one partition with another (disk imaging). It can also be used to format a USB drive.
Background
GParted uses libparted to detect and manipulate devices and partition tables while several (optional) file system tools provide support for file systems not included in libparted. These optional packages will be detected at runtime and do not require a rebuild of GParted. GParted supports the following filesystems: Ext2, Ext3, Ext4, FAT16, FAT32, HFS, HFS+, JFS, Linux-swap, ReiserFS, Reiser4, UFS, XFS, and NTFS.
GParted is written in C++ and uses gtkmm to interface with GTK. The general approach is to keep the GUI as simple as possible and in conformity with the GNOME Human Interface Guidelines.
The GParted project provides a live operating system including GParted which can be written to a Live CD, a Live USB and other media. The operating system is based on Debian. GParted is also available on other Linux live CDs, including recent versions of Puppy, Knoppix, SystemRescueCd and Parted Magic. GParted is preinstalled when booting from "Try Ubuntu" mode on Ubuntu installation media.
An alternative to this software is GNOME Disks.
Supported features
GParted supports the following operations on file systems (provided that all features were enabled at compile-time and all required tools are present on the system). The 'copy' field indicates whether GParted is capable of cloning the mentioned filesystem.
Cloning with GParted
GParted is capable of cloning by copying and pasting. GParted is not capable of cloning an entire disk, but only one partition at a time. The fi
|
https://en.wikipedia.org/wiki/Sanitation%20Standard%20Operating%20Procedures
|
Sanitation Standard Operating Procedures is the common name, in the United States, given to the sanitation procedures in food production plants which are required by the Food Safety and Inspection Service of the USDA and regulated by 9 CFR part 416 in conjunction with 21 CFR part 178.1010. It is considered one of the prerequisite programs of HACCP.
SSOPs are generally documented steps that must be followed to ensure adequate cleaning of product contact and non-product surfaces. These cleaning procedures must be detailed enough to make certain that adulteration of product will not occur. All HACCP plans require SSOPs to be documented and reviewed periodically to incorporate changes to the physical plant. This reviewing procedure can take on many forms, from annual formal reviews to random reviews, but any review should be done by "responsible educated management". As these procedures can make their way into the public record if there are serious failures, they might be looked at as public documents because they are required by the government. SSOPs, in conjunction with the Master Sanitation Schedule and Pre-Operational Inspection Program, form the entire sanitation operational guidelines for food-related processing and one of the primary backbones of all food industry HACCP plans.
SSOPs can range from very simple to extremely intricate depending on the focus. Food industry equipment should be constructed of sanitary design; however, some automated processing equipment is by necessity difficult to clean. An individual SSOP should include:
The equipment or affected area to be cleaned, identified by common name
The tools necessary to prepare the equipment or area to be cleaned
How to disassemble the area or equipment
The method of cleaning and sanitizing
SSOPs can be standalone documents, but they should also serve as work instructions as this will help ensure they are accurate.
Sanitary accessories
To assure thorough sanitation, the use of the following items (and
|
https://en.wikipedia.org/wiki/Monocalcium%20phosphate
|
Monocalcium phosphate is an inorganic compound with the chemical formula Ca(H2PO4)2 ("AMCP" or "CMP-A" for anhydrous monocalcium phosphate). It is commonly found as the monohydrate ("MCP" or "MCP-M"), Ca(H2PO4)2·H2O. Both salts are colourless solids. They are used mainly as superphosphate fertilizers and are also popular leavening agents.
Preparation
Material of relatively high purity, as required for baking, is produced by treating calcium hydroxide with phosphoric acid:
Ca(OH)2 + 2 H3PO4 → Ca(H2PO4)2 + 2 H2O
Samples of Ca(H2PO4)2 tend to convert to dicalcium phosphate:
Ca(H2PO4)2 → CaHPO4 + H3PO4
Applications
Use in fertilizers
Superphosphate fertilizers are produced by treatment of "phosphate rock" with acids ("acidulation"). Using phosphoric acid, fluorapatite is converted to Ca(H2PO4)2:
Ca5(PO4)3F + 7 H3PO4 → 5 Ca(H2PO4)2 + HF
This solid is called triple superphosphate. Several million tons are produced annually for use as fertilizers.
Using sulfuric acid, fluorapatite is converted to a mixture of Ca(H2PO4)2 and CaSO4:
2 Ca5(PO4)3F + 7 H2SO4 → 3 Ca(H2PO4)2 + 7 CaSO4 + 2 HF
This solid is called single superphosphate.
Residual HF typically reacts with silicate minerals co-mingled with the phosphate ores to produce hexafluorosilicic acid (H2SiF6). The majority of the hexafluorosilicic acid is converted to aluminium fluoride and cryolite for the processing of aluminium. These materials are central to the conversion of aluminium ore into aluminium metal.
When sulfuric acid is used, the product contains phosphogypsum (CaSO4·2H2O) and is called single superphosphate.
Use as leavening agent
Calcium dihydrogen phosphate is used in the food industry as a leavening agent, i.e., to cause baked goods to rise. Because it is acidic, when combined with an alkali ingredient, commonly sodium bicarbonate (baking soda) or potassium bicarbonate, it reacts to produce carbon dioxide and a salt. Outward pressure of the carbon dioxide gas causes the rising effect. When combined in a ready-made baking powder, the acid and alkali ingredients are included in the right proportions such that they will exactly neutralize each other and
|
https://en.wikipedia.org/wiki/Shortest%20common%20supersequence
|
In computer science, the shortest common supersequence of two sequences X and Y is the shortest sequence which has X and Y as subsequences. This is a problem closely related to the longest common subsequence problem. Given two sequences X = < x1,...,xm > and Y = < y1,...,yn >, a sequence U = < u1,...,uk > is a common supersequence of X and Y if items can be removed from U to produce X and Y.
A shortest common supersequence (SCS) is a common supersequence of minimal length. In the shortest common supersequence problem, two sequences X and Y are given, and the task is to find a shortest possible common supersequence of these sequences. In general, an SCS is not unique.
For two input sequences, an SCS can be formed from a longest common subsequence (LCS) easily. For example, if Z is a longest common subsequence of X and Y, then by inserting the symbols of X and Y that are not in Z into Z while preserving their original order, we obtain a shortest common supersequence U. In particular, the equation |U| = |X| + |Y| − |Z| holds for any two input sequences.
There is no similar relationship between shortest common supersequences and longest common subsequences of three or more input sequences. (In particular, LCS and SCS are not dual problems.) However, both problems can be solved in O(n^k) time using dynamic programming, where k is the number of sequences and n is their maximum length. For the general case of an arbitrary number of input sequences, the problem is NP-hard.
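As an illustration, a minimal dynamic-programming sketch in C (illustrative only; it uses a fixed-size table and assumes input strings shorter than MAX) computes the SCS length of two sequences:

#include <stdio.h>
#include <string.h>

#define MAX 64

/* dp[i][j] = length of a shortest common supersequence of the
 * prefixes X[0..i) and Y[0..j). */
int scs_length(const char *x, const char *y) {
    int m = (int)strlen(x), n = (int)strlen(y);
    int dp[MAX + 1][MAX + 1];
    for (int i = 0; i <= m; i++) dp[i][0] = i;  /* Y empty: copy all of X */
    for (int j = 0; j <= n; j++) dp[0][j] = j;  /* X empty: copy all of Y */
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (x[i - 1] == y[j - 1])
                dp[i][j] = dp[i - 1][j - 1] + 1;  /* shared symbol used once */
            else if (dp[i - 1][j] < dp[i][j - 1])
                dp[i][j] = dp[i - 1][j] + 1;
            else
                dp[i][j] = dp[i][j - 1] + 1;
        }
    }
    return dp[m][n];
}

int main(void) {
    /* |U| = |X| + |Y| - |Z|: LCS("abcbdab", "bdcaba") has length 4,
     * so the SCS length is 7 + 6 - 4 = 9. */
    printf("%d\n", scs_length("abcbdab", "bdcaba"));  /* prints 9 */
    return 0;
}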
Shortest common superstring
The closely related problem of finding a minimum-length string which is a superstring of a finite set of strings S = {s1, s2, ..., sn} is also NP-hard. Several constant factor approximations have been proposed throughout the years, and the current best known algorithm has an approximation factor of 2.475. However, perhaps the simplest solution is to reformulate the problem as an instance of weighted set cover in such a way that the weight of the optimal solution to the set cover instance is less than twice the length of t
|
https://en.wikipedia.org/wiki/Tainter%20gate
|
The Tainter gate is a type of radial arm floodgate used in dams and canal locks to control water flow. It is named for its inventor, Wisconsin structural engineer Jeremiah Burnham Tainter.
Tainter, an employee of lumber firm Knapp, Stout and Co., invented the gate in 1886 for use on the company's dam that forms Lake Menomin in the United States.
Description
A side view of a Tainter gate resembles a slice of pie with the curved part of the piece facing the source or upper pool of water and the tip pointing toward the destination or lower pool. The curved face or skinplate of the gate takes the form of a wedge section of cylinder. The straight sides of the pie shape, the trunnion arms, extend back from each end of the cylinder section and meet at a trunnion which serves as a pivot point when the gate rotates.
Principle
Pressure forces on a submerged body act perpendicular to the body's surface. The design of the Tainter gate results in every pressure force acting through the centre of the imaginary circle of which the gate is a section, so that all resulting pressure force acts through the pivot point of the gate, making construction and design easier.
When a Tainter gate is closed, water bears on the convex (upstream) side. When the gate is rotated, the rush of water passing under the gate helps to open and close the gate. The rounded face, long radial arms and bearings allow it to close with less effort than a flat gate. Tainter gates are usually controlled from above with a chain/gearbox/electric motor assembly.
A critical factor in Tainter gate design is the amount of stress transferred from the skinplate through the radial arms and to the trunnion, with calculations pertaining to the resulting friction encountered when raising or lowering the gate. Some older systems have had to be modified to allow for frictional forces which the original design did not anticipate. In 1995, too much stress during an opening resulted in a gate failure at Folsom Dam in nort
|
https://en.wikipedia.org/wiki/Rogue%20access%20point
|
A rogue access point is a wireless access point that has been installed on a secure network without explicit authorization from a local network administrator, whether added by a well-meaning employee or by a malicious attacker.
Dangers
Although it is technically easy for a well-meaning employee to install a "soft access point" or an inexpensive wireless router—perhaps to make access from mobile devices easier—it is likely that they will configure this as "open", or with poor security, and potentially allow access to unauthorized parties.
If an attacker installs an access point they are able to run various types of vulnerability scanners, and rather than having to be physically inside the organization, can attack remotely—perhaps from a reception area, adjacent building, car park, or with a high gain antenna, even from several miles away.
Prevention and detection
To prevent the installation of rogue access points, organizations can install wireless intrusion prevention systems to monitor the radio spectrum for unauthorized access points.
Presence of a large number of wireless access points can be sensed in airspace of a typical enterprise facility. These include managed access points in the secure network plus access points in the neighborhood. A wireless intrusion prevention system facilitates the job of auditing these access points on a continuous basis to learn whether there are any rogue access points among them.
In order to detect rogue access points, two conditions need to be tested:
whether or not the access point is in the managed access point list
whether or not it is connected to the secure network
The first of the above two conditions is easy to test—compare the wireless MAC address (also called the BSSID) of the access point against the managed access point BSSID list. However, automated testing of the second condition can become challenging in light of the following factors: a) Need to cover different types of access point devices such as bridging, N
|
https://en.wikipedia.org/wiki/Bed%20of%20nails%20tester
|
A bed of nails tester is a traditional electronic test fixture used for in-circuit testing. It has numerous pins inserted into holes in an epoxy phenolic glass cloth laminated sheet (G-10) which are aligned using tooling pins to make contact with test points on a printed circuit board and are also connected to a measuring unit by wires. Named by analogy with a real-world bed of nails, these devices contain an array of small, spring-loaded pogo pins; each pogo pin makes contact with one node in the circuitry of the DUT (device under test). By pressing the DUT down against the bed of nails, reliable contact can be quickly and simultaneously made with hundreds or even thousands of individual test points within the circuitry of the DUT. The hold-down force may be provided manually or by means of a vacuum or a mechanical presser, thus pulling the DUT downwards onto the nails.
Devices that have been tested on a bed of nails tester may show evidence of this after the process: small dimples (from the sharp tips of the Pogo pins) can often be seen on many of the soldered connections of the PCB.
Bed of nails fixtures require a mechanical assembly to hold the PCB in place. Fixtures can hold the PCB with either a vacuum or pressing down from the top of the PCB. Vacuum fixtures give better signal reading versus the press-down type. On the other hand, vacuum fixtures are expensive because of their high manufacturing complexity. Moreover, vacuum fixtures cannot be used on bed-of-nails systems that are used in automated production lines, where the board is automatically loaded to the tester by a handling mechanism.
The bed of nails or fixture, as generally termed, is used together with an in-circuit tester. Fixtures with a grid of 0.8 mm for small nails and test point diameter 0.6 mm are theoretically possible without using special constructions. But in mass production, test point diameters of 1.0 mm or higher are normally used to minimise contact failures, leading to lower rema
|
https://en.wikipedia.org/wiki/Bridged%20and%20paralleled%20amplifiers
|
Multiple electronic amplifiers can be connected such that they drive a single floating load (bridge) or a single common load (parallel), to increase the amount of power available in different situations. This is commonly encountered in audio applications.
Overview
Bridged or paralleled modes of working, normally involving audio power amplifiers, are methods of using two or more identical amplifiers to drive the same load simultaneously. This is possible for sets of mono, stereo and multichannel amplifiers, since the amplifier outputs are combined on a per-load basis. Depending on the method of combining separate amplifiers, bridging or paralleling, different amplification goals can be served. The result is an amplifier that can be further combined with bridging or paralleling. This approach can be beneficial for driving loads for which using a single-ended amplifier is impossible, impractical or less cost-effective.
Bridged amplifier
A bridge-tied load (BTL), also known as bridged transformerless and bridged mono, is an output configuration for audio amplifiers, a form of impedance bridging used mainly in professional audio & car applications. The two channels of a stereo amplifier are fed the same monaural audio signal, with one channel's electrical polarity reversed. A loudspeaker is connected between the two amplifier outputs, bridging the output terminals. This doubles the available voltage swing at the load compared with the same amplifier used without bridging. The configuration is most often used for subwoofers.
For a given output voltage swing, the lower the impedance, the higher the amplifier load. Bridging is used to allow an amplifier to drive low-impedance loads at higher power, because power is inversely proportional to impedance and proportional to the square of voltage, according to the equation P = V²/R. This equation also shows that bridging quadruples the theoretical power available from an amplifier; however, this is true only for low enough loads (i.e., high enough load impedances). For example, for load
|
https://en.wikipedia.org/wiki/SciELO
|
SciELO (Scientific Electronic Library Online) is a bibliographic database, digital library, and cooperative electronic publishing model of open access journals. SciELO was created to meet the scientific communication needs of developing countries and provides an efficient way to increase visibility and access to scientific literature. Originally established in Brazil in 1997, today there are 16 countries in the SciELO network and its journal collections: Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Ecuador, Mexico, Paraguay, Peru, Portugal, South Africa, Spain, Uruguay, and Venezuela.
SciELO was initially supported by the São Paulo Research Foundation (FAPESP) and the Brazilian National Council for Scientific and Technological Development (CNPq), along with the Latin American and Caribbean Center on Health Sciences Information (BIREME). SciELO provides a portal that integrates and provides access to all of the SciELO network sites. Users can search across all SciELO collections or limit the search by a single country collection, or browse by subject area, publisher, or journal title.
Database and projects
By October 2015 the database contained:
1,249 journals
39,651 issues (journal numbers)
573,525 research articles
13,005,080 citations (sum of the number of items in each article's reference list)
from different countries, all freely accessible in full-text format under open access. The SciELO Project's stated aims are to "envisage the development of a common methodology for the preparation, storage, dissemination and evaluation of scientific literature in electronic format". All journals are published by a special software suite which implements a scientific electronic virtual library, accessed via several mechanisms, including lists of titles in alphabetic and subject order, subject and author indexes, and a search engine.
History
Project's launch timeline:
1997: Beginning of the development of SciELO as a FAPESP supported projec
|
https://en.wikipedia.org/wiki/Dynamic-link%20library
|
Dynamic-link library (DLL) is Microsoft's implementation of the shared library concept in the Microsoft Windows and OS/2 operating systems. These libraries usually have the file extension DLL, OCX (for libraries containing ActiveX controls), or DRV (for legacy system drivers).
The file formats for DLLs are the same as for Windows EXE files – that is, Portable Executable (PE) for 32-bit and 64-bit Windows, and New Executable (NE) for 16-bit Windows. As with EXEs, DLLs can contain code, data, and resources, in any combination.
Data files with the same file format as a DLL, but with different file extensions and possibly containing only resource sections, can be called resource DLLs. Examples of such DLLs include icon libraries, sometimes having the extension ICL, and font files, having the extensions FON and FOT.
Background
The first versions of Microsoft Windows ran programs together in a single address space. Every program was meant to co-operate by yielding the CPU to other programs so that the graphical user interface (GUI) could multitask and be maximally responsive. All operating-system level operations were provided by the underlying operating system: MS-DOS. All higher-level services were provided by Windows libraries, the original dynamic-link libraries. The drawing API, Graphics Device Interface (GDI), was implemented in a DLL called GDI.EXE, the user interface in USER.EXE. These extra layers on top of DOS had to be shared across all running Windows programs, not just to enable Windows to work in a machine with less than a megabyte of RAM, but to enable the programs to co-operate with each other. The code in GDI needed to translate drawing commands to operations on specific devices. On the display, it had to manipulate pixels in the frame buffer. When drawing to a printer, the API calls had to be transformed into requests to a printer. Although it could have been possible to provide hard-coded support for a limited set of devices (like the Color Graphics Adapter display
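As an illustration of run-time dynamic linking, a DLL can be loaded and one of its exported functions called from any language with a foreign-function interface; a minimal Python sketch using the standard ctypes module (assumes a Windows system; GetTickCount is a documented kernel32 export):

import ctypes

# Load a system DLL at run time (Windows only).
kernel32 = ctypes.WinDLL("kernel32")

# Resolve an exported function and call it: milliseconds since boot.
kernel32.GetTickCount.restype = ctypes.c_uint32
print(kernel32.GetTickCount())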
|
https://en.wikipedia.org/wiki/Blook
|
A blook is a printed book that contains or is based on content from a blog.
The first printed blook was User Interface Design for Programmers, by Joel Spolsky, published by Apress on June 26, 2001, based on his blog Joel on Software. An early blook was written by Tony Pierce in 2002 when he compiled selected posts from his one-year-old blog and turned the collection into a book called "Blook". The name came about when Pierce held a contest, asking his readers to suggest a title for the book. Jeff Jarvis of BuzzMachine won the contest and subsequently invented the term. Pierce went on to publish two other blooks, How To Blog and Stiff.
Print-on-demand publisher Lulu inaugurated the Lulu Blooker Prize for blooks, which was first awarded in 2006. The printed blook phenomenon is not limited to self-publishing. Several popular bloggers have signed book deals with major publishers to write books based on their blogs. However, some publishers are starting to realize that blog popularity does not translate to sales. Blog to book conversions via traditional publishing houses still happen, but the focus has shifted from blog popularity to content quality.
"Blook" was short-listed in 2006 for inclusion in the Oxford English Dictionary and was a runner-up for Word of the Year.
See also
Digital library
List of digital library projects
Dynabook
Elibrary
Expanded Books
Networked book
Webserial
OpenReader Consortium
Project Gutenberg
References
Blogging
Books by type
Documents
Paper products
Web fiction
|
https://en.wikipedia.org/wiki/ISO%208583
|
ISO 8583 is an international standard for financial transaction card originated interchange messaging. It is the International Organization for Standardization standard for systems that exchange electronic transactions initiated by cardholders using payment cards.
ISO 8583 defines a message format and a communication flow so that different systems can exchange these transaction requests and responses. The vast majority of transactions made when a customer uses a card to make a payment in a store (EFTPOS) use ISO 8583 at some point in the communication chain, as do transactions made at ATMs. In particular, the Mastercard, Visa and Verve networks base their authorization communications on the ISO 8583 standard, as do many other institutions and networks.
Although ISO 8583 defines a common standard, it is not typically used directly by systems or networks. It defines many standard fields (data elements) which remain the same in all systems or networks, and leaves a few additional fields for passing network-specific details. These fields are used by each network to adapt the standard for its own use with custom fields and custom usages like Proximity Cards.
Introduction
The ISO 8583 specification has three parts:
Part 1: Messages, data elements, and code values
Part 2: Application and registration procedures for Institution Identification Codes (IIC)
Part 3: Maintenance procedures for the aforementioned messages, data elements and code values
Message format
A card-based transaction typically travels from a transaction-acquiring device, such as a point-of-sale terminal (POS) or an automated teller machine (ATM), through a series of networks, to a card issuing system for authorization against the card holder's account. The transaction data contains information derived from the card (e.g., the card number or card holder details), the terminal (e.g., the terminal number, the merchant number), the transaction (e.g., the amount), together with other data which ma
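By way of illustration, an ISO 8583 message begins with a four-digit message type indicator (MTI) followed by one or more bitmaps announcing which data elements are present. A minimal parsing sketch for the common ASCII-hex bitmap convention (the sample message is hypothetical; real deployments differ in encodings):

def parse_header(msg: str):
    # Return (MTI, set of present data element numbers).
    mti = msg[:4]
    bitmap = int(msg[4:20], 16)          # primary bitmap: 64 bits as 16 hex chars
    fields = {i for i in range(1, 65) if bitmap >> (64 - i) & 1}
    if 1 in fields:                      # bit 1 announces a secondary bitmap
        secondary = int(msg[20:36], 16)
        fields |= {64 + i for i in range(1, 65) if secondary >> (64 - i) & 1}
    return mti, fields

# Hypothetical financial request (MTI 0200) with a primary bitmap only;
# the data elements themselves are omitted here.
mti, fields = parse_header("0200" + "7230001006C08000")
print(mti, sorted(fields))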
|
https://en.wikipedia.org/wiki/Salsa20
|
Salsa20 and the closely related ChaCha are stream ciphers developed by Daniel J. Bernstein. Salsa20, the original cipher, was designed in 2005, then later submitted to the eSTREAM European Union cryptographic validation process by Bernstein. ChaCha is a modification of Salsa20 published in 2008. It uses a new round function that increases diffusion and increases performance on some architectures.
Both ciphers are built on a pseudorandom function based on add-rotate-XOR (ARX) operations — 32-bit addition, bitwise exclusive or (XOR) and rotation operations. The core function maps a 256-bit key, a 64-bit nonce, and a 64-bit counter to a 512-bit block of the key stream (a Salsa version with a 128-bit key also exists). This gives Salsa20 and ChaCha the unusual advantage that the user can efficiently seek to any position in the key stream in constant time. Salsa20 offers speeds of around 4–14 cycles per byte in software on modern x86 processors, and reasonable hardware performance. It is not patented, and Bernstein has written several public domain implementations optimized for common architectures.
Structure
Internally, the cipher uses bitwise addition ⊕ (exclusive OR), 32-bit addition mod 2³² ⊞, and constant-distance rotation operations <<< on an internal state of sixteen 32-bit words. Using only add-rotate-xor operations avoids the possibility of timing attacks in software implementations. The internal state is made of sixteen 32-bit words arranged as a 4×4 matrix.
The initial state is made up of eight words of key, two words of stream position, two words of nonce (essentially additional stream position bits), and four fixed words:
The constant words spell "expand 32-byte k" in ASCII (i.e. the 4 words are "expa", "nd 3", "2-by", and "te k"). This is an example of a nothing-up-my-sleeve number. The core operation in Salsa20 is the quarter-round QR(a, b, c, d) that takes a four-word input and produces a four-word output:
b ^= (a + d) <<< 7;
c ^= (b + a) <<< 9;
d ^= (c + b) <<< 13;
a ^= (d + c) <<< 18;
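A minimal executable sketch of the quarter-round in Python (the explicit masking emulates the 32-bit wraparound that is implicit above; <<< denotes left rotation):

MASK = 0xFFFFFFFF

def rotl32(x, n):
    # Rotate a 32-bit word left by n bits.
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    b ^= rotl32((a + d) & MASK, 7)
    c ^= rotl32((b + a) & MASK, 9)
    d ^= rotl32((c + b) & MASK, 13)
    a ^= rotl32((d + c) & MASK, 18)
    return a, b, c, d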
|
https://en.wikipedia.org/wiki/Composition%20operator
|
In mathematics, the composition operator C_φ with symbol φ is a linear operator defined by the rule
C_φ(f) = f ∘ φ
where f ∘ φ denotes function composition.
The study of composition operators is covered by AMS category 47B33.
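A concrete way to read the definition, as a minimal Python sketch (the names are illustrative):

def composition_operator(phi):
    # C_phi maps a function f to f ∘ phi.
    def C_phi(f):
        return lambda x: f(phi(x))
    return C_phi

# Example: symbol phi(x) = x + 1 acting on f(x) = x**2.
C = composition_operator(lambda x: x + 1)
g = C(lambda x: x ** 2)
print(g(2))  # f(phi(2)) = (2 + 1)**2 = 9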
In physics
In physics, and especially the area of dynamical systems, the composition operator is usually referred to as the Koopman operator (and its wild surge in popularity is sometimes jokingly called "Koopmania"), named after Bernard Koopman. It is the left-adjoint of the Frobenius–Perron transfer operator.
In Borel functional calculus
Using the language of category theory, the composition operator is a pull-back on the space of measurable functions; it is adjoint to the transfer operator in the same way that the pull-back is adjoint to the push-forward; the composition operator is the inverse image functor.
Since the domain considered here is that of Borel functions, the above describes the Koopman operator as it appears in Borel functional calculus.
In holomorphic functional calculus
The domain of a composition operator can be taken more narrowly, as some Banach space, often consisting of holomorphic functions: for example, some Hardy space or Bergman space. In this case, the composition operator lies in the realm of some functional calculus, such as the holomorphic functional calculus.
Interesting questions posed in the study of composition operators often relate to how the spectral properties of the operator depend on the function space. Other questions include whether C_φ is compact or trace-class; answers typically depend on how the function φ behaves on the boundary of some domain.
When the transfer operator is a left-shift operator, the Koopman operator, as its adjoint, can be taken to be the right-shift operator. An appropriate basis, explicitly manifesting the shift, can often be found in the orthogonal polynomials. When these are orthogonal on the real number line, the shift is given by the Jacobi operator. When the polynomials are orthogonal o
|
https://en.wikipedia.org/wiki/Network%20bridge
|
A network bridge is a computer networking device that creates a single, aggregate network from multiple communication networks or network segments. This function is called network bridging. Bridging is distinct from routing. Routing allows multiple networks to communicate independently and yet remain separate, whereas bridging connects two separate networks as if they were a single network. In the OSI model, bridging is performed in the data link layer (layer 2). If one or more segments of the bridged network are wireless, the device is known as a wireless bridge.
The main types of network bridging technologies are simple bridging, multiport bridging, and learning or transparent bridging.
Transparent bridging
Transparent bridging uses a table called the forwarding information base to control the forwarding of frames between network segments. The table starts empty and entries are added as the bridge receives frames. If a destination address entry is not found in the table, the frame is flooded to all other ports of the bridge, i.e., to all segments except the one from which it was received. By means of these flooded frames, a host on the destination network will respond and a forwarding database entry will be created. Both source and destination addresses are used in this process: source addresses are recorded in entries in the table, while destination addresses are looked up in the table and matched to the proper segment to send the frame to. Digital Equipment Corporation (DEC) originally developed the technology in the 1980s.
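The learning-and-flooding behaviour can be made concrete with a small sketch (Python; a deliberately simplified model in which frames are reduced to source and destination addresses):

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports   # e.g. [1, 2, 3]
        self.fib = {}        # forwarding information base: address -> port

    def receive(self, src, dst, in_port):
        self.fib[src] = in_port              # learn the source address
        out = self.fib.get(dst)
        if out is None:
            # Unknown destination: flood to every port except the ingress.
            return [p for p in self.ports if p != in_port]
        if out == in_port:
            return []                        # same segment: filter the frame
        return [out]                         # known destination: forward

bridge = LearningBridge([1, 2, 3])
print(bridge.receive("aa", "bb", 1))   # [2, 3]: destination unknown, flood
print(bridge.receive("bb", "aa", 2))   # [1]: "aa" was learned on port 1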
In the context of a two-port bridge, the forwarding information base can be seen as a filtering database. A bridge reads a frame's destination address and decides to either forward or filter. If the bridge determines that the destination host is on another segment on the network, it forwards the frame to that segment. If the destination address belongs to the same segment as the source address, the bridge filters the frame, p
|
https://en.wikipedia.org/wiki/Index%20of%20software%20engineering%20articles
|
This is an alphabetical list of articles pertaining specifically to software engineering.
0–9
2D computer graphics —
3D computer graphics
A
Abstract syntax tree —
Abstraction —
Accounting software —
Ada —
Addressing mode —
Agile software development —
Algorithm —
Anti-pattern —
Application framework —
Application software —
Artificial intelligence —
Artificial neural network —
ASCII —
Aspect-oriented programming —
Assembler —
Assembly language —
Assertion —
Automata theory —
Automotive software —
Avionics software
B
Backward compatibility —
BASIC —
BCPL —
Berkeley Software Distribution —
Beta test —
Boolean logic —
Business software
C
C —
C++ —
C# —
CAD —
Canonical model —
Capability Maturity Model —
Capability Maturity Model Integration —
COBOL —
Code coverage —
Cohesion —
Compilers —
Complexity —
Computation —
Computational complexity theory —
Computer —
Computer-aided design —
Computer-aided manufacturing —
Computer architecture —
Computer bug —
Computer file —
Computer graphics —
Computer model —
Computer multitasking —
Computer programming —
Computer science —
Computer software —
Computer term etymologies —
Concurrent programming —
Configuration management —
Coupling —
Cyclomatic complexity
D
Data structure —
Data-structured language —
Database —
Dead code —
Decision table —
Declarative programming —
Design pattern —
Development stage —
Device driver —
Disassembler —
Disk image —
Domain-specific language
E
EEPROM —
Electronic design automation —
Embedded system —
Engineering —
Engineering model —
EPROM —
Even-odd rule —
Expert system —
Extreme programming
F
FIFO (computing and electronics) —
File system —
Filename extension —
Finite-state machine —
Firmware —
Formal methods —
Forth —
Fortran —
Forward compatibility —
Functional decomposition —
Functional design —
Functional programming
G
Game development —
Game programming —
Game tester —
GIMP Toolkit —
Graphical user interface
H
Hierarchical database —
High-level language —
Hoare logic —
Human–compute
|
https://en.wikipedia.org/wiki/Friedrich%20L.%20Bauer
|
Friedrich Ludwig "Fritz" Bauer (10 June 1924 – 26 March 2015) was a German pioneer of computer science and professor at the Technical University of Munich. He coined the term Software engineering
Life
Bauer earned his Abitur in 1942 and served in the Wehrmacht during World War II, from 1943 to 1945. From 1946 to 1950, he studied mathematics and theoretical physics at Ludwig-Maximilians-Universität in Munich. Bauer received his Doctor of Philosophy (Ph.D.) under the supervision of Fritz Bopp for his thesis Gruppentheoretische Untersuchungen zur Theorie der Spinwellengleichungen ("Group-theoretic investigations of the theory of spin wave equations") in 1952. He completed his habilitation thesis Über quadratisch konvergente Iterationsverfahren zur Lösung von algebraischen Gleichungen und Eigenwertproblemen ("On quadratically convergent iteration methods for solving algebraic equations and eigenvalue problems") in 1954 at the Technical University of Munich. After teaching as a privatdozent at the Ludwig Maximilian University of Munich from 1954 to 1958, he became an extraordinary professor of applied mathematics at the University of Mainz. From 1963, he worked as a professor of mathematics and (from 1972) computer science at the Technical University of Munich. He retired in 1989.
Work
Bauer's early work involved constructing computing machinery (e.g. the logical relay computer STANISLAUS from 1951–1955). In this context, he was the first to propose the widely used stack method of expression evaluation.
Bauer was a member of the committees that developed the imperative computer programming languages ALGOL 58, and its successor ALGOL 60, important predecessors to all modern imperative programming languages. For ALGOL 58, Bauer was with the German Gesellschaft für Angewandte Mathematik und Mechanik (GAMM, Society of Applied Mathematics and Mechanics) which worked with the American Association for Computing Machinery (ACM). For ALGOL 60, Bauer was with the Internation
|
https://en.wikipedia.org/wiki/Shikaku
|
Shikaku (also anglicised as Divide by Squares or Divide by Box) is a logic puzzle published by Nikoli. As of 2011, two books consisting entirely of Shikaku puzzles have been published by Nikoli.
Rules
Shikaku is played on a rectangular grid. Some of the squares in the grid are numbered. The objective is to divide the grid into rectangular and square pieces such that each piece contains exactly one number, and that number represents the area of the rectangle.
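The core move-generation step of a solver, enumerating the legal rectangles for one clue, can be sketched as follows (Python; the grid representation and names are illustrative, with 0 marking an empty cell):

def candidate_rects(grid, r, c, n):
    # All h x w rectangles of area n that contain clue (r, c) and no other clue.
    rows, cols = len(grid), len(grid[0])
    clues = {(i, j) for i in range(rows) for j in range(cols) if grid[i][j]}
    out = []
    for h in range(1, n + 1):
        if n % h:
            continue
        w = n // h
        for top in range(max(0, r - h + 1), r + 1):
            for left in range(max(0, c - w + 1), c + 1):
                if top + h > rows or left + w > cols:
                    continue
                cells = {(i, j) for i in range(top, top + h)
                                for j in range(left, left + w)}
                if cells & clues == {(r, c)}:   # exactly one clue inside
                    out.append((top, left, h, w))
    return out

grid = [[2, 0],
        [0, 2]]
print(candidate_rects(grid, 0, 0, 2))   # [(0, 0, 1, 2), (0, 0, 2, 1)]

A full solver then searches for a combination of one candidate rectangle per clue that tiles the grid exactly.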
See also
List of Nikoli puzzle types
External links
Nikoli's English-language page on Shikaku
Logic puzzles
|
https://en.wikipedia.org/wiki/Projection%20%28relational%20algebra%29
|
In relational algebra, a projection is a unary operation written as Π_{a1, ..., an}(R), where R is a relation and a1, ..., an are attribute names. Its result is defined as the set obtained when the components of the tuples in R are restricted to the set {a1, ..., an} – it discards (or excludes) the other attributes.
In practical terms, if a relation is thought of as a table, then projection can be thought of as picking a subset of its columns. For example, if the attributes are (name, age), then projection of the relation {(Alice, 5), (Bob, 8)} onto attribute list (age) yields {5,8} – we have discarded the names, and only know what ages are present.
Projections may also modify attribute values. For example, if R has attributes a, b, c, where the values of b are numbers, then Π_{a, b/2, c}(R) is like R, but with all b-values halved.
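A minimal sketch of projection in Python, modelling a relation as a list of attribute-to-value mappings (the names are illustrative):

def project(relation, attrs):
    # Π_attrs(R): restrict each tuple to attrs; the result is a set.
    return {tuple((a, row[a]) for a in attrs) for row in relation}

people = [{"name": "Alice", "age": 5}, {"name": "Bob", "age": 8}]
print(project(people, ["age"]))   # {(("age", 5),), (("age", 8),)}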
Related concepts
The closely related concept in set theory (see: projection (set theory)) differs from that of relational algebra in that, in set theory, one projects onto ordered components, not onto attributes. For instance, projecting the ordered pair (3, 7) onto the second component yields 7.
Projection is relational algebra's counterpart of existential quantification in predicate logic. The attributes not included correspond to existentially quantified variables in the predicate whose extension the operand relation represents. The example below illustrates this point.
Because of the correspondence with existential quantification, some authorities prefer to define projection in terms of the excluded attributes. In a computer language it is of course possible to provide notations for both, and that was done in ISBL and several languages that have taken their cue from ISBL.
A nearly identical concept occurs in the category of monoids, called a string projection, which consists of removing all of the letters in the string that do not belong to a given alphabet.
When implemented in the SQL standard, the "default projection" returns a multiset instead of a set, and the set projection of relational algebra is obtained by the addition of the DISTINCT keyword.
|
https://en.wikipedia.org/wiki/Topological%20tensor%20product
|
In mathematics, there are usually many different ways to construct a topological tensor product of two topological vector spaces. For Hilbert spaces or nuclear spaces there is a simple well-behaved theory of tensor products (see Tensor product of Hilbert spaces), but for general Banach spaces or locally convex topological vector spaces the theory is notoriously subtle.
Motivation
One of the original motivations for topological tensor products is the fact that tensor products of the spaces of smooth functions on ℝ^n do not behave as expected. There is an injection
C^∞(ℝ^n) ⊗ C^∞(ℝ^m) → C^∞(ℝ^(n+m))
but this is not an isomorphism. For example, the function f(x, y) = e^(xy) cannot be expressed as a finite linear combination of products g(x)h(y) of smooth functions. We only get an isomorphism after constructing the topological tensor product; i.e.,
C^∞(ℝ^n) ⊗̂ C^∞(ℝ^m) ≅ C^∞(ℝ^(n+m))
This article first details the construction in the Banach space case. C^∞(ℝ^n) is not a Banach space and further cases are discussed at the end.
Tensor products of Hilbert spaces
The algebraic tensor product of two Hilbert spaces A and B has a natural positive definite sesquilinear form (scalar product) induced by the sesquilinear forms of A and B. So in particular it has a natural positive definite quadratic form, and the corresponding completion is a Hilbert space A ⊗ B, called the (Hilbert space) tensor product of A and B.
If the vectors a_i and b_j run through orthonormal bases of A and B, then the vectors a_i ⊗ b_j form an orthonormal basis of A ⊗ B.
Cross norms and tensor products of Banach spaces
We shall use the following notation in this section. The obvious way to define the tensor product of two Banach spaces A and B is to copy the method for Hilbert spaces: define a norm on the algebraic tensor product, then take the completion in this norm. The problem is that there is more than one natural way to define a norm on the tensor product.
If A and B are Banach spaces, the algebraic tensor product of A and B means the tensor product of A and B as vector spaces and is denoted by A ⊗ B. The algebraic tensor prod
|
https://en.wikipedia.org/wiki/Selection%20%28relational%20algebra%29
|
In relational algebra, a selection (sometimes called a restriction in reference to E.F. Codd's 1970 paper and not, contrary to a popular belief, to avoid confusion with SQL's use of SELECT, since Codd's article predates the existence of SQL) is a unary operation that denotes a subset of a relation.
A selection is written as
σ_{a θ b}(R) or σ_{a θ v}(R) where:
a and b are attribute names
θ is a binary operation in the set {<, ≤, =, ≠, ≥, >}
v is a value constant
R is a relation
The selection σ_{a θ b}(R) denotes all tuples in R for which θ holds between the a attribute and the b attribute.
The selection σ_{a θ v}(R) denotes all tuples in R for which θ holds between the a attribute and the value v.
For an example, consider the following tables, where the first table gives the relation R, the second table gives the result of a selection of the form σ_{a θ b}(R), and the third table gives the result of a selection of the form σ_{a θ v}(R).
More formally, the semantics of the selection is defined as follows:
σ_{a θ b}(R) = { t : t ∈ R, t(a) θ t(b) }
σ_{a θ v}(R) = { t : t ∈ R, t(a) θ v }
The result of the selection is only defined if the attribute names that it mentions are in the heading of the relation that it operates upon.
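Both forms of selection are straightforward to model in code; a minimal Python sketch with relations as lists of attribute mappings (the names are illustrative):

import operator

def select(relation, a, theta, b=None, v=None):
    # σ_{a θ b}(R) when attribute b is given, σ_{a θ v}(R) for a constant v.
    if b is not None:
        return [t for t in relation if theta(t[a], t[b])]
    return [t for t in relation if theta(t[a], v)]

R = [{"age": 34, "weight": 80}, {"age": 28, "weight": 28}]
print(select(R, "age", operator.ge, v=30))        # σ_{age ≥ 30}(R)
print(select(R, "age", operator.eq, b="weight"))  # σ_{age = weight}(R)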
Generalized selection
A generalized selection is a unary operation written as σ_φ(R) where φ is a propositional formula that consists of atoms as allowed in the normal selection and, in addition, the logical operators ∧ (and), ∨ (or) and ¬ (negation). This selection selects all those tuples in R for which φ holds.
For an example, consider the following tables, where the first table gives the relation R and the second the result of a generalized selection σ_φ(R).
Formally, the semantics of the generalized selection is defined as follows:
σ_φ(R) = { t : t ∈ R, φ(t) }
The result of the selection is only defined if the attribute names that it mentions are in the header of the relation that it operates upon.
The generalized selection is expressible with other basic algebraic operations. A simulation of generalized selection using the fundamental operators is defined by the following rules:
σ_{φ ∧ ψ}(R) = σ_φ(R) ∩ σ_ψ(R)
σ_{φ ∨ ψ}(R) = σ_φ(R) ∪ σ_ψ(R)
σ_{¬φ}(R) = R − σ_φ(R)
Computer languages
In computer languages it is expected that any truth-valued expression be permitted as the selection condition rather than restrictin
|
https://en.wikipedia.org/wiki/Culture24
|
Culture24, originally the 24 Hour Museum, is a British charity which publishes websites, Culture24, Museum Crush and Show Me, about visual culture and heritage in the United Kingdom, as well as supplying data and support services to other cultural websites including Engaging Places.
It operates independently, and receives government funding.
Organisation
Culture24 is based in Brighton, southern England, and has ten employees. The Culture24 Director is Jane Finnis, who contributed a chapter to Learning to Live: Museums, young people and education and in March 2010 was named as one of 50 "Women to Watch" in the United Kingdom cultural and creative sectors by the Cultural Leadership Programme. Past Culture24 chairmen include John Newbigin, who was named as one of Wired Magazine's top 100 people shaping the digital world in May 2010.
The charity was founded in 2001 as the 24 Hour Museum, when the website of the same name became an independent company.
The organisation changed its name to Culture24 in November 2007, and the website followed suit on 11 February 2009. Culture24 is a registered charity and is funded by the UK government through Arts Council England (ACE).
Purpose
The (now defunct) Museums, Libraries and Archives Council worked with Culture24 as one of its partners in furthering the council's digital agenda.
Culture24 also administered Museums at Night (UK) between 2010 and 2019, the annual weekend of late openings at museums, galleries and heritage sites.
Websites
The main Culture24 website is a guide to museums, public galleries, libraries, archives, heritage sites and science centres. It has a database of over 5,000 cultural institutions, which are able to update the information about their activities. It features daily arts, museum, history and heritage news, and exhibition reviews. News stories are available as an RSS newsfeed.
Culture24 also runs a site for children, Show Me, which has online activities related
|
https://en.wikipedia.org/wiki/Broadcasting%20%28networking%29
|
In computer networking, telecommunication and information theory, broadcasting is a method of transferring a message to all recipients simultaneously. Broadcasting can be performed as a high-level operation in a program, for example, broadcasting in Message Passing Interface, or it may be a low-level networking operation, for example broadcasting on Ethernet.
All-to-all communication is a computer communication method in which each sender transmits messages to all receivers within a group. In networking this can be accomplished using broadcast or multicast. This is in contrast with the point-to-point method in which each sender communicates with one receiver.
Addressing methods
There are four principal addressing methods in the Internet Protocol: unicast, broadcast, multicast, and anycast.
Overview
In computer networking, broadcasting refers to transmitting a packet that will be received by every device on the network. In practice, the scope of the broadcast is limited to a broadcast domain.
Broadcasting is the most general communication method and is also the most intensive, in the sense that many messages may be required and many network devices are involved. This is in contrast to unicast addressing in which a host sends datagrams to another single host, identified by a unique address.
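For instance, an IPv4 limited broadcast can be sent with ordinary datagram sockets; a minimal Python sketch (the port number is arbitrary, and the two halves are meant to run on different hosts in the same broadcast domain, receiver first):

import socket

# Receiver: run first, on any host in the broadcast domain.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("", 9999))                  # listen on all interfaces
# data, addr = recv.recvfrom(1024)     # blocks until a datagram arrives

# Sender: one datagram reaches every host on the local subnet.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
send.sendto(b"hello, everyone", ("255.255.255.255", 9999))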
All-to-all communication may be performed as all-scatter, in which each sender performs its own scatter with messages that are distinct for each receiver, or as all-broadcast, in which the messages are the same for every receiver.
The MPI message passing method which is the de facto standard on large computer clusters includes the MPI_Alltoall method.
Not all network technologies support broadcast addressing; for example, neither X.25 nor Frame Relay have broadcast capability. The Internet Protocol Version 4 (IPv4), which is the primary networking protocol in use today on the Internet and all networks connected to it, supports broadcast, but the broadcast domain is the broadcasting host's subnet, which is typically small; there is no way to do an Internet-
|
https://en.wikipedia.org/wiki/Air%20preheater
|
An air preheater is any device designed to heat air before another process (for example, combustion in a boiler) with the primary objective of increasing the thermal efficiency of the process. Air preheaters may be used alone or to replace a recuperative heat system or a steam coil.
In particular, this article describes the combustion air preheaters used in large boilers found in thermal power stations producing electric power from e.g. fossil fuels, biomass or waste. The Ljungström air preheater, for instance, has been credited with worldwide fuel savings estimated at 4,960,000,000 tons of oil: "few inventions have been as successful in saving fuel as the Ljungström Air Preheater", which was marked as the 44th International Historic Mechanical Engineering Landmark by the American Society of Mechanical Engineers.
The purpose of the air preheater is to recover the heat from the boiler flue gas which increases the thermal efficiency of the boiler by reducing the useful heat lost in the flue gas. As a consequence, the flue gases are also conveyed to the flue gas stack (or chimney) at a lower temperature, allowing simplified design of the conveyance system and the flue gas stack. It also allows control over the temperature of gases leaving the stack (to meet emissions regulations, for example). It is installed between the economizer and chimney.
Types
There are two types of air preheaters for use in steam generators in thermal power stations: One is a tubular type built into the boiler flue gas ducting, and the other is a regenerative air preheater. These may be arranged so the gas flows horizontally or vertically across the axis of rotation.
Another type of air preheater is the regenerator used in iron or glass manufacture.
Tubular type
Construction features
Tubular preheaters consist of straight tube bundles which pass through the outlet ducting of the boiler and open at each end outside of the ducting. Inside the ducting, the hot furnace gases pass around the preheater t
|
https://en.wikipedia.org/wiki/Electronic%20speed%20control
|
An electronic speed control (ESC) is an electronic circuit that controls and regulates the speed of an electric motor. It may also provide reversing of the motor and dynamic braking.
Miniature electronic speed controls are used in electrically powered radio controlled models. Full-size electric vehicles also have systems to control the speed of their drive motors.
Function
An electronic speed control follows a speed reference signal (derived from a throttle lever, joystick, or other manual input) and varies the switching rate of a network of field effect transistors (FETs). By adjusting the duty cycle or switching frequency of the transistors, the speed of the motor is changed. The rapid switching of the current flowing through the motor is what causes the motor itself to emit its characteristic high-pitched whine, especially noticeable at lower speeds.
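As a rough numeric illustration of duty-cycle control (a sketch under the first-order assumption that a brushed motor's no-load speed is proportional to its average applied voltage; all values are examples):

SUPPLY_V = 11.1   # example battery voltage
KV = 1000         # example motor velocity constant, rpm per volt

def no_load_rpm(duty):
    # duty in [0, 1]: the fraction of each PWM period the FETs conduct.
    avg_voltage = duty * SUPPLY_V
    return KV * avg_voltage

for duty in (0.25, 0.5, 1.0):
    print(f"{duty:.0%} duty -> {no_load_rpm(duty):.0f} rpm")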
Different types of speed controls are required for brushed DC motors and brushless DC motors. A brushed motor can have its speed controlled by varying the voltage on its armature. (Industrially, motors with electromagnet field windings instead of permanent magnets can also have their speed controlled by adjusting the strength of the motor field current.) A brushless motor requires a different operating principle. The speed of the motor is varied by adjusting the timing of pulses of current delivered to the several windings of the motor.
Brushless ESC systems basically create three-phase AC power, like a variable frequency drive, to run brushless motors. Brushless motors are popular with radio controlled airplane hobbyists because of their efficiency, power, longevity and light weight in comparison to traditional brushed motors. Brushless DC motor controllers are much more complicated than brushed motor controllers.
The correct phase of the current fed to the motor varies with the motor rotation, which is to be taken into account by the ESC: Usually, back EMF from the motor windings is used to detect this
|
https://en.wikipedia.org/wiki/System%20requirements%20specification
|
A System Requirements Specification (abbreviated SyRS or SysRS to be distinct from a software requirements specification, SRS) is a structured collection of information that embodies the requirements of a system.
A business analyst (BA), sometimes titled system analyst, is responsible for analyzing the business needs of their clients and stakeholders to help identify business problems and propose solutions. Within the systems development life cycle domain, the BA typically performs a liaison function between the business side of an enterprise and the information technology department or external service providers.
See also
Business analysis
Business process reengineering
Business requirements
Concept of operations
Data modeling
Information technology
Process modeling
Requirement
Requirements analysis
Software requirements specification
Systems analysis
Use case
References
External links
IEEE Guide for Developing System Requirements Specifications (IEEE Std 1233, 1999 Edition)
IEEE Guide for Developing System Requirements Specifications (IEEE Std 1233, 1998 Edition)
DAU description System/Subsystem Specification, Data Item Description (SSS-DID)
System Requirements Specification for STEWARDS example SRS at USDA
Software engineering
Systems analysis
Systems engineering
|
https://en.wikipedia.org/wiki/Two-graph
|
In mathematics, a two-graph is a set of (unordered) triples chosen from a finite vertex set X, such that every (unordered) quadruple from X contains an even number of triples of the two-graph. A regular two-graph has the property that every pair of vertices lies in the same number of triples of the two-graph. Two-graphs have been studied because of their connection with equiangular lines and, for regular two-graphs, strongly regular graphs, and also finite groups because many regular two-graphs have interesting automorphism groups.
A two-graph is not a graph and should not be confused with other objects called 2-graphs in graph theory, such as 2-regular graphs.
Examples
On the set of vertices {1,...,6} the following collection of unordered triples is a two-graph:
123 124 135 146 156 236 245 256 345 346
This two-graph is a regular two-graph since each pair of distinct vertices appears together in exactly two triples.
Given a simple graph G = (V,E), the set of triples of the vertex set V whose induced subgraph has an odd number of edges forms a two-graph on the set V. Every two-graph can be represented in this way. This example is referred to as the standard construction of a two-graph from a simple graph.
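Both the standard construction and the defining parity condition are easy to state in code; a minimal Python sketch (the names are illustrative):

from itertools import combinations

def two_graph(vertices, edges):
    # Triples whose induced subgraph has an odd number of edges.
    E = {frozenset(e) for e in edges}
    return {t for t in combinations(sorted(vertices), 3)
            if sum(frozenset(p) in E for p in combinations(t, 2)) % 2 == 1}

def is_two_graph(vertices, triples):
    # Every quadruple must contain an even number of the given triples.
    return all(sum(t in triples for t in combinations(q, 3)) % 2 == 0
               for q in combinations(sorted(vertices), 4))

V = range(1, 7)
T = two_graph(V, [(1, 2), (3, 4)])    # an arbitrary example graph
print(len(T), is_two_graph(V, T))     # the parity condition holds: True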
As a more complex example, let T be a tree with edge set E. The set of all triples of E that are not contained in a path of T forms a two-graph on the set E.
Switching and graphs
A two-graph is equivalent to a switching class of graphs and also to a (signed) switching class of signed complete graphs.
Switching a set of vertices in a (simple) graph means reversing the adjacencies of each pair of vertices, one in the set and the other not in the set: thus the edge set is changed so that an adjacent pair becomes nonadjacent and a nonadjacent pair becomes adjacent. The edges whose endpoints are both in the set, or both not in the set, are not changed. Graphs are switching equivalent if one can be obtained from the other by switching. An equivalence c
|
https://en.wikipedia.org/wiki/Equiangular%20lines
|
In geometry, a set of lines is called equiangular if all the lines intersect at a single point, and every pair of lines makes the same angle.
Equiangular lines in Euclidean space
Computing the maximum number of equiangular lines in n-dimensional Euclidean space is a difficult problem, and unsolved in general, though bounds are known. The maximal number of equiangular lines in 2-dimensional Euclidean space is 3: we can take the lines through opposite vertices of a regular hexagon, each at an angle 120 degrees from the other two. The maximum in 3 dimensions is 6: we can take lines through opposite vertices of an icosahedron. It is known that the maximum number in any dimension n is less than or equal to n(n+1)/2. This upper bound is matched, up to a constant factor, by a construction of de Caen. The maximum in dimensions 1 through 16 is listed in the On-Line Encyclopedia of Integer Sequences as follows:
1, 3, 6, 6, 10, 16, 28, 28, 28, 28, 28, 28, 28, 28, 36, 40, ... .
In particular, the maximum number of equiangular lines in 7 dimensions is 28. We can obtain these lines as follows. Take the vector (−3,−3,1,1,1,1,1,1) in R^8, and form all 28 vectors obtained by permuting the components of this. The dot product of two of these vectors is 8 if both have a component −3 in the same place, or −8 otherwise. Thus, the lines through the origin containing these vectors are equiangular. Moreover, all 28 vectors are orthogonal to the vector (1,1,1,1,1,1,1,1) in R^8, so they lie in a 7-dimensional space. In fact, these 28 vectors and their negatives are, up to rotation and dilation, the 56 vertices of the 3_21 polytope. In other words, they are the weight vectors of the 56-dimensional representation of the Lie group E7.
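These claims are easy to verify numerically; a minimal Python sketch:

from itertools import permutations

# The 28 distinct permutations of (-3, -3, 1, 1, 1, 1, 1, 1).
vectors = set(permutations((-3, -3, 1, 1, 1, 1, 1, 1)))
print(len(vectors))    # 28

dots = {sum(a * b for a, b in zip(u, v))
        for u in vectors for v in vectors if u != v}
print(dots)            # {8, -8}: every pair of lines makes the same angle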
Equiangular lines are equivalent to two-graphs. Given a set of equiangular lines, let c be the cosine of the common angle. We assume that the angle is not 90°, since that case is trivial (i.e., not interesting, because the lines are just coordinate axes); thus, c is nonzer
|
https://en.wikipedia.org/wiki/Registered%20state%20change%20notification
|
In Fibre Channel protocol, a registered state change notification (RSCN) is a Fibre Channel fabric's notification sent to all specified nodes in case of any major fabric changes. This allows nodes to immediately gain knowledge about the fabric and react accordingly.
Overview
Implementation of this function is obligatory for each Fibre Channel switch, but is optional for a node. This function belongs to the second level of the protocol, FC-2.
Some events that trigger notifications are:
Nodes joining or leaving the fabric (most common usage)
Switches joining or leaving the fabric
Changing the switch name
The nodes wishing to be notified in such way need to register themselves first at the Fabric Controller, which is a standardized FC virtual address present at each switch.
RSCN and zoning
If a fabric has some zones configured for additional security, notifications do not cross zone boundaries if not needed. Simply, there is no need to notify a node about a change that it cannot see anyway (because it happened in a separate zone).
Example
For example, let's assume there is a fabric with just one node, namely a server's FC-compatible HBA. First it registers itself for notifications. Then a human administrator connects another node, like a disk array, to the fabric. This event is known at first only to a single switch, the one that detected one of its ports going online. The switch, however, has a list of registered nodes (currently containing only the HBA node) and notifies every one of them. As the HBA receives the notification, it chooses to query the nearest switch about the current list of nodes. It detects a new disk array and starts to communicate with it on a SCSI level, asking for a list of SCSI LUNs. Then it notifies the server's operating system that there is a new SCSI target containing some LUNs. The operating system auto-configures those as new block devices, ready for use.
See also
Storage area network
Fibre Channel
Fibre Channel fabric
Fibre Chann
|
https://en.wikipedia.org/wiki/Jim%20Horning
|
James Jay Horning (24 August 1942 – 18 January 2013) was an American computer scientist and ACM Fellow.
Overview
Jim Horning received a PhD in computer science from Stanford University in 1969 for a thesis entitled A Study of Grammatical Inference. He was a founding member, and later chairman, of the Computer Systems Research Group at the University of Toronto, Canada, from 1969 until 1977. He was then a Research Fellow at the Xerox Palo Alto Research Center (PARC) from 1977 until 1984 and a founding member and senior consultant at DEC Systems Research Center (DEC/SRC) from 1984 until 1996. He was founder and director of STAR Lab from 1997 until 2001 at InterTrust Technologies Corp.
Peter G. Neumann reported on 22 January 2013 in the RISKS Digest, Volume 27, Issue 14, that Horning had died on 18 January 2013.
Horning's interests included programming languages, programming methodology, specification, formal methods, digital rights management and computer/network security. A major contribution was his involvement with the Larch approach to formal specification with John Guttag (MIT) et al.
Selected publications
A Compiler Generator (with William M. McKeeman and D. B. Wortman), Prentice Hall (1970). .
References
External links
Home page
Curriculum Vitae
1942 births
2013 deaths
Stanford University alumni
American computer scientists
Academic staff of the University of Toronto
Xerox people
Digital Equipment Corporation people
Fellows of the Association for Computing Machinery
Formal methods people
|
https://en.wikipedia.org/wiki/Trantor%3A%20The%20Last%20Stormtrooper
|
Trantor: The Last Stormtrooper is a video game for the ZX Spectrum, Commodore 64, MSX, Amstrad CPC and Atari ST released by Go! (a label of U.S. Gold) in 1987. A version for MS-DOS was released by KeyPunch Software. It was produced by Probe Software (the team consisted of David Quinn, Nick Bruty and David Perry). It was released in Spain (as "Trantor") by Erbe Software.
The game is a mix between shoot 'em up and a platform game, but it was mostly known for its large and well-animated sprites. Bruty, who had previously produced graphics within tight limits on other projects, decided instead to focus on artwork and keep other aspects of the game simple to fit the constraints of the platforms.
Gameplay
The player controls the titular stormtrooper who is the only survivor of the destruction of his spaceship (hence the title). Gameplay revolves around exploring the play-area and collecting code-letters. The play-area consists of several different floors which can be explored freely via connecting lifts. However, Trantor is up against a very strict time limit.
The levels are infested by various aliens and small flying robots which sap Trantor's strength if he touches them. Fortunately, Trantor is armed with a flamethrower with which to destroy these pests. Unfortunately, fuel for this is limited although he can re-fill this at fuel-points located on many of the floors.
Whenever Trantor finds a code-letter, his timer countdown is reset and then counts down again until he finds another letter. For this reason, much of the gameplay is a race-against-time.
There are also lockers scattered around the floors which contain pick-ups to assist Trantor. These include hamburgers (restore strength) and clocks (resetting the time, as finding a code letter would).
The game ends when Trantor's energy runs out or if the timer reaches zero. The player's performance is shown as a percentage of the game completed, along with a short comment. The comment for nine percent is "Is that
|
https://en.wikipedia.org/wiki/Perfect%20Developer
|
Perfect Developer (PD) is a tool for developing computer programs in a rigorous manner. It is used to develop applications in areas including IT systems and airborne critical systems. The principle is to develop a formal specification and refine the specification to code. Even though the tool is founded on formal methods, the suppliers claim that advanced mathematical knowledge is not a prerequisite.
PD supports the Verified Design by Contract paradigm, which is an extension of Design by contract. In Verified Design by Contract, the contracts are verified by static analysis and automated theorem proving, so that it is certain that they will not fail at runtime.
The Perfect specification language used has an object-oriented style, producing code in programming languages including Java, C# and C++. It has been developed by the UK company Escher Technologies Ltd. They note on their website that their claim is not that the language itself is perfect, but that it can be used to produce code which perfectly implements a precise specification.
See also
JML
Safety Integrity Level
External links
Perfect Developer
Escher Technologies
Defence Standards
Formal methods tools
Formal specification languages
|
https://en.wikipedia.org/wiki/Nord-10
|
Nord-10 was a medium-sized general-purpose 16-bit minicomputer designed for multilingual time-sharing applications and for real-time multi-program systems, produced by Norsk Data. It was introduced in 1973. The later follow-up model, the Nord-10/S, introduced in 1975, added a CPU cache, paging, and other miscellaneous improvements.
The CPU had a microprocessor, a term the manual defined as a portmanteau of "microcode processor", not to be confused with the then-nascent microprocessor. The CPU additionally contained instructions, operator communication, bootstrap loaders, and hardware test programs, implemented in a 1K read-only memory.
The microprocessor also allowed for customer specified instructions to be built in. Nord-10 had a memory management system with hardware paging extending the memory size from 64 to 256K 16-bit words and two independent protecting systems, one acting on each page and one on the mode of instructions. The interrupt system had 16 program levels in hardware, each with its own set of general-purpose registers.
Note: Much of the following information is taken from a document written by Norsk Data introducing the Nord-10. Some information, particularly about the memory system, may be inaccurate for the later Nord-10/S.
Central processor
The central processing unit (CPU) consisted of a total 24 printed circuit boards. The last eight positions in the rack were used for input/output (I/O) devices operated by program control, such as the console teleprinter (teletype), paper punched tape and punched card reader and punch, line printer, display, operator's panel, and a real-time clock.
The Nord-10 had 160 processor registers, of which 128 were available to programs, eight on each of the 16 program levels. Six of those registers were general registers, one was the program counter, and the other contained status information. Floating point arithmetic operations were standard. The instructions could operate on five different forma
|
https://en.wikipedia.org/wiki/Nord-1
|
Nord-1 was Norsk Data's first minicomputer and the first commercially available computer made in Norway.
It was a 16-bit system, developed in 1967 from the Simulation for Automatic Machinery. The first Nord-1 installed (serial number 2) was at the heart of a complete ship system aboard a Japanese-built cargo liner, the Taimyr. The system included bridge control, power management, load condition monitoring, and the first ever computer-controlled, radar-sensed anti-collision system (Automatic Radar Plotting Aid). Taimyr's Nord-1 proved reliable for the time, with more than a year between failures.
It was probably the first minicomputer to feature floating-point arithmetic equipment as standard, and had an unusually rich complement of hardware registers for its time. It also featured relative addressing and a fully automatic context-switched interrupt system. It was also the first minicomputer to offer virtual memory, available as an option by 1969. It was succeeded by the Nord-10.
Remaining machines
The Nord-1 has been unusually well-preserved. Approximately 60 machines seem to have been produced, and at the very least ten machines have been preserved, including serial numbers 2, 4, and 5. This may be because the company Norsk Data was already a very large and very rapidly growing corporation by the time many of these machines were decommissioned.
References
Norsk Data minicomputers
1967 establishments in Norway
16-bit computers
|
https://en.wikipedia.org/wiki/John%20Rushby
|
John Rushby (born 1949) is a British computer scientist now based in the United States and working for SRI International. He previously taught and did research for Manchester University and later Newcastle University.
Early life and education
John Rushby was born and brought up in London, where he attended Dartford Grammar School. He studied at Newcastle University in the United Kingdom, gaining his computer science BSc there in 1971 and his PhD in 1977.
Career
From 1974 to 1975, he was a lecturer in the Computer Science Department at Manchester University. From 1979 to 1982, he was a research associate in the Department of Computing Science at Newcastle University.
Rushby joined SRI International in Menlo Park, California in 1983. Currently he is Program Director for Formal Methods and Dependable Systems in the Computer Science Laboratory at SRI. He developed the Prototype Verification System, which is a theorem prover.
Awards and memberships
Rushby was the recipient of the 2011 Harlan D. Mills Award from the IEEE Computer Society.
References
External links
Official homepage
Personal homepage
Living people
British computer scientists
American computer scientists
Formal methods people
Alumni of Newcastle University
British expatriates in the United States
SRI International people
People educated at Dartford Grammar School
1949 births
|
https://en.wikipedia.org/wiki/Butterfly%20diagram
|
In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (DFTs) into a larger DFT, or vice versa (breaking a larger DFT up into subtransforms). The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case, as described below. The earliest occurrence in print of the term is thought to be in a 1969 MIT technical report. The same structure can also be found in the Viterbi algorithm, used for finding the most likely sequence of hidden states.
Most commonly, the term "butterfly" appears in the context of the Cooley–Tukey FFT algorithm, which recursively breaks down a DFT of composite size n = rm into r smaller transforms of size m where r is the "radix" of the transform. These smaller DFTs are then combined via size-r butterflies, which themselves are DFTs of size r (performed m times on corresponding outputs of the sub-transforms) pre-multiplied by roots of unity (known as twiddle factors). (This is the "decimation in time" case; one can also perform the steps in reverse, known as "decimation in frequency", where the butterflies come first and are post-multiplied by twiddle factors. See also the Cooley–Tukey FFT article.)
Radix-2 butterfly diagram
In the case of the radix-2 Cooley–Tukey algorithm, the butterfly is simply a DFT of size 2 that takes two inputs (x0, x1) (corresponding outputs of the two sub-transforms) and gives two outputs (y0, y1) by the formula (not including twiddle factors):
y0 = x0 + x1
y1 = x0 − x1
If one draws the data-flow diagram for this pair of operations, the (x0, x1) to (y0, y1) lines cross and resemble the wings of a butterfly, hence the name (see also the illustration at right).
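A compact recursive sketch makes the butterflies explicit (Python; assumes the input length is a power of two, and each loop iteration below is one radix-2 butterfly):

import cmath

def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)   # twiddle factor
        out[k] = even[k] + w * odd[k]           # butterfly, upper wing
        out[k + n // 2] = even[k] - w * odd[k]  # butterfly, lower wing
    return out

print([round(abs(v), 6) for v in fft([1, 0, 0, 0])])   # an impulse has a flat spectrum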
More specifically, a radix-2 decimation-in-time FFT algorithm on n = 2^p inputs with respect to a primitive n-th root of unity ω = e^(−2πi/n) relies on O(n log₂ n) butterflies of the form:
y0 = x0 + ω^k x1
y1 = x0 − ω^k x1
where k is an integer depending on the part of the transform being computed. Wh
|
https://en.wikipedia.org/wiki/Twiddle%20factor
|
A twiddle factor, in fast Fourier transform (FFT) algorithms, is any of the trigonometric constant coefficients that are multiplied by the data in the course of the algorithm. This term was apparently coined by Gentleman & Sande in 1966, and has since become widespread in thousands of papers of the FFT literature.
More specifically, "twiddle factors" originally referred to the root-of-unity complex multiplicative constants in the butterfly operations of the Cooley–Tukey FFT algorithm, used to recursively combine smaller discrete Fourier transforms. This remains the term's most common meaning, but it may also be used for any data-independent multiplicative constant in an FFT.
The prime-factor FFT algorithm is one unusual case in which an FFT can be performed without twiddle factors, albeit only for restricted factorizations of the transform size.
For example, W_8^2 (= e^(−2πi·2/8) = −i) is a twiddle factor used in an 8-point radix-2 FFT.
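In the usual convention W_n = e^(−2πi/n), so twiddle factors are trivial to compute; a minimal sketch:

import cmath

def twiddle(k, n):
    # W_n^k = exp(-2*pi*i*k/n), the standard DFT convention.
    return cmath.exp(-2j * cmath.pi * k / n)

print(twiddle(2, 8))   # approximately -1j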
References
W. M. Gentleman and G. Sande, "Fast Fourier transforms—for fun and profit," Proc. AFIPS 29, 563–578 (1966).
FFT algorithms
|
https://en.wikipedia.org/wiki/Code%20bloat
|
In computer programming, code bloat is the production of program code (source code or machine code) that is perceived as unnecessarily long, slow, or otherwise wasteful of resources. Code bloat can be caused by inadequacies in the programming language in which the code is written, the compiler used to compile it, or the programmer writing it. Thus, while code bloat generally refers to source code size (as produced by the programmer), it can be used to refer instead to the generated code size or even the binary file size.
Examples
The following JavaScript algorithm has a large number of redundant variables, unnecessary logic and inefficient string concatenation.
// Complex
function TK2getImageHTML(size, zoom, sensor, markers) {
var strFinalImage = "";
var strHTMLStart = '<img src="';
var strHTMLEnd = '" alt="The map"/>';
var strURL = "http://maps.google.com/maps/api/staticmap?center=";
var strSize = '&size='+ size;
var strZoom = '&zoom='+ zoom;
var strSensor = '&sensor='+ sensor;
strURL += markers[0].latitude;
strURL += ",";
strURL += markers[0].longitude;
strURL += strSize;
strURL += strZoom;
strURL += strSensor;
for (var i = 0; i < markers.length; i++) {
strURL += markers[i].addMarker();
}
strFinalImage = strHTMLStart + strURL + strHTMLEnd;
return strFinalImage;
};
The same logic can be stated more efficiently as follows:
// Simplified
const TK2getImageHTML = (size, zoom, sensor, markers) => {
const [ { latitude, longitude } ] = markers;
let url = `http://maps.google.com/maps/api/staticmap?center=${ latitude },${ longitude }&size=${ size }&zoom=${ zoom }&sensor=${ sensor }`;
markers.forEach(marker => url += marker.addMarker());
return `<img src="${ url }" alt="The map" />`;
};
Code density of different languages
The difference in code density between various computer languages is so great that often less memory is needed to hold both a progr
|
https://en.wikipedia.org/wiki/Edge%20disjoint%20shortest%20pair%20algorithm
|
Edge disjoint shortest pair algorithm is an algorithm in computer network routing. The algorithm is used for generating the shortest pair of edge disjoint paths between a given pair of vertices. For an undirected graph G(V, E), it is stated as follows:
Run the shortest path algorithm for the given pair of vertices
Replace each edge of the shortest path (equivalent to two oppositely directed arcs) by a single arc directed towards the source vertex
Make the length of each of the above arcs negative
Run the shortest path algorithm (Note: the algorithm should accept negative costs)
Erase the overlapping edges of the two paths found, and reverse the direction of the remaining arcs on the first shortest path so that each arc on it is now directed towards the destination vertex. The desired pair of paths results. (A code sketch of these steps follows the next paragraph.)
In lieu of the general-purpose Ford (Bellman–Ford) shortest path algorithm, which is valid when negative arcs are present anywhere in a graph (provided there are no negative cycles), Bhandari provides two different algorithms, either one of which can be used in Step 4. One algorithm is a slight modification of the traditional Dijkstra's algorithm, and the other, called the Breadth-First-Search (BFS) algorithm, is a variant of Moore's algorithm. Because the negative arcs are only on the first shortest path, no negative cycle arises in the transformed graph (Steps 2 and 3). In a nonnegative graph, the modified Dijkstra algorithm reduces to the traditional Dijkstra's algorithm, and can therefore be used in Step 1 of the above algorithm (and similarly, the BFS algorithm).
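A compact sketch of the whole procedure (Python; it uses Bellman–Ford for both shortest-path steps for simplicity, whereas Bhandari's modified Dijkstra is a faster replacement for Step 4, and the graph representation and names are illustrative):

from collections import defaultdict

def bellman_ford(nodes, arcs, src):
    # Shortest paths from src; accepts negative arc costs (no negative cycles).
    dist = {n: float("inf") for n in nodes}
    pred = {n: None for n in nodes}
    dist[src] = 0.0
    for _ in range(len(nodes) - 1):
        for (u, v), c in arcs.items():
            if dist[u] + c < dist[v]:
                dist[v], pred[v] = dist[u] + c, u
    return pred

def trace(pred, dst):
    path = []
    while dst is not None:
        path.append(dst)
        dst = pred[dst]
    return path[::-1]

def edge_disjoint_pair(nodes, edges, s, t):
    arcs = {}
    for u, v, c in edges:              # each undirected edge = two directed arcs
        arcs[(u, v)] = arcs[(v, u)] = c
    p1 = trace(bellman_ford(nodes, arcs, s), t)        # Step 1
    for u, v in zip(p1, p1[1:]):                       # Steps 2 and 3
        del arcs[(u, v)]
        arcs[(v, u)] = -arcs[(v, u)]
    p2 = trace(bellman_ford(nodes, arcs, s), t)        # Step 4
    e1, e2 = set(zip(p1, p1[1:])), set(zip(p2, p2[1:]))
    overlap = {(u, v) for (u, v) in e1 if (v, u) in e2}
    pool = (e1 | e2) - overlap - {(v, u) for (u, v) in overlap}
    adj = defaultdict(list)                            # Step 5: regroup
    for u, v in pool:
        adj[u].append(v)
    paths = []
    for _ in range(2):
        node, path = s, [s]
        while node != t:
            node = adj[node].pop()
            path.append(node)
        paths.append(path)
    return paths

nodes = "ABCZ"
edges = [("A", "B", 1), ("B", "Z", 1), ("A", "C", 2), ("C", "Z", 2), ("B", "C", 1)]
print(edge_disjoint_pair(nodes, edges, "A", "Z"))   # two edge-disjoint A-Z paths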
The Modified Dijkstra Algorithm
Bhandari, Ramesh (1994), "Optimal Diverse Routing in Telecommunication Fiber Networks", Proc. of IEEE INFOCOM, Toronto, Canada, pp. 1498–1508.
G = (V, E)
d(i) – the distance of vertex i (i ∈ V) from source vertex A; it is the sum of the lengths of the arcs on a possible path from vertex A to vertex i. Note that d(A) = 0;
P(i) – the predecessor of vertex i on the same path.
Z – the destination vertex
Step 1.
Sta
|
https://en.wikipedia.org/wiki/Mashup%20%28web%20application%20hybrid%29
|
A mashup (computer industry jargon), in web development, is a web page or web application that uses content from more than one source to create a single new service displayed in a single graphical interface. For example, a user could combine the addresses and photographs of their library branches with a Google map to create a map mashup. The term implies easy, fast integration, frequently using open application programming interfaces (open API) and data sources to produce enriched results that were not necessarily the original reason for producing the raw source data.
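As a concrete illustration of the pattern, the following client-side sketch combines two independent services into one result; the endpoints, field names, and static-map service here are invented placeholders, not real APIs.
// Hypothetical mashup sketch: merge a JSON feed of library branches
// (source 1) with a static-map image service (source 2). Every URL and
// field name below is a made-up placeholder.
async function branchMapUrl() {
  const branches = await fetch("https://example.org/api/branches.json")
    .then(r => r.json());
  const markers = branches
    .map(b => `${b.latitude},${b.longitude}`)
    .join("|");
  // The combined result: one map image showing every branch
  return `https://maps.example.com/staticmap?markers=${encodeURIComponent(markers)}`;
}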
The term mashup originally comes from creating something by combining elements from two or more sources.
The main characteristics of a mashup are combination, visualization, and aggregation: mashups make existing data more useful, for both personal and professional purposes. To be able to permanently access the data of other services, mashups are generally client applications or hosted online.
In recent years, more and more Web applications have published APIs that enable software developers to easily integrate data and functions in the SOA manner, instead of building them themselves. Mashups can be considered to have an active role in the evolution of social software and Web 2.0. Mashup composition tools are usually simple enough to be used by end-users. They generally do not require programming skills; instead they support the visual wiring of GUI widgets, services, and components. These tools therefore contribute to a new vision of the Web, in which users are able to contribute.
The term "mashup" is not formally defined by any standard-setting body.
History
The broader context of the history of the Web provides a background for the development of mashups. Under the Web 1.0 model, organizations stored consumer data on portals and updated them regularly. They controlled all the consumer data, and the consumer had to use their products and services to get the information.
The advent of Web 2.0 intr
|
https://en.wikipedia.org/wiki/Copper%28I%29%20iodide
|
Copper(I) iodide is the inorganic compound with the formula CuI. It is also known as cuprous iodide. It is useful in a variety of applications ranging from organic synthesis to cloud seeding.
Copper(I) iodide is white, but samples often appear tan or even reddish brown, as when it occurs in nature as the rare mineral marshite; such color is due to the presence of impurities. It is common for samples of iodide-containing compounds to become discolored due to the facile aerobic oxidation of the iodide anion to molecular iodine.
Structure
Copper(I) iodide, like most binary (containing only two elements) metal halides, is an inorganic polymer. It has a rich phase diagram, meaning that it exists in several crystalline forms. It adopts a zinc blende structure below 390 °C (γ-CuI), a wurtzite structure between 390 and 440 °C (β-CuI), and a rock salt structure above 440 °C (α-CuI). The ions are tetrahedrally coordinated when in the zinc blende or the wurtzite structure, with a Cu-I distance of 2.338 Å. Copper(I) bromide and copper(I) chloride also transform from the zinc blende structure to the wurtzite structure at 405 and 435 °C, respectively. Therefore, the longer the copper – halide bond length, the lower the temperature needs to be to change the structure from the zinc blende structure to the wurtzite structure. The interatomic distances in copper(I) bromide and copper(I) chloride are 2.173 and 2.051 Å, respectively. Consistent with its covalency, CuI is a p-type semiconductor.
Preparation
Copper(I) iodide can be prepared by heating iodine and copper in concentrated hydriodic acid.
In the laboratory, however, copper(I) iodide is prepared by simply mixing an aqueous solution of potassium iodide with a soluble copper(II) salt such as copper sulfate.
Cu2+ + 2I− → CuI + 0.5I2
Reactions
Cuprous iodide, which degrades on standing, can be purified by dissolution into concentrated solution of potassium iodide followed by dilution.
CuI + I− ⇌ CuI2−
Copper(I) iodide reacts
|
https://en.wikipedia.org/wiki/Internet%20Experiment%20Note
|
An Internet Experiment Note (IEN) is a sequentially numbered document in a series of technical publications issued by the participants of the early development work groups that created the precursors of the modern Internet.
After DARPA began the Internet program in earnest in 1977, the project members were in need of communication and documentation of their work in order to realize the concepts laid out by Bob Kahn and Vint Cerf some years before. The Request for Comments (RFC) series was considered the province of the ARPANET project and the Network Working Group (NWG) which defined the network protocols used on it. Thus, the members of the Internet project decided on publishing their own series of documents, Internet Experiment Notes, which were modeled after the RFCs.
Jon Postel became the editor of the new series, in addition to his existing role of administering the long-standing RFC series. Between March 1977 and September 1982, 206 IENs were published. After that, with the plan to terminate support of the Network Control Protocol (NCP) on the ARPANET and switch to TCP/IP, the production of IENs was discontinued, and all further publication was conducted within the existing RFC system.
External links
Internet Experiment Notes index at postel.org
IEN archive at postel.org (plain text)
IEN archive at postel.org (PDF)
IEN index at rfc-editor.org
History of the Internet
Internet Standards
|
https://en.wikipedia.org/wiki/Hazy%20Sighted%20Link%20State%20Routing%20Protocol
|
The Hazy-Sighted Link State Routing Protocol (HSLS) is a wireless mesh network routing protocol being developed by the CUWiN Foundation. The algorithm allows computers communicating via digital radio in a mesh network to forward messages to computers that are out of reach of direct radio contact. Its network overhead is theoretically optimal, utilizing both proactive and reactive link-state routing to limit network updates in space and time. Its inventors believe it is a more efficient protocol for routing wired networks as well. HSLS was invented by researchers at BBN Technologies.
Efficiency
HSLS was made to scale well to networks of over a thousand nodes, and on larger networks begins to exceed the efficiencies of the other routing algorithms. This is accomplished by using a carefully designed balance of update frequency, and update extent in order to propagate link state information optimally. Unlike traditional methods, HSLS does not flood the network with link-state information to attempt to cope with moving nodes that change connections with the rest of the network. Further, HSLS does not require each node to have the same view of the network.
Why a link-state protocol?
Link-state algorithms are theoretically attractive because they find optimal routes, reducing waste of transmission capacity. The inventors of HSLS claim that routing protocols fall into three basically different schemes: proactive (such as OLSR), reactive (such as AODV), and algorithms that accept sub-optimal routings. When graphed, protocols become less efficient the more purely they follow any single strategy and the larger the network grows; the best algorithms seem to occupy a sweet spot in the middle.
The routing information is called a "link state update". The distance that a link-state update travels is limited by its "time to live" (TTL): a count of the number of times it may be copied from one node to the next.
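The scheduling idea can be illustrated with a toy sketch (an illustration of the principle, not the exact published algorithm): updates sent on "round" ticks travel farther, so nearby nodes hear about changes often, while distant nodes receive only occasional, wide-ranging updates.
// Toy illustration of hazy-sighted scheduling (not the published
// algorithm): the update sent at tick n is given a TTL proportional to
// the largest power of two dividing n, so the farther away a node is,
// the less often it hears updates.
function updateRadius(n) {
  let r = 1;
  while (n % (2 * r) === 0) r *= 2;  // largest power of two dividing n
  return r;
}
// Ticks 1..8 yield radii 1, 2, 1, 4, 1, 2, 1, 8.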
HSLS is said to optimally balance the features of proactive, reactive, and suboptimal routing approaches.
|
https://en.wikipedia.org/wiki/Egon%20B%C3%B6rger
|
Egon Börger (born 13 May 1946) is a German-born computer scientist based in Italy.
Life and work
Börger was born in Bad Laer, Lower Saxony, Germany. Between 1965 and 1971 he studied at the Sorbonne, Paris (France), the Université Catholique de Louvain, the Institut Supérieur de Philosophie de Louvain, and the University of Münster (Germany). Between 1972 and 1976, he was at the Università di Salerno in Italy, where he taught the first courses in the newly established computer science degree program.
Since 1985 he has held a Chair in computer science at the University of Pisa, Italy. Since September 2010, he has been an elected member of the Academia Europaea.
Egon Börger is a pioneer of applying logical methods in computer science. He is co-founder of the international conference series CSL. He is also one of the founders of the Abstract State Machines (ASM) formal method for accurate and controlled design and analysis of computer-based systems and cofounder of the series of international ASM workshops, which in 2008 merged with the regular meetings of the B and Z User Groups to form the international ABZ conference.
Börger contributed to the theoretical foundations of the method and initiated its industrial applications in a variety of fields, in particular programming languages, System architecture, requirements and software (re-)engineering, control systems, protocols, web services.
To date, he is one of the leading scientists in ASM-based modeling and verification technology, which he has crucially shaped by his activities. In 2007, he received the Humboldt Research Award.
Festschrifts were produced for Börger's 60th and 75th birthdays.
Selected publications
Egon Börger and Robert Stärk, Abstract State Machines: A Method for High-Level System Design and Analysis, Springer-Verlag, 2003. ()
Egon Börger, Computability, Complexity, Logic (North-Holland, Amsterdam 1989; translated from the 1985 German original; Italian translation Bollati Boringhieri 1989)
Egon Börge
|
https://en.wikipedia.org/wiki/Duxelles
|
Duxelles is a French cuisine term that refers to a mince of mushrooms, onions, herbs (such as thyme or parsley), and black pepper, sautéed in butter and reduced to a paste. Cream is sometimes used, and some recipes add a dash of Madeira or sherry.
It is a basic preparation used in stuffings and sauces (notably, Beef Wellington) or as a garnish. It can also be filled into a pocket of raw pastry and baked as a savory tart.
The flavor depends on the mushrooms used. For example, wild porcini mushrooms have a much stronger flavor than white or brown mushrooms.
Duxelles is said to have been created by the 17th-century French chef François Pierre La Varenne (1615–1678) and to have been named after his employer, Nicolas Chalon du Blé, marquis d'Uxelles, maréchal de France.
Some classical cookbooks call for dehydrated mushrooms. According to Auguste Escoffier, dehydration enhances flavor and prevents water vapor from building up pressure that could cause a pastry to crack or even explode.
See also
Sautéed mushrooms
List of mushroom dishes
References
External links
Mushroom Duxelles: Intense and Refined
Mushroom Duxelle
Culinary terminology
Food ingredients
French cuisine
Mushroom dishes
|
https://en.wikipedia.org/wiki/Delay-tolerant%20networking
|
Delay-tolerant networking (DTN) is an approach to computer network architecture that seeks to address the technical issues in heterogeneous networks that may lack continuous network connectivity. Examples of such networks are those operating in mobile or extreme terrestrial environments, or planned networks in space.
Recently, the term disruption-tolerant networking has gained currency in the United States due to support from DARPA, which has funded many DTN projects. Disruption may occur because of the limits of wireless radio range, sparsity of mobile nodes, energy resources, attack, and noise.
History
In the 1970s, spurred by the decreasing size of computers, researchers began developing technology for routing between non-fixed locations of computers. While the field of ad hoc routing was inactive throughout the 1980s, the widespread use of wireless protocols reinvigorated the field in the 1990s as mobile ad hoc networking (MANET) and vehicular ad hoc networking became areas of increasing interest.
Concurrently with (but separate from) the MANET activities, DARPA had funded NASA, MITRE and others to develop a proposal for the Interplanetary Internet (IPN). Internet pioneer Vint Cerf and others developed the initial IPN architecture, relating to the necessity of networking technologies that can cope with the significant delays and packet corruption of deep-space communications. In 2002, Kevin Fall started to adapt some of the ideas in the IPN design to terrestrial networks and coined the term delay-tolerant networking and the DTN acronym. A paper published at the 2003 SIGCOMM conference gives the motivation for DTNs. The mid-2000s brought about increased interest in DTNs, including a growing number of academic conferences on delay and disruption-tolerant networking, and growing interest in combining work from sensor networks and MANETs with the work on DTN. This field saw many optimizations on classic ad hoc and delay-tolerant networking algorithms and began to examine factors such as security.
|
https://en.wikipedia.org/wiki/Symmetric%20game
|
In game theory, a symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them. If one can change the identities of the players without changing the payoff to the strategies, then a game is symmetric. Symmetry can come in different varieties. Ordinally symmetric games are games that are symmetric with respect to the ordinal structure of the payoffs. A game is quantitatively symmetric if and only if it is symmetric with respect to the exact payoffs. A partnership game is a symmetric game where both players receive identical payoffs for any strategy set. That is, both players earn the same payoff when one plays strategy a and the other plays strategy b.
Symmetry in 2x2 games
Only 12 out of the 144 ordinally distinct 2x2 games are symmetric. However, many of the commonly studied 2x2 games are at least ordinally symmetric. The standard representations of chicken, the Prisoner's Dilemma, and the Stag hunt are all symmetric games. Formally, in order for a 2x2 game to be symmetric, its payoff matrix must conform to the schema shown below.
The requirements for a game to be ordinally symmetric are weaker: it need only be the case that the ordinal ranking of the payoffs conforms to that schema.
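Since the original article's figure is not reproduced here, the schema in question is the standard symmetric bimatrix (with E and F as the two strategies, and the row player's payoff listed first):
\begin{array}{c|cc}
  & E & F \\
\hline
E & a,\;a & b,\;c \\
F & c,\;b & d,\;d
\end{array}
A 2x2 game is quantitatively symmetric when its payoffs take exactly this form for some values a, b, c, d; it is ordinally symmetric when only the ordinal rankings of the payoffs conform to this pattern.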
Symmetry and equilibria
Nash (1951) shows that every finite symmetric game has a symmetric mixed strategy Nash equilibrium. Cheng et al. (2004) show that every two-strategy symmetric game has a (not necessarily symmetric) pure strategy Nash equilibrium.
Uncorrelated asymmetries: payoff neutral asymmetries
Symmetries here refer to symmetries in payoffs. Biologists often refer to asymmetries in payoffs between players in a game as correlated asymmetries. These are in contrast to uncorrelated asymmetries which are purely informational and have no effect on payoffs (e.g. see Hawk-dove game).
The general case
A game with a payoff
|