source | text
---|---
https://en.wikipedia.org/wiki/Random%20permutation
|
A random permutation is a random ordering of a set of objects, that is, a permutation-valued random variable. The use of random permutations is often fundamental to fields that use randomized algorithms such as coding theory, cryptography, and simulation. A good example of a random permutation is the shuffling of a deck of cards: this is ideally a random permutation of the 52 cards.
Generating random permutations
Entry-by-entry brute force method
One method of generating a random permutation of a set of size n uniformly at random (i.e., each of the n! permutations is equally likely to appear) is to generate a sequence by taking a random number between 1 and n sequentially, ensuring that there is no repetition, and interpreting this sequence (x1, ..., xn) as the permutation
( 1   2   3  ...  n  )
( x1  x2  x3 ...  xn )
shown here in two-line notation.
This brute-force method will require occasional retries whenever the random number picked is a repeat of a number already selected. This can be avoided if, on the ith step (when x1, ..., xi − 1 have already been chosen), one chooses a number j at random between 1 and n − i + 1 and sets xi equal to the jth largest of the unchosen numbers.
Fisher–Yates shuffles
A simple algorithm to generate a permutation of n items uniformly at random without retries, known as the Fisher–Yates shuffle, is to start with any permutation (for example, the identity permutation), and then go through the positions 0 through n − 2 (we use a convention where the first element has index 0, and the last element has index n − 1), and for each position i swap the element currently there with a randomly chosen element from positions i through n − 1 (the end), inclusive. It's easy to verify that any permutation of n elements will be produced by this algorithm with probability exactly 1/n!, thus yielding a uniform distribution over all such permutations.
unsigned uniform(unsigned m); /* Returns a random integer 0 <= uniform(m) <= m-1 with uniform distribution */
void initialize_and_p
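The excerpt's code sample is cut off above; what follows is a minimal C sketch (not the article's original listing) of the same idea, pairing a rejection-based uniform(m) that satisfies the declared contract with an in-place Fisher–Yates shuffle over positions 0 through n − 2:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Returns a uniformly distributed random integer in 0 .. m-1 (m >= 1).
   Rejection sampling keeps the result unbiased even when m does not
   divide RAND_MAX + 1. */
unsigned uniform(unsigned m)
{
    unsigned limit = ((unsigned)RAND_MAX + 1u) / m * m;  /* largest multiple of m */
    unsigned r;
    do {
        r = (unsigned)rand();
    } while (r >= limit);
    return r % m;
}

/* Fisher-Yates: for each position i (0 .. n-2), swap in a randomly
   chosen element from positions i .. n-1, inclusive. */
void shuffle(int *a, unsigned n)
{
    for (unsigned i = 0; i + 1 < n; i++) {
        unsigned j = i + uniform(n - i);   /* i <= j <= n-1 */
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}

int main(void)
{
    int a[8] = {0, 1, 2, 3, 4, 5, 6, 7};   /* start with the identity permutation */
    srand((unsigned)time(NULL));
    shuffle(a, 8);
    for (int i = 0; i < 8; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}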
|
https://en.wikipedia.org/wiki/Fire-control%20system
|
A fire-control system (FCS) is a number of components working together, usually a gun data computer, a director and radar, which is designed to assist a ranged weapon system to target, track, and hit a target. It performs the same task as a human gunner firing a weapon, but attempts to do so faster and more accurately.
Naval based fire control
Origins
The original fire-control systems were developed for ships.
The early history of naval fire control was dominated by the engagement of targets within visual range (also referred to as direct fire). In fact, most naval engagements before 1800 were conducted at very short range.
Even during the American Civil War, the famous engagement between USS Monitor and CSS Virginia was often conducted at very close range.
Rapid technical improvements in the late 19th century greatly increased the range at which gunfire was possible. Rifled guns of much larger size firing explosive shells of lighter relative weight (compared to all-metal balls) so greatly increased the range of the guns that the main problem became aiming them while the ship was moving on the waves. This problem was solved with the introduction of the gyroscope, which corrected this motion and provided sub-degree accuracies. Guns were now free to grow to any size, and calibres increased rapidly through the 1890s. These guns were capable of such great range that the primary limitation was seeing the target, leading to the use of high masts on ships.
Another technical improvement was the introduction of the steam turbine which greatly increased the performance of the ships. Earlier screw-powered capital ships were capable of perhaps 16 knots, but the first large turbine ships were capable of over 20 knots. Combined with the long range of the guns, this meant that the target ship could move a considerable distance, several ship lengths, between the time the shells were fired and landed. One could no longer eyeball the aim with any hope of accuracy. Moreover, in naval engagements it is also necess
|
https://en.wikipedia.org/wiki/Pappus%27s%20hexagon%20theorem
|
In mathematics, Pappus's hexagon theorem (attributed to Pappus of Alexandria) states that
given one set of collinear points A, B, C, and another set of collinear points a, b, c, then the intersection points X, Y, Z of line pairs Ab and aB, Ac and aC, Bc and bC are collinear, lying on the Pappus line. These three points are the points of intersection of the "opposite" sides of the hexagon AbCaBc.
It holds in a projective plane over any field, but fails for projective planes over any noncommutative division ring. Projective planes in which the "theorem" is valid are called pappian planes.
If one restricts the projective plane such that the Pappus line is the line at infinity, one gets the affine version of Pappus's theorem shown in the second diagram.
If the Pappus line and the lines carrying the two triples of collinear points have a point in common, one gets the so-called little version of Pappus's theorem.
The dual of this incidence theorem states that given one set of concurrent lines A, B, C, and another set of concurrent lines a, b, c, then the lines defined by pairs of points resulting from pairs of intersections A∩b and a∩B, A∩c and a∩C, B∩c and b∩C are concurrent. (Concurrent means that the lines pass through one point.)
Pappus's theorem is a special case of Pascal's theorem for a conic—the limiting case when the conic degenerates into 2 straight lines. Pascal's theorem is in turn a special case of the Cayley–Bacharach theorem.
The Pappus configuration is the configuration of 9 lines and 9 points that occurs in Pappus's theorem, with each line meeting 3 of the points and each point meeting 3 lines. In general, the Pappus line does not pass through the point of intersection of the lines ABC and abc. This configuration is self dual. Since, in particular, the lines of the configuration have the properties required of the lines of the dual theorem, and collinearity of the three Pappus points is equivalent to concurrence of the corresponding three lines, the dual theorem is just the same as the theorem itself. The Levi graph of the Pappus configuration is the Pappus graph, a bipartite distance-regular graph with 18 vertices and 27 edges.
Proof: affine form
|
https://en.wikipedia.org/wiki/Intracellular%20pH
|
Intracellular pH (pHi) is the measure of the acidity or basicity (i.e., pH) of intracellular fluid. The pHi plays a critical role in membrane transport and other intracellular processes. In an environment with the improper pHi, biological cells may have compromised function. Therefore, pHi is closely regulated in order to ensure proper cellular function, controlled cell growth, and normal cellular processes. The mechanisms that regulate pHi are usually considered to be plasma membrane transporters of which two main types exist — those that are dependent on and those that are independent of the concentration of bicarbonate (HCO3−). Physiologically normal intracellular pH is most commonly between 7.0 and 7.4, though there is variability between tissues (e.g., mammalian skeletal muscle tends to have a pHi of 6.8–7.1). There is also pH variation across different organelles, which can span from around 4.5 to 8.0. pHi can be measured in a number of different ways.
Homeostasis
Intracellular pH is typically lower than extracellular pH due to lower concentrations of HCO3−. A rise of extracellular (e.g., serum) partial pressure of carbon dioxide (pCO2) above 45 mmHg leads to formation of carbonic acid, which causes a decrease of pHi as it dissociates:
H2O + CO2 ⇌ H2CO3 ⇌ H+ + HCO3−
Since biological cells contain fluid that can act as a buffer, pHi can be maintained fairly well within a certain range. Cells adjust their pHi accordingly upon an increase in acidity or basicity, usually with the help of CO2 or HCO3– sensors present in the membrane of the cell. These sensors can permit H+ to pass through the cell membrane accordingly, allowing for pHi to be interrelated with extracellular pH in this respect.
Major intracellular buffer systems include those involving proteins or phosphates. Since the proteins have acidic and basic regions, they can serve as both proton donors or acceptors in order to maintain a relatively stable intracellular pH. In the case of a phosphate buffer, subs
|
https://en.wikipedia.org/wiki/Dynamic%20inconsistency
|
In economics, dynamic inconsistency or time inconsistency is a situation in which a decision-maker's preferences change over time in such a way that a preference held at one point in time is inconsistent with a preference held at another point in time. This can be thought of as there being many different "selves" within decision makers, with each "self" representing the decision-maker at a different point in time; the inconsistency occurs when not all preferences are aligned.
The term "dynamic inconsistency" is more closely affiliated with game theory, whereas "time inconsistency" is more closely affiliated with behavioral economics.
In game theory
In the context of game theory, dynamic inconsistency is a situation in a dynamic game where a player's best plan for some future period will not be optimal when that future period arrives. A dynamically inconsistent game is subgame imperfect. In this context, the inconsistency is primarily about commitment and credible threats. This manifests itself through a violation of Bellman's Principle of Optimality by the leader or dominant player.
For example, a firm might want to commit itself to dramatically dropping the price of a product it sells if a rival firm enters its market. If this threat were credible, it would discourage the rival from entering. However, the firm might not be able to commit its future self to taking such an action because if the rival does in fact end up entering, the firm's future self might determine that, given the fact that the rival is now actually in the market and there is no point in trying to discourage entry, it is now not in its interest to dramatically drop the price. As such, the threat would not be credible. The present self of the firm has preferences that would have the future self be committed to the threat, but the future self has preferences that have it not carry out the threat. Hence, the dynamic inconsistency.
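To make the backward-induction logic of this example concrete, here is a tiny C sketch with hypothetical payoffs (the numbers are illustrative, not from the article); it shows that the incumbent's future self prefers to accommodate once entry has occurred, which is exactly why the threat fails to deter:

#include <stdio.h>

/* A toy entry-deterrence game with made-up payoffs: the incumbent
   "threatens" a price war if the rival enters. Backward induction
   shows the threat is not carried out once entry has happened, so
   it is not credible. */
int main(void)
{
    /* Incumbent's payoff after entry, under each of its own choices. */
    const int fight_after_entry       = 1;   /* price war: both lose */
    const int accommodate_after_entry = 3;   /* share the market     */

    /* Rival's payoff, given what the incumbent will actually do.     */
    const int rival_if_fought       = -1;
    const int rival_if_accommodated =  2;
    const int rival_if_stays_out    =  0;

    /* Step 1 (the incumbent's future self): what happens AFTER entry? */
    int incumbent_fights = fight_after_entry > accommodate_after_entry;

    /* Step 2 (the rival): enter only if its payoff beats staying out. */
    int rival_payoff = incumbent_fights ? rival_if_fought : rival_if_accommodated;
    int rival_enters = rival_payoff > rival_if_stays_out;

    printf("incumbent fights after entry: %s\n", incumbent_fights ? "yes" : "no");
    printf("rival enters:                 %s\n", rival_enters ? "yes" : "no");
    return 0;
}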
In behavioral economics
In the context of behavioral economics, time inconsistency is relate
|
https://en.wikipedia.org/wiki/AMG%20LASSO
|
AMG LASSO is a media recognition service launched by the All Media Guide in 2004. The LASSO service automatically recognizes CDs, DVDs, and digital audio files in formats such as MP3, WMA, and others. The service uses CD table of contents (ToC), DVD ToC, and acoustic fingerprint based recognition to recognize media. LASSO is available in versions for PCs and embedded devices.
LASSO competes with user submitted services like freedb, Gracenote, MusicBrainz, and Discogs.
See also
List of online music databases
External links
Macrovision's LASSO Product Page (using AMG data)
Online music and lyrics databases
Acoustic fingerprinting
|
https://en.wikipedia.org/wiki/Anti-M%C3%BCllerian%20hormone
|
Anti-Müllerian hormone (AMH), also known as Müllerian-inhibiting hormone (MIH), is a glycoprotein hormone structurally related to inhibin and activin from the transforming growth factor beta superfamily, whose key roles are in growth differentiation and folliculogenesis. In humans, it is encoded by the AMH gene, on chromosome 19p13.3, while its receptor is encoded by the AMHR2 gene on chromosome 12.
AMH is activated by SOX9 in the Sertoli cells of the male fetus. Its expression inhibits the development of the female reproductive tract, or Müllerian ducts (paramesonephric ducts), in the male embryo, thereby arresting the development of fallopian tubes, uterus, and upper vagina. AMH expression is critical to sex differentiation at a specific time during fetal development, and appears to be tightly regulated by the nuclear receptor SF-1, GATA transcription factors, the sex-reversal gene DAX1, and follicle-stimulating hormone (FSH). Mutations in both the AMH gene and the type II AMH receptor have been shown to cause the persistence of Müllerian derivatives in males that are otherwise normally masculinized.
AMH is also a product of granulosa cells of the preantral and small antral follicles in women. As such, AMH is only present in the ovary until menopause. Production of AMH regulates folliculogenesis by inhibiting recruitment of follicles from the resting pool in order to select for the dominant follicle, after which the production of AMH diminishes. As a product of the granulosa cells, which envelop each egg and provide them energy, AMH can also serve as a molecular biomarker for relative size of the ovarian reserve. In humans, this is helpful because the number of cells in the follicular reserve can be used to predict timing of menopause. In bovine, AMH can be used for selection of females in multi-ovulatory embryo transfer programs by predicting the number of antral follicles developed to ovulation. AMH can also be used as a marker for ovarian dysfunction, such as in women with p
|
https://en.wikipedia.org/wiki/Insular%20cortex
|
The insular cortex (also insula and insular lobe) is a portion of the cerebral cortex folded deep within the lateral sulcus (the fissure separating the temporal lobe from the parietal and frontal lobes) within each hemisphere of the mammalian brain.
The insulae are believed to be involved in consciousness and play a role in diverse functions usually linked to emotion or the regulation of the body's homeostasis. These functions include compassion, empathy, taste, perception, motor control, self-awareness, cognitive functioning, interpersonal experience, and awareness of homeostatic emotions such as hunger, pain and fatigue. In relation to these, it is involved in psychopathology.
The insular cortex is divided into two parts: the anterior insula and the posterior insula in which more than a dozen field areas have been identified. The cortical area overlying the insula toward the lateral surface of the brain is the operculum (meaning lid). The opercula are formed from parts of the enclosing frontal, temporal, and parietal lobes.
Structure
Connections
The anterior part of the insula is subdivided by shallow sulci into three or four short gyri.
The anterior insula receives a direct projection from the basal part of the ventral medial nucleus of the thalamus and a particularly large input from the central nucleus of the amygdala. In addition, the anterior insula itself projects to the amygdala.
One study on rhesus monkeys revealed widespread reciprocal connections between the insular cortex and almost all subnuclei of the amygdaloid complex. The posterior insula projects predominantly to the dorsal aspect of the lateral and to the central amygdaloid nuclei. In contrast, the anterior insula projects to the anterior amygdaloid area as well as the medial, the cortical, the accessory basal magnocellular, the medial basal, and the lateral amygdaloid nuclei.
The posterior part of the insula is formed by a long gyrus.
The posterior insula connects reciprocally with the s
|
https://en.wikipedia.org/wiki/Maximum%20length%20sequence
|
A maximum length sequence (MLS) is a type of pseudorandom binary sequence.
They are bit sequences generated using maximal linear-feedback shift registers and are so called because they are periodic and reproduce every binary sequence (except the zero vector) that can be represented by the shift registers (i.e., for length-m registers they produce a sequence of length 2^m − 1). An MLS is also sometimes called an n-sequence or an m-sequence. MLSs are spectrally flat, with the exception of a near-zero DC term.
These sequences may be represented as coefficients of irreducible polynomials in a polynomial ring over Z/2Z.
Practical applications for MLS include measuring impulse responses (e.g., of room reverberation or arrival times from towed sources in the ocean). They are also used as a basis for deriving pseudo-random sequences in digital communication systems that employ direct-sequence spread spectrum and frequency-hopping spread spectrum transmission systems, and in the efficient design of some fMRI experiments.
Generation
MLS are generated using maximal linear-feedback shift registers. An MLS-generating system with a shift register of length 4 is shown in Fig. 1. It can be expressed using the following recursive relation:
where n is the time index and the addition is taken modulo 2. For bit values 0 = FALSE or 1 = TRUE, this is equivalent to the XOR operation.
As MLS are periodic and shift registers cycle through every possible binary value (with the exception of the zero vector), registers can be initialized to any state, with the exception of the zero vector.
Polynomial interpretation
A polynomial over GF(2) can be associated with the linear-feedback shift register. Its degree is the length of the shift register, and its coefficients are either 0 or 1, corresponding to the taps of the register that feed the XOR gate. For example, the polynomial corresponding to Figure 1 is x^4 + x^3 + 1.
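Since neither Fig. 1 nor the recursive relation is reproduced in this excerpt, the exact tap placement below is an assumption; the sketch is simply a 4-stage Fibonacci LFSR in C whose two feedback taps are chosen to match a primitive degree-4 polynomial (one common tap convention reads this register as x^4 + x^3 + 1), so it emits a maximum length sequence of period 2^4 − 1 = 15:

#include <stdio.h>

int main(void)
{
    unsigned lfsr  = 0x1;   /* any nonzero 4-bit start state */
    unsigned start = lfsr;
    int period = 0;

    do {
        unsigned out = lfsr & 1u;                    /* output bit                  */
        unsigned fb  = (lfsr ^ (lfsr >> 1)) & 1u;    /* XOR (mod-2 sum) of two taps */
        lfsr = (lfsr >> 1) | (fb << 3);              /* shift right, feed back      */
        printf("%u", out);
        period++;
    } while (lfsr != start);

    printf("\nperiod = %d\n", period);               /* prints 15 = 2^4 - 1 */
    return 0;
}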
A necessary and sufficient condition for the sequence generat
|
https://en.wikipedia.org/wiki/Homebrew%20%28video%20games%29
|
Homebrew, when applied to video games, refers to games produced by hobbyists for proprietary video game consoles which are not intended to be user-programmable. The official documentation is often only available to licensed developers, and these systems may use storage formats that make distribution difficult, such as ROM cartridges or encrypted CD-ROMs. Many consoles have hardware restrictions to prevent unauthorized development.
Development can use unofficial, community maintained toolchains or official development kits such as Net Yaroze, Linux for PlayStation 2, or Microsoft XNA. Targets for homebrew games are typically those which are no longer commercially relevant or produced, and with simpler graphics and/or computational abilities, such as the Atari 2600, Nintendo Entertainment System, Wii, Nintendo 3DS, Genesis, Dreamcast, Game Boy Advance, PlayStation, and PlayStation 2.
Development
New games for older systems are typically developed using emulators. Development for newer systems usually involves actual hardware, given the lack of accurate emulators. Efforts have been made to use actual console hardware for many older systems, though. Atari 2600 programmers may burn an EEPROM to plug into a custom cartridge board or use audio transfer via the Starpath Supercharger. Game Boy Advance developers have several ways to use GBA flash cartridges in this regard.
First generation consoles
Odyssey
In 2009, Odball became the first game for the Magnavox Odyssey since 1973. It was produced by Robert Vinciguerra who has since written several other Odyssey games. On July 11, 2011, Dodgeball was published by Chris Read.
Second generation consoles
Atari 2600
Channel F
A handful of games have been programmed for the Fairchild Channel F, the first console to use ROM cartridges. The first known release is Sean Riddle's clone of Lights Out which included instructions on how to modify the SABA#20 Chess game into a Multi-Cartridge. There is also a version of Tetris and in
|
https://en.wikipedia.org/wiki/Intrinsic%20equation
|
In geometry, an intrinsic equation of a curve is an equation that defines the curve using a relation between the curve's intrinsic properties, that is, properties that do not depend on the location and possibly the orientation of the curve. Therefore an intrinsic equation defines the shape of the curve without specifying its position relative to an arbitrarily defined coordinate system.
The intrinsic quantities used most often are the arc length s, the tangential angle θ, the curvature κ or radius of curvature, and, for 3-dimensional curves, the torsion τ. Specifically:
The natural equation is the curve given by its curvature and torsion.
The Whewell equation is obtained as a relation between arc length and tangential angle.
The Cesàro equation is obtained as a relation between arc length and curvature.
The equation of a circle (including a line), for example, is given by the equation κ(s) = 1/r, where s is the arc length, κ the curvature and r the radius of the circle.
These coordinates greatly simplify some physical problems. For elastic rods for example, the potential energy is given by E = (1/2) ∫ α κ(s)² ds, integrated along the length of the rod,
where α is the bending modulus EI. Moreover, as κ = dθ/ds, the elasticity of rods can be given a simple variational form.
References
External links
Curves
Equations
|
https://en.wikipedia.org/wiki/List%20of%20Unified%20Modeling%20Language%20tools
|
This article compares UML tools. UML tools are software applications which support some functions of the Unified Modeling Language.
General
Features
See also
List of requirements engineering tools
References
External links
Technical communication
Software comparisons
Diagramming software
Computing-related lists
|
https://en.wikipedia.org/wiki/Multistorey%20car%20park
|
A multistorey car park (British and Singapore English) or parking garage (American English), also called a multistory, parking building, parking structure, parkade (mainly Canadian), parking ramp, parking deck or indoor parking, is a building designed for car, motorcycle and bicycle parking and where there are a number of floors or levels on which parking takes place. The first known multistory facility was built in London in 1901, and the first underground parking was built in Barcelona in 1904. (See History, below.) The term multistory is almost never used in the US, since parking structures are almost all multiple levels. Parking structures may be heated if they are enclosed.
Design of parking structures can add considerable cost to new developments, with costs in the United States around $28,000 per space and $56,000 per space for underground structures (excluding the cost of land), and can be mandated by cities in new building parking requirements. Some cities such as London have abolished previously enacted minimum parking requirements. In the US, minimum parking requirements are a hallmark of municipal zoning and planning codes (states do not prescribe parking requirements, while counties and cities can).
History
The earliest known multi-story car park was opened in May 1901 by City & Suburban Electric Carriage Company at 6 Denman Street, central London. The location had space for 100 vehicles over seven floors, totaling 19,000 square feet. The same company opened a second location in 1902 for 230 vehicles. The company specialized in the sale, storage, valeting and on-demand delivery of electric vehicles that could travel about 40 miles and had a top speed of 20 miles per hour.
The earliest known parking garage in the United States was built in 1918 for the Hotel La Salle at 215 West Washington Street in the West Loop area of downtown Chicago, Illinois. It was designed by Holabird and Roche. The Hotel La Salle was demolished in 1976, but the pa
|
https://en.wikipedia.org/wiki/Peercasting
|
Peercasting is a method of multicasting streams, usually audio and/or video, to the Internet via peer-to-peer technology. It can be used for commercial, independent, and amateur multicasts. Unlike traditional IP multicast, peercasting can facilitate on-demand content delivery.
Operation
Peercasting usually works by having peers automatically relay a stream to other peers. The P2P overlay network helps peers find a relay for a specified stream to connect to. This method suffers from poor quality of service during times when relays disconnect or peers need to switch to a different relay, referred to as "churn".
Another solution used is minute swarming, wherein a live stream is broken up into minute length files that are swarmed via P2P software such as BitTorrent or Dijjer. However, this suffers from excessive overhead for the formation of a new swarm every minute.
A new technique is to stripe a live stream into multiple substreams, akin to RAID striping. Forward error correction and timing information are applied to these substreams so that the original stream can be reformed from all but at most one of the substreams (fountain codes are an efficient way to make and combine the substreams). In turn, these streams are relayed using the first method.
Another solution is to permit clients to connect to a new relay and resume streaming from where they left off by their old relay. Relays would retain a back buffer to permit clients to resume streaming from anywhere within the range of said buffer. This would essentially be an extension to the Icecast protocol.
Software used for peercasting
Free and open source software
Alluvium (peercasting)
Tribler
PULSE
Proprietary
Ace Stream
PPStream
Rawflow
Red Swoosh
Veoh
See also
Broadcatching
Comparison of streaming media systems
P2PTV
TVUnetworks
Wireless ad hoc network
References
Digital audio
File sharing
Technology neologisms
Peer-to-peer computing
|
https://en.wikipedia.org/wiki/Flue-gas%20desulfurization
|
Flue-gas desulfurization (FGD) is a set of technologies used to remove sulfur dioxide (SO2) from exhaust flue gases of fossil-fuel power plants, and from the emissions of other sulfur oxide emitting processes such as waste incineration, petroleum refineries, cement and lime kilns.
Methods
Since stringent environmental regulations limiting SO2 emissions have been enacted in many countries, SO2 is being removed from flue gases by a variety of methods. Common methods used:
Wet scrubbing using a slurry of alkaline sorbent, usually limestone or lime, or seawater to scrub gases;
Spray-dry scrubbing using similar sorbent slurries;
Wet sulfuric acid process recovering sulfur in the form of commercial quality sulfuric acid;
SNOX Flue gas desulfurization removes sulfur dioxide, nitrogen oxides and particulates from flue gases;
Dry sorbent injection systems that introduce powdered hydrated lime (or other sorbent material) into exhaust ducts to remove sulfur oxides from process emissions.
For a typical coal-fired power station, flue-gas desulfurization (FGD) may remove 90 per cent or more of the SO2 in the flue gases.
History
Methods of removing sulfur dioxide from boiler and furnace exhaust gases have been studied for over 150 years. Early ideas for flue gas desulfurization were established in England around 1850.
With the construction of large-scale power plants in England in the 1920s, the problems associated with large volumes of SO2 from a single site began to concern the public. The emissions problem did not receive much attention until 1929, when the House of Lords upheld the claim of a landowner against the Barton Electricity Works of the Manchester Corporation for damages to his land resulting from SO2 emissions. Shortly thereafter, a press campaign was launched against the erection of power plants within the confines of London. This outcry led to the imposition of controls on all such power plants.
The first major FGD unit at a utility was installed in 1931 at Battersea Power St
|
https://en.wikipedia.org/wiki/Kaprekar%27s%20routine
|
In number theory, Kaprekar's routine is an iterative algorithm named after its inventor, Indian mathematician D. R. Kaprekar. Each iteration starts with a number, sorts the digits into descending and ascending order, and calculates the difference between the two new numbers.
As an example, starting with the number 8991 in base 10:
6174, known as Kaprekar's constant, is a fixed point of this algorithm. Any four-digit number (in base 10) with at least two distinct digits will reach 6174 within seven iterations. The algorithm runs on any natural number in any given number base.
Definition and properties
The algorithm is as follows:
Choose any natural number in a given number base b. This is the first number of the sequence.
Create a new number by sorting the digits of the current number in descending order, and another number by sorting its digits in ascending order. These numbers may have leading zeros, which can be ignored. Subtract the ascending number from the descending number to produce the next number of the sequence.
Repeat step 2.
The sequence is called a Kaprekar sequence and the function that maps one number to the next is the Kaprekar mapping. Some numbers map to themselves; these are the fixed points of the Kaprekar mapping, and are called Kaprekar's constants. Zero is a Kaprekar's constant for all bases b, and so is called a trivial Kaprekar's constant. All other Kaprekar's constants are nontrivial Kaprekar's constants.
For example, in base 10, starting with 3524:
5432 − 2345 = 3087, then 8730 − 0378 = 8352, then 8532 − 2358 = 6174,
with 6174 as a Kaprekar's constant.
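A small C sketch of the routine for four-digit base-10 numbers (leading zeros kept, so every number is treated as four digits) reproduces the example above and stops at the fixed point 6174:

#include <stdio.h>

/* One step of the Kaprekar mapping for 4-digit base-10 numbers,
   keeping leading zeros so every number has exactly 4 digits. */
static int kaprekar_step(int n)
{
    int d[4];
    for (int i = 0; i < 4; i++) { d[i] = n % 10; n /= 10; }

    /* sort the four digits ascending (tiny insertion sort) */
    for (int i = 1; i < 4; i++)
        for (int j = i; j > 0 && d[j] < d[j - 1]; j--) {
            int t = d[j]; d[j] = d[j - 1]; d[j - 1] = t;
        }

    int asc = 0, desc = 0;
    for (int i = 0; i < 4; i++) {
        asc  = asc  * 10 + d[i];        /* digits in ascending order  */
        desc = desc * 10 + d[3 - i];    /* digits in descending order */
    }
    return desc - asc;
}

int main(void)
{
    int n = 3524;                       /* the example above */
    while (n != 6174 && n != 0) {
        int next = kaprekar_step(n);
        printf("%04d -> %04d\n", n, next);
        n = next;
    }
    return 0;
}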
All Kaprekar sequences will either reach one of these fixed points or will result in a repeating cycle. Either way, the end result is reached in a fairly small number of steps.
Note that the two numbers formed by sorting the digits have the same digit sum and hence the same remainder modulo b − 1. Therefore, each number in a Kaprekar sequence of base-b numbers (other than possibly the first) is a multiple of b − 1.
When leading zeroes are retained, only repdigits lead to the trivial Kaprekar's constant.
Families of Kaprekar's constants
In base 4, it can easily
|
https://en.wikipedia.org/wiki/Email%20authentication
|
Email authentication, or validation, is a collection of techniques aimed at providing verifiable information about the origin of email messages by validating the domain ownership of any message transfer agents (MTA) who participated in transferring and possibly modifying a message.
The original base of Internet email, Simple Mail Transfer Protocol (SMTP), has no such feature, so forged sender addresses in emails (a practice known as email spoofing) have been widely used in phishing, email spam, and various types of frauds. To combat this, many competing email authentication proposals have been developed, but only fairly recently have three been widely adopted – SPF, DKIM and DMARC. The results of such validation can be used in automated email filtering, or can assist recipients when selecting an appropriate action.
This article does not cover user authentication of email submission and retrieval.
Rationale
In the early 1980s, when Simple Mail Transfer Protocol (SMTP) was designed, it provided for no real verification of sending user or system. This was not a problem while email systems were run by trusted corporations and universities, but since the commercialization of the Internet in the early 1990s, spam, phishing, and other crimes have been found to increasingly involve email.
Email authentication is a necessary first step towards identifying the origin of messages, and thereby making policies and laws more enforceable.
Hinging on domain ownership is a stance that emerged in the early 2000s. It implies a coarse-grained authentication, given that domains appear on the right part of email addresses, after the at sign. Fine-grained authentication, at the user level, can be achieved by other means, such as Pretty Good Privacy and S/MIME. At present, digital identity needs to be managed by each individual.
An outstanding rationale for email authentication is the ability to automate email filtering at receiving servers. That way, spoofed messages can be rejected
|
https://en.wikipedia.org/wiki/Windows%20Neptune
|
Neptune was the codename for a version of Microsoft Windows under development in 1999. Based on Windows 2000, it was originally to replace the Windows 9x series and was scheduled to be the first home consumer-oriented version of Windows built on Windows NT code. Internally, the project's name was capitalized as NepTune.
History
Neptune largely resembled Windows 2000, but some new features were introduced. Neptune included a logon screen similar to that later used in Windows XP. A firewall new to Neptune was later integrated into Windows XP as the Windows Firewall. Neptune also experimented with a new HTML and Win32-based user interface originally intended for Windows Me, called Activity Centers, for task-centered operations.
Only one alpha build of Neptune, 5111, was released to testers under a non-disclosure agreement, and later made its way to various beta collectors' sites and virtual museums in 2000. Other builds of Neptune are known to exist due to information in beta builds of Windows Me and Windows XP. In November 2015, a build 5111.6 disk was shown in a Microsoft Channel 9 video; version 5111 was the last build of Neptune that was sent to external testers, with the .1 or .6 after the build number standing for the variant, not the compile. It is the only build of Neptune that made its way to the public. Build 5111 included Activity Centers, which could be installed by copying ACCORE.DLL from the installation disk to the hard drive and then running regsvr32 on ACCORE.DLL. The centers contained traces of Windows Me, then code-named Millennium, but were broken due to JavaScript errors and missing links and executables for the Game, Photo, and Music Centers. In response, some Windows enthusiasts have spent years restoring Activity Centers in build 5111 to something close to what Microsoft intended.
In early 2000, Microsoft merged the team working on Neptune with that developing Odyssey, the successor to Windows 2000 for business customers. The combined team worked on a new project c
|
https://en.wikipedia.org/wiki/Conformable%20matrix
|
In mathematics, a matrix is conformable if its dimensions are suitable for defining some operation (e.g. addition, multiplication, etc.).
Examples
If two matrices have the same dimensions (number of rows and number of columns), they are conformable for addition.
Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. That is, if A is an m × n matrix and B is a p × q matrix, then n needs to be equal to p for the matrix product AB to be defined. In this case, we say that A and B are conformable for multiplication (in that sequence).
Since squaring a matrix involves multiplying it by itself (AA), a matrix must be n × n (that is, it must be a square matrix) to be conformable for squaring. Thus for example only a square matrix can be idempotent.
Only a square matrix is conformable for matrix inversion. However, the Moore–Penrose pseudoinverse and other generalized inverses do not have this requirement.
Only a square matrix is conformable for matrix exponentiation.
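As a quick illustration (a sketch with names of my own choosing), the conformability checks described above reduce to simple comparisons on the dimensions:

#include <stdio.h>
#include <stdbool.h>

/* Dimensions of a matrix: rows x cols. */
struct dims { int rows, cols; };

/* A and B are conformable for addition when their dimensions match. */
static bool conformable_for_addition(struct dims a, struct dims b)
{
    return a.rows == b.rows && a.cols == b.cols;
}

/* A (m x n) and B (p x q) are conformable for the product AB when n == p. */
static bool conformable_for_multiplication(struct dims a, struct dims b)
{
    return a.cols == b.rows;
}

int main(void)
{
    struct dims a = {2, 3}, b = {3, 4};
    printf("A+B defined: %s\n", conformable_for_addition(a, b) ? "yes" : "no");
    printf("AB  defined: %s (result would be %dx%d)\n",
           conformable_for_multiplication(a, b) ? "yes" : "no", a.rows, b.cols);
    return 0;
}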
See also
Linear algebra
References
Linear algebra
Matrices
|
https://en.wikipedia.org/wiki/Chinese%20Character%20Code%20for%20Information%20Interchange
|
The Chinese Character Code for Information Interchange (CCCII) is a character set developed by the Chinese Character Analysis Group in Taiwan. It was first published in 1980, and significantly expanded in 1982 and 1987.
It is used mostly by library systems. It is one of the earliest established and most sophisticated encodings for traditional Chinese (predating the establishment of Big5 in 1984 and CNS 11643 in 1986). It is distinguished by its unique system for encoding simplified versions and other variants of its main set of hanzi characters.
A variant of an earlier version of CCCII is used by the Library of Congress as part of MARC-8, under the name East Asian Character Code (EACC, ANSI/NISO Z39.64), where it comprises part of MARC 21's JACKPHY support. However, EACC contains fewer characters than the most recent versions of CCCII. Work at Apple based on Research Libraries Group's CJK Thesaurus, which was used to maintain EACC, was one of the direct predecessors of Unicode's Unihan set.
Design
Byte ranges
CCCII is designed as a 94^n set, as defined by ISO/IEC 2022. Each Chinese character is represented by a 3-byte code in which each byte is 7-bit, between 0x21 and 0x7E inclusive. Thus, the maximum number of Chinese characters representable in CCCII is 94×94×94 = 830584. In practice the number of characters encodable by CCCII would be less than this number, because variant characters are encoded in related ISO 2022 planes under CCCII, so most of the code points would have to be reserved for variants.
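A short C sketch (the helper names are mine, not from any CCCII specification) of the byte-range rule and the resulting size of the code space:

#include <stdio.h>
#include <stdbool.h>

/* CCCII codes a character as three 7-bit bytes, each in 0x21 .. 0x7E,
   i.e. a 94 x 94 x 94 code space in the ISO/IEC 2022 94^n sense. */
static bool cccii_byte_ok(unsigned b) { return b >= 0x21 && b <= 0x7E; }

static bool cccii_code_ok(unsigned b1, unsigned b2, unsigned b3)
{
    return cccii_byte_ok(b1) && cccii_byte_ok(b2) && cccii_byte_ok(b3);
}

int main(void)
{
    printf("code points available: %d\n", 94 * 94 * 94);          /* 830584 */
    printf("0x21 0x21 0x21 valid: %d\n", cccii_code_ok(0x21, 0x21, 0x21));
    printf("0x21 0x23 0x20 valid: %d\n", cccii_code_ok(0x21, 0x23, 0x20)); /* 0x20 is out of range */
    return 0;
}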
In practice, however, bytes outside of these ranges are sometimes used. The code 0x212320 is used by some implementations as an ideographic space. A CCCII specification used by libraries in Hong Kong uses codes starting with 0x2120 for punctuation and symbols. The first byte 0x7F is used by some variants to encode codes for some otherwise unavailable Unified Repertoire and Ordering or CJK Unified Ideographs Extension A hanzi (e.g. 0x7F3449 for U+3449 or 0x7F
|
https://en.wikipedia.org/wiki/Data%20URI%20scheme
|
The data URI scheme is a uniform resource identifier (URI) scheme that provides a way to include data in-line in Web pages as if they were external resources. It is a form of file literal or here document. This technique allows normally separate elements such as images and style sheets to be fetched in a single Hypertext Transfer Protocol (HTTP) request, which may be more efficient than multiple HTTP requests, and is used by several browser extensions to package images as well as other multimedia content in a single HTML file for page saving. Data URIs are fully supported by most major browsers, and partially supported in Internet Explorer.
Syntax
The syntax of data URIs is defined in Request for Comments (RFC) 2397, published in August 1998, and follows the URI scheme syntax. A data URI consists of:
data:content/type;base64,
The scheme, data. It is followed by a colon (:).
An optional media type. The media type part may include one or more parameters, in the format attribute=value, separated by semicolons (;) . A common media type parameter is charset, specifying the character set of the media type, where the value is from the IANA list of character set names. If one is not specified, the media type of the data URI is assumed to be text/plain;charset=US-ASCII.
An optional base64 extension base64, separated from the preceding part by a semicolon. When present, this indicates that the data content of the URI is binary data, encoded in ASCII format using the Base64 scheme for binary-to-text encoding. The base64 extension is distinguished from any media type parameters by virtue of not having a =value component and by coming after any media type parameters. Since Base64 encoded data is approximately 33% larger than original data, it is recommended to use Base64 data URIs only if the server supports HTTP compression or embedded files are smaller than 1KB.
The data, separated from the preceding part by a comma (,). The data is a sequence of zero or more octets re
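As a sketch of how these pieces fit together, the following C program base64-encodes a short text payload and prints a complete data URI with a media type, a charset parameter, and the base64 extension (the encoder here is a minimal illustration, not a reference implementation):

#include <stdio.h>
#include <string.h>

/* Minimal base64 encoder (standard alphabet), enough to build a small data: URI. */
static void base64_encode(const unsigned char *in, size_t len, char *out)
{
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    size_t i, o = 0;
    for (i = 0; i + 2 < len; i += 3) {            /* full 3-byte groups */
        unsigned v = (in[i] << 16) | (in[i + 1] << 8) | in[i + 2];
        out[o++] = tbl[(v >> 18) & 63];
        out[o++] = tbl[(v >> 12) & 63];
        out[o++] = tbl[(v >>  6) & 63];
        out[o++] = tbl[v & 63];
    }
    if (len - i == 1) {                           /* 1 trailing byte */
        unsigned v = in[i] << 16;
        out[o++] = tbl[(v >> 18) & 63];
        out[o++] = tbl[(v >> 12) & 63];
        out[o++] = '='; out[o++] = '=';
    } else if (len - i == 2) {                    /* 2 trailing bytes */
        unsigned v = (in[i] << 16) | (in[i + 1] << 8);
        out[o++] = tbl[(v >> 18) & 63];
        out[o++] = tbl[(v >> 12) & 63];
        out[o++] = tbl[(v >>  6) & 63];
        out[o++] = '=';
    }
    out[o] = '\0';
}

int main(void)
{
    const char *text = "Hello, World!";
    char b64[64];
    base64_encode((const unsigned char *)text, strlen(text), b64);
    /* scheme : media type ; parameter ; base64 extension , data */
    printf("data:text/plain;charset=US-ASCII;base64,%s\n", b64);
    return 0;
}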
|
https://en.wikipedia.org/wiki/Interix
|
Interix was an optional, POSIX-conformant Unix subsystem for Windows NT operating systems. Interix was a component of Windows Services for UNIX, and a superset of the Microsoft POSIX subsystem. Like the POSIX subsystem, Interix was an environment subsystem for the NT kernel. It included numerous open source utility software programs and libraries. Interix was originally developed and sold as OpenNT until purchased by Microsoft in 1999.
Interix version 5.2 was a component of Microsoft Windows Server 2003 R2, and version 6.0 a component of Windows Vista Enterprise, Windows Vista Ultimate, and Windows Server 2008, in both cases as the Subsystem for Unix-based Applications (SUA). Version 6.1 was included in Windows 7 (Enterprise and Ultimate editions) but disabled by default, and in Windows Server 2008 R2 (all editions).
It was available as a deprecated separate download for Windows 8 and Windows Server 2012, and is not available at all on Windows 10.
Details
The complete installation of Interix included (at version 3.5):
Over 350 Unix utilities such as vi, ksh, csh, ls, cat, awk, grep, kill, etc.
A complete set of manual pages for utilities and APIs
GCC 3.3 compiler, includes and libraries
A cc/c89-like wrapper for Microsoft Visual Studio command-line C/C++ compiler
GNU Debugger
X11 client applications and libraries (no X server included, though third party servers were available)
Has Unix "root" capabilities (i.e. setuid files)
Has pthreads, shared libraries, DSOs, job control, signals, sockets, shared memory
The development environment included support for C, C++ and Fortran. Threading was supported using the Pthreads model.
Additional languages could be obtained (Python, Ruby, Tcl, etc.). Unix-based software packaging and build tools were available for installing or creating pre-built software packages.
Starting with release 5.2 (Server 2003/R2) the following capabilities were added:
"Mixed mode" for linking Unix programs with Windows DLLs
64-bit CPU support (in addition to 32-bit)
|
https://en.wikipedia.org/wiki/Unit%20in%20the%20last%20place
|
In computer science and numerical analysis, unit in the last place or unit of least precision (ulp) is the spacing between two consecutive floating-point numbers, i.e., the value the least significant digit (rightmost digit) represents if it is 1. It is used as a measure of accuracy in numeric calculations.
Definition
One definition is: in radix b with precision p, if b^e ≤ |x| < b^(e+1), then ulp(x) = b^(e−p+1).
Another definition, suggested by John Harrison, is slightly different: ulp(x) is the distance between the two closest straddling floating-point numbers a and b (i.e., those with a ≤ x ≤ b and a ≠ b), assuming that the exponent range is not upper-bounded. These definitions differ only at signed powers of the radix.
The IEEE 754 specification—followed by all modern floating-point hardware—requires that the result of an elementary arithmetic operation (addition, subtraction, multiplication, division, and square root since 1985, and FMA since 2008) be correctly rounded, which implies that in rounding to nearest, the rounded result is within 0.5 ulp of the mathematically exact result, using John Harrison's definition; conversely, this property implies that the distance between the rounded result and the mathematically exact result is minimized (but for the halfway cases, it is satisfied by two consecutive floating-point numbers). Reputable numeric libraries compute the basic transcendental functions to between 0.5 and about 1 ulp. Only a few libraries compute them within 0.5 ulp, this problem being complex due to the Table-maker's dilemma.
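For a concrete feel, here is a small C sketch (using the standard nextafter function; link with -lm) that reports the gap to the neighbouring doubles around a value, taking the smaller of the two gaps so that it matches the Harrison-style definition at powers of the radix:

#include <stdio.h>
#include <math.h>

/* Distance between the two closest floating-point numbers straddling x
   (in the style of Harrison's definition described above): for most x the
   gaps on both sides are equal; at powers of two the smaller one is taken.
   A sketch for finite positive doubles only. */
static double ulp_harrison(double x)
{
    double below = x - nextafter(x, -INFINITY);
    double above = nextafter(x, INFINITY) - x;
    return below < above ? below : above;
}

int main(void)
{
    printf("ulp(1.0)      = %g\n", ulp_harrison(1.0));              /* 2^-53 */
    printf("ulp(2.0)      = %g\n", ulp_harrison(2.0));              /* 2^-52 */
    printf("ulp(1.5)      = %g\n", ulp_harrison(1.5));              /* 2^-52 */
    printf("gap above 1.0 = %g\n", nextafter(1.0, INFINITY) - 1.0); /* 2^-52 */
    return 0;
}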
Examples
Example 1
Let x be a positive floating-point number and assume that the active rounding mode is round to nearest, ties to even, denoted RN. If ulp(x) ≤ 1, then RN(x + 1) > x. Otherwise, RN(x + 1) = x or RN(x + 1) = x + ulp(x), depending on the value of the least significant digit and the exponent of x. This is demonstrated in the following Haskell code typed at an interactive prompt:
> until (\x -> x == x+1) (+1) 0 :: Float
1.6777216e7
> it-1
1.6777215e7
> it+1
1.6777216e7
Here we start with 0 in single precision and repeatedly add
|
https://en.wikipedia.org/wiki/Leonard%20Bosack
|
Leonard X. Bosack (born 1952) is a co-founder of Cisco Systems, an American-based multinational corporation that designs and sells consumer electronics, networking and communications technology, and services. His net worth is approximately $200 million. He was awarded the Computer Entrepreneur Award in 2009 for co-founding Cisco Systems and pioneering and advancing the commercialization of routing technology and the profound changes this technology enabled in the computer industry.
He is largely responsible for pioneering the widespread commercialization of local area network (LAN) technology to connect geographically disparate computers over a multiprotocol router system, which was an unheard-of technology at the time. In 1990, Cisco's management fired Cisco co-founder Sandy Lerner and Bosack resigned. Bosack went on to become the CEO of XKL LLC, a privately funded engineering company which explores and develops optical networks for data communications.
Background
Born in Pennsylvania in 1952 to a Polish Catholic family, Bosack graduated from La Salle College High School in 1969. In 1973, Bosack graduated from the University of Pennsylvania School of Engineering and Applied Science, and joined the Digital Equipment Corporation (DEC) as a hardware engineer. In 1979, he was accepted into Stanford University, and began to study computer science. During his time at Stanford, he worked as a support engineer on a 1981 project to connect all of Stanford's mainframes, minis, LISP machines, and Altos.
His contribution was to work on the network router that allowed the computer network under his management to share data from the Computer Science Lab with the Business School's network. He met his wife Sandra Lerner at Stanford, where she was the manager of the Business School lab, and the couple married in 1980. Together in 1984, they started Cisco in Menlo Park.
Cisco
In 1984, Bosack co-founded Cisco Systems with his then partner (and now ex-wife) Sandy Lerner. Their
|
https://en.wikipedia.org/wiki/It%C3%B4%20calculus
|
Itô calculus, named after Kiyosi Itô, extends the methods of calculus to stochastic processes such as Brownian motion (see Wiener process). It has important applications in mathematical finance and stochastic differential equations.
The central concept is the Itô stochastic integral, a stochastic generalization of the Riemann–Stieltjes integral in analysis. The integrands and the integrators are now stochastic processes, and the integral takes the form Y_t = ∫_0^t H_s dX_s,
where H is a locally square-integrable process adapted to the filtration generated by X , which is a Brownian motion or, more generally, a semimartingale. The result of the integration is then another stochastic process. Concretely, the integral from 0 to any particular t is a random variable, defined as a limit of a certain sequence of random variables. The paths of Brownian motion fail to satisfy the requirements to be able to apply the standard techniques of calculus. So with the integrand a stochastic process, the Itô stochastic integral amounts to an integral with respect to a function which is not differentiable at any point and has infinite variation over every time interval.
The main insight is that the integral can be defined as long as the integrand H is adapted, which loosely speaking means that its value at time t can only depend on information available up until this time. Roughly speaking, one chooses a sequence of partitions of the interval from 0 to t and constructs Riemann sums. Every time we are computing a Riemann sum, we are using a particular instantiation of the integrator. It is crucial which point in each of the small intervals is used to compute the value of the function. The limit then is taken in probability as the mesh of the partition is going to zero. Numerous technical details have to be taken care of to show that this limit exists and is independent of the particular sequence of partitions. Typically, the left end of the interval is used.
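The left-endpoint construction can be checked numerically; the C sketch below simulates one Brownian path, forms the left-endpoint Riemann sum for ∫ W dW over [0, 1], and compares it with the value (W_1² − 1)/2 that Itô's formula predicts (the random-number generator here is a crude illustration, not a recommendation):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* A standard normal sample via the Box-Muller transform (illustrative only). */
static double gauss(void)
{
    const double PI = 3.14159265358979323846;
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

int main(void)
{
    const int    n  = 1000000;      /* number of partition intervals */
    const double T  = 1.0;
    const double dt = T / n;

    double W   = 0.0;               /* Brownian path, W_0 = 0        */
    double ito = 0.0;               /* left-endpoint Riemann sum     */
    for (int i = 0; i < n; i++) {
        double dW = sqrt(dt) * gauss();
        ito += W * dW;              /* integrand taken at the LEFT endpoint */
        W   += dW;
    }

    printf("left-endpoint sum of W dW : %f\n", ito);
    printf("(W_T^2 - T) / 2           : %f\n", (W * W - T) / 2.0);
    return 0;
}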
Important results of Itô calculus include the integration by parts formula a
|
https://en.wikipedia.org/wiki/Ramsey%20problem
|
The Ramsey problem, or Ramsey pricing, or Ramsey–Boiteux pricing, is a second-best policy problem concerning what prices a public monopoly should charge for the various products it sells in order to maximize social welfare (the sum of producer and consumer surplus) while earning enough revenue to cover its fixed costs.
Under Ramsey pricing, the price markup over marginal cost is inverse to the price elasticity of demand: the more elastic the product's demand, the smaller the markup. Frank P. Ramsey found this result in 1927 in the context of optimal taxation: the more elastic the demand, the smaller the optimal tax. The rule was later applied by Marcel Boiteux (1956) to natural monopolies (industries with decreasing average cost). A natural monopoly earns negative profits if it sets price equal to marginal cost, so it must set prices for some or all of the products it sells above marginal cost if it is to be viable without government subsidies. Ramsey pricing says to mark up most the goods with the least elastic (that is, least price-sensitive) demand.
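Written out (a standard textbook form of the Ramsey–Boiteux condition, not quoted from this excerpt), the rule sets the relative markup, or Lerner index, of each good i inversely proportional to its own-price elasticity of demand:

(p_i − c_i) / p_i = k / ε_i

where p_i is the price, c_i the marginal cost, and ε_i the own-price elasticity of demand for good i, and k is a constant between 0 and 1 determined by how much revenue beyond marginal-cost pricing the firm must raise to cover its fixed costs.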
Description
In a first-best world, without the need to earn enough revenue to cover fixed costs, the optimal solution would be to set the price for each product equal to its marginal cost. If the average cost curve is declining where the demand curve crosses it however, as happens when the fixed cost is large, this would result in a price less than average cost, and the firm could not survive without subsidy. The Ramsey problem is to decide exactly how much to raise each product's price above its marginal cost so the firm's revenue equals its total cost. If there is just one product, the problem is simple: raise the price to where it equals average cost. If there are two products, there is leeway to raise one product's price more and the other's less, so long as the firm can break even overall.
The principle is applicable to pricing of goods that the government is the sole supplier of (public utilities) or regulati
|
https://en.wikipedia.org/wiki/Blitzen%20%28computer%29
|
The Blitzen was a miniaturized SIMD (single instruction, multiple data) computer system designed for NASA in the late 1980s by a team of researchers at Duke University, North Carolina State University and the Microelectronics Center of North Carolina. The Blitzen was composed of a control unit and a set of simple processors connected in a grid topology. The machine influenced, to some extent, the design of the MasPar MP-1 computer.
Applications of the Blitzen machine include high-speed image processing, where each processor operates on a pixel of the input image and communicates with its grid neighbours to apply image processing filters on the image.
References
Classes of computers
SIMD computing
|
https://en.wikipedia.org/wiki/List%20of%20enzymes
|
Enzymes are listed here by their classification in the International Union of Biochemistry and Molecular Biology's Enzyme Commission (EC) numbering system:
:Category:Oxidoreductases (EC 1) (Oxidoreductase)
Dehydrogenase
Luciferase
DMSO reductase
:Category:EC 1.1 (act on the CH-OH group of donors)
:Category:EC 1.1.1 (with NAD+ or NADP+ as acceptor)
Alcohol dehydrogenase (NAD)
Alcohol dehydrogenase (NADP)
Homoserine dehydrogenase
Aminopropanol oxidoreductase
Diacetyl reductase
Glycerol dehydrogenase
Propanediol-phosphate dehydrogenase
Glycerol-3-phosphate dehydrogenase (NAD+)
D-xylulose reductase
L-xylulose reductase
Lactate dehydrogenase
Malate dehydrogenase
Isocitrate dehydrogenase
HMG-CoA reductase
:Category:EC 1.1.2 (with a cytochrome as acceptor)
:Category:EC 1.1.3 (with oxygen as acceptor)
Glucose oxidase
L-gulonolactone oxidase
Thiamine oxidase
Xanthine oxidase
:Category:EC 1.1.4 (with a disulfide as acceptor)
:Category:EC 1.1.5 (with a quinone or similar compound as acceptor)
:Category:EC 1.1.99 (with other acceptors)
:Category:EC 1.2 (act on the aldehyde or oxo group of donors)
:Category:EC 1.2.1 (with NAD+ or NADP+ as acceptor)
Acetaldehyde dehydrogenase
Glyceraldehyde 3-phosphate dehydrogenase
Pyruvate dehydrogenase
:Category:EC 1.2.4
Oxoglutarate dehydrogenase
:Category:EC 1.3 (act on the CH-CH group of donors)
:Category:EC 1.3.1 (with NAD+ or NADP+ as acceptor)
Biliverdin reductase
:Category:EC 1.3.2 (with a cytochrome as acceptor)
:Category:EC 1.3.3 (with oxygen as acceptor)
Protoporphyrinogen oxidase
:Category:EC 1.3.5 (with a quinone or similar compound as acceptor)
:Category:EC 1.3.7 (with an iron–sulfur protein as acceptor)
:Category:EC 1.3.99 (with other acceptors)
:Category:EC 1.4 (act on the CH-NH2 group of donors)
:Category:EC 1.4.3
Monoamine oxidase
:Category:EC 1.5 (act on CH-NH group of donors)
:Category:EC 1.5.1 (with NAD+ or NADP+ as acceptor)
Dihydrofolate reductase
Methylenetetrahydrofolate reductase
:C
|
https://en.wikipedia.org/wiki/WPLG
|
WPLG (channel 10) is a television station in Miami, Florida, United States, affiliated with ABC. The station is owned by Berkshire Hathaway as its sole broadcast property. WPLG's studios are located on West Hallandale Beach Boulevard in Pembroke Park, and its transmitter is located in Miami Gardens, Florida.
WPLG signed on the air as WLBW-TV on November 20, 1961, as the replacement for WPST-TV, which was forced off the air by the Federal Communications Commission (FCC) following the revelation of bribery undertaken with one of the commissioners to secure that station's license. L. B. Wilson, Inc., was found to be the only bidder for the original channel 10 license not to have engaged in coercive action, and was thus awarded a temporary permit to begin telecasting. While WPST-TV's license was revoked in July 1960, WLBW-TV had to wait for nearly a year to finally sign on using entirely different facilities, but hired multiple former WPST-TV staffers and picked up the ABC affiliation WPST-TV held. Sold to Post-Newsweek Stations in 1969, WLBW-TV was renamed WPLG the following year in honor of Philip Leslie Graham. Led by on-air talent including Ann Bishop, Dwight Lauderdale, Bryan Norcross, Michael Putney and Calvin Hughes, WPLG's news department emerged in the 1970s as a leader in local television ratings and has maintained that position ever since. WPLG has been owned by Berkshire Hathaway since 2014, when Post-Newsweek (renamed Graham Media Group) divested it, but continues to maintain infrastructure and logistical ties to its previous ownership.
Prior history of channel 10
The first station to broadcast on channel 10 in the Miami market was WPST-TV, owned by Public Service Television, the broadcasting subsidiary of National Airlines (NAL). WPST-TV was the second ABC affiliate in the Miami market, having assumed it from UHF station WITV. WPST-TV first signed on the air on August 2, 1957, from a transmitter tower and facilities purchased from Storer Broadcasting wh
|
https://en.wikipedia.org/wiki/Projection-slice%20theorem
|
In mathematics, the projection-slice theorem, central slice theorem or Fourier slice theorem in two dimensions states that the results of the following two calculations are equal:
Take a two-dimensional function f(r), project (e.g. using the Radon transform) it onto a (one-dimensional) line, and do a Fourier transform of that projection.
Take that same function, but do a two-dimensional Fourier transform first, and then slice it through its origin along a line parallel to the projection line.
In operator terms, if
F1 and F2 are the 1- and 2-dimensional Fourier transform operators mentioned above,
P1 is the projection operator (which projects a 2-D function onto a 1-D line),
S1 is a slice operator (which extracts a 1-D central slice from a function),
then F1 P1 = S1 F2.
This idea can be extended to higher dimensions.
This theorem is used, for example, in the analysis of medical
CT scans where a "projection" is an x-ray
image of an internal organ. The Fourier transforms of these images are
seen to be slices through the Fourier transform of the 3-dimensional
density of the internal organ, and these slices can be interpolated to build
up a complete Fourier transform of that density. The inverse Fourier transform
is then used to recover the 3-dimensional density of the object. This technique was first derived by Ronald N. Bracewell in 1956 for a radio-astronomy problem.
The projection-slice theorem in N dimensions
In N dimensions, the projection-slice theorem states that the
Fourier transform of the projection of an N-dimensional function
f(r) onto an m-dimensional linear submanifold
is equal to an m-dimensional slice of the N-dimensional Fourier transform of that
function consisting of an m-dimensional linear submanifold through the origin in the Fourier space which is parallel to the projection submanifold. In operator terms: F_m P_m = S_m F_N, where F_m and F_N are the m- and N-dimensional Fourier transform operators, P_m projects an N-dimensional function onto an m-dimensional linear submanifold, and S_m extracts the corresponding m-dimensional central slice.
The generalized Fourier-slice theorem
In addition to generalizing to N dimensions, the projection-slice theorem can be further generalized with an ar
|
https://en.wikipedia.org/wiki/Abel%20transform
|
In mathematics, the Abel transform, named for Niels Henrik Abel, is an integral transform often used in the analysis of spherically symmetric or axially symmetric functions. The Abel transform of a function f(r) is given by
F(y) = 2 ∫_y^∞ f(r) r dr / √(r² − y²).
Assuming that f(r) drops to zero more quickly than 1/r, the inverse Abel transform is given by
f(r) = −(1/π) ∫_r^∞ (dF/dy) dy / √(y² − r²).
In image analysis, the forward Abel transform is used to project an optically thin, axially symmetric emission function onto a plane, and the inverse Abel transform is used to calculate the emission function given a projection (i.e. a scan or a photograph) of that emission function.
In absorption spectroscopy of cylindrical flames or plumes, the forward Abel transform is the integrated absorbance along a ray with closest distance y from the center of the flame, while the inverse Abel transform gives the local absorption coefficient at a distance r from the center. The Abel transform is limited to applications with axially symmetric geometries. For more general asymmetric cases, more general reconstruction algorithms such as the algebraic reconstruction technique (ART), maximum likelihood expectation maximization (MLEM), and filtered back-projection (FBP) algorithms should be employed.
In recent years, the inverse Abel transform (and its variants) has become the cornerstone of data analysis in photofragment-ion imaging and photoelectron imaging. Among recent most notable extensions of inverse Abel transform are the "onion peeling" and "basis set expansion" (BASEX) methods of photoelectron and photoion image analysis.
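The forward transform is easy to check numerically. In the C sketch below (my own test setup, not from the article) the substitution r = sqrt(y² + t²) removes the square-root singularity, and the result is compared against the closed form for a Gaussian, whose Abel transform is sqrt(pi)·exp(−y²):

#include <stdio.h>
#include <math.h>

/* Test function: a Gaussian, whose forward Abel transform is known exactly:
   f(r) = exp(-r^2)   ==>   F(y) = sqrt(pi) * exp(-y^2).                     */
static double f(double r) { return exp(-r * r); }

/* Forward Abel transform  F(y) = 2 * integral_y^inf f(r) r dr / sqrt(r^2 - y^2).
   With r = sqrt(y^2 + t^2) this becomes 2 * integral_0^inf f(sqrt(y^2 + t^2)) dt,
   which has no singularity and is evaluated here by the midpoint rule.         */
static double abel_forward(double y)
{
    const double tmax = 10.0;     /* integration cutoff; f decays fast enough */
    const int    n    = 100000;
    const double dt   = tmax / n;
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double t = (i + 0.5) * dt;
        sum += f(sqrt(y * y + t * t)) * dt;
    }
    return 2.0 * sum;
}

int main(void)
{
    const double PI = 3.14159265358979323846;
    for (double y = 0.0; y <= 2.0; y += 0.5)
        printf("y = %.1f   numeric = %.6f   exact = %.6f\n",
               y, abel_forward(y), sqrt(PI) * exp(-y * y));
    return 0;
}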
Geometrical interpretation
In two dimensions, the Abel transform F(y) can be interpreted as the projection of a circularly symmetric function f(r) along a set of parallel lines of sight at a distance y from the origin. Referring to the figure on the right, the observer (I) will see
where f(r) is the circularly symmetric function represented by the gray color in the figure. It is assumed that the observer is actually at x = ∞,
|
https://en.wikipedia.org/wiki/Datagram%20Transport%20Layer%20Security
|
Datagram Transport Layer Security (DTLS) is a communications protocol providing security to datagram-based applications by allowing them to communicate in a way designed to prevent eavesdropping, tampering, or message forgery. The DTLS protocol is based on the stream-oriented Transport Layer Security (TLS) protocol and is intended to provide similar security guarantees. DTLS preserves the datagram semantics of the underlying transport: the application does not suffer from the delays associated with stream protocols, but because it uses UDP or SCTP it has to deal with packet reordering, datagram loss, and data larger than the size of a datagram network packet. Because DTLS uses UDP or SCTP rather than TCP, it avoids the "TCP meltdown problem" when used to create a VPN tunnel.
Definition
The following documents define DTLS:
for use with User Datagram Protocol (UDP),
for use with Datagram Congestion Control Protocol (DCCP),
for use with Control And Provisioning of Wireless Access Points (CAPWAP),
for use with Stream Control Transmission Protocol (SCTP) encapsulation,
for use with Secure Real-time Transport Protocol (SRTP) subsequently called DTLS-SRTP in a draft with Secure Real-Time Transport Control Protocol (SRTCP).
DTLS 1.0 is based on TLS 1.1, DTLS 1.2 is based on TLS 1.2, and DTLS 1.3 is based on TLS 1.3. There is no DTLS 1.1 because this version-number was skipped in order to harmonize version numbers with TLS. Like previous DTLS versions, DTLS 1.3 is intended to provide "equivalent security guarantees [to TLS 1.3] with the exception of order protection/non-replayability".
Implementations
Libraries
Applications
Cisco AnyConnect VPN Client uses TLS and pioneered the DTLS-based VPN.
OpenConnect is an open source AnyConnect-compatible client and ocserv server that supports (D)TLS.
Cisco InterCloud Fabric uses DTLS to form a tunnel between private and public/provider compute environments
ZScaler tunnel 2.0 uses DTLS
|
https://en.wikipedia.org/wiki/Channel%20Definition%20Format
|
Channel Definition Format (CDF) was an XML file format formerly used in conjunction with Microsoft's Active Channel, Active Desktop and Smart Offline Favorites technologies. The format was designed to "offer frequently updated collections of information, or channels, from any web server for automatic delivery to compatible receiver programs." Active Channel allowed users to subscribe to channels and have scheduled updates delivered to their desktop. Smart Offline Favorites, like channels, enabled users to view webpages from the cache.
History
Submitted to the World Wide Web Consortium (W3C) in March 1997 for consideration as a web standard, CDF marked Microsoft's attempt to capitalize on the push technology trend led by PointCast. The most notable implementation of CDF was Microsoft's Active Desktop, an optional feature introduced with the Internet Explorer 4.0 browser in September 1997. Smart Offline Favorites was introduced in Internet Explorer 5.0.
CDF prefigured aspects of the RSS file format introduced by Netscape in March 1999, and of web syndication at large. Unlike RSS, CDF was never widely adopted and its use remained very limited. As a consequence, Microsoft removed CDF support from Internet Explorer 7 in 2006.
Example
A generic CDF file:
<?xml version="1.0" encoding="UTF-8"?>
<CHANNEL HREF="http://domain/folder/pageOne.extension"
BASE="http://domain/folder/"
LASTMOD="1998-11-05T22:12"
PRECACHE="YES"
LEVEL="0">
<TITLE>Title of Channel</TITLE>
<ABSTRACT>Synopsis of channel's contents.</ABSTRACT>
<SCHEDULE>
<INTERVALTIME DAY="14"/>
</SCHEDULE>
<LOGO HREF="wideChannelLogo.gif" STYLE="IMAGE-WIDE"/>
<LOGO HREF="imageChannelLogo.gif" STYLE="IMAGE"/>
<LOGO HREF="iconChannelLogo.gif" STYLE="ICON"/>
<ITEM HREF="pageTwo.extension"
LASTMOD="1998-11-05T22:12"
PRECACHE="YES"
LEVEL="1">
<TITLE>Page Two's Title</TITLE>
<ABSTRACT>Synopsis of Page Two's contents.</ABSTRACT>
<LOGO H
|
https://en.wikipedia.org/wiki/Dyadic%20transformation
|
The dyadic transformation (also known as the dyadic map, bit shift map, 2x mod 1 map, Bernoulli map, doubling map or sawtooth map) is the mapping (i.e., recurrence relation)
T : [0, 1) → [0, 1)^∞, x ↦ (x0, x1, x2, ...)
(where [0, 1)^∞ is the set of sequences from [0, 1)) produced by the rule
x0 = x and x_{n+1} = (2 x_n) mod 1 for all n ≥ 0.
Equivalently, the dyadic transformation can also be defined as the iterated function map of the piecewise linear function
T(x) = 2x for 0 ≤ x < 1/2, and T(x) = 2x − 1 for 1/2 ≤ x < 1.
The name bit shift map arises because, if the value of an iterate is written in binary notation, the next iterate is obtained by shifting the binary point one bit to the right, and if the bit to the left of the new binary point is a "one", replacing it with a zero.
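A small sketch of this bit shift interpretation (using exact rational arithmetic; the helper names are illustrative only): applying the map is the same as shifting the binary point one place to the right and discarding the integer part.
from fractions import Fraction

def dyadic(x):
    # One step of the doubling map: x -> (2x) mod 1
    return (2 * x) % 1

def binary_expansion(x, bits=8):
    # First `bits` binary digits of x in [0, 1)
    digits = []
    for _ in range(bits):
        x *= 2
        digits.append('1' if x >= 1 else '0')
        x %= 1
    return '0.' + ''.join(digits)

x = Fraction(11, 16)                      # 0.1011 in binary
for _ in range(4):
    print(binary_expansion(x), '->', binary_expansion(dyadic(x)))
    x = dyadic(x)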
The dyadic transformation provides an example of how a simple 1-dimensional map can give rise to chaos. This map readily generalizes to several others. An important one is the beta transformation, defined as T(x) = (βx) mod 1 for a real number β > 1. This map has been extensively studied by many authors. It was introduced by Alfréd Rényi in 1957, and an invariant measure for it was given by Alexander Gelfond in 1959 and again independently by Bill Parry in 1960.
Relation to the Bernoulli process
The map can be obtained as a homomorphism on the Bernoulli process. Let Ω = {H, T}^∞ be the set of all semi-infinite strings of the letters H and T. These can be understood to be the flips of a coin, coming up heads or tails. Equivalently, one can write Ω = {0, 1}^∞, the space of all (semi-)infinite strings of binary bits. The word "infinite" is qualified with "semi-", as one can also define a different space consisting of all doubly-infinite (double-ended) strings; this will lead to the Baker's map. The qualification "semi-" is dropped below.
This space has a natural shift operation, given by
T(b0, b1, b2, ...) = (b1, b2, b3, ...),
where (b0, b1, b2, ...) is an infinite string of binary digits. Given such a string, write
x = b0/2 + b1/4 + b2/8 + ... = sum over n ≥ 0 of b_n / 2^(n+1).
The resulting x is a real number in the unit interval [0, 1]. The shift induces a homomorphism, also called T, on the unit interval. Since T(b0, b1, b2, ...) = (b1, b2, b3, ...), one can easily see that T(x) = (2x) mod 1. For the doubly-infinite sequence of bits (..., b−1, b0, b1, ...), the induced homomorphism is the Baker's map.
|
https://en.wikipedia.org/wiki/Virtual%20DOS%20machine
|
Virtual DOS machines (VDM) refer to a technology that allows running 16-bit/32-bit DOS and 16-bit Windows programs when there is already another operating system running and controlling the hardware.
Overview
Virtual DOS machines can operate either exclusively through typical software emulation methods (e.g. dynamic recompilation) or can rely on the virtual 8086 mode of the Intel 80386 processor, which allows real mode 8086 software to run in a controlled environment by catching all operations which involve accessing protected hardware and forwarding them to the normal operating system (as exceptions). The operating system can then perform an emulation and resume the execution of the DOS software.
VDMs generally also implement support for running 16- and 32-bit protected mode software (DOS extenders), which has to conform to the DOS Protected Mode Interface (DPMI).
When a DOS program running inside a VDM needs to access a peripheral, Windows will either allow this directly (rarely), or will present the DOS program with a virtual device driver (VDD) which emulates the hardware using operating system functions. A VDM will systematically have emulations for the Intel 8259A interrupt controllers, the 8254 timer chips, the 8237 DMA controller, etc.
Concurrent DOS 8086 emulation mode
In January 1985 Digital Research together with Intel previewed Concurrent DOS 286 1.0, a version of Concurrent DOS capable of running real mode DOS programs in the 80286's protected mode. The method devised on B-1 stepping processor chips, however, in May 1985 stopped working on the C-1 and subsequent processor steppings shortly before Digital Research was about to release the product. Although with the E-1 stepping Intel started to address the issues in August 1985, so that Digital Research's "8086 emulation mode" worked again utilizing the undocumented LOADALL processor instruction, it was too slow to be practical. Microcode changes for the E-2 stepping improved the speed again. This ea
|
https://en.wikipedia.org/wiki/Double%20fault
|
On the x86 architecture, a double fault exception occurs if the processor encounters a problem while trying to service a pending interrupt or exception. An example situation when a double fault would occur is when an interrupt is triggered but the segment in which the interrupt handler resides is invalid. If the processor encounters a problem when calling the double fault handler, a triple fault is generated and the processor shuts down.
As double faults can only happen due to kernel bugs, they are rarely caused by user space programs in a modern protected mode operating system, unless the program somehow gains kernel access (some viruses and also some low-level DOS programs). Other processors like PowerPC or SPARC generally save state to predefined and reserved machine registers. A double fault will then be a situation where another exception happens while the processor is still using the contents of these registers to process the exception. SPARC processors have four levels of such registers, i.e. they have a 4-window register system.
See also
Triple fault
Further reading
Computer errors
Central processing unit
|
https://en.wikipedia.org/wiki/Receptor-mediated%20endocytosis
|
Receptor-mediated endocytosis (RME), also called clathrin-mediated endocytosis, is a process by which cells absorb metabolites, hormones, proteins – and in some cases viruses – by the inward budding of the plasma membrane (invagination). This process forms vesicles containing the absorbed substances and is strictly mediated by receptors on the surface of the cell. Only the receptor-specific substances can enter the cell through this process.
Process
Although receptors and their ligands can be brought into the cell through a few mechanisms (e.g. caveolin and lipid raft), clathrin-mediated endocytosis remains the best studied. Clathrin-mediated endocytosis of many receptor types begins with the ligands binding to receptors on the cell plasma membrane. The ligand and receptor will then recruit adaptor proteins and clathrin triskelions to the plasma membrane around where invagination will take place. Invagination of the plasma membrane then occurs, forming a clathrin-coated pit. Other receptors can nucleate a clathrin-coated pit allowing formation around the receptor. A mature pit will be cleaved from the plasma membrane through the use of membrane-binding and fission proteins such as dynamin (as well as other BAR domain proteins), forming a clathrin-coated vesicle that then sheds its clathrin coat and typically fuses with a sorting endosome. Once fused, the endocytosed cargo (receptor and/or ligand) can then be sorted to lysosomal, recycling, or other trafficking pathways.
Function
The function of receptor-mediated endocytosis is diverse. It is widely used for the specific uptake of certain substances required by the cell (examples include LDL via the LDL receptor or iron via transferrin). The role of receptor-mediated endocytosis in the downregulation of transmembrane signal transduction is well recognized, but it can also promote sustained signal transduction. The activated receptor becomes internalised and is transported to late endosomes and lysosomes for degradation. H
|
https://en.wikipedia.org/wiki/Bacon%27s%20cipher
|
Bacon's cipher or the Baconian cipher is a method of steganographic message encoding devised by Francis Bacon in 1605. A message is concealed in the presentation of text, rather than its content. Bacon cipher is categorized as both a substitution cipher (in plain code) and a concealment cipher (using the two typefaces).
Cipher details
To encode a message, each letter of the plaintext is replaced by a group of five of the letters 'A' or 'B'. This replacement is a 5-bit binary encoding and is done according to the alphabet of the Baconian cipher (from the Latin Alphabet), shown below:
A second version of Bacon's cipher uses a unique code for each letter. In other words, I, J, U and V each have their own pattern in this variant:
The writer must make use of two different typefaces for this cipher. After preparing a false message with the same number of letters as all of the As and Bs in the real, secret message, two typefaces are chosen, one to represent As and the other Bs. Then each letter of the false message must be presented in the appropriate typeface, according to whether it stands for an A or a B.
To decode the message, the reverse method is applied. Each "typeface 1" letter in the false message is replaced with an A and each "typeface 2" letter is replaced with a B. The Baconian alphabet is then used to recover the original message.
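A minimal sketch of the unique-code (26-letter) variant described above, with 'A' standing for a 0 bit and 'B' for a 1 bit; the function names are illustrative only.
def bacon_encode(plaintext: str) -> str:
    groups = []
    for ch in plaintext.upper():
        if ch.isalpha():
            idx = ord(ch) - ord('A')                # alphabetical index 0..25
            bits = format(idx, '05b')               # 5-bit binary encoding
            groups.append(bits.replace('0', 'A').replace('1', 'B'))
    return ' '.join(groups)

def bacon_decode(ciphertext: str) -> str:
    letters = []
    for group in ciphertext.split():
        idx = int(group.replace('A', '0').replace('B', '1'), 2)
        letters.append(chr(ord('A') + idx))
    return ''.join(letters)

print(bacon_encode("BACON"))                          # AAAAB AAAAA AAABA ABBBA ABBAB
print(bacon_decode("AAAAB AAAAA AAABA ABBBA ABBAB"))  # BACON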
Any method of writing the message that allows two distinct representations for each character can be used for the Bacon Cipher.
Bacon himself prepared a Biliteral Alphabet for handwritten capital and small letters with each having two alternative forms, one to be used as A and the other as B. This was published as an illustrated plate in his De Augmentis Scientiarum (The Advancement of Learning).
Because any message of the right length can be used to carry the encoding, the secret message is effectively hidden in plain sight. The false message can be on any topic and thus can distract a person seeking to find the real messag
|
https://en.wikipedia.org/wiki/Sex-chromosome%20dosage%20compensation
|
Dosage compensation is the process by which organisms equalize the expression of genes between members of different biological sexes. Across species, different sexes are often characterized by different types and numbers of sex chromosomes. In order to neutralize the large difference in gene dosage produced by differing numbers of sex chromosomes among the sexes, various evolutionary branches have acquired various methods to equalize gene expression among the sexes. Because sex chromosomes contain different numbers of genes, different species of organisms have developed different mechanisms to cope with this inequality. Replicating the actual gene is impossible; thus organisms instead equalize the expression from each gene. For example, in humans, female (XX) cells randomly silence the transcription of one X chromosome, and transcribe all information from the other, expressed X chromosome. Thus, human females have the same number of expressed X-linked genes per cell as do human males (XY), both sexes having essentially one X chromosome per cell, from which to transcribe and express genes.
Different lineages have evolved different mechanisms to cope with the differences in gene copy numbers between the sexes that are observed on sex chromosomes. Some lineages have evolved dosage compensation, an epigenetic mechanism which restores expression of X or Z specific genes in the heterogametic sex to the same levels observed in the ancestor prior to the evolution of the sex chromosome. Other lineages equalize the expression of the X- or Z- specific genes between the sexes, but not to the ancestral levels, i.e. they possess incomplete compensation with “dosage balance”. One example of this is X-inactivation which occurs in humans. The third documented type of gene dose regulatory mechanism is incomplete compensation without balance (sometimes referred to as incomplete or partial dosage compensation). In this system gene expression of sex-specific loci is reduced in the h
|
https://en.wikipedia.org/wiki/X-inactivation
|
X-inactivation (also called Lyonization, after English geneticist Mary Lyon) is a process by which one of the copies of the X chromosome is inactivated in therian female mammals. The inactive X chromosome is silenced by being packaged into a transcriptionally inactive structure called heterochromatin. As nearly all female mammals have two X chromosomes, X-inactivation prevents them from having twice as many X chromosome gene products as males, who only possess a single copy of the X chromosome (see dosage compensation).
The choice of which X chromosome will be inactivated in a particular embryonic cell is random in placental mammals such as humans, but once an X chromosome is inactivated it will remain inactive throughout the lifetime of the cell and its descendants in the organism (its cell line). The result is that the choice of inactivated X chromosome in all the cells of the organism is a random distribution, often with about half the cells having the paternal X chromosome inactivated and half with an inactivated maternal X chromosome; but commonly, X-inactivation is unevenly distributed across the cell lines within one organism (skewed X-inactivation).
Unlike the random X-inactivation in placental mammals, inactivation in marsupials applies exclusively to the paternally-derived X chromosome.
Mechanism
Cycle of X-chromosome activation in rodents
The paragraphs below have to do only with rodents and do not reflect XI in the majority of mammals.
X-inactivation is part of the activation cycle of the X chromosome throughout the female life. The egg and the fertilized zygote initially use maternal transcripts, and the whole embryonic genome is silenced until zygotic genome activation. Thereafter, all mouse cells undergo an early, imprinted inactivation of the paternally-derived X chromosome in 4–8 cell stage embryos. The extraembryonic tissues (which give rise to the placenta and other tissues supporting the embryo) retain this early imprinted inactivation, and
|
https://en.wikipedia.org/wiki/Magnetohydrodynamic%20generator
|
A magnetohydrodynamic generator (MHD generator) is a magnetohydrodynamic converter that transforms thermal energy and kinetic energy directly into electricity. An MHD generator, like a conventional generator, relies on moving a conductor through a magnetic field to generate electric current. The MHD generator uses hot conductive ionized gas (a plasma) as the moving conductor. The mechanical dynamo, in contrast, uses the motion of mechanical devices to accomplish this.
MHD generators are different from traditional electric generators in that they operate without moving parts (e.g. no turbine) to limit the upper temperature. They therefore have the highest known theoretical thermodynamic efficiency of any electrical generation method. MHD has been extensively developed as a topping cycle to increase the efficiency of electric generation, especially when burning coal or natural gas. The hot exhaust gas from an MHD generator can heat the boilers of a steam power plant, increasing overall efficiency.
Practical MHD generators have been developed for fossil fuels, but these were overtaken by less expensive combined cycles in which the exhaust of a gas turbine or molten carbonate fuel cell heats steam to power a steam turbine.
MHD dynamos are the complement of MHD accelerators, which have been applied to pump liquid metals, seawater and plasmas.
Natural MHD dynamos are an active area of research in plasma physics and are of great interest to the geophysics and astrophysics communities, since the magnetic fields of the Earth and Sun are produced by these natural dynamos.
Principle
The Lorentz force law describes the effects of a charged particle moving in a constant magnetic field. The simplest form of this law is given by the vector equation
F = q v × B,
where
F is the force acting on the particle,
q is the charge of the particle,
v is the velocity of the particle, and
B is the magnetic field.
The vector F is perpendicular to both v and B according to the right-hand rule.
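A brief numeric illustration of F = q v × B (the values are arbitrary), checking that the resulting force is perpendicular to both v and B:
import numpy as np

q = 1.6e-19                          # charge of the particle (coulombs)
v = np.array([1.0e5, 0.0, 0.0])      # velocity (m/s)
B = np.array([0.0, 0.0, 2.0])        # magnetic field (tesla)

F = q * np.cross(v, B)               # magnetic part of the Lorentz force
print(F)                             # force along -y: approximately [0, -3.2e-14, 0]
print(np.dot(F, v), np.dot(F, B))    # both 0: F is perpendicular to v and B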
Powe
|
https://en.wikipedia.org/wiki/Terminal%20yield
|
In formal language theory, the terminal yield (or fringe) of a tree is the sequence of leaves encountered in an ordered walk of the tree.
Parse trees and/or derivation trees are encountered in the study of phrase structure grammars such as context-free grammars or linear grammars. The leaves of a derivation tree for a formal grammar G are the terminal symbols of that grammar, and the internal nodes the nonterminal or variable symbols. One can read off the corresponding terminal string by performing an ordered tree traversal and recording the terminal symbols in the order they are encountered. The resulting sequence of terminals is a string of the language L(G) generated by the grammar G.
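A minimal sketch of reading off the terminal yield by an ordered walk, with a derivation tree represented as nested (label, children) pairs and strings as terminal leaves (this representation is an illustrative assumption, not a standard one):
def terminal_yield(node):
    # Leaves are terminal symbols; internal nodes are (nonterminal, children) pairs.
    if isinstance(node, str):
        return [node]
    _label, children = node
    result = []
    for child in children:           # ordered (left-to-right) walk
        result.extend(terminal_yield(child))
    return result

# Derivation tree for the string "a b" with a toy grammar S -> A B, A -> a, B -> b
tree = ("S", [("A", ["a"]), ("B", ["b"])])
print(terminal_yield(tree))          # ['a', 'b']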
Formal languages
|
https://en.wikipedia.org/wiki/Local%20boundedness
|
In mathematics, a function is locally bounded if it is bounded around every point. A family of functions is locally bounded if for any point in their domain all the functions are bounded around that point and by the same number.
Locally bounded function
A real-valued or complex-valued function f defined on some topological space X is called locally bounded if for any x_0 in X there exists a neighborhood A of x_0 such that f(A) is a bounded set. That is, for some number M > 0 one has |f(x)| ≤ M for all x in A.
In other words, for each x_0 one can find a constant, depending on x_0, which is larger than all the values of the function in the neighborhood of x_0. Compare this with a bounded function, for which the constant does not depend on x_0. Obviously, if a function is bounded then it is locally bounded. The converse is not true in general (see below).
This definition can be extended to the case when f takes values in some metric space (Y, d). Then the inequality above needs to be replaced with d(f(x), y_0) ≤ M for all x in A,
where y_0 is some point in the metric space. The choice of y_0 does not affect the definition; choosing a different point y_0 will at most increase the constant M for which this inequality is true.
Examples
The function defined by is bounded; therefore, it is also locally bounded.
The function defined by is not bounded, as it becomes arbitrarily large. However, it is locally bounded because for each a it is bounded in a neighborhood of a.
The function defined by is neither bounded nor locally bounded. In any neighborhood of 0 this function takes values of arbitrarily large magnitude.
Any continuous function is locally bounded. Here is a proof for functions of a real variable. Let f be continuous on the real line; we will show that f is locally bounded at a for all a. Taking ε = 1 in the definition of continuity, there exists δ > 0 such that |f(x) − f(a)| < 1 for all x with |x − a| < δ. Now by the triangle inequality, |f(x)| ≤ |f(a)| + 1, which means that f is locally bounded at a (taking M = |f(a)| + 1 and the neighborhood (a − δ, a + δ)). This argument generalizes easily to when the domain of f is any topological space.
The converse of the above r
|
https://en.wikipedia.org/wiki/Risk%20management%20plan
|
A risk management plan is a document that a project manager prepares to foresee risks, estimate impacts, and define responses to risks. It also contains a risk assessment matrix. According to the Project Management Institute, a risk management plan is a "component of the project,
program, or portfolio management plan that describes how risk management activities will be structured and performed."
Moreover, according to the Project Management Institute, a risk is "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project's objectives." Risk is inherent with any project, and project managers should assess risks continually and develop plans to address them. The risk management plan contains an analysis of likely risks with both high and low impact, as well as mitigation strategies to help the project avoid being derailed should common problems arise. Risk management plans should be periodically reviewed by the project team to avoid having the analysis become stale and not reflective of actual potential project risks.
Risk response
Broadly, there are four potential responses to risk with numerous variations on the specific terms used to name these response options:
Avoid – Change plans to circumvent the problem;
Control / mitigate / modify / reduce – Reduce threat impact or likelihood (or both) through intermediate steps;
Accept / retain – Assume the chance of the negative impact (or auto-insurance), eventually budget the cost (e.g. via a contingency budget line); or
Transfer / share – Outsource risk (or a portion of the risk) to a third party or parties that can manage the outcome. This is done financially through insurance contracts or hedging transactions, or operationally through outsourcing an activity.
(Mnemonic: SARA, for Share Avoid Reduce Accept, or A-CAT, for "Avoid, Control, Accept, or Transfer")
Risk management plans often include matrices.
Examples
The United States Department of Defense, as part of acqui
|
https://en.wikipedia.org/wiki/Chebyshev%20distance
|
In mathematics, Chebyshev distance (or Tchebychev distance), maximum metric, or L∞ metric is a metric defined on a vector space where the distance between two vectors is the greatest of their differences along any coordinate dimension. It is named after Pafnuty Chebyshev.
It is also known as chessboard distance, since in the game of chess the minimum number of moves needed by a king to go from one square on a chessboard to another equals the Chebyshev distance between the centers of the squares, if the squares have side length one, as represented in 2-D spatial coordinates with axes aligned to the edges of the board. For example, the Chebyshev distance between f6 and e2 equals 4.
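A small sketch of the chessboard example above (the square-to-coordinate mapping is an assumption: files a-h are mapped to 1-8 and ranks are used as given):
def chebyshev(p, q):
    # Greatest coordinate-wise difference between two points
    return max(abs(a - b) for a, b in zip(p, q))

def square(name):
    file, rank = name[0], int(name[1])
    return (ord(file) - ord('a') + 1, rank)

print(chebyshev(square("f6"), square("e2")))  # 4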
Definition
The Chebyshev distance between two vectors or points x and y, with standard coordinates x_i and y_i, respectively, is the greatest of the differences |x_i − y_i| over all coordinates i.
This equals the limit of the Lp metrics as p → ∞, hence it is also known as the L∞ metric.
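Written out explicitly (a standard form, with x = (x_1, ..., x_n) and y = (y_1, ..., y_n)):
\[
D_{\mathrm{Chebyshev}}(x, y) = \max_{i} |x_i - y_i| = \lim_{p \to \infty} \left( \sum_{i=1}^{n} |x_i - y_i|^p \right)^{1/p}.
\]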
Mathematically, the Chebyshev distance is a metric induced by the supremum norm or uniform norm. It is an example of an injective metric.
In two dimensions, i.e. plane geometry, if the points p and q have Cartesian coordinates (x1, y1) and (x2, y2), their Chebyshev distance is max(|x2 − x1|, |y2 − y1|).
Under this metric, a circle of radius r, which is the set of points with Chebyshev distance r from a center point, is a square whose sides have the length 2r and are parallel to the coordinate axes.
On a chessboard, where one is using a discrete Chebyshev distance, rather than a continuous one, the circle of radius r is a square of side lengths 2r, measuring from the centers of squares, and thus each side contains 2r+1 squares; for example, the circle of radius 1 on a chess board is a 3×3 square.
Properties
In one dimension, all Lp metrics are equal – they are just the absolute value of the difference.
The two-dimensional Manhattan distance has "circles", i.e. level sets, in the form of squares with sides of length √2 r, oriented at an angle of π/4 (45°) to the coordinate axes, so the planar Chebyshev distance can be viewed
|
https://en.wikipedia.org/wiki/Biological%20target
|
A biological target is anything within a living organism to which some other entity (like an endogenous ligand or a drug) is directed and/or binds, resulting in a change in its behavior or function. Examples of common classes of biological targets are proteins and nucleic acids. The definition is context-dependent, and can refer to the biological target of a pharmacologically active drug compound, the receptor target of a hormone (like insulin), or some other target of an external stimulus. Biological targets are most commonly proteins such as enzymes, ion channels, and receptors.
Mechanism
The external stimulus (i.e., the drug or ligand) physically binds to ("hits") the biological target. The interaction between the substance and the target may be:
noncovalent – A relatively weak interaction between the stimulus and the target where no chemical bond is formed between the two interacting partners and hence the interaction is completely reversible.
reversible covalent – A chemical reaction occurs between the stimulus and target in which the stimulus becomes chemically bonded to the target, but the reverse reaction also readily occurs in which the bond can be broken.
irreversible covalent – The stimulus is permanently bound to the target through irreversible chemical bond formation.
Depending on the nature of the stimulus, the following can occur:
There is no direct change in the biological target, but the binding of the substance prevents other endogenous substances (such as activating hormones) from binding to the target. Depending on the nature of the target, this effect is referred to as receptor antagonism, enzyme inhibition, or ion channel blockade.
A conformational change in the target is induced by the stimulus which results in a change in target function. This change in function can mimic the effect of the endogenous substance in which case the effect is referred to as receptor agonism (or channel or enzyme activation) or be the opposite of the endog
|
https://en.wikipedia.org/wiki/Comparison%20of%20file%20archivers
|
The following tables compare general and technical information for a number of file archivers. Please see the individual products' articles for further information. They are neither all-inclusive nor are some entries necessarily up to date. Unless otherwise specified in the footnotes section, comparisons are based on the stable versions—without add-ons, extensions or external programs.
General information
Basic general information about the archivers.
Operating system support
The operating systems the archivers can run on without an emulation or compatibility layer. Ubuntu's own GUI archive manager, for example, can open and create many archive formats (including RAR archives), even to the extent of splitting archives into parts and encrypting them, and of producing archives readable by the native program; this is presumably achieved through a compatibility layer.
Archiver features
Information about what common archiver features are implemented natively (without third-party add-ons).
Archive format support
Reading
Information about what archive formats the archivers can read. External links lead to information about support in future versions of the archiver or extensions that provide such functionality. Note that gzip, bzip2 and xz are compression formats rather than archive formats.
Writing
Information about what archive formats the archivers can write and create. External links lead to information about support in future versions of the archiver or extensions that provide such functionality. Note that gzip, bzip2 and xz are compression formats rather than archive formats.
Notes:
Tar implementations call the external programs gzip and bzip2, 7z, xz, ... to perform compression; these external programs usually come with systems that contain tar.
Requires rar.exe from WinRAR.
Requires an external program (if using WinZip 11.1 or earlier).
Requires Ace32.exe from WinAce.
The Extractor and XAD are not included in this list because they only exp
|
https://en.wikipedia.org/wiki/Creed%20%26%20Company
|
Creed & Company was a British telecommunications company founded by Frederick George Creed which was an important pioneer in the field of teleprinter machines. It was merged into the International Telephone and Telegraph Corporation (ITT) in 1928.
History
The company was founded by Frederick George Creed and Danish telegraph engineer Harald Bille, and was first incorporated in 1912 as "Creed, Bille & Company Limited". After Bille's death in a railway accident in 1916, his name was dropped from the company's title and it became simply Creed & Company.
The Company spent most of World War I producing high-quality instruments, manufacturing facilities for which were very limited at that time in the UK. Among the items produced were amplifiers, spark-gap transmitters, aircraft compasses, high-voltage generators, bomb release apparatus, and fuses for artillery shells and bombs.
In 1924 Creed entered the teleprinter field with their Model 1P, which was soon superseded by the improved Model 2P. In 1925 Creed acquired the patents for Donald Murray's Murray code, a rationalised Baudot code, and it was used for their new Model 3 Tape Teleprinter of 1927. This machine printed received messages directly onto gummed paper tape at a rate of 65 words per minute and was the first combined start-stop transmitter-receiver teleprinter from Creed to enter mass production.
Some of the key models were:
Creed model 6S (punched paper tape reader)
Creed model 7 (page printing teleprinter introduced in 1931)
Creed model 7B (50 baud page printing teleprinter)
Creed model 7E (page printing teleprinter with overlap cam and range finder)
Creed model 7/TR (non-printing teleprinter reperforator)
Creed model 54 (page printing teleprinter introduced in 1954)
Creed model 75 (page printing teleprinter introduced in 1958)
Creed model 85 (printing reperforator introduced in 1948)
Creed model 86 (printing reperforator using 7/8" wide tape)
Creed model 444 (page printing teleprinter introduc
|
https://en.wikipedia.org/wiki/Adolf%20Kussmaul
|
Adolph Kußmaul (; 22 February 1822 – 28 May 1902) was a German physician and a leading clinician of his time. He was born as the son and grandson of physicians at Graben near Karlsruhe and studied at Heidelberg. He entered the army after graduation and spent two years as an army surgeon. This was followed by a period as a general practitioner before he went to Würzburg to study for his doctorate under Virchow.
He was subsequently Professor of Medicine at Heidelberg (1857), Erlangen (1859), Freiburg (1859) and Straßburg (1876).
Beyond his medical skills he was also active in literature. He is regarded as one of the creators of the term Biedermeier.
He died in Heidelberg.
Eponymous terms
His name continues to be used in eponyms. He described two medical signs and one disease which have eponymous names that remain in use:
Kussmaul breathing - Very deep and labored breathing with normal, rapid or reduced frequency seen in severe Diabetic ketoacidosis (DKA).
Kussmaul's sign - Paradoxical rise in the Jugular venous pressure (JVP) on inhalation in Constrictive pericarditis or Chronic obstructive pulmonary disease (COPD).
Kussmaul disease (Also called Kussmaul-Maier disease) - Polyarteritis nodosa. Named with Rudolf Robert Maier (1824-1888).
The following eponymous terms are considered archaic:
Kussmaul's coma - diabetic coma due to ketoacidosis.
Kussmaul's aphasia - selective mutism.
Firsts
First to describe dyslexia in 1877. (He called it 'word blindness'.)
First to describe polyarteritis nodosa.
First to describe progressive bulbar paralysis.
First to describe selective mutism.
First to diagnose mesenteric embolism.
First to perform pleural tapping and gastric lavage.
First to attempt oesophagoscopy and gastroscopy.
First to describe the emotional symptoms of mercury exposure as a first stage preceding the physical effects.
References
External links
Adolf Kussmaul, biography from Who Named It?.
1822 births
1902 deaths
History of medical imaging
Peop
|
https://en.wikipedia.org/wiki/Electronic%20flight%20instrument%20system
|
In aviation, an electronic flight instrument system (EFIS) is a flight instrument display system in an aircraft cockpit that displays flight data electronically rather than electromechanically. An EFIS normally consists of a primary flight display (PFD), multi-function display (MFD), and an engine indicating and crew alerting system (EICAS) display. Early EFIS models used cathode ray tube (CRT) displays, but liquid crystal displays (LCD) are now more common. The complex electromechanical attitude director indicator (ADI) and horizontal situation indicator (HSI) were the first candidates for replacement by EFIS. Now, however, few flight deck instruments cannot be replaced by an electronic display.
Display units
Primary flight display (PFD)
On the flight deck, the display units are the most obvious parts of an EFIS system, and are the features that lead to the term glass cockpit. The display unit that replaces the artificial horizon is called the primary flight display (PFD). If a separate display replaces the HSI, it is called the navigation display. The PFD displays all information critical to flight, including calibrated airspeed, altitude, heading, attitude, vertical speed and yaw. The PFD is designed to improve a pilot's situational awareness by integrating this information into a single display instead of six different analog instruments, reducing the amount of time necessary to monitor the instruments. PFDs also increase situational awareness by alerting the aircrew to unusual or potentially hazardous conditions — for example, low airspeed, high rate of descent — by changing the color or shape of the display or by providing audio alerts.
The names Electronic Attitude Director Indicator and Electronic Horizontal Situation Indicator are used by some manufacturers. However, a simulated ADI is only the centerpiece of the PFD. Additional information is both superimposed on and arranged around this graphic.
Multi-function displays can render a separate navigatio
|
https://en.wikipedia.org/wiki/Genetic%20architecture
|
Genetic architecture is the underlying genetic basis of a phenotypic trait and its variational properties. Phenotypic variation for quantitative traits is, at the most basic level, the result of the segregation of alleles at quantitative trait loci (QTL). Environmental factors and other external influences can also play a role in phenotypic variation. Genetic architecture is a broad term that can be described for any given individual based on information regarding gene and allele number, the distribution of allelic and mutational effects, and patterns of pleiotropy, dominance, and epistasis.
There are several different experimental views of genetic architecture. Some researchers recognize that the interplay of various genetic mechanisms is incredibly complex, but believe that these mechanisms can be averaged and treated, more or less, like statistical noise. Other researchers claim that each and every gene interaction is significant and that it is necessary to measure and model these individual systemic influences on evolutionary genetics.
Applications
Genetic architecture can be studied and applied at many different levels. At the most basic, individual level, genetic architecture describes the genetic basis for differences between individuals, species, and populations. This can include, among other details, how many genes are involved in a specific phenotype and how gene interactions, such as epistasis, influence that phenotype. Line-cross analyses and QTL analyses can be used to study these differences. This is perhaps the most common way that genetic architecture is studied, and though it is useful for supplying pieces of information, it does not generally provide a complete picture of the genetic architecture as a whole.
Genetic architecture can also be used to discuss the evolution of populations. Classical quantitative genetics models, such as that developed by R.A. Fisher, are based on analyses of phenotype in terms of the contributions from different g
|
https://en.wikipedia.org/wiki/Pleiotropy
|
Pleiotropy (from Greek , 'more', and , 'way') occurs when one gene influences two or more seemingly unrelated phenotypic traits. Such a gene that exhibits multiple phenotypic expression is called a pleiotropic gene. Mutation in a pleiotropic gene may have an effect on several traits simultaneously, due to the gene coding for a product used by a myriad of cells or different targets that have the same signaling function.
Pleiotropy can arise from several distinct but potentially overlapping mechanisms, such as gene pleiotropy, developmental pleiotropy, and selectional pleiotropy. Gene pleiotropy occurs when a gene product interacts with multiple other proteins or catalyzes multiple reactions. Developmental pleiotropy occurs when mutations have multiple effects on the resulting phenotype. Selectional pleiotropy occurs when the resulting phenotype has many effects on fitness (depending on factors such as age and gender).
An example of pleiotropy is phenylketonuria, an inherited disorder that affects the level of phenylalanine, an amino acid that can be obtained from food, in the human body. Phenylketonuria causes this amino acid to increase in amount in the body, which can be very dangerous. The disease is caused by a defect in a single gene on chromosome 12 that codes for the enzyme phenylalanine hydroxylase, which affects multiple systems, such as the nervous and integumentary systems.
Pleiotropic gene action can limit the rate of multivariate evolution when natural selection, sexual selection or artificial selection on one trait favors one allele, while selection on other traits favors a different allele. Some gene evolution is harmful to an organism. Genetic correlations and responses to selection most often exemplify pleiotropy.
History
Pleiotropic traits had been previously recognized in the scientific community but had not been experimented on until Gregor Mendel's 1866 pea plant experiment. Mendel recognized that certain pea plant traits (seed coat color, flower
|
https://en.wikipedia.org/wiki/ShaBLAMM%21%20NiTro-VLB
|
The ShaBLAMM! NiTro-VLB was a computer system that used a QED R4600 microprocessor implemented on a VESA Local Bus peripheral card and designed to function when connected to a host computer system using an Intel i486. The NiTro-VLB conformed to the ARC standard, and was produced and marketed by ShaBLAMM! Computer as an "upgrade" card for accelerating Windows NT.
Characteristics
The NiTro-VLB is notable for various unique characteristics among personal computer accessories. For example, although the system was marketed as an "upgrade" for computers already using a 486 processor, the NiTro-VLB was in fact of an entirely different architecture (specifically, the MIPS architecture) from the IA32-based 486. Further, as a "parasitic" or "symbiotic" coprocessor, the NiTro-VLB was designed to co-opt the host 486 processor, preventing it from running, and used four megabytes of the host 486 motherboard's system memory as a DMA buffer (although the NiTro-VLB required its own separate DRAM main memory, in addition to any memory installed on the host 486 motherboard).
This is a type of "parasite"/"host" upgrade card configuration, in which an entire motherboard and processor are implemented on an expansion card designed to connect to a host motherboard's expansion slot. Such configurations are rare among computer systems designed to run Microsoft Windows.
Specifications and benchmarks
The NiTro-VLB's QED R4600 processor, running at 100 MHz, was rated at 73.8 SPECint92 and 63 SPECfp92 (which are similar figures to the first-generation Pentium running at 66 MHz). Faster and costlier versions were designed to run at 133 MHz or 150 MHz.
Sales
Initially, the NiTro-VLB system was priced at $1,095 for a 100 MHz card with no main memory, $1,995 for a 100 MHz card with 16 MB of main memory and a copy of Windows NT, and $2,595 for a 150 MHz card.
See also
Jazz (computer)
MIPS Magnum
DeskStation Tyne
External links
A BYTE magazine article detailing the ShaBlamm! Nitro-VLB
Computer workstati
|
https://en.wikipedia.org/wiki/Numerical%20weather%20prediction
|
Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs.
Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change. The improvements made to regional models have allowed significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires.
Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions.
A more fundamental problem lies in the chaotic nature of the partial differential equations that describe the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). Present understanding is that this chaotic behavior limits accurate forecasts to ab
|
https://en.wikipedia.org/wiki/Hybrid%20computer
|
Hybrid computers are computers that exhibit features of analog computers and digital computers. The digital component normally serves as the controller and provides logical and numerical operations, while the analog component often serves as a solver of differential equations and other mathematically complex problems.
History
The first desktop hybrid computing system was the Hycomp 250, released by Packard Bell in 1961. Another early example was the HYDAC 2400, an integrated hybrid computer released by EAI in 1963. In the 1980s, Marconi Space and Defense Systems Limited (under Peggy Hodges) developed their "Starglow Hybrid Computer", which consisted of three EAI 8812 analog computers linked to an EAI 8100 digital computer, the latter also being linked to an SEL 3200 digital computer. Late in the 20th century, hybrids dwindled with the increasing capabilities of digital computers including digital signal processors.
In general, analog computers are extraordinarily fast, since they are able to solve most mathematically complex equations at the rate at which a signal traverses the circuit, which is generally an appreciable fraction of the speed of light. On the other hand, the precision of analog computers is not good; they are limited to three, or at most, four digits of precision.
Digital computers can be built to take the solution of equations to almost unlimited precision, but quite slowly compared to analog computers. Generally, complex mathematical equations are approximated using iterative methods which take huge numbers of iterations, depending on how good the initial "guess" at the final value is and how much precision is desired. (This initial guess is known as the numerical "seed".) For many real-time operations in the 20th century, such digital calculations were too slow to be of much use (e.g., for very high frequency phased array radars or for weather calculations), but the precision of an analog computer is insufficient.
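A small sketch of the kind of iterative digital calculation described above (the function and tolerance are arbitrary): Newton's method for a square root, where the number of iterations depends on the quality of the initial "seed" and on the precision requested.
def newton_sqrt(a, seed, tol=1e-12):
    x, steps = seed, 0
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)        # Newton iteration for x^2 - a = 0
        steps += 1
    return x, steps

print(newton_sqrt(2.0, seed=1.0))    # good seed: converges in a handful of steps
print(newton_sqrt(2.0, seed=100.0))  # poor seed: needs roughly twice as many iterations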
Hybrid computers can be
|
https://en.wikipedia.org/wiki/Automated%20planning%20and%20scheduling
|
Automated planning and scheduling, sometimes denoted as simply AI planning, is a branch of artificial intelligence that concerns the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multidimensional space. Planning is also related to decision theory.
In known environments with available models, planning can be done offline. Solutions can be found and evaluated prior to execution. In dynamically unknown environments, the strategy often needs to be revised online. Models and policies must be adapted. Solutions usually resort to iterative trial and error processes commonly seen in artificial intelligence. These include dynamic programming, reinforcement learning and combinatorial optimization. Languages used to describe planning and scheduling are often called action languages.
Overview
Given a description of the possible initial states of the world, a description of the desired goals, and a description of a set of possible actions, the planning problem is to synthesize a plan that is guaranteed (when applied to any of the initial states) to generate a state which contains the desired goals (such a state is called a goal state).
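A minimal sketch of this formulation (the state encoding, action format and toy domain are illustrative assumptions, not a standard planner): states are sets of facts, actions have preconditions and add/delete lists, and breadth-first search synthesizes a plan that reaches a goal state.
from collections import deque

def plan(initial, goal, actions):
    # actions: dict name -> (preconditions, add_list, delete_list), each a set of facts
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                        # all goal facts hold
            return steps
        for name, (pre, add, delete) in actions.items():
            if pre <= state:                     # action applicable in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Toy domain: move a package from A to B with a truck.
actions = {
    "load":   ({"pkg_at_A", "truck_at_A"}, {"pkg_in_truck"}, {"pkg_at_A"}),
    "drive":  ({"truck_at_A"}, {"truck_at_B"}, {"truck_at_A"}),
    "unload": ({"pkg_in_truck", "truck_at_B"}, {"pkg_at_B"}, {"pkg_in_truck"}),
}
print(plan({"pkg_at_A", "truck_at_A"}, {"pkg_at_B"}, actions))  # ['load', 'drive', 'unload']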
The difficulty of planning is dependent on the simplifying assumptions employed. Several classes of planning problems can be identified depending on the properties the problems have in several dimensions.
Are the actions deterministic or non-deterministic? For nondeterministic actions, are the associated probabilities available?
Are the state variables discrete or continuous? If they are discrete, do they have only a finite number of possible values?
Can the current state be observed unambiguously? There can be full observability and partial observability.
How many initial states are there, finite or arbitrarily many?
Do actions hav
|
https://en.wikipedia.org/wiki/Phosphosilicate%20glass
|
Phosphosilicate glass, commonly referred to by the acronym PSG, is a silicate glass commonly used in semiconductor device fabrication for intermetal layers, i.e., insulating layers deposited between successively higher metal or conducting layers, due to its effect in gettering alkali ions. Another common type of phosphosilicate glass is borophosphosilicate glass (BPSG).
Soda-lime phosphosilicate glasses also form the basis for bioactive glasses (e.g. Bioglass), a family of materials which chemically convert to mineralised bone (hydroxy-carbonate-apatite) in physiological fluid.
Bismuth doped phosphosilicate glasses are being explored for use as the active gain medium in fiber lasers for fiber-optic communication.
See also
Wafer (electronics)
References
Glass compositions
Semiconductor device fabrication
|
https://en.wikipedia.org/wiki/Borophosphosilicate%20glass
|
Borophosphosilicate glass, commonly known as BPSG, is a type of silicate glass that includes additives of both boron and phosphorus. Silicate glasses such as PSG and borophosphosilicate glass are commonly used in semiconductor device fabrication for intermetal layers, i.e., insulating layers deposited between successively higher metal or conducting layers.
BPSG has been implicated in increasing a device's susceptibility to soft errors since the boron-10 isotope is good at capturing thermal neutrons from cosmic radiation. It then undergoes fission producing a gamma ray, an alpha particle, and a lithium ion. These products may then dump charge into nearby structures, causing data loss (bit flipping, or single event upset).
In critical designs, depleted boron consisting almost entirely of boron-11 is used to avoid this effect as a radiation hardening measure. Boron-11 is a by-product of the nuclear industry.
References
Semiconductor device fabrication
Glass compositions
Boron compounds
|
https://en.wikipedia.org/wiki/Coliform%20bacteria
|
Coliform bacteria are defined as either motile or non-motile Gram-negative non-spore-forming bacilli that possess β-galactosidase, enabling them to produce acids and gases at their optimal growth temperature of 35-37 °C. They can be aerobes or facultative anaerobes, and are a commonly used indicator of low sanitary quality of foods, milk, and water. Coliforms can be found in the aquatic environment, in soil and on vegetation; they are universally present in large numbers in the feces of warm-blooded animals as they are known to inhabit the gastrointestinal system. While coliform bacteria are not normally causes of serious illness, they are easy to culture, and their presence is used to infer that other pathogenic organisms of fecal origin may be present in a sample, or that said sample is not safe to consume. Such pathogens include disease-causing bacteria, viruses, or protozoa and many multicellular parasites.
Genera
Typical genera include:
Citrobacter are peritrichous facultative anaerobic bacilli between 0.6-6 μm in length. Citrobacter species inhabit intestinal flora without causing harm, but can lead to urinary tract infections, bacteremia, brain abscesses, pneumonia, intra-abdominal sepsis, meningitis, and joint infections if they are given the opportunity. Infections with Citrobacter species have a mortality rate between 33-48%, with infants and immunocompromised individuals being more susceptible.
Enterobacter are motile, flagellated bacilli known for causing infections such as bacteremia, respiratory tract infections, urinary tract infections, infections of areas where surgery occurred, and in extreme cases meningitis, sinusitis and osteomyelitis. To determine the presence of Enterobacter in a sample, they are first grown on MacConkey agar to confirm they are lactose fermenting. An indole test will differentiate Enterobacter from Escherichia, as Enterobacter are indole negative and Escherichia is positive. Enterobacter are distinguished from Klebsiella because of the
|
https://en.wikipedia.org/wiki/Outline%20of%20electrical%20engineering
|
The following outline is provided as an overview of and topical guide to electrical engineering.
Electrical engineering – field of engineering that generally deals with the study and application of electricity, electronics and electromagnetism. The field first became an identifiable occupation in the late nineteenth century after commercialization of the electric telegraph and electrical power supply. It now covers a range of subtopics including power, electronics, control systems, signal processing and telecommunications.
Classification
Electrical engineering can be described as all of the following:
Academic discipline – branch of knowledge that is taught and researched at the college or university level. Disciplines are defined (in part), and recognized by the academic journals in which research is published, and the learned societies and academic departments or faculties to which their practitioners belong.
Branch of engineering – discipline, skill, and profession of acquiring and applying scientific, economic, social, and practical knowledge, in order to design and build structures, machines, devices, systems, materials and processes.
Branches of electrical engineering
Power engineering
Control engineering
Electronic engineering
Microelectronics
Signal processing
Telecommunications engineering
Instrumentation engineering
Computer engineering
Electro-Optical Engineering
Distribution engineering
Related disciplines
Biomedical engineering
Engineering physics
Mechanical engineering
Mechatronics
History of electrical engineering
History of electrical engineering
Timeline of electrical and electronic engineering
General electrical engineering concepts
Electromagnetism
Electromagnetism
Electricity
Magnetism
Electromagnetic spectrum
Optical spectrum
Electrostatics
Electric charge
Coulomb's law
Electric field
Gauss's law
Electric potential
Magnetostatics
Electric current
Ampère's law
Magnetic field
Magnetic moment
Electrody
|
https://en.wikipedia.org/wiki/Magnetoreception
|
Magnetoreception is a sense which allows an organism to detect the Earth's magnetic field. Animals with this sense include some arthropods, molluscs, and vertebrates (fish, amphibians, reptiles, birds, and mammals). The sense is mainly used for orientation and navigation, but it may help some animals to form regional maps. Experiments on migratory birds provide evidence that they make use of a cryptochrome protein in the eye, relying on the quantum radical pair mechanism to perceive magnetic fields. This effect is extremely sensitive to weak magnetic fields, and readily disturbed by radio-frequency interference, unlike a conventional iron compass.
Birds have iron-containing materials in their upper beaks. There is some evidence that this provides a magnetic sense, mediated by the trigeminal nerve, but the mechanism is unknown.
Cartilaginous fish including sharks and stingrays can detect small variations in electric potential with their electroreceptive organs, the ampullae of Lorenzini. These appear to be able to detect magnetic fields by induction. There is some evidence that these fish use magnetic fields in navigation.
History
Biologists have long wondered whether migrating animals such as birds and sea turtles have an inbuilt magnetic compass, enabling them to navigate using the Earth's magnetic field. Until late in the 20th century, evidence for this was essentially only behavioural: many experiments demonstrated that animals could indeed derive information from the magnetic field around them, but gave no indication of the mechanism. In 1972, Roswitha and Wolfgang Wiltschko showed that migratory birds responded to the direction and inclination (dip) of the magnetic field. In 1977, M. M. Walker and colleagues identified iron-based (magnetite) magnetoreceptors in the snouts of rainbow trout. In 2003, G. Fleissner and colleagues found iron-based receptors in the upper beaks of homing pigeons, both seemingly connected to the animal's trigeminal nerve. Resear
|
https://en.wikipedia.org/wiki/Encrypted%20key%20exchange
|
Encrypted Key Exchange (also known as EKE) is a family of password-authenticated key agreement methods described by Steven M. Bellovin and Michael Merritt. Although several of the forms of EKE in the original paper were later found to be flawed, the surviving, refined, and enhanced forms of EKE effectively make this the first method to amplify a shared password into a shared key, where the shared key may subsequently be used to provide a zero-knowledge password proof or other functions.
In the most general form of EKE, at least one party encrypts an ephemeral (one-time) public key using a password, and sends it to a second party, who decrypts it and uses it to negotiate a shared key with the first party.
A second paper describes Augmented-EKE, and introduced the concept of augmented password-authenticated key agreement for client/server scenarios. Augmented methods have the added goal of ensuring that password verification data stolen from a server cannot be used by an attacker to masquerade as the client, unless the attacker first determines the password (e.g. by performing a brute force attack on the stolen data).
A version of EKE based on Diffie–Hellman, known as DH-EKE, has survived attack and has led to improved variations, such as the PAK family of methods in IEEE P1363.2.
Since the US patent on EKE expired in late 2011, an EAP authentication method using EKE was published as an IETF RFC. The EAP method uses the Diffie–Hellman variant of EKE.
Patents
A patent owned by Lucent describes the initial EKE method. It expired in October 2011.
A second Lucent patent describes the augmented EKE method. It expired in August 2013.
See also
Password-authenticated key agreement
References
Cryptographic protocols
Key-agreement protocols
|
https://en.wikipedia.org/wiki/Shion%20Uzuki
|
Shion Uzuki is the main protagonist of the Xenosaga trilogy for the PlayStation 2. In addition, she appeared in the mobile game Pied Piper, in Xenosaga I & II and Xenosaga Freaks, as well as in the anime Xenosaga: The Animation.
Character design
Shion Uzuki is the main character of the three episodes of Xenosaga, which have been referred to as "Shion's Arc" by Namco Bandai. While her role is less prominent in Xenosaga Episode II, which focuses on the character Jr., she is the lead character again in the DS remake Xenosaga I & II. While Shion's surname is "Uzuki", Takahashi has stated that she is not meant to be a distant relative of the character Citan Uzuki from Xenogears. She does, however, share his liking for science.
Shion is portrayed as a girl who tries to overcome the tragic events of her past by "looking away from reality and truth". Takahashi compared the character's instinctive tendency to run away from things with his very own personality and mindset. A particular point he wanted to explore in the series was Shion "looking back at herself […] wondering how she should live her life". When he created Xenosaga, Takahashi made "life and death" a central theme of the story and gave each character a different outlook on it; Shion and Albedo were conceived as the two characters that eventually develop the "ideal compromise towards death".
Xenosaga Episode I has an anime-like art style, while Episodes II and III use a more realistic style, so her appearance differs between games. She wears glasses in the first and third games. If the player loads a save file from Episode II in Episode III, Shion will obtain special armor that changes her outward appearance.
Shion's main theme in Episode I is the ending song "Kokoro" (heart), performed by Joanne Hogg. The tracks "Shion's Crisis", "Shion ~Memories of the Past~" and "Shion ~Emotion~" were also written for scenes focusing on her. "Fighting KOS-MOS" plays during a scene that showcases KOS-MOS' power, but composer Yasunori Mitsuda chose to write
|
https://en.wikipedia.org/wiki/Barbara%20Liskov
|
Barbara Liskov (born November 7, 1939 as Barbara Jane Huberman) is an American computer scientist who has made pioneering contributions to programming languages and distributed computing. Her notable work includes the introduction of abstract data types and the accompanying principle of data abstraction, along with the Liskov substitution principle, which applies these ideas to object-oriented programming, subtyping, and inheritance. Her work was recognized with the 2008 Turing Award, the highest distinction in computer science.
Liskov is one of the earliest women to have been granted a doctorate in computer science in the United States, and the second woman to receive the Turing award. She is currently an Institute Professor and Ford Professor of Engineering at the Massachusetts Institute of Technology.
Early life and education
Liskov was born November 7, 1939, in Los Angeles, California, to a Jewish family, the eldest of Jane (née Dickhoff) and Moses Huberman's four children. She earned her bachelor's degree in mathematics with a minor in physics at the University of California, Berkeley in 1961. At Berkeley, she had only one other female classmate in her major. She applied to graduate mathematics programs at Berkeley and Princeton. At the time Princeton was not accepting female students in mathematics. She was accepted at Berkeley but instead moved to Boston and began working at Mitre Corporation, where she became interested in computers and programming. She worked at Mitre for one year before taking a programming job at Harvard working on language translation.
She then decided to go back to school and applied again to Berkeley, but also to Stanford and Harvard. In March 1968 she became one of the first women in the United States to be awarded a Ph.D. from a computer science department when she was awarded her degree from Stanford University. At Stanford, she worked with John McCarthy and was supported to work in artificial intelligence. The topic of her Ph.D
|
https://en.wikipedia.org/wiki/Darcy%20%28unit%29
|
The darcy (or darcy unit) and millidarcy (md or mD) are units of permeability, named after Henry Darcy. They are not SI units, but they are widely used in petroleum engineering and geology. The unit has also been used in biophysics and biomechanics, where the flow of fluids such as blood through capillary beds and cerebrospinal fluid through the brain interstitial space is being examined. A darcy has dimensional units of length².
Definition
Permeability measures the ability of fluids to flow through rock (or other porous media). The darcy is defined using Darcy's law, which can be written as:
Q = k A Δp / (μ L)
where:
Q is the volumetric fluid flow rate through the medium
A is the cross-sectional area of the medium
k is the permeability of the medium
μ is the dynamic viscosity of the fluid
Δp is the applied pressure difference
L is the thickness of the medium
The darcy is referenced to a mixture of unit systems. A medium with a permeability of 1 darcy permits a flow of 1 cm3/s of a fluid with viscosity 1 cP (1 mPa·s) under a pressure gradient of 1 atm/cm acting across an area of 1 cm2.
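A minimal sketch in Python of this definition, assuming the standard conversion 1 darcy ≈ 9.869233×10⁻¹³ m² (function and variable names here are illustrative):

# Illustrative check that 1 darcy gives 1 cm^3/s under the stated conditions.
DARCY_IN_M2 = 9.869233e-13  # 1 darcy expressed in m^2 (approximate)

def darcy_flow_rate(k_m2, area_m2, dp_pa, viscosity_pa_s, length_m):
    """Volumetric flow rate Q = k * A * dp / (mu * L) from Darcy's law, in m^3/s."""
    return k_m2 * area_m2 * dp_pa / (viscosity_pa_s * length_m)

q = darcy_flow_rate(k_m2=DARCY_IN_M2,
                    area_m2=1e-4,          # 1 cm^2
                    dp_pa=101_325.0,       # 1 atm
                    viscosity_pa_s=1e-3,   # 1 cP
                    length_m=0.01)         # 1 cm
print(f"{q * 1e6:.4f} cm^3/s")             # ~1.0000 cm^3/s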
Typical values of permeability range as high as 100,000 darcys for gravel, to less than 0.01 microdarcy for granite. Sand has a permeability of approximately 1 darcy.
Tissue permeability, whose measurement in vivo is still in its infancy, is somewhere in the range of 0.01 to 100 darcy.
Origin
The darcy is named after Henry Darcy. Rock permeability is usually expressed in millidarcys (md) because rocks hosting hydrocarbon or water accumulations typically exhibit permeability ranging from 5 to 500 md.
The odd combination of units comes from Darcy's original studies of water flow through columns of sand. Water has a viscosity of 1.0019 cP at about room temperature.
The unit abbreviation "d" is not capitalized (contrary to industry use). The American Association of Petroleum Geologists uses the following unit abbreviations and grammar in their publications:
darcy (plural
|
https://en.wikipedia.org/wiki/Mark%20and%20recapture
|
Mark and recapture is a method commonly used in ecology to estimate an animal population's size where it is impractical to count every individual. A portion of the population is captured, marked, and released. Later, another portion will be captured and the number of marked individuals within the sample is counted. Since the number of marked individuals within the second sample should be proportional to the number of marked individuals in the whole population, an estimate of the total population size can be obtained by dividing the number of marked individuals by the proportion of marked individuals in the second sample. Other names for this method, or closely related methods, include capture-recapture, capture-mark-recapture, mark-recapture, sight-resight, mark-release-recapture, multiple systems estimation, band recovery, the Petersen method, and the Lincoln method.
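A minimal sketch of this estimate (the Lincoln–Petersen estimator) in Python; the function name and example numbers are illustrative:

def lincoln_petersen(marked_first, caught_second, marked_in_second):
    """Estimate total population size N ~ (K * n) / k, where
    K = animals marked in the first sample,
    n = animals caught in the second sample,
    k = marked animals recaptured in the second sample."""
    if marked_in_second == 0:
        raise ValueError("No marked individuals recaptured; estimate is undefined.")
    return marked_first * caught_second / marked_in_second

# Example: 200 marked, second sample of 150 contains 30 marked individuals.
print(lincoln_petersen(200, 150, 30))  # -> 1000.0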
Another major application for these methods is in epidemiology, where they are used to estimate the completeness of ascertainment of disease registers. Typical applications include estimating the number of people needing particular services (i.e. services for children with learning disabilities, services for medically frail elderly living in the community), or with particular conditions (i.e. illegal drug addicts, people infected with HIV, etc.).
Field work related to mark-recapture
Typically a researcher visits a study area and uses traps to capture a group of individuals alive. Each of these individuals is marked with a unique identifier (e.g., a numbered tag or band), and then is released unharmed back into the environment. A mark-recapture method was first used for ecological study in 1896 by C.G. Johannes Petersen to estimate plaice, Pleuronectes platessa, populations.
Sufficient time is allowed to pass for the marked individuals to redistribute themselves among the unmarked population.
Next, the researcher returns and captures another sample of individuals. Some individuals in this second
|
https://en.wikipedia.org/wiki/DNS%20spoofing
|
DNS spoofing, also referred to as DNS cache poisoning, is a form of computer security hacking in which corrupt Domain Name System data is introduced into the DNS resolver's cache, causing the name server to return an incorrect result record, e.g. an IP address. This results in traffic being diverted to any computer that the attacker chooses.
Overview of the Domain Name System
A Domain Name System server translates a human-readable domain name (such as example.com) into a numerical IP address that is used to route communications between nodes. Normally if the server does not know a requested translation it will ask another server, and the process continues recursively. To increase performance, a server will typically remember (cache) these translations for a certain amount of time. This means if it receives another request for the same translation, it can reply without needing to ask any other servers, until that cache expires.
When a DNS server has received a false translation and caches it for performance optimization, it is considered poisoned, and it supplies the false data to clients. If a DNS server is poisoned, it may return an incorrect IP address, diverting traffic to another computer (often an attacker's).
Cache poisoning attacks
Normally, a networked computer uses a DNS server provided by an Internet service provider (ISP) or the computer user's organization. DNS servers are used in an organization's network to improve resolution response performance by caching previously obtained query results. Poisoning attacks on a single DNS server can affect the users serviced directly by the compromised server or those serviced indirectly by its downstream server(s) if applicable.
To perform a cache poisoning attack, the attacker exploits flaws in the DNS software. A server should correctly validate DNS responses to ensure that they are from an authoritative source (for example by using DNSSEC); otherwise the server might end up caching the incorrect entries l
|
https://en.wikipedia.org/wiki/Private%20Communications%20Technology
|
Private Communications Technology (PCT) 1.0 was a protocol developed by Microsoft in the mid-1990s. PCT was designed to address security flaws in version 2.0 of Netscape's Secure Sockets Layer protocol and to force Netscape to hand control of the then-proprietary SSL protocol to an open standards body.
PCT has since been superseded by SSLv3 and Transport Layer Security. For a while it was still supported by Internet Explorer, but PCT 1.0 has been disabled by default since IE 5 and the option was removed in IE6. It is still found in IIS and in the Windows operating system libraries, although in Windows Server 2003 it is disabled by default. Old versions of MSMQ use it as their only option.
Due to its near disuse, it is arguably a security risk, as it has received less attention in testing than commonly used protocols, and there is little incentive for Microsoft to expend effort on maintaining its implementation of it.
References
External links
The Private Communication Technology (PCT) Protocol (published 1995)
Cryptographic protocols
Obsolete technologies
|
https://en.wikipedia.org/wiki/Push-button
|
A push-button (also spelled pushbutton) or simply button is a simple switch mechanism to control some aspect of a machine or a process. Buttons are typically made out of hard material, usually plastic or metal. The surface is usually flat or shaped to accommodate the human finger or hand, so as to be easily depressed or pushed. Buttons are most often biased switches, although many un-biased buttons (due to their physical nature) still require a spring to return to their un-pushed state.
Terms for the "pushing" of a button include pressing, depressing, mashing, slapping, hitting, and punching.
Uses
The "push-button" has been utilized in calculators, push-button telephones, kitchen appliances, and various other mechanical and electronic devices, home and commercial.
In industrial and commercial applications, push buttons can be connected together by a mechanical linkage so that the act of pushing one button causes the other button to be released. In this way, a stop button can "force" a start button to be released. This method of linkage is used in simple manual operations in which the machine or process has no electrical circuits for control.
Red pushbuttons can also have large heads (called mushroom heads) for easy operation and to facilitate the stopping of a machine. These pushbuttons are called emergency stop buttons and for increased safety are mandated by the electrical code in many jurisdictions. This large mushroom shape can also be found in buttons for use with operators who need to wear gloves for their work and could not actuate a regular flush-mounted push button.
As an aid for operators and users in industrial or commercial applications, a pilot light is commonly added to draw the attention of the user and to provide feedback if the button is pushed. Typically this light is included into the center of the pushbutton and a lens replaces the pushbutton hard center disk. The source of the energy to illuminate the light is not directly tied to the con
|
https://en.wikipedia.org/wiki/Coplanarity
|
In geometry, a set of points in space are coplanar if there exists a geometric plane that contains them all. For example, three points are always coplanar, and if the points are distinct and non-collinear, the plane they determine is unique. However, a set of four or more distinct points will, in general, not lie in a single plane.
Two lines in three-dimensional space are coplanar if there is a plane that includes them both. This occurs if the lines are parallel, or if they intersect each other. Two lines that are not coplanar are called skew lines.
Distance geometry provides a solution technique for the problem of determining whether a set of points is coplanar, knowing only the distances between them.
Properties in three dimensions
In three-dimensional space, two linearly independent vectors with the same initial point determine a plane through that point. Their cross product is a normal vector to that plane, and any vector orthogonal to this cross product through the initial point will lie in the plane. This leads to the following coplanarity test using a scalar triple product:
Four distinct points, x1, x2, x3 and x4, are coplanar if and only if
(x2 − x1) · [(x3 − x1) × (x4 − x1)] = 0,
which is also equivalent to saying that the difference vectors x2 − x1, x3 − x1 and x4 − x1 are linearly dependent.
If three vectors a, b and c are coplanar and a · b = 0 (i.e., a and b are orthogonal), then
(c · â) â + (c · b̂) b̂ = c,
where â denotes the unit vector in the direction of a. That is, the vector projections of c on a and of c on b add to give the original c.
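A minimal sketch of the coplanarity tests in Python, assuming NumPy is available; the function names are illustrative:

import numpy as np

def coplanar_triple_product(p1, p2, p3, p4, tol=1e-9):
    """Four points are coplanar iff (p2-p1) . [(p3-p1) x (p4-p1)] = 0."""
    p1, p2, p3, p4 = map(np.asarray, (p1, p2, p3, p4))
    triple = np.dot(p2 - p1, np.cross(p3 - p1, p4 - p1))
    return abs(triple) < tol

def coplanar_rank(points, tol=1e-9):
    """Any number of points are coplanar iff the matrix of differences
    from the first point has rank at most 2."""
    pts = np.asarray(points, dtype=float)
    diffs = pts[1:] - pts[0]
    return np.linalg.matrix_rank(diffs, tol=tol) <= 2

print(coplanar_triple_product([0, 0, 0], [1, 0, 0], [0, 1, 0], [2, 3, 0]))  # True
print(coplanar_triple_product([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]))  # False
print(coplanar_rank([[0, 0, 0], [1, 0, 0], [0, 1, 0], [2, 3, 0], [5, -1, 0]]))  # True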
Coplanarity of points in n dimensions whose coordinates are given
Since three or fewer points are always coplanar, the problem of determining when a set of points are coplanar is generally of interest only when there are at least four points involved. In the case that there are exactly four points, several ad hoc methods can be employed, but a general method that works for any number of points uses vector methods and the property that a plane is determined by two linearly independent vectors.
In an -dimensional space where , a set of points are coplanar if and only if the matrix of their relative
|
https://en.wikipedia.org/wiki/Logarithmically%20concave%20function
|
In convex analysis, a non-negative function f is logarithmically concave (or log-concave for short) if its domain is a convex set, and if it satisfies the inequality
f(θx + (1 − θ)y) ≥ f(x)^θ f(y)^(1−θ)
for all x, y in the domain of f and 0 < θ < 1. If f is strictly positive, this is equivalent to saying that the logarithm of the function, log ∘ f, is concave; that is,
log f(θx + (1 − θ)y) ≥ θ log f(x) + (1 − θ) log f(y)
for all x, y in the domain of f and 0 < θ < 1.
Examples of log-concave functions are the 0-1 indicator functions of convex sets (which requires the more flexible definition), and the Gaussian function.
Similarly, a function is log-convex if it satisfies the reverse inequality
f(θx + (1 − θ)y) ≤ f(x)^θ f(y)^(1−θ)
for all x, y in the domain of f and 0 < θ < 1.
Properties
A log-concave function is also quasi-concave. This follows from the fact that the logarithm is monotone implying that the superlevel sets of this function are convex.
Every concave function that is nonnegative on its domain is log-concave. However, the reverse does not necessarily hold. An example is the Gaussian function f(x) = exp(−x²/2), which is log-concave since log f(x) = −x²/2 is a concave function of x. But f is not concave, since its second derivative is positive for |x| > 1:
f″(x) = e^(−x²/2) (x² − 1) > 0 for |x| > 1.
From the above two points, concavity implies log-concavity, which implies quasiconcavity.
A twice differentiable, nonnegative function f with a convex domain is log-concave if and only if for all x satisfying f(x) > 0,
f(x) ∇²f(x) ⪯ ∇f(x) ∇f(x)^T,
i.e.
f(x) ∇²f(x) − ∇f(x) ∇f(x)^T
is negative semi-definite. For functions of one variable, this condition simplifies to
f(x) f″(x) ≤ (f′(x))².
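A minimal numerical sketch of this one-variable test in Python, applied to the Gaussian f(x) = exp(−x²/2); the finite-difference step and function names are illustrative:

import math

def f(x):
    return math.exp(-x * x / 2.0)

def is_log_concave_at(func, x, h=1e-4):
    """Numerically test f(x) * f''(x) <= f'(x)^2 at a point, using central differences."""
    f0 = func(x)
    f1 = (func(x + h) - func(x - h)) / (2 * h)            # first derivative
    f2 = (func(x + h) - 2 * f0 + func(x - h)) / (h * h)   # second derivative
    return f0 * f2 <= f1 * f1 + 1e-9                      # small slack for round-off

print(all(is_log_concave_at(f, x / 10.0) for x in range(-50, 51)))  # True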
Operations preserving log-concavity
Products: The product of log-concave functions is also log-concave. Indeed, if f and g are log-concave functions, then log f and log g are concave by definition. Therefore
log f(x) + log g(x) = log(f(x) g(x))
is concave, and hence f g is also log-concave.
Marginals: if f(x, y) is log-concave in (x, y), then
g(x) = ∫ f(x, y) dy
is log-concave (see Prékopa–Leindler inequality).
This implies that convolution preserves log-concavity, since f(x − y) g(y) is log-concave in (x, y) if f and g are log-concave, and therefore
(f ∗ g)(x) = ∫ f(x − y) g(y) dy
is log-concave.
Log-concave distributions
Log-concave distributions are necessary for a number of algorithms, e.g. adaptive rejection sampling. Every distribution with log-concave density is a maximum entropy probability
|
https://en.wikipedia.org/wiki/Vector%20projection
|
The vector projection (also known as the vector component or vector resolution) of a vector a on (or onto) a nonzero vector b is the orthogonal projection of a onto a straight line parallel to b.
The projection of a onto b is often written as proj_b a or a∥b.
The vector component or vector resolute of a perpendicular to b, sometimes also called the vector rejection of a from b (denoted a⊥b), is the orthogonal projection of a onto the plane (or, in general, hyperplane) that is orthogonal to b. Since both the projection and the rejection are vectors, and their sum is equal to a, the rejection of a from b is given by:
a⊥b = a − a∥b.
To simplify notation, this article defines a1 := a∥b and a2 := a⊥b.
Thus, the vector a1 is parallel to b, the vector a2 is orthogonal to b, and a = a1 + a2.
The projection of a onto b can be decomposed into a direction and a scalar magnitude by writing it as
a1 = a1 b̂,
where a1 is a scalar, called the scalar projection of a onto b, and b̂ is the unit vector in the direction of b. The scalar projection is defined as
a1 = ‖a‖ cos θ = a · b̂,
where the operator ⋅ denotes a dot product, ‖a‖ is the length of a, and θ is the angle between a and b.
The scalar projection is equal in absolute value to the length of the vector projection, with a minus sign if the direction of the projection is opposite to the direction of b, that is, if the angle between the vectors is more than 90 degrees.
The vector projection can be calculated using the dot product of a and b as:
a1 = (a · b / ‖b‖²) b.
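A minimal sketch of these formulas in Python, assuming NumPy; the function names are illustrative:

import numpy as np

def project(a, b):
    """Vector projection a1 of a onto a nonzero vector b: (a.b / ||b||^2) * b."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return (np.dot(a, b) / np.dot(b, b)) * b

def reject(a, b):
    """Vector rejection a2 of a from b: a - a1."""
    return np.asarray(a, dtype=float) - project(a, b)

a, b = np.array([3.0, 4.0, 0.0]), np.array([1.0, 0.0, 0.0])
a1, a2 = project(a, b), reject(a, b)
print(a1, a2)                         # [3. 0. 0.] [0. 4. 0.]
print(np.allclose(a1 + a2, a))        # True: the two components sum back to a
print(np.isclose(np.dot(a2, b), 0))   # True: the rejection is orthogonal to b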
Notation
This article uses the convention that vectors are denoted in a bold font (e.g. a1), and scalars are written in normal font (e.g. a1).
The dot product of vectors a and b is written as a · b, the norm of a is written ‖a‖, and the angle between a and b is denoted θ.
Definitions based on angle θ
Scalar projection
The scalar projection of a on b is a scalar equal to
a1 = ‖a‖ cos θ,
where θ is the angle between a and b.
A scalar projection can be used as a scale factor to compute the corresponding vector projection.
Vector projection
The vector projection of a on b is a vector whose magnitude is the scalar projection of a on b, with the same direction
|
https://en.wikipedia.org/wiki/Macintosh%20Guide
|
Macintosh Guide, also referred to as Apple Guide, was Apple Computer's online help and documentation system, added to the classic Mac OS in System 7.5 and intended to work alongside Balloon Help. In addition to hypertext, indexing and searching of the text, Macintosh Guide also offered a system for teaching users how to accomplish tasks in an interactive manner. However, the process of creating guides was more complicated than non-interactive help and few developers took full advantage of its power. Apple enhanced the help system with HTML-based help in Mac OS 8.5 which worked in conjunction with Macintosh Guide providing links to Macintosh Guide sequences. Macintosh Guide was not carried over into Mac OS X, which uses an HTML-based help system.
Macintosh Guide made use of the AppleEvent Object Model (AEOM), allowing the system to examine the state of the application as it ran, and change the help in response. Help content was created in individual steps, and each step could have assigned to it conditions to determine if the step should be skipped, or if the step was needed. For instance, if the user had already completed several steps of an operation and needed help to complete it, Macintosh Guide could "see" where they were, and skip forward to the proper section of the documentation. Additionally AEOM allowed Macintosh Guide to drive the interface, completing tasks for the user if they clicked on the "Do it for me" buttons (or hypertext).
A distinctive feature of the system was support for Coaching. Using the AEOM, Macintosh Guide could find UI elements on the screen, and circle them using a "red marker" effect to draw the user's eye to it.
Macintosh Guide was also somewhat integrated with Balloon Help, optionally adding hypertext to balloons that would open the right portion of the documentation based on what object the user was currently pointing at with the mouse.
Classic Mac OS
Online help
Apple Inc. services
|
https://en.wikipedia.org/wiki/Kansei%20engineering
|
Kansei engineering (Japanese: 感性工学 kansei kougaku, emotional or affective engineering) aims at the development or improvement of products and services by translating the customer's psychological feelings and needs into the domain of product design (i.e. parameters). It was founded by Mitsuo Nagamachi, Professor Emeritus of Hiroshima University (also former Dean of Hiroshima International University and CEO of International Kansei Design Institute). Kansei engineering parametrically links the customer's emotional responses (i.e. physical and psychological) to the properties and characteristics of a product or service. In consequence, products can be designed to bring forward the intended feeling.
It has now been adopted as one of the topics for professional development by the Royal Statistical Society.
Introduction
Product design on today's markets has become increasingly complex since products contain more functions and have to meet increasing demands such as user-friendliness, manufacturability and ecological considerations. With a shortened product lifecycle, development costs are likely to increase. Because errors in the estimation of market trends can be very expensive, companies perform benchmarking studies that compare with competitors on strategic, process, marketing, and product levels. However, success in a certain market segment not only requires knowledge about the competitors and the performance of competing products, but also about the impressions which a product leaves on the customer. The latter requirement becomes much more important as products and companies mature. Customers purchase products based on subjective terms such as brand image, reputation, design, impression etc. A large number of manufacturers have started to consider such subjective properties and develop their products in a way that conveys the company image. A reliable instrument is therefore needed: an instrument which can predict the reception of a produc
|
https://en.wikipedia.org/wiki/Cohort%20%28statistics%29
|
In statistics, epidemiology, marketing and demography, a cohort is a group of subjects who share a defining characteristic (typically subjects who experienced a common event in a selected time period, such as birth or graduation).
Cohort data can often be more advantageous to demographers than period data. Because cohort data is honed to a specific time period, it is usually more accurate; it can be tuned to retrieve custom data for a specific study.
In addition, cohort data is not affected by tempo effects, unlike period data. However, cohort data can be disadvantageous in that the data needed for a cohort study can take a long time to collect. Another disadvantage is cost: because the study runs over a long period of time, demographers often require substantial funds to sustain it.
Demography often contrasts cohort perspectives and period perspectives. For instance, the total cohort fertility rate is an index of the average completed family size for cohorts of women, but since it can only be known for women who have finished child-bearing, it cannot be measured for currently fertile women. It can be calculated as the sum of the cohort's age-specific fertility rates that obtain as it ages through time. In contrast, the total period fertility rate uses current age-specific fertility rates to calculate the completed family size for a notional woman, were she to experience these fertility rates through her life.
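A minimal sketch of the cohort calculation in Python; the age-specific rates below are made-up illustrative values, not data from any study:

# Completed family size for a cohort = sum of its age-specific fertility rates
# (births per woman per year), accumulated over the years spent in each age group.
age_specific_fertility = {
    "15-19": 0.030, "20-24": 0.090, "25-29": 0.110,
    "30-34": 0.080, "35-39": 0.040, "40-44": 0.010,
}
YEARS_PER_AGE_GROUP = 5
total_cohort_fertility = YEARS_PER_AGE_GROUP * sum(age_specific_fertility.values())
print(round(total_cohort_fertility, 2))  # average completed family size, ~1.8 children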
A study on a cohort is a cohort study.
Two important types of cohort studies are:
Prospective Cohort Study: In this type of study, there is a collection of exposure data (baseline data) from the subjects recruited before development of the outcomes of interest. The subjects are then followed through time (future) to record when the subject develops the outcome of interest. Ways to follow-up with subjects of the study include: pho
|
https://en.wikipedia.org/wiki/Roll%20center
|
The roll center of a vehicle is the notional point at which the cornering forces in the suspension are reacted to the vehicle body.
There are two definitions of roll center. The most commonly used is the geometric (or kinematic) roll center, whereas the Society of Automotive Engineers uses a force-based definition.
Definition
Geometric roll center is solely dictated by the suspension geometry, and can be found using principles of the instant center of rotation.
Force based roll center, according to the US Society of Automotive Engineers, is "The point in the transverse vertical plane through any pair of wheel centers at which lateral forces may be applied to the sprung mass without producing suspension roll".
The lateral location of the roll center is typically at the center-line of the vehicle when the suspension on the left and right sides of the car are mirror images of each other.
The significance of the roll center can only be appreciated when the vehicle's center of mass is also considered. If there is a difference between the position of the center of mass and the roll center, a moment arm is created. When the vehicle experiences angular velocity due to cornering, the length of the moment arm, combined with the stiffness of the springs and possibly anti-roll bars (also called 'anti-sway bars'), defines how much the vehicle will roll. This has other effects too, such as dynamic load transfer.
Application
When the vehicle rolls the roll centers migrate. The roll center height has been shown to affect behavior at the initiation of turns such as nimbleness and initial roll control.
Testing methods
Current methods of analyzing individual wheel instant centers have yielded more intuitive results of the effects of non-rolling weight transfer effects. This type of analysis is better known as the lateral-anti method. This is where one takes the individual instant center locations of each corner of the car and then calculates the resultant vertical reaction v
|
https://en.wikipedia.org/wiki/Bayesian%20search%20theory
|
Bayesian search theory is the application of Bayesian statistics to the search for lost objects. It has been used several times to find lost sea vessels, for example USS Scorpion, and has played a key role in the recovery of the flight recorders in the Air France Flight 447 disaster of 2009. It has also been used in the attempts to locate the remains of Malaysia Airlines Flight 370.
Procedure
The usual procedure is as follows:
Formulate as many reasonable hypotheses as possible about what may have happened to the object.
For each hypothesis, construct a probability density function for the location of the object.
Construct a function giving the probability of actually finding an object in location X when searching there if it really is in location X. In an ocean search, this is usually a function of water depth — in shallow water chances of finding an object are good if the search is in the right place. In deep water chances are reduced.
Combine the above information coherently to produce an overall probability density map. (Usually this simply means multiplying the two functions together.) This gives the probability of finding the object by looking in location X, for all possible locations X. (This can be visualized as a contour map of probability.)
Construct a search path which starts at the point of highest probability and 'scans' over high probability areas, then intermediate probabilities, and finally low probability areas.
Revise all the probabilities continuously during the search. For example, if the hypotheses for location X imply the likely disintegration of the object and the search at location X has yielded no fragments, then the probability that the object is somewhere around there is greatly reduced (though not usually to zero) while the probabilities of its being at other locations is correspondingly increased. The revision process is done by applying Bayes' theorem.
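A minimal sketch of this revision step in Python, for a grid of search cells; the prior and detection probabilities are made-up illustrative values:

def update_after_failed_search(prior, detect_prob, searched_cell):
    """Bayes update of cell probabilities after an unsuccessful search of one cell.

    prior[i]       -- probability the object is in cell i
    detect_prob[i] -- probability of finding the object when searching cell i, if it is there
    """
    p, q = prior[searched_cell], detect_prob[searched_cell]
    evidence = 1.0 - p * q  # probability of not finding it when searching that cell
    posterior = []
    for i, pi in enumerate(prior):
        if i == searched_cell:
            posterior.append(pi * (1.0 - q) / evidence)
        else:
            posterior.append(pi / evidence)
    return posterior

prior = [0.5, 0.3, 0.2]
detect = [0.8, 0.5, 0.2]  # e.g. shallow vs. deep water
print(update_after_failed_search(prior, detect, searched_cell=0))
# -> [~0.167, 0.5, ~0.333]: the searched cell's probability drops, the others rise.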
In other words, first search where it most probably will be found, then searc
|
https://en.wikipedia.org/wiki/Wetting%20layer
|
A wetting layer is a monolayer of atoms that is epitaxially grown on a flat surface. The atoms forming the wetting layer can be semimetallic elements/compounds or metallic alloys (for thin films). Wetting layers form when depositing a lattice-mismatched material on a crystalline substrate. This article refers to the wetting layer connected to the growth of self-assembled quantum dots (e.g. InAs on GaAs). These quantum dots form on top of the wetting layer. The wetting layer can influence the states of the quantum dot for applications in quantum information processing and quantum computation.
Process
The wetting layer is epitaxially grown on a surface using molecular beam epitaxy (MBE). The temperatures required for wetting layer growth typically range from 400 to 500 degrees Celsius. When a material A is deposited on a surface of a lattice-mismatched material B, the first atomic layer of material A often adopts the lattice constant of B. This mono-layer of material A is called the wetting layer. When the thickness of layer A increases further, it becomes energetically unfavorable for material A to keep the lattice constant of B. Due to the high strain of layer A, additional atoms group together once a certain critical thickness of layer A is reached. This island formation reduces the elastic energy. Overgrown with material B, the wetting layer forms a quantum well in case material A has a lower bandgap than B. In this case, the formed islands are quantum dots. Further annealing can be used to modify the physical properties of the wetting layer/quantum dot.
Properties
The wetting layer is a close-to mono-atomic layer with a thickness of typically 0.5 nanometers. The electronic properties of the quantum dot can change as a result of the wetting layer. Also, the strain of the quantum dot can change due to the wetting layer.
Notes
External links
Wetting layer on arxiv.org
group website of M. Dähne
Quantum electronics
Thin film deposition
|
https://en.wikipedia.org/wiki/April%20O%27Neil
|
April O'Neil is a fictional character from the Teenage Mutant Ninja Turtles comics. She is the first human ally of the Ninja Turtles.
April made her first appearance in the Mirage comic series in 1984 as a computer programmer. She was later portrayed as a strong-willed news reporter in the Turtles' first animated series, as a warrior in the Teenage Mutant Ninja Turtles Adventures comic produced by Archie Comics, and various other personas in different TMNT media.
While depicted as a red-haired adult white woman in the comic series and the 1987 and 2003 animated series, she is depicted as a teenager in the 2012 and 2018 animated series, and as black in the latter series, its 2022 film, and the 2023 film Mutant Mayhem. Her love interests have varied, though she is typically paired romantically with the vigilante Casey Jones.
April was voiced by Renae Jacobs in the 1987 animated series, Veronica Taylor in the 2003 animated series, Sarah Michelle Gellar in the 2007 film TMNT, Mae Whitman in the 2012 animated series, Kat Graham in the 2018 animated series and its 2022 film, and Ayo Edebiri in the 2023 film Mutant Mayhem. In film, she has been portrayed by Judith Hoag (1990), Paige Turco (1991 and 1993), Megan Fox (2014 and 2016), and by Malina Weissman as the younger version of the character in the 2014 film.
Comics
Mirage Comics
In the original Mirage Comics storyline for Teenage Mutant Ninja Turtles, April O'Neil was a skilled computer programmer and assistant to a famous yet nefarious scientist, Baxter Stockman. She helped program his MOUSER robots but, after discovering Baxter was using them to burrow into bank vaults, she fled his workshop. Robots chased her into the sewer where she was promptly saved by three of the Turtles. The Turtles later successfully fended off a MOUSER invasion.
After leaving her job with Baxter, April decided to open an antique shop. The shop was subsequently destroyed in a battle between the Turtles and Shredder and the Foot Clan
|
https://en.wikipedia.org/wiki/Wald%27s%20equation
|
In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities. In its simplest form, it relates the expectation of a sum of randomly many finite-mean, independent and identically distributed random variables to the expected number of terms in the sum and the random variables' common expectation under the condition that the number of terms in the sum is independent of the summands.
The equation is named after the mathematician Abraham Wald. An identity for the second moment is given by the Blackwell–Girshick equation.
Basic version
Let (Xn)n∈ℕ be a sequence of real-valued, independent and identically distributed random variables and let N be an integer-valued random variable that is independent of the sequence (Xn)n∈ℕ. Suppose that N and the Xn have finite expectations. Then
E[X1 + … + XN] = E[N] E[X1].
Example
Roll a six-sided die. Take the number on the die (call it N) and roll that number of six-sided dice to get the numbers X1, …, XN, and add up their values. By Wald's equation, the resulting value on average is
E[N] E[X] = 3.5 × 3.5 = 12.25.
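A minimal simulation sketch of this example in Python (the trial count and seed are arbitrary):

import random

def dice_total_once(rng):
    """Roll one die to get N, then roll N dice and return their sum."""
    n = rng.randint(1, 6)
    return sum(rng.randint(1, 6) for _ in range(n))

rng = random.Random(0)
trials = 200_000
average = sum(dice_total_once(rng) for _ in range(trials)) / trials
print(round(average, 2))  # close to E[N] * E[X] = 3.5 * 3.5 = 12.25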
General version
Let (Xn)n∈ℕ be an infinite sequence of real-valued random variables and let N be a nonnegative integer-valued random variable.
Assume that:
1. (Xn)n∈ℕ are all integrable (finite-mean) random variables,
2. E[Xn 1{N ≥ n}] = E[Xn] P(N ≥ n) for every natural number n, and
3. the infinite series satisfies Σn=1..∞ E[|Xn| 1{N ≥ n}] < ∞.
Then the random sums
SN := X1 + … + XN and TN := E[X1] + … + E[XN]
are integrable and E[SN] = E[TN].
If, in addition,
4. the Xn all have the same expectation, and
5. N has finite expectation,
then
E[SN] = E[N] E[X1].
Remark: Usually, the name Wald's equation refers to this last equality.
Discussion of assumptions
Clearly, assumption (1) is needed to formulate assumption (2) and Wald's equation. Assumption (2) controls the amount of dependence allowed between the sequence (Xn)n∈ℕ and the number N of terms; see the counterexample below for the necessity. Note that assumption (2) is satisfied when N is a stopping time for a sequence of independent random variables (Xn)n∈ℕ. Assumption (3) is of more t
|
https://en.wikipedia.org/wiki/Prime%20constant
|
The prime constant is the real number ρ whose nth binary digit is 1 if n is prime and 0 if n is composite or 1.
In other words, ρ is the number whose binary expansion corresponds to the indicator function of the set of prime numbers. That is,
ρ = Σp 2^(−p) = Σn=1..∞ χP(n) / 2^n,
where p indicates a prime and χP is the characteristic function of the set P of prime numbers.
The beginning of the decimal expansion of ρ is: 0.414682509851111…
The beginning of the binary expansion is: 0.011010100010100010…
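A minimal sketch in Python that reproduces these expansions; the helper names are illustrative:

from fractions import Fraction

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prime_constant(bits):
    """Partial sum of rho = sum over primes p of 2**(-p), using the first `bits` binary digits."""
    return sum(Fraction(1, 2 ** n) for n in range(1, bits + 1) if is_prime(n))

rho = prime_constant(64)
print(float(rho))  # 0.4146825098511116...
print("0." + "".join("1" if is_prime(n) else "0" for n in range(1, 25)))  # binary digits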
Irrationality
The number ρ can be shown to be irrational. To see why, suppose it were rational.
Denote the kth digit of the binary expansion of ρ by rk. Then, since ρ is assumed rational, its binary expansion is eventually periodic, and so there exist positive integers N and k such that
rn = rn+ik for all n > N and all i ∈ ℕ.
Since there are an infinite number of primes, we may choose a prime p > N. By definition we see that rp = 1. As noted, we have rp = rp+ik for all i ∈ ℕ. Now consider the case i = p. We have rp+p·k = rp(k+1) = 0, since p(k + 1) is composite (because k + 1 ≥ 2). Since rp ≠ rp+pk, we see that ρ is irrational.
References
External links
Irrational numbers
Prime numbers
Articles containing proofs
Mathematical constants
|
https://en.wikipedia.org/wiki/Germplasm
|
Germplasm are genetic resources such as seeds, tissues, and DNA sequences that are maintained for the purpose of animal and plant breeding, conservation efforts, agriculture, and other research uses. These resources may take the form of seed collections stored in seed banks, trees growing in nurseries, animal breeding lines maintained in animal breeding programs or gene banks. Germplasm collections can range from collections of wild species to elite, domesticated breeding lines that have undergone extensive human selection. Germplasm collection is important for the maintenance of biological diversity, food security, and conservation efforts.
In the United States, germplasm resources are regulated by the National Genetic Resources Program (NGRP), created by the U.S. Congress in 1990. In addition, the Germplasm Resources Information Network (GRIN) web server provides information about germplasm as it pertains to agricultural production.
Germplasm Regulation
Specifically for plants, there is the U.S. National Plant Germplasm System (NPGS) which holds > 450,000 accessions with 10,000 species of the 85 most commonly grown crops. Many accessions held are international species, and NPGS distributes germplasm resources internationally.
As genetic information moves largely online, germplasm information is transitioning from physical locations (seed banks, cryopreservation) to online platforms containing genetic sequences. In addition, there are issues around how germplasm information is collected and where it is shared. Historically, some germplasm information was collected in developing countries and then shared with researchers who then sell the donor country the
|
https://en.wikipedia.org/wiki/Fin%20field-effect%20transistor
|
A fin field-effect transistor (FinFET) is a multigate device, a MOSFET (metal–oxide–semiconductor field-effect transistor) built on a substrate where the gate is placed on two, three, or four sides of the channel or wrapped around the channel, forming a double or even multi gate structure. These devices have been given the generic name "FinFETs" because the source/drain region forms fins on the silicon surface. The FinFET devices have significantly faster switching times and higher current density than planar CMOS (complementary metal–oxide–semiconductor) technology.
FinFET is a type of non-planar transistor, or "3D" transistor. It is the basis for modern nanoelectronic semiconductor device fabrication. Microchips utilizing FinFET gates first became commercialized in the first half of the 2010s, and became the dominant gate design at 14 nm, 10 nm and 7 nm process nodes.
It is common for a single FinFET transistor to contain several fins, arranged side by side and all covered by the same gate, that act electrically as one, to increase drive strength and performance.
History
After the MOSFET was first demonstrated by Mohamed Atalla and Dawon Kahng of Bell Labs in 1960, the concept of a double-gate thin-film transistor (TFT) was proposed by H. R. Farrah (Bendix Corporation) and R. F. Steinberg in 1967. A double-gate MOSFET was later proposed by Toshihiro Sekigawa of the Electrotechnical Laboratory (ETL) in a 1980 patent describing the planar XMOS transistor. Sekigawa fabricated the XMOS transistor with Yutaka Hayashi at the ETL in 1984. They demonstrated that short-channel effects can be significantly reduced by sandwiching a fully depleted silicon-on-insulator (SOI) device between two gate electrodes connected together.
The first FinFET transistor type was called a "Depleted Lean-channel Transistor" or "DELTA" transistor, which was first fabricated in Japan by Hitachi Central Research Laboratory's Digh Hisamoto, Toru Kaga, Yoshifumi Kawamoto and Eiji Takeda in 198
|
https://en.wikipedia.org/wiki/HardBall%21
|
HardBall! is a baseball video game published by Accolade. Initially released for the Commodore 64 in 1985, it was ported to other computers over the next several years. A Sega Genesis cartridge was published in 1991. HardBall! was followed by sequels HardBall II, HardBall III, HardBall IV, HardBall 5, and HardBall 6.
Gameplay
Play is controlled with a joystick or arrow keys and an action button. One of the four cardinal directions is used to choose the pitch, and again to aim it towards low, high, inside (towards batter), or outside (away from batter). The same directions are used to aim the swing when batting. When fielding after a hit, the defensive player closest to the ball will flash to show it is the one currently under control. The four directions are then used to throw to one of the four bases.
Hardball! was one of the first baseball video games to incorporate the perspective from the pitcher's mound, similar to MLB broadcasts. There are also managerial options available. The player has a selection of pitchers to choose from. Each team member has his own statistics that affect his performance, and can be rearranged as desired. Prior to HardBall!'s release, there were managerial baseball games available, such as MicroLeague Baseball, but HardBall! was the first to integrate that aspect with the arcade control of the game action itself.
Reception
Hardball! was a commercial blockbuster. The Commodore 64 version topped the UK sales chart in March 1986. It went on to become Accolade's best-selling Commodore game as of late 1987, and by 1989 had surpassed 500,000 units sold.
Info rated Hardball! four-plus stars out of five, stating that it "is easily the best baseball simulation we have seen to date for the 64/128" and praising its graphics. ANALOG Computing praised the Atari 8-bit version's gameplay, graphic, and animation, only criticizing the computer opponent's low difficulty level. The magazine concluded that the game "is in a league of its own, above all other A
|
https://en.wikipedia.org/wiki/Shibboleth%20%28software%29
|
Shibboleth is a single sign-on log-in system for computer networks and the Internet. It allows people to sign in using just one identity to various systems run by federations of different organizations or institutions. The federations are often universities or public service organizations.
The Shibboleth Internet2 middleware initiative created an architecture and open-source implementation for identity management and federated identity-based authentication and authorization (or access control) infrastructure based on Security Assertion Markup Language (SAML). Federated identity allows the sharing of information about users from one security domain to the other organizations in a federation. This allows for cross-domain single sign-on and removes the need for content providers to maintain usernames and passwords. Identity providers (IdPs) supply user information, while service providers (SPs) consume this information and give access to secure content.
History
The Shibboleth project grew out of Internet2. Today, the project is managed by the Shibboleth Consortium. Two of the most popular software components managed by the Shibboleth Consortium are the Shibboleth Identity Provider and the Shibboleth Service Provider, both of which are implementations of SAML.
The project was named after an identifying passphrase used in the Bible (Judges 12:6) because Ephraimites were not able to pronounce "sh".
The Shibboleth project was started in 2000 to facilitate the sharing of resources between organizations with incompatible authentication and authorization infrastructures. Architectural work was performed for over a year prior to any software development. After development and testing, Shibboleth IdP 1.0 was released in July 2003. This was followed by the release of Shibboleth IdP 1.3 in August 2005.
Version 2.0 of the Shibboleth software was a major upgrade released in March 2008. It included both IdP and SP components, but, more importantly, Shibboleth 2.0 supported SAML 2.0
|
https://en.wikipedia.org/wiki/A%20Naturalist%20in%20Indian%20Seas
|
A Naturalist in Indian Seas, or, Four Years with the Royal Indian Marine Survey Ship Investigator is a 1902 publication by Alfred William Alcock, a British naturalist and carcinologist. The book is mostly a narrative describing the Investigator's journey through areas of the Indian Ocean, such as the Laccadive Sea, the Bay of Bengal and the Andaman Sea. It also details the history of the Investigator, as well as the marine biology of the Indian Ocean.
The book is considered a classic in natural history travel, and in 1903, The Geographical Journal described it as "a most fascinating and complete popular account of the deep-sea fauna of the Indian seas. The book is one of intense interest throughout to a zoologist". In its original edition, A Naturalist in Indian Seas was 328 pages long and published in 8 volumes in London.
References
External links
A naturalist in Indian seas; or, Four years with the Royal Indian marine survey ship "Investigator,", Alfred Alcock, Marine Survey of India. J. Murray, 1902.
1902 non-fiction books
Books about India
Indian Ocean
Marine biology
British travel books
|
https://en.wikipedia.org/wiki/Helmert%E2%80%93Wolf%20blocking
|
The Helmert–Wolf blocking (HWB) is a least squares solution method for the solution of a sparse block system of linear equations. It was first reported by F. R. Helmert for use in geodesy problems in 1880; Helmut Wolf (1910–1994) published his direct semianalytic solution in 1978.
It is based on ordinary Gaussian elimination in matrix form or partial minimization form.
Description
Limitations
The HWB solution is very fast to compute but it is optimal only if observational errors do not correlate between the data blocks. The generalized canonical correlation analysis (gCCA) is the statistical method of choice for making those harmful cross-covariances vanish. This may, however, become quite tedious depending on the nature of the problem.
Applications
The HWB method is critical to satellite geodesy and similar large problems. The HWB method can be extended to fast Kalman filtering (FKF) by augmenting its linear regression equation system to take into account information from numerical forecasts, physical constraints and other ancillary data sources that are available in realtime. Operational accuracies can then be computed reliably from the theory of minimum-norm quadratic unbiased estimation (Minque) of C. R. Rao.
See also
Block matrix
Notes
Statistical algorithms
Least squares
Geodesy
|
https://en.wikipedia.org/wiki/Fast%20Kalman%20filter
|
The fast Kalman filter (FKF), devised by Antti Lange (born 1941), is an extension of the Helmert–Wolf blocking (HWB) method from geodesy to safety-critical real-time applications of Kalman filtering (KF) such as GNSS navigation up to the centimeter-level of accuracy and satellite imaging of the Earth, including atmospheric tomography.
Motivation
Kalman filters are an important filtering technique for building fault-tolerance into a wide range of systems, including real-time imaging.
The ordinary Kalman filter is an optimal filtering algorithm for linear systems. However, an optimal Kalman filter is not stable (i.e. reliable) if Kalman's observability and controllability conditions are not continuously satisfied. These conditions are very challenging to maintain for any larger system. This means that even optimal Kalman filters may start diverging towards false solutions. Fortunately, the stability of an optimal Kalman filter can be controlled by monitoring its error variances if only these can be reliably estimated (e.g. by MINQUE). Their precise computation is, however, much more demanding than the optimal Kalman filtering itself. The FKF computing method often provides the required speed-up also in this respect.
Optimum calibration
Calibration parameters are a typical example of those state parameters that may create serious observability problems if a narrow window of data (i.e. too few measurements) is continuously used by a Kalman filter. Observing instruments onboard orbiting satellites gives an example of optimal Kalman filtering where their calibration is done indirectly on ground. There may also exist other state parameters that are hardly or not at all observable if too small samples of data are processed at a time by any sort of a Kalman filter.
Inverse problem
The computing load of the inverse problem of an ordinary Kalman recursion is roughly proportional to the cube of the number of the measurements processed simultaneously. This number can alwa
|
https://en.wikipedia.org/wiki/Multiuser%20DOS
|
Multiuser DOS is a real-time multi-user multi-tasking operating system for IBM PC-compatible microcomputers.
An evolution of the older Concurrent CP/M-86, Concurrent DOS and Concurrent DOS 386 operating systems, it was originally developed by Digital Research and acquired and further developed by Novell in 1991. Its ancestry lies in the earlier Digital Research 8-bit operating systems CP/M and MP/M, and the 16-bit single-tasking CP/M-86 which evolved from CP/M.
When Novell abandoned Multiuser DOS in 1992, the three master value-added resellers (VARs) DataPac Australasia, Concurrent Controls and Intelligent Micro Software were allowed to take over and continued independent development into Datapac Multiuser DOS and System Manager, CCI Multiuser DOS, and IMS Multiuser DOS and REAL/32.
The FlexOS line, which evolved from Concurrent DOS 286 and Concurrent DOS 68K, was sold off to Integrated Systems, Inc. (ISI) in July 1994.
Concurrent CP/M-86
The initial version of CP/M-86 1.0 (with BDOS 2.x) was adapted and became available to the IBM PC in 1982. It was commercially unsuccessful as IBM's PC DOS 1.0 offered much the same facilities for a considerably lower price. Neither PC DOS nor CP/M-86 could fully exploit the power and capabilities of the new 16-bit machine.
It was soon supplemented by an implementation of CP/M's multitasking 'big brother', MP/M-86 2.0, since September 1981. This turned a PC into a multiuser machine capable of supporting multiple concurrent users using dumb terminals attached by serial ports. The environment presented to each user made it seem as if they had the entire computer to themselves. Since terminals cost a fraction of the then-substantial price of a complete PC, this offered considerable cost savings, as well as facilitating multi-user applications such as accounts or stock control in a time when PC networks were rare, very expensive and difficult to implement.
CP/M-86 1.1 (with BDOS 2.2) and MP/M-86 2.1 were merged to create Concurr
|
https://en.wikipedia.org/wiki/Linear%20referencing
|
Linear referencing, also called linear reference system or linear referencing system (LRS), is a method of spatial referencing in engineering and construction, in which the locations of physical features along a linear element are described in terms of measurements from a fixed point, such as a milestone along a road. Each feature is located by either a point (e.g. a signpost) or a line (e.g. a no-passing zone). If a segment of the linear element or route is changed, only those locations on the changed segment need to be updated.
Linear referencing is suitable for management of data related to linear features like roads, railways, oil and gas transmission pipelines, power and data transmission lines, and rivers.
Motivation
One system for identifying the location of pipeline features and characteristics is to measure distance from the start of the pipeline. An example linear reference address is: Engineering Station 1145 + 86 on pipeline Alpha = 114,586 feet from the start of the pipeline. With a reroute, cumulative stationing might not be the same as engineering stationing, because of the addition of the extra pipeline. Linear referencing systems compute the differences to resolve this dilemma.
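A minimal sketch of this stationing arithmetic in Python, assuming the usual convention that one station equals 100 feet; the function name is illustrative:

def station_to_feet(station):
    """Convert an engineering station like '1145+86' to a distance in feet
    from the start of the line (1 station = 100 feet)."""
    whole, plus = station.replace(" ", "").split("+")
    return int(whole) * 100 + int(plus)

print(station_to_feet("1145 + 86"))  # 114586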
Linear referencing is one of a family of methods of expressing location. Coordinates such as latitude and longitude are another member of the family, as are landmark references such as "5 km south of Ayers Rock." Linear referencing has traditionally been the expression of choice in engineering applications such as road and pipeline maintenance. One can more realistically dispatch a worker to a bridge 12.7 km along a road from a reference point, rather than to a pair of coordinates or a landmark. The road serves as the reference frame, just as the earth serves as the reference frame for latitude and longitude.
Benefits
Linear referencing can be used to define points along a linear feature with just a small amount of information such as the name of a road and the distance
|
https://en.wikipedia.org/wiki/SILC%20%28protocol%29
|
SILC (Secure Internet Live Conferencing protocol) is a protocol that provides secure synchronous conferencing services (very much like IRC) over the Internet.
Components
The SILC protocol can be divided in three main parts: SILC Key Exchange (SKE) protocol, SILC Authentication protocol and SILC Packet protocol. SILC protocol additionally defines SILC Commands that are used to manage the SILC session. SILC provides channels (groups), nicknames, private messages, and other common features. However, SILC nicknames, in contrast to many other protocols (e.g. IRC), are not unique; a user is able to use any nickname, even if one is already in use. The real identification in the protocol is performed by unique Client ID. The SILC protocol uses this to overcome nickname collision, a problem present in many other protocols. All messages sent in a SILC network are binary, allowing them to contain any type of data, including text, video, audio, and other multimedia data.
The SKE protocol is used to establish session key and other security parameters for protecting the SILC Packet protocol. The SKE itself is based on the Diffie–Hellman key exchange algorithm (a form of asymmetric cryptography) and the exchange is protected with digital signatures. The SILC Authentication protocol is performed after successful SKE protocol execution to authenticate a client and/or a server. The authentication may be based on passphrase or on digital signatures, and if successful gives access to the relevant SILC network. The SILC Packet protocol is intended to be a secure binary packet protocol, assuring that the content of each packet (consisting of a packet header and packet payload) is secured and authenticated. The packets are secured using algorithms based on symmetric cryptography and authenticated by using Message Authentication Code algorithm, HMAC.
SILC channels (groups) are protected by using symmetric channel keys. It is optionally possible to digitally sign all channel messages.
|
https://en.wikipedia.org/wiki/Premature%20convergence
|
In evolutionary algorithms (EA), the term premature convergence means that a population for an optimization problem has converged too early, resulting in a suboptimal solution. In this context, the parental solutions are no longer able, through the genetic operators, to generate offspring that outperform their parents. Premature convergence is a common problem in evolutionary algorithms in general and genetic algorithms in particular, as it leads to the loss or convergence of a large number of alleles, subsequently making it very difficult for the search to recover the lost allele values. An allele is considered lost if all individuals in the population share the same value for the corresponding gene, so that the allele no longer occurs anywhere in the population. An allele is, as defined by De Jong, considered converged when 95% of a population shares the same value for a certain gene (see also convergence).
Strategies for preventing premature convergence
Strategies to regain genetic variation can be:
a mating strategy called incest prevention,
uniform crossover,
favored replacement of similar individuals (preselection or crowding),
segmentation of individuals of similar fitness (fitness sharing),
increasing population size.
Genetic variation can also be regained by mutation, though this process is highly random.
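One family of such strategies makes the mutation probability adaptive. The following minimal sketch raises the mutation probability whenever the gap between the population's best and average fitness collapses; the thresholds, rates, and toy fitness function are arbitrary assumptions for illustration, not values from the literature.

import random

def adaptive_mutation_rate(fitnesses, base_rate=0.01, boosted_rate=0.2, gap_threshold=0.05):
    """Return a mutation probability that rises when diversity, proxied by the
    best-vs-average fitness gap, collapses. Thresholds are illustrative."""
    best = max(fitnesses)
    avg = sum(fitnesses) / len(fitnesses)
    gap = (best - avg) / abs(best) if best != 0 else best - avg
    return boosted_rate if gap < gap_threshold else base_rate

def mutate(individual, rate):
    """Bit-flip mutation on a list of 0/1 genes."""
    return [1 - g if random.random() < rate else g for g in individual]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
fitnesses = [sum(ind) for ind in population]          # toy OneMax fitness
rate = adaptive_mutation_rate(fitnesses)
population = [mutate(ind, rate) for ind in population]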
One way to reduce the risk of premature convergence is to use structured populations instead of the commonly used panmictic ones, see below.
Identification of the occurrence of premature convergence
It is hard to determine when premature convergence has occurred, and it is equally hard to predict its presence in the future. One measure is to use the difference between the average and maximum fitness values, as used by Patnaik & Srinivas, to then vary the crossover and mutation probabilities. Population diversity is another measure that has been used extensively in studies of premature convergence. However, although i
|
https://en.wikipedia.org/wiki/Unary%20function
|
In mathematics, a unary function is a function that takes one argument. A unary operator belongs to a subset of unary functions, in that its range coincides with its domain. In contrast, a unary function's domain may or may not coincide with its range.
Examples
The successor function, denoted succ, is a unary operator. Its domain and codomain are the natural numbers; its definition is as follows:
succ: N → N
succ(n) = n + 1
In many programming languages such as C, executing this operation is denoted by postfixing ++ to the operand, i.e. the use of x++ is equivalent to executing the assignment x = x + 1.
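In Python, which has no increment operator, the successor function can be written as an ordinary one-argument function; the following lines are purely illustrative.

def succ(n: int) -> int:
    """Unary successor function on the natural numbers."""
    return n + 1

assert succ(41) == 42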
Many of the elementary functions are unary functions, including the trigonometric functions, logarithm with a specified base, exponentiation to a particular power or base, and hyperbolic functions.
See also
Arity
Binary function
Binary operator
List of mathematical functions
Ternary operation
Unary operation
References
Foundations of Genetic Programming
Functions and mappings
Types of functions
|
https://en.wikipedia.org/wiki/Load-balanced%20switch
|
A load-balanced switch is a switch architecture which guarantees 100% throughput with no central arbitration at all, at the cost of sending each packet across the crossbar twice. Load-balanced switches are a subject of research for large routers scaled past the point of practical central arbitration.
Introduction
Internet routers are typically built using line cards connected with a switch. Routers supporting moderate total bandwidth may use a bus as their switch, but high bandwidth routers typically use some sort of crossbar interconnection. In a crossbar, each output connects to one input, so that information can flow through every output simultaneously. Crossbars used for packet switching are typically reconfigured tens of millions of times per second. The schedule of these configurations is determined by a central arbiter, for example a Wavefront arbiter, in response to requests by the line cards to send information to one another.
Perfect arbitration would result in throughput limited only by the maximum throughput of each crossbar input or output. For example, if all traffic coming into line cards A and B is destined for line card C, then the maximum traffic that cards A and B can process together is limited by C. Perfect arbitration has been shown to require massive amounts of computation that scale up much faster than the number of ports on the crossbar. Practical systems use imperfect arbitration heuristics (such as iSLIP) that can be computed in reasonable amounts of time.
A load-balanced switch is not related to a load balancing switch, which refers to a kind of router used as a front end to a farm of web servers to spread requests to a single website across many servers.
Basic architecture
As shown in the figure to the right, a load-balanced switch has N input line cards, each of rate R, each connected to N buffers by a link of rate R/N. Those buffers are in turn each connected to N output line cards, each of rate R, by links of rate R/N.
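The following toy sketch, written under the assumption of fixed-size cells and simple round-robin spreading, illustrates the two-stage idea: each input spreads arriving cells across the intermediate buffers in turn, and each output then drains those buffers in turn. It is a conceptual model only, not an implementation of any published load-balanced switch design.

from collections import deque

N = 4  # number of line cards

# At each intermediate buffer, one FIFO (a virtual output queue) per output line card.
voq = [[deque() for _ in range(N)] for _ in range(N)]
spread_ptr = [0] * N   # per-input round-robin pointer over intermediate buffers
serve_ptr = [0] * N    # per-output round-robin pointer over intermediate buffers

def first_stage(input_port: int, dest_output: int) -> None:
    """Input spreads each arriving cell to the next intermediate buffer in round-robin order."""
    mid = spread_ptr[input_port]
    voq[mid][dest_output].append((input_port, dest_output))
    spread_ptr[input_port] = (mid + 1) % N

def second_stage(output_port: int):
    """Output pulls at most one cell per time slot, visiting intermediate buffers round-robin."""
    for i in range(N):
        mid = (serve_ptr[output_port] + i) % N
        if voq[mid][output_port]:
            serve_ptr[output_port] = (mid + 1) % N
            return voq[mid][output_port].popleft()
    return None

# All traffic from inputs 0 and 1 destined to output 2 gets spread over all buffers.
for t in range(8):
    first_stage(0, 2)
    first_stage(1, 2)
print([second_stage(2) for _ in range(4)])   # one cell drawn from each intermediate buffer in turn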
|
https://en.wikipedia.org/wiki/Comparison%20of%20command%20shells
|
A command shell is a command-line interface to interact with and manipulate a computer's operating system.
General characteristics
Interactive features
Background execution
Background execution allows a shell to run a command without user interaction in the terminal, freeing the command line for additional work with the shell. POSIX shells and other Unix shells allow background execution by appending the & character to the end of a command. In PowerShell, the Start-Process or Start-Job cmdlets can be used.
Completions
Completion features assist the user in typing commands at the command line by looking for and suggesting matching words for incomplete ones. Completion is generally requested by pressing the completion key (often the Tab key).
Command name completion is the completion of the name of a command. In most shells, a command can be a program in the command path (usually $PATH), a builtin command, a function or alias.
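As a rough sketch of what command name completion involves (scanning the directories on $PATH for executables whose names match a typed prefix), consider the following; it ignores builtins, functions, and aliases, and is not how any particular shell implements completion.

import os

def complete_command(prefix: str) -> list[str]:
    """Return executable names on $PATH that start with the given prefix."""
    matches = set()
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        try:
            entries = os.listdir(directory)
        except OSError:
            continue  # skip unreadable or nonexistent PATH entries
        for name in entries:
            full = os.path.join(directory, name)
            if name.startswith(prefix) and os.path.isfile(full) and os.access(full, os.X_OK):
                matches.add(name)
    return sorted(matches)

print(complete_command("gi"))   # e.g. ['git', ...] depending on the system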
Path completion is the completion of the path to a file, relative or absolute.
Wildcard completion is a generalization of path completion, where an expression matches any number of files, using any supported syntax for file matching.
Variable completion is the completion of the name of a variable (environment variable or shell variable).
Bash, zsh, and fish have completion for all variable names. PowerShell has completions for environment variable names, shell variable names and — from within user-defined functions — parameter names.
Command argument completion is the completion of a specific command's arguments. There are two types of arguments, named and positional: Named arguments, often called options, are identified by their name or letter preceding a value, whereas positional arguments consist only of the value. Some shells allow completion of argument names, but few support completing values.
Bash, zsh and fish offer parameter name completion through a definition external to the command, distributed in a separate completion definition
|
https://en.wikipedia.org/wiki/Methyl%20butyrate
|
Methyl butyrate, also known under the systematic name methyl butanoate, is the methyl ester of butyric acid. Like most esters, it has a fruity odor, in this case resembling apples or pineapples. At room temperature, it is a colorless liquid with low solubility in water, upon which it floats to form an oily layer. Although it is flammable, it has a relatively low vapor pressure (40 mmHg at ), so it can be safely handled at room temperature without special safety precautions.
Methyl butyrate is present in small amounts in several plant products, especially pineapple oil. It can be produced by distillation from essential oils of vegetable origin, but is also manufactured on a small scale for use in perfumes and as a food flavoring.
Methyl butyrate has been used in combustion studies as a surrogate fuel for the larger fatty acid methyl esters found in biodiesel. However, studies have shown that, due to its short-chain length, methyl butyrate does not reproduce well the negative temperature coefficient (NTC) behaviour and early CO2 formation characteristics of real biodiesel fuels. Therefore, methyl butyrate is not a suitable surrogate fuel for biodiesel combustion studies.
References
Methyl esters
Butyrate esters
Perfume ingredients
Flavors
|
https://en.wikipedia.org/wiki/Stokes%20parameters
|
The Stokes parameters are a set of values that describe the polarization state of electromagnetic radiation. They were defined by George Gabriel Stokes in 1852 (S. Chandrasekhar, Radiative Transfer, Dover Publications, New York, 1960, page 25) as a mathematically convenient alternative to the more common description of incoherent or partially polarized radiation in terms of its total intensity (I), (fractional) degree of polarization (p), and the shape parameters of the polarization ellipse. The effect of an optical system on the polarization of light can be determined by constructing the Stokes vector for the input light and applying Mueller calculus to obtain the Stokes vector of the light leaving the system. The original Stokes paper was discovered independently by Francis Perrin in 1942 and by Subrahmanyan Chandrasekhar in 1947 (Chandrasekhar, S. (1947). The transfer of radiation in stellar atmospheres. Bulletin of the American Mathematical Society, 53(7), 641–711), who named them the Stokes parameters.
Definitions
The relationship of the Stokes parameters S0, S1, S2, S3 to intensity and polarization ellipse parameters is shown in the equations below and the figure on the right.
S0 = I
S1 = I p cos 2ψ cos 2χ
S2 = I p sin 2ψ cos 2χ
S3 = I p sin 2χ
Here Ip, 2ψ and 2χ are the spherical coordinates (radius and two angles) of the three-dimensional vector with cartesian coordinates (S1, S2, S3). I is the total intensity of the beam, and p is the degree of polarization, constrained by 0 ≤ p ≤ 1. The factor of two before ψ represents the fact that any polarization ellipse is indistinguishable from one rotated by 180°, while the factor of two before χ indicates that an ellipse is indistinguishable from one with the semi-axis lengths swapped accompanied by a 90° rotation. The phase information of the polarized light is not recorded in the Stokes parameters. The four Stokes parameters are sometimes denoted I, Q, U and V, respectively.
Given the Stokes parameters, one can solve for the spherical coordinates with the following equations:
I = S0
p = √(S1² + S2² + S3²) / S0
2ψ = arctan(S2 / S1)
2χ = arctan(S3 / √(S1² + S2²))
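A short numerical sketch of these relations, using only Python's standard math module (the variable names are illustrative), converts ellipse parameters to Stokes parameters and back:

import math

def stokes_from_ellipse(I, p, psi, chi):
    """Stokes parameters from intensity I, degree of polarization p,
    orientation angle psi and ellipticity angle chi (radians)."""
    S0 = I
    S1 = I * p * math.cos(2 * psi) * math.cos(2 * chi)
    S2 = I * p * math.sin(2 * psi) * math.cos(2 * chi)
    S3 = I * p * math.sin(2 * chi)
    return S0, S1, S2, S3

def ellipse_from_stokes(S0, S1, S2, S3):
    """Invert the relations above."""
    I = S0
    p = math.sqrt(S1**2 + S2**2 + S3**2) / S0
    psi = 0.5 * math.atan2(S2, S1)
    chi = 0.5 * math.atan2(S3, math.hypot(S1, S2))
    return I, p, psi, chi

S = stokes_from_ellipse(1.0, 0.8, math.radians(30), math.radians(10))
print(ellipse_from_stokes(*S))   # recovers (1.0, 0.8, ~0.5236 rad, ~0.1745 rad)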
Stokes vectors
The Stokes parameters are oft
|
https://en.wikipedia.org/wiki/Satellite%20navigation
|
A satellite navigation or satnav system is a system that uses satellites to provide autonomous geopositioning. A satellite navigation system with global coverage is termed a global navigation satellite system (GNSS). Four global systems are operational: the United States' Global Positioning System (GPS), Russia's Global Navigation Satellite System (GLONASS), China's BeiDou Navigation Satellite System, and the European Union's Galileo.
Regional navigation satellite systems in use are Japan's Quasi-Zenith Satellite System (QZSS), a GPS satellite-based augmentation system to enhance the accuracy of GPS, with satellite navigation independent of GPS scheduled for 2023, and the Indian Regional Navigation Satellite System (IRNSS) or NavIC, which is planned to be expanded to a global version in the long term.
Satellite navigation allows satellite navigation devices to determine their location (longitude, latitude, and altitude/elevation) to high precision (within a few centimeters to meters) using time signals transmitted along a line of sight by radio from satellites. The system can be used for providing position, navigation or for tracking the position of something fitted with a receiver (satellite tracking). The signals also allow the electronic receiver to calculate the current local time to a high precision, which allows time synchronisation. These uses are collectively known as Positioning, Navigation and Timing (PNT). Satnav systems operate independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the positioning information generated.
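As a heavily simplified sketch of how a receiver turns such time measurements into a position, the code below solves for a 3-D position and a receiver clock bias from four pseudoranges by Gauss-Newton iteration; the satellite coordinates and names are synthetic assumptions, and real receivers apply many corrections (satellite clock and orbit errors, ionospheric and tropospheric delays, relativistic effects) that are ignored here.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iterations=10):
    """Estimate receiver position (x, y, z) and clock bias (seconds) from
    satellite positions (n x 3, metres) and pseudoranges (n, metres)."""
    x = np.zeros(4)                       # initial guess: Earth's centre, zero bias
    for _ in range(iterations):
        diff = sat_pos - x[:3]
        dist = np.linalg.norm(diff, axis=1)
        predicted = dist + C * x[3]
        residual = pseudoranges - predicted
        # Jacobian of the predicted pseudorange w.r.t. (x, y, z, bias)
        J = np.hstack([-diff / dist[:, None], np.full((len(dist), 1), C)])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x[:3], x[3]

# Synthetic example: four satellites at GNSS-like radii, a receiver on the surface.
receiver = np.array([6_371_000.0, 0.0, 0.0])
bias = 1e-3  # 1 ms receiver clock error
sats = np.array([
    [26_600_000.0, 0.0, 0.0],
    [0.0, 26_600_000.0, 0.0],
    [0.0, 0.0, 26_600_000.0],
    [15_000_000.0, 15_000_000.0, 15_000_000.0],
])
ranges = np.linalg.norm(sats - receiver, axis=1) + C * bias
pos, est_bias = solve_position(sats, ranges)
print(pos, est_bias)   # approximately [6371000, 0, 0] and 0.001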
Global coverage for each system is generally achieved by a satellite constellation of 18–30 medium Earth orbit (MEO) satellites spread between several orbital planes. The actual systems vary, but all use orbital inclinations of >50° and orbital periods of roughly twelve hours (at an altitude of about 20,000 kilometres).
Classification
GNSS systems that provide enhanced accuracy and integrity m
|
https://en.wikipedia.org/wiki/Eudiometer
|
A eudiometer is a laboratory device that measures the change in volume of a gas mixture following a physical or chemical change.
Description
Depending on the reaction being measured, the device can take a variety of forms. In general, it is similar to a graduated cylinder, and is most commonly found in two sizes: 50 mL and 100 mL. It is closed at the top end with the bottom end immersed in water or mercury. The liquid traps a sample of gas in the cylinder, and the graduation allows the volume of the gas to be measured.
For some reactions, two platinum wires (chosen for their non-reactivity) are placed in the sealed end so an electric spark can be created between them. The electric spark can initiate a reaction in the gas mixture and the graduation on the cylinder can be read to determine the change in volume resulting from the reaction. The use of the device is quite similar to the original barometer, except that the gas inside displaces some of the liquid that is used.
History
In 1772, Joseph Priestley began experimenting with different "airs" using his own redesigned pneumatic trough, in which mercury instead of water would trap gases that were usually soluble in water. From these experiments Priestley is credited with discovering many new gases, such as oxygen, hydrogen chloride, and ammonia. He also devised a way to gauge the purity or "goodness" of air using the "nitrous air test". The eudiometer relies on the greater solubility of NO2 in water compared with NO, and on the oxidation of NO into NO2 by the oxygen in the air:
2 NO + O2 → 2 NO2.
A quantity of air is combined with NO over water, and the more soluble compound NO2 dissolves, leaving the remaining air somewhat contracted in volume. The richer the air was in oxygen, the greater was the contraction.
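Under idealized assumptions (complete reaction, complete dissolution of the NO2, NO supplied in excess, ideal-gas behaviour), the volume of gas removed is three times the volume of oxygen in the air sample, as in the small sketch below; the numbers are illustrative only.

def eudiometer_contraction(air_volume_ml: float, oxygen_fraction: float) -> float:
    """Volume of gas removed when air is mixed with excess NO over water.

    Idealized: every O2 molecule consumes two NO molecules (2 NO + O2 -> 2 NO2)
    and the resulting NO2 dissolves completely, so for each volume of O2 a
    total of three volumes of gas disappears.
    """
    oxygen_volume = air_volume_ml * oxygen_fraction
    return 3.0 * oxygen_volume

# Ordinary air (~21% O2) versus oxygen-poor air in a 50 mL sample:
print(eudiometer_contraction(50.0, 0.21))   # 31.5 mL removed
print(eudiometer_contraction(50.0, 0.10))   # 15.0 mL removed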
Marsilio Landriani was studying pneumatic chemistry with Pietro Moscati when they attempted to quantify Priestley's nitrous air test for air quality. Landriani used a pneumatic trough in the form of a tall,
|
https://en.wikipedia.org/wiki/Food%20engineering
|
Food engineering is a scientific, academic, and professional field that interprets and applies principles of engineering, science, and mathematics to food manufacturing and operations, including the processing, production, handling, storage, conservation, control, packaging and distribution of food products. Given its reliance on food science and broader engineering disciplines such as electrical, mechanical, civil, chemical, industrial and agricultural engineering, food engineering is considered a multidisciplinary and narrow field.
Due to the complex nature of food materials, food engineering also combines the study of more specific chemical and physical concepts such as biochemistry, microbiology, food chemistry, thermodynamics, transport phenomena, rheology, and heat transfer. Food engineers apply this knowledge to the cost-effective design, production, and commercialization of sustainable, safe, nutritious, healthy, appealing, affordable and high-quality ingredients and foods, as well as to the development of food systems, machinery, and instrumentation.
History
Although food engineering is a relatively recent and evolving field of study, it is based on long-established concepts and activities. The traditional focus of food engineering was preservation, which involved stabilizing and sterilizing foods, preventing spoilage, and preserving nutrients in food for prolonged periods of time. More specific traditional activities include food dehydration and concentration, protective packaging, canning and freeze-drying. The development of food technologies was greatly influenced and driven by wars and long voyages, including space missions, where long-lasting and nutritious foods were essential for survival. Other ancient activities include milling, storage, and fermentation processes. Although several traditional activities remain of concern and form the basis of today's technologies and innovations, the focus of food engineering has recently shifted to food quality.