https://en.wikipedia.org/wiki/Network%20Access%20Identifier
|
In computer networking, the Network Access Identifier (NAI) is a standard way of identifying users who request access to a network. The standard syntax is "user@realm". Sample NAIs include (from RFC 4282):
bob
joe@example.com
fred@foo-9.example.com
fred.smith@example.com
fred_smith@example.com
fred$@example.com
fred=?#$&*+-/^smith@example.com
eng.example.net!nancy@example.net
eng%nancy@example.net
@privatecorp.example.net
\(user\)@example.net
alice@xn--tmonesimerkki-bfbb.example.net
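The "user@realm" split can be performed mechanically. Below is a minimal, illustrative Python sketch (not from the RFCs; it deliberately ignores the escaping and internationalization rules that RFC 7542 defines):
def split_nai(nai):
    # The realm, when present, follows the last '@' in the identifier.
    user, sep, realm = nai.rpartition("@")
    if not sep:                   # realm-less NAI such as "bob"
        return nai, None
    return user, realm            # user may be empty, as in "@privatecorp.example.net"

print(split_nai("joe@example.com"))           # ('joe', 'example.com')
print(split_nai("bob"))                       # ('bob', None)
print(split_nai("@privatecorp.example.net"))  # ('', 'privatecorp.example.net')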
Network Access Identifiers were originally defined in RFC 2486, which was superseded by RFC 4282, which has been superseded by RFC 7542. The latter RFC is the current standard for the NAI. NAIs are commonly found as user identifiers in the RADIUS and Diameter network access protocols and the EAP authentication protocol.
The Network Access Identifier (NAI) is the user identity submitted by the client during network access authentication.
It is used mainly for two purposes:
To identify the user when roaming.
To assist in routing the authentication request to the user's authentication server.
See also
Diameter
EAP
RADIUS
Request for Comments
External links
Internet Standards
|
https://en.wikipedia.org/wiki/Bioelectromagnetics
|
Bioelectromagnetics, also known as bioelectromagnetism, is the study of the interaction between electromagnetic fields and biological entities. Areas of study include electromagnetic fields produced by living cells, tissues or organisms, the effects of man-made sources of electromagnetic fields like mobile phones, and the application of electromagnetic radiation toward therapies for the treatment of various conditions.
Biological phenomena
Bioelectromagnetism is studied primarily through the techniques of electrophysiology. In the late eighteenth century, the Italian physician and physicist Luigi Galvani first recorded the phenomenon while dissecting a frog at a table where he had been conducting experiments with static electricity. Galvani coined the term animal electricity to describe the phenomenon, while contemporaries labeled it galvanism. Galvani and his contemporaries regarded muscle activation as resulting from an electrical fluid or substance in the nerves. Short-lived electrical events called action potentials occur in several types of animal cells, collectively called excitable cells, a category that includes neurons, muscle cells, and endocrine cells, as well as in some plant cells. Action potentials facilitate inter-cellular communication and activate intracellular processes. They are possible because voltage-gated ion channels allow the resting potential, maintained by the electrochemical gradient across the cell membrane, to discharge.
Several animals are suspected to have the ability to sense electromagnetic fields; for example, several aquatic animals have structures potentially capable of sensing changes in voltage caused by a changing magnetic field, while migratory birds are thought to use magnetoreception in navigation.
Bioeffects of electromagnetic radiation
Most of the molecules in the human body interact weakly with electromagnetic fields in the radio frequency or extremely low frequen
|
https://en.wikipedia.org/wiki/Ingress%20cancellation
|
Ingress cancellation is a method for removing narrowband noise from an electromagnetic signal using a digital filter. This type of filter is used on hybrid fiber-coaxial broadband networks.
If a carrier appears in the middle of the upstream data signal, ingress cancellation can remove the interfering carrier without causing packet loss.
Ingress cancellation can also remove one or more carriers that are higher in amplitude than the data signal, although it will eventually fail if the in-channel ingress becomes too strong.
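For illustration only, the sketch below applies a fixed notch filter to a composite signal, which is the crude, static cousin of real ingress cancellation (deployed systems estimate and track the interferer adaptively); all figures here are invented:
import numpy as np
from scipy import signal

fs = 10e6                                        # assumed upstream sample rate
t = np.arange(100_000) / fs
data = np.random.randn(t.size)                   # stand-in for the data signal
ingress = 3.0 * np.sin(2 * np.pi * 2.5e6 * t)    # narrowband interfering carrier

b, a = signal.iirnotch(w0=2.5e6, Q=30.0, fs=fs)  # notch centred on the carrier
cleaned = signal.filtfilt(b, a, data + ingress)

print("power before:", np.var(data + ingress), "after:", np.var(cleaned))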
References
See also
Distortion
Electromagnetic interference
Ingress filtering
Noise reduction
Digital electronics
|
https://en.wikipedia.org/wiki/Spanier%E2%80%93Whitehead%20duality
|
In mathematics, Spanier–Whitehead duality is a duality theory in homotopy theory, based on the geometrical idea that a topological space X may be considered as dual to its complement in the n-sphere, where n is large enough. Its origins lie in Alexander duality theory, in homology theory, concerning complements in manifolds. The theory is also referred to as S-duality, but this can now cause confusion with the S-duality of string theory. It is named for Edwin Spanier and J. H. C. Whitehead, who developed it in papers from 1955.
The basic point is that sphere complements determine the homology, but not the homotopy type, in general. What is determined, however, is the stable homotopy type, which was conceived as a first approximation to homotopy type. Thus Spanier–Whitehead duality fits into stable homotopy theory.
Statement
Let $X$ be a compact neighborhood retract in $\mathbb{R}^n$. Then $\Sigma^\infty X_+$ and $\Sigma^\infty(\mathbb{R}^n \setminus X)$ are dual objects in the category of pointed spectra with the smash product as a monoidal structure. Here $X_+$ is the union of $X$ and a point, and $\Sigma$ and $\Sigma^\infty$ are the reduced suspension and suspension spectrum functors, respectively.
Taking homology and cohomology with respect to an Eilenberg–MacLane spectrum recovers Alexander duality formally.
References
Homotopy theory
Duality theories
|
https://en.wikipedia.org/wiki/Hermitian%20function
|
In mathematical analysis, a Hermitian function is a complex function with the property that its complex conjugate is equal to the original function with the variable changed in sign:
$f^*(x) = f(-x)$
(where the $^*$ indicates the complex conjugate) for all $x$ in the domain of $f$. In physics, this property is referred to as PT symmetry.
This definition extends also to functions of two or more variables, e.g., in the case that $f$ is a function of two variables it is Hermitian if
$f^*(x_1, x_2) = f(-x_1, -x_2)$
for all pairs $(x_1, x_2)$ in the domain of $f$.
From this definition it follows immediately that: $f$ is a Hermitian function if and only if
the real part of $f$ is an even function,
the imaginary part of $f$ is an odd function.
Motivation
Hermitian functions appear frequently in mathematics, physics, and signal processing. For example, the following two statements follow from basic properties of the Fourier transform:
The function $f$ is real-valued if and only if the Fourier transform of $f$ is Hermitian.
The function $f$ is Hermitian if and only if the Fourier transform of $f$ is real-valued.
Since the Fourier transform of a real signal is guaranteed to be Hermitian, it can be compressed using the Hermitian even/odd symmetry. This, for example, allows the discrete Fourier transform of a signal (which is in general complex) to be stored in the same space as the original real signal.
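A quick numeric illustration of this storage saving, as a sketch using NumPy's real-input FFT (the spectrum of a real signal is conjugate-symmetric, so only about half the bins need to be kept):
import numpy as np

x = np.random.randn(8)                  # a real signal
X = np.fft.fft(x)                       # its full complex spectrum

# Hermitian symmetry of the spectrum: X[k] == conj(X[-k mod N])
assert np.allclose(X, np.conj(X[-np.arange(8)]))

Xr = np.fft.rfft(x)                     # stores only N//2 + 1 bins
print(len(X), "complex bins vs", len(Xr), "for the real-input FFT")
assert np.allclose(np.fft.irfft(Xr, n=8), x)    # lossless round trip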
If $f$ is Hermitian, then $f \star g = f * g$,
where the $\star$ is cross-correlation, and $*$ is convolution.
If both $f$ and $g$ are Hermitian, then $f \star g = g \star f$.
See also
Types of functions
Calculus
|
https://en.wikipedia.org/wiki/Alexander%20duality
|
In mathematics, Alexander duality refers to a duality theory initiated by a result of J. W. Alexander in 1915, and subsequently further developed, particularly by Pavel Alexandrov and Lev Pontryagin. It applies to the homology theory properties of the complement of a subspace X in Euclidean space, a sphere, or other manifold. It is generalized by Spanier–Whitehead duality.
General statement for spheres
Let $X$ be a compact, locally contractible subspace of the sphere $S^n$ of dimension n. Let $S^n \setminus X$ be the complement of $X$ in $S^n$. Then if $\tilde{H}$ stands for reduced homology or reduced cohomology, with coefficients in a given abelian group, there is an isomorphism
$\tilde{H}_q(S^n \setminus X) \cong \tilde{H}^{n-q-1}(X)$
for all $q \geq 0$. Note that we can drop local contractibility as part of the hypothesis if we use Čech cohomology, which is designed to deal with local pathologies.
Applications
This is useful for computing the cohomology of knot and link complements in $S^3$. Recall that a knot is an embedding $K\colon S^1 \hookrightarrow S^3$ and a link is a disjoint union of knots, such as the Borromean rings. Then, if we write the link/knot as $L$, we have
$\tilde{H}_q(S^3 \setminus L) \cong \tilde{H}^{2-q}(L)$,
giving a method for computing the cohomology groups. Then, it is possible to differentiate between different links using the Massey products.
For example, for the Borromean rings $L$, the homology groups of the complement are $\tilde{H}_0(S^3 \setminus L) = 0$, $\tilde{H}_1(S^3 \setminus L) \cong \mathbb{Z}^3$, and $\tilde{H}_2(S^3 \setminus L) \cong \mathbb{Z}^2$, which follow from the duality above applied to three disjoint circles.
Alexander duality for constructible sheaves
For smooth manifolds, Alexander duality is a formal consequence of Verdier duality for sheaves of abelian groups. More precisely, if we let $X$ denote a smooth manifold and we let $Y \subset X$ be a closed subspace (such as a subspace representing a cycle, or a submanifold) represented by the inclusion $i\colon Y \hookrightarrow X$, and if $k$ is a field, then if $\mathcal{F}$ is a sheaf of $k$-vector spaces we have the following isomorphism
$H^s_c(Y, \mathcal{F})^\vee \cong H^{-s}(Y, \mathbb{D}\mathcal{F})$,
where the cohomology group on the left is compactly supported cohomology and $\mathbb{D}\mathcal{F}$ denotes the Verdier dual of $\mathcal{F}$. We can unpack this statement further to get a better understanding of what it means. First, if $\mathcal{F} = \underline{k}$ is the constant sheaf and $Y$ is a smooth submanifold of codimension $c$, then we get
$H^{s-c}(Y, \underline{k}) \cong H^s_Y(X, \underline{k})$,
where the cohomology group on the right is local cohomology with support in $Y$.
|
https://en.wikipedia.org/wiki/Landauer%27s%20principle
|
Landauer's principle is a physical principle pertaining to the lower theoretical limit of energy consumption of computation. It holds that an irreversible change in information stored in a computer, such as merging two computational paths, dissipates a minimum amount of heat to its surroundings.
The principle was first proposed by Rolf Landauer in 1961.
Statement
Landauer's principle states that the minimum energy needed to erase one bit of information is proportional to the temperature at which the system is operating. More specifically, the energy needed for this computational task is given by
$E = k_B T \ln 2$,
where $k_B$ is the Boltzmann constant and $T$ is the temperature in kelvins. At room temperature, the Landauer limit represents an energy of approximately 0.018 eV (2.9 × 10⁻²¹ J). Modern computers use about a billion times as much energy per operation.
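A back-of-envelope check of the numbers just quoted (a sketch, taking 300 K as room temperature):
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 300.0                    # room temperature, K

E = k_B * T * math.log(2)    # minimum energy to erase one bit
print(f"{E:.3e} J = {E / 1.602176634e-19:.4f} eV")
# about 2.87e-21 J, i.e. roughly 0.018 eV, matching the figure above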
History
Rolf Landauer first proposed the principle in 1961 while working at IBM. He justified and stated important limits to an earlier conjecture by John von Neumann. For this reason, it is sometimes referred to as being simply the Landauer bound or Landauer limit.
In 2008 and 2009, researchers showed that Landauer's principle can be derived from the second law of thermodynamics and the entropy change associated with information gain, developing the thermodynamics of quantum and classical feedback-controlled systems.
In 2011, the principle was generalized to show that while information erasure requires an increase in entropy, this increase could theoretically occur at no energy cost. Instead, the cost can be taken in another conserved quantity, such as angular momentum.
In a 2012 article published in Nature, a team of physicists from the École normale supérieure de Lyon, the University of Augsburg and the University of Kaiserslautern reported measuring, for the first time, the tiny amount of heat released when an individual bit of data is erased.
In 2014, physical experiments tested Landauer's principle and confirmed its predictions.
In 2016, researchers used a laser probe
|
https://en.wikipedia.org/wiki/Sarcopenia
|
Sarcopenia is a type of muscle loss (muscle atrophy) that occurs with aging and/or immobility. It is characterized by the degenerative loss of skeletal muscle mass, quality, and strength. The rate of muscle loss is dependent on exercise level, co-morbidities, nutrition and other factors. The muscle loss is related to changes in muscle synthesis signalling pathways. It is distinct from cachexia, in which muscle is degraded through cytokine-mediated degradation, although both conditions may co-exist. Sarcopenia is considered a component of frailty syndrome. Sarcopenia can lead to reduced quality of life, falls, fracture, and disability.
Sarcopenia is a factor in changing body composition associated with aging populations; and certain muscle regions are expected to be affected first, specifically the anterior thigh and abdominal muscles. In population studies, body mass index (BMI) is seen to decrease in aging populations while bioelectrical impedance analysis (BIA) shows body fat proportion rising.
The term sarcopenia is from Greek σάρξ sarx, "flesh" and πενία penia, "poverty". This was first proposed by Rosenberg in 1989, who wrote that "there may be no single feature of age-related decline that could more dramatically affect ambulation, mobility, calorie intake, and overall nutrient intake and status, independence, breathing, etc".
Signs and symptoms
The hallmark sign of sarcopenia is loss of lean muscle mass, or muscle atrophy. The change in body composition may be difficult to detect due to obesity, changes in fat mass, or edema. Changes in weight, limb or waist circumference are not reliable indicators of muscle mass changes. Sarcopenia may also cause reduced strength, functional decline and increased risk of falling. It may also have no symptoms until it is severe, and it often goes unrecognized. Research has shown, however, that hypertrophy may occur in the upper parts of the body to compensate for this loss of lean muscle mass. Therefore, one early indica
|
https://en.wikipedia.org/wiki/Defence%20Information%20Infrastructure
|
Defence Information Infrastructure (DII) is a secure military network owned by the United Kingdom's Ministry of Defence (MOD). It is used by all branches of the armed forces, including the Royal Navy, British Army and Royal Air Force, as well as by MOD civil servants. It reaches deployed bases and ships at sea, but not aircraft in flight.
The partnership developing DII is called the Atlas Consortium and is made up of DXC Technology (formerly EDS), Fujitsu, Airbus Defence and Space (formerly EADS Defence & Security) and CGI (formerly Logica).
Starting in May 2016, MOD users of DII began to migrate to the new style of IT within defence, known as MODNET, again supported by Atlas.
Overview
DII supports 2,000 MOD sites with some 150,000 terminals (desktops and laptops) and 300,000 user accounts. It is designed to offer a high level of resilience, flexibility, and security in the provision of connectivity from ‘business space to battlespace’ in MOD offices in the UK, bases overseas, at sea, and on the front line. It aims to rationalise and improve IT provision for the defence sector in the 21st century; involving a major culture change for MOD users and their ways of working through a structure of shared working areas with controlled security and access. It should provide a records management system and search facility together with a range of office services. It hosts several hundred COTS (commercial off-the-shelf) and bespoke MOD applications from a range of suppliers judged to meet the required security standards. The network handles alphanumeric data, graphics, and video. The system carries information from Restricted to above-Secret levels, but users are able to see only the data and applications for which they are authorised.
Incremental approach
In order to de-risk the programme, Atlas and the MOD took an incremental approach to the development and implementation of DII, with a separate contract for each increment. The extended timeline allowed t
|
https://en.wikipedia.org/wiki/Mauve%20%28test%20suite%29
|
Mauve is a project to provide a free software test suite for the Java class libraries. Mauve is developed by members of the Kaffe, GNU Classpath, GCJ, and other projects. Unlike a similar project, JUnit, Mauve is designed to run on various experimental Java virtual machines, where some features may still be missing. Because of this, Mauve does not discover test methods by name, as JUnit does. Mauve can also be used to test a user's Java application, not just the core class library. Mauve is released under the GNU General Public License.
Example
The "Hello world" example in Mauve:
// Tags: JDK1.4

// The testlet interfaces live in Mauve's gnu.testlet package.
import gnu.testlet.TestHarness;
import gnu.testlet.Testlet;

public class HelloWorld implements Testlet {
    // Check that 3 * 2 equals 6; the string names the checkpoint on failure.
    public void test(TestHarness harness) {
        harness.check(3 * 2, 6, "Multiplication failed.");
    }
}
See also
Technology Compatibility Kit
External links
Mauve homepage
Extreme programming
Software testing
|
https://en.wikipedia.org/wiki/Call%20super
|
Call super is a code smell or anti-pattern of some object-oriented programming languages. Call super is a design pattern in which a particular class stipulates that in a derived subclass, the user is required to override a method and call back the overridden function itself at a particular point. The overridden method may be intentionally incomplete, and reliant on the overriding method to augment its functionality in a prescribed manner. However, the fact that the language itself may not be able to enforce all conditions prescribed on this call is what makes this an anti-pattern.
Description
In object-oriented programming, users can inherit the properties and behaviour of a superclass in subclasses. A subclass can override methods of its superclass, substituting its own implementation of the method for the superclass's implementation. Sometimes the overriding method will completely replace the corresponding functionality in the superclass, while in other cases the superclass's method must still be called from the overriding method. Therefore, most programming languages require that an overriding method must explicitly call the overridden method on the superclass for it to be executed.
The call super anti-pattern relies on the users of an interface or framework to derive a subclass from a particular class, override a certain method and require the overridden method to call the original method from the overriding method:
This is often required, since the superclass must perform some setup tasks for the class or framework to work correctly, or since the superclass's main task (which is performed by this method) is only augmented by the subclass.
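A hedged Python illustration (class and method names invented): first the call-super shape, then the usual template-method refactoring, which keeps control in the base class so that subclasses no longer need to remember the call:
# Anti-pattern: every subclass must remember to call super().save().
class Record:
    def save(self):
        print("validating and opening transaction")    # mandatory setup

class UserRecord(Record):
    def save(self):
        super().save()            # nothing enforces this call
        print("writing user fields")

# Refactoring: the base class calls an overridable hook instead.
class BaseRecord:
    def save(self):
        print("validating and opening transaction")
        self._do_save()           # subclass customizes only the hook

    def _do_save(self):           # override point; no super() call required
        pass

class UserRecord2(BaseRecord):
    def _do_save(self):
        print("writing user fields")

UserRecord().save()
UserRecord2().save()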
The anti-pattern is the requirement of calling the parent. There are many examples in real code where the method in the subclass may still want the superclass's functionality, usually where it is only augmenting the parent functionality. If it still has to call the parent class even if it is fully replacing the functionality, the a
|
https://en.wikipedia.org/wiki/Hesiod%20%28name%20service%29
|
In computing, the Hesiod name service originated in Project Athena (1983–1991). It uses DNS functionality to provide access to databases of information that change infrequently. In Unix environments it often serves to distribute information kept in the /etc/passwd, /etc/group, and /etc/printcap files, among others.
Frequently an LDAP server is used to distribute the same kind of information that Hesiod does. However, because Hesiod can leverage existing DNS servers, deploying it to a network is fairly easy.
In a Unix-like system users usually have a line in the /etc/passwd file for each local user like:
foo:x:100:10:Foo Bar:/home/foo:/bin/sh
This line is composed of seven colon-separated fields which hold the following data:
user login name (string);
password hash or "x" if shadow password file is in use (string);
user id (unsigned integer);
user's primary group id (unsigned integer);
Gecos field (four comma separated fields, string);
user home directory (string);
user login shell (string).
This system works fine for a small number of users on a small number of machines. But when more users start using more machines, having this information managed in one location becomes critical. This is where Hesiod enters.
Instead of having this information stored on every machine, Hesiod stores it in records on your DNS server. Then each client can query the DNS server for this information instead of looking for it locally. In BIND the records for the above user might look something like:
foo.passwd.ns.example.net HS TXT "foo:x:100:10:Foo Bar:/home/foo:/bin/sh"
100.passwd.ns.example.net HS TXT "foo:x:100:10:Foo Bar:/home/foo:/bin/sh"
100.uid.ns.example.net HS TXT "foo:x:100:10:Foo Bar:/home/foo:/bin/sh"
There are three records because the system needs to be able to access the information in different ways. The first line supports looking up the user by their login name and the second two allow it to look up information by the user's uid. Note the use of the HS class instead of IN as migh
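Systems with the Hesiod library installed can perform such lookups with hesinfo(1). As an illustrative sketch, here is the equivalent raw DNS query using the third-party dnspython package (package availability and the example hostname are assumptions):
import dns.rdataclass
import dns.resolver

# Ask for the TXT record in DNS class HS (Hesiod) rather than the usual IN.
answers = dns.resolver.resolve(
    "foo.passwd.ns.example.net", "TXT", rdclass=dns.rdataclass.HS
)
for rdata in answers:
    # Each TXT record carries one passwd-style line.
    print(b"".join(rdata.strings).decode())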
|
https://en.wikipedia.org/wiki/Intertidal%20zone
|
The intertidal zone or foreshore is the area above water level at low tide and underwater at high tide: in other words, the part of the littoral zone within the tidal range. This area can include several types of habitats with various species of life, such as seastars, sea urchins, and many species of coral with regional differences in biodiversity. Sometimes it is referred to as the littoral zone or seashore, although those can be defined as a wider region.
The area also includes steep rocky cliffs, sandy beaches, and bogs or wetlands (e.g., vast mudflats). It can be a narrow strip, as on Pacific islands that have only a narrow tidal range, or can include many meters of shoreline where shallow beach slopes interact with high tidal excursion. The peritidal zone is similar but somewhat wider, extending from above the highest tide level to below the lowest. Organisms in the intertidal zone are adapted to an environment of harsh extremes, living with water pressure that can potentially reach 5,580 pounds per square inch. The intertidal zone is also home to several species from different phyla (Porifera, Annelida, Coelenterata, Mollusca, Arthropoda, etc.).
Water is available regularly with the tides that can vary from brackish waters, fresh with rain, to highly saline and dry salt, with drying between tidal inundations. Wave splash can dislodge residents from the littoral zone. With the intertidal zone's high exposure to sunlight, the temperature can range from very hot with full sunshine to near freezing in colder climates. Some microclimates in the littoral zone are moderated by local features and larger plants such as mangroves. Adaptation in the littoral zone allows the use of nutrients supplied in high volume on a regular basis from the sea, which is actively moved to the zone by tides. Edges of habitats, in this case land and sea, are themselves often significant ecologies, and the littoral zone is a prime example.
A typical rocky shore can be di
|
https://en.wikipedia.org/wiki/PL-3
|
PL-3 or POS-PHY Level 3 is a network protocol. It is the name of the interface that the Optical Internetworking Forum's SPI-3 Interoperability Agreement is based on. It was proposed by PMC-Sierra to the Optical Internetworking Forum and adopted in June 2000. The name means Packet Over SONET Physical layer level 3. PL-3 was developed by PMC-Sierra in conjunction with the SATURN Development Group.
The name is an acronym of an acronym of an acronym as the P in PL stands for "POS-PHY" and the S in POS-PHY stands for "SONET" (Synchronous Optical Network). The L in PL stands for "Layer".
Context
There are two broad categories of chip-to-chip interfaces. The first, exemplified by PCI-Express and HyperTransport, supports reads and writes of memory addresses. The second broad category carries user packets over 1 or more channels and is exemplified by the IEEE 802.3 family of Media Independent Interfaces and the Optical Internetworking Forum family of System Packet Interfaces. Of these last two, the family of System Packet Interfaces is optimized to carry user packets from many channels. The family of System Packet Interfaces is the most important packet-oriented, chip-to-chip interface family used between devices in the Packet over SONET and Optical Transport Network, which are the principal protocols used to carry the internet between cities.
Applications
It was designed to be used in systems that support OC-48 SONET interfaces. A typical application of PL-3 (SPI-3) is to connect a framer device to a network processor. It has been widely adopted by the high-speed networking marketplace.
Technical details
The interface consists of (per direction):
32 TTL signals for the data path
8 TTL signals for control
one TTL signal for clock
8 TTL signals for optional additional multi-channel status
There are several clocking options. The interface operates around 100 MHz. Implementations of SPI-3 (PL-3) have been produced which allow somewhat higher clock rates.
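A rough sanity check of those figures (nominal numbers from above; OC-48's line rate is 2.48832 Gbit/s):
clock_hz = 100e6               # nominal interface clock, ~100 MHz
bus_bits = 32                  # width of the data path
raw_rate = clock_hz * bus_bits
oc48 = 2.48832e9               # OC-48 line rate, bit/s
print(f"raw bus rate {raw_rate / 1e9:.2f} Gbit/s vs OC-48 {oc48 / 1e9:.2f} Gbit/s")
# the headroom absorbs SONET overhead and per-channel control cycles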
|
https://en.wikipedia.org/wiki/Avionics%20Full-Duplex%20Switched%20Ethernet
|
Avionics Full-Duplex Switched Ethernet (AFDX), also ARINC 664, is a data network, patented by international aircraft manufacturer Airbus, for safety-critical applications that utilizes dedicated bandwidth while providing deterministic quality of service (QoS). AFDX is a worldwide registered trademark by Airbus. The AFDX data network is based on Ethernet technology using commercial off-the-shelf (COTS) components. The AFDX data network is a specific implementation of ARINC Specification 664 Part 7, a profiled version of an IEEE 802.3 network per parts 1 & 2, which defines how commercial off-the-shelf networking components will be used for future generation Aircraft Data Networks (ADN). The six primary aspects of an AFDX data network include full duplex, redundancy, determinism, high speed performance, switched and profiled network.
History
Many commercial aircraft use the ARINC 429 standard developed in 1977 for safety-critical applications. ARINC 429 utilizes a unidirectional bus with a single transmitter and up to twenty receivers. A data word consists of 32 bits communicated over a twisted pair cable using the bipolar return-to-zero modulation. There are two speeds of transmission: high speed operates at 100 kbit/s and low speed operates at 12.5 kbit/s. ARINC 429 operates in such a way that its single transmitter communicates in a point-to-point connection, thus requiring a significant amount of wiring which amounts to added weight.
Another standard, ARINC 629, introduced by Boeing for the 777, provided increased data speeds of up to 2 Mbit/s and allowed a maximum of 120 data terminals. This ADN operates without the use of a bus controller, thereby increasing the reliability of the network architecture. The drawback is that it requires custom hardware, which can add significant cost to the aircraft. Because of this, other manufacturers did not openly accept the ARINC 629 standard.
AFDX was designed as the next-generation aircraft data network. Basing on standards
|
https://en.wikipedia.org/wiki/SPI-3
|
SPI-3 or System Packet Interface Level 3 is the name of a chip-to-chip, channelized, packet interface widely used in high-speed communications devices. It was proposed by PMC-Sierra based on their PL-3 interface to the Optical Internetworking Forum and adopted in June 2000. PL-3 was developed by PMC-Sierra in conjunction with the SATURN Development Group.
Applications
It was designed to be used in systems that support OC-48 SONET interfaces. A typical application of SPI-3 is to connect a framer device to a network processor. It has been widely adopted by the high-speed networking marketplace.
Technical details
The interface consists of (per direction):
32 TTL signals for the data path
8 TTL signals for control
one TTL signal for clock
8 TTL signals for optional additional multi-channel status
There are several clocking options. The interface operates around 100 MHz. Implementations of SPI-3 (PL-3) have been produced which allow somewhat higher clock rates. This is important when overhead bytes are added to incoming packets.
SPI-3 in the marketplace
SPI-3 (and PL-3) was a highly successful interface with many semiconductor devices produced to it.
See also
System Packet Interface
SPI-4.2
External links
OIF Interoperability Agreements
Network protocols
|
https://en.wikipedia.org/wiki/Computational%20immunology
|
In academia, computational immunology is a field of science that encompasses high-throughput genomic and bioinformatics approaches to immunology. The field's main aim is to convert immunological data into computational problems, solve these problems using mathematical and computational approaches and then convert these results into immunologically meaningful interpretations.
Introduction
The immune system is a complex system of the human body, and understanding it is one of the most challenging topics in biology. Immunology research is important for understanding the mechanisms underlying the body's defenses, for developing drugs for immunological diseases, and for maintaining health. Recent advances in genomic and proteomic technologies have transformed immunology research drastically. Sequencing of the human and other model organism genomes has produced increasingly large volumes of data relevant to immunology research, and at the same time huge amounts of functional and clinical data are being reported in the scientific literature and stored in clinical records. Recent advances in bioinformatics and computational biology have helped to understand and organize these large-scale data, giving rise to a new area called computational immunology, or immunoinformatics.
Computational immunology is a branch of bioinformatics and is based on similar concepts and tools, such as sequence alignment and protein structure prediction. Like genomics and proteomics, immunomics is a discipline that combines immunology with computer science, mathematics, chemistry, and biochemistry for large-scale analysis of immune system functions. It aims to study the complex protein–protein interactions and networks, allowing a better understanding of immune responses and their role during normal, diseased and reconstitution states. Computational immunology is a part of immunomics that is focused on analyzing large-scale experimental data.
His
|
https://en.wikipedia.org/wiki/Barry%20%28dog%29
|
Barry der Menschenretter (1800–1814), also known as Barry, was a dog of a breed which was later called the St. Bernard that worked as a mountain rescue dog in Switzerland and Italy for the Great St Bernard Hospice. He predates the modern St. Bernard, and was lighter built than the modern breed. He has been described as the most famous St. Bernard, as he was credited with saving more than 40 lives during his lifetime, hence his byname meaning "people rescuer" in German.
The legend surrounding him was that he was killed while attempting a rescue; however, this is untrue. Barry retired to Bern, Switzerland and after his death his body was passed into the care of the Natural History Museum of Bern. His skin has been preserved through taxidermy although his skull was modified in 1923 to match the Saint Bernard of that time period. His story and name have been used in literary works, and a monument to him stands in the Cimetière des Chiens near Paris. At the hospice, one dog has always been named Barry in his honor; and since 2004, the Fondation Barry du Grand Saint Bernard has been set up to take over the responsibility for breeding dogs from the hospice.
History
The first mention in the Great St Bernard Hospice archives of a dog was in 1707 which simply said "A dog was buried by us." The dogs are thought to have been introduced to the monastery as watchdogs at some point between 1660 and 1670. Old skulls from the collection of the Natural History Museum of Bern show that at least two types of dog lived at the hospice. By 1800, the year that Barry was born, it was known that a special kind of dog was being used for rescue work in the pass. This general variety of dog was known as a Küherhund, or cowherd's dog.
Measurements of his preserved body show that Barry was significantly smaller and lighter built than the modern Saint Bernard, weighing between 40 and 45 kg (88 and 99 lb), whereas modern Saint Bernards weigh between 54 and 81 kg (120 to 180 lbs). His current mounted height is approximately , but
|
https://en.wikipedia.org/wiki/GURPS%20Ice%20Age
|
GURPS Ice Age is a genre sourcebook published by Steve Jackson Games in 1989 using the rules of GURPS (Generic Universal Role-Playing System).
Contents
GURPS Ice Age is a supplement of GURPS rules for adventure set in prehistoric times. Character rules for various hominid races include Homo Habilis, Homo Erectus, Neanderthal, and Cro-Magnon peoples. New skills are also described, and there is a section on shamanism and magic. The book provides campaign setting data and animal descriptions for Pleistocene Europe and Pliocene Africa, and as Lawrence Schick noted in his 1991 book Heroic Worlds, dinosaurs are "thrown in for those who don't care too much about scientific accuracy". The book also includes campaign suggestions and sample scenarios, including an introductory adventure, "Wolf Pack on Bear River".
Publication history
GURPS Ice Age: Roleplaying in the Prehistoric World was written by Kirk Wilson Tate, with cover art by Guy Burchak and illustrations by Donna Barr, and was first published by Steve Jackson Games in 1989 as a 64-page book.
Most of the material was later slightly reworked and republished in GURPS Dinosaurs, excluding the interior art by comics artist Donna Barr and the introductory adventure.
Reception
In the June 1989 edition of Games International (Issue 6), James Wallis was impressed by this book, commenting "Whether you're thinking of cavemen against dinosaurs, anthropological campaigns or moaning black monoliths, GURPS Ice Age lets you do it all." He concluded by giving the book an above-average rating of 4 out of 5, calling it, "well researched, well presented and makes fascinating reading, with a surprisingly large potential for adventuring."
In the August 1989 edition of Dragon (Issue #148), Jim Bambra was impressed with this book, commenting, "A game centered around cavemen and woolly mammoths? The GURPS Ice Age game takes this unusual subject and does a first-class job of turning it into a credible and detailed setting, including lo
|
https://en.wikipedia.org/wiki/Workflow%20pattern
|
A workflow pattern is a specialized form of design pattern as defined in the area of software engineering or business process engineering. Workflow patterns refer specifically to recurrent problems and proven solutions related to the development of workflow applications in particular, and more broadly, process-oriented applications.
Concept
Workflow patterns are concepts of economised development. Their usage should follow strategies of simplifying maintenance and reducing modelling work.
Workflow is performed in real time. The mechanisms of control must support the typical pace of work; design patterns must not delay the execution of a workflow.
Aggregation
Workflow patterns may usually be aggregated as chains and the conditions for starting and terminating must be explicitly defined.
Application
Workflow patterns can be applied in various contexts, hence the conditions for their use must be explicitly defined and shown in order to prevent misinterpretation.
Van der Aalst classification
A well-known collection of workflow patterns is that proposed by Wil van der Aalst et al. (2003) in their paper Workflow Patterns, with earlier versions published in 2000–02. This collection of patterns focuses on one specific aspect of process-oriented application development, namely the description of control flow dependencies between activities in a workflow/process. These patterns are divided into the following categories:
Basic Control Patterns
Sequence - execute two or more activities in sequence
Parallel Split - execute two or more activities in any order or in parallel
Synchronize - synchronize two or more activities that may execute in any order or in parallel; do not proceed with the execution of subsequent activities until all preceding activities have completed; also known as barrier synchronization.
Exclusive Choice - choose one execution path from many alternatives based on data that is available when the execution of the process reaches the exclusive choice
Simple Me
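These basic control patterns map naturally onto ordinary asynchronous code. A hedged Python sketch (activity names invented) showing Sequence, Parallel Split, Synchronization, and Exclusive Choice:
import asyncio

async def activity(name):
    print("running", name)

async def process(order_total):
    await activity("receive order")        # Sequence: one after another
    await activity("validate order")

    # Parallel Split + Synchronization: fork two activities, then wait
    # for both to finish before proceeding (barrier synchronization).
    await asyncio.gather(activity("reserve stock"), activity("take payment"))

    # Exclusive Choice: exactly one branch, chosen from run-time data.
    if order_total > 100:
        await activity("manager approval")
    else:
        await activity("auto approval")

    await activity("ship order")

asyncio.run(process(order_total=42.0))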
|
https://en.wikipedia.org/wiki/Hutchinson%27s%20ratio
|
In ecological theory, Hutchinson's ratio is the ratio of the size differences between similar species when they are living together, as compared to when they are isolated. It is named after G. Evelyn Hutchinson, who concluded that various key attributes of species varied according to ratios between 1:1.1 and 1:1.4.
See also
Hutchinson's rule
References
External links
Ecology
|
https://en.wikipedia.org/wiki/Blocking%20oscillator
|
A blocking oscillator (sometimes called a pulse oscillator) is a simple configuration of discrete electronic components which can produce a free-running signal, requiring only a resistor, a transformer, and one amplifying element such as a transistor or vacuum tube. The name is derived from the fact that the amplifying element is cut-off or "blocked" for most of the duty cycle, producing periodic pulses on the principle of a relaxation oscillator. The non-sinusoidal output is not suitable for use as a radio-frequency local oscillator, but it can serve as a timing generator, to power lights, LEDs, EL wire, or small neon indicators. If the output is used as an audio signal, the simple tones are also sufficient for applications such as alarms or a Morse code practice device. Some cameras use a blocking oscillator to strobe the flash prior to a shot to reduce the red-eye effect.
Due to the circuit's simplicity, it forms the basis for many of the learning projects in commercial electronic kits. The secondary winding of the transformer can be fed to a speaker, a lamp, or the windings of a relay. Instead of a resistor, a potentiometer placed in parallel with the timing capacitor permits the frequency to be adjusted freely, but at low resistances the transistor can be overdriven, and possibly damaged. The output signal will jump in amplitude and be greatly distorted.
Circuit operation
The circuit works due to positive feedback through the transformer and involves two times—the time Tclosed when the switch is closed, and the time Topen when the switch is open. The following abbreviations are used in the analysis:
t, time, a variable
Tclosed: instant at the end of the closed cycle, beginning of open cycle. Also a measure of the time duration when the switch is closed.
Topen: instant at the end of the open cycle, beginning of closed cycle. Same as T=0. Also a measure of the time duration when the switch is open.
Vb, source voltage e.g. Vbattery
Vp, voltage across th
|
https://en.wikipedia.org/wiki/Vector%20calculus%20identities
|
The following are important identities involving derivatives and integrals in vector calculus.
Operator notation
Gradient
For a function $f(x, y, z)$ in three-dimensional Cartesian coordinate variables, the gradient is the vector field:
$\operatorname{grad}(f) = \nabla f = \dfrac{\partial f}{\partial x}\,\mathbf{i} + \dfrac{\partial f}{\partial y}\,\mathbf{j} + \dfrac{\partial f}{\partial z}\,\mathbf{k}$
where i, j, k are the standard unit vectors for the x, y, z-axes. More generally, for a function of n variables $\psi(x_1, \ldots, x_n)$, also called a scalar field, the gradient is the vector field:
$\nabla\psi = \sum_{i=1}^{n} \dfrac{\partial \psi}{\partial x_i}\,\mathbf{e}_i$
where $\mathbf{e}_i$ are orthogonal unit vectors in arbitrary directions.
As the name implies, the gradient is proportional to and points in the direction of the function's most rapid (positive) change.
For a vector field $\mathbf{A} = (A_1, \ldots, A_n)$, also called a tensor field of order 1, the gradient or total derivative is the n × n Jacobian matrix:
$\nabla\mathbf{A} = \left(\dfrac{\partial A_i}{\partial x_j}\right)_{ij}.$
For a tensor field of any order k, the gradient is a tensor field of order k + 1.
For a tensor field $\mathbf{T}$ of order k > 0, the tensor field $\nabla\mathbf{T}$ of order k + 1 is defined by the recursive relation
$(\nabla\mathbf{T})\cdot\mathbf{C} = \nabla(\mathbf{T}\cdot\mathbf{C})$
where $\mathbf{C}$ is an arbitrary constant vector.
Divergence
In Cartesian coordinates, the divergence of a continuously differentiable vector field $\mathbf{F} = F_x\,\mathbf{i} + F_y\,\mathbf{j} + F_z\,\mathbf{k}$ is the scalar-valued function:
$\operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F} = \dfrac{\partial F_x}{\partial x} + \dfrac{\partial F_y}{\partial y} + \dfrac{\partial F_z}{\partial z}$
As the name implies the divergence is a measure of how much vectors are diverging.
The divergence of a tensor field $\mathbf{T}$ of non-zero order k is written as $\operatorname{div}(\mathbf{T}) = \nabla\cdot\mathbf{T}$, a contraction to a tensor field of order k − 1. Specifically, the divergence of a vector is a scalar. The divergence of a higher-order tensor field may be found by decomposing the tensor field into a sum of outer products and using the identity
$\nabla\cdot(\mathbf{A}\otimes\mathbf{T}) = \mathbf{T}(\nabla\cdot\mathbf{A}) + (\mathbf{A}\cdot\nabla)\mathbf{T}$
where $\mathbf{A}\cdot\nabla$ is the directional derivative in the direction of $\mathbf{A}$ multiplied by its magnitude. Specifically, for the outer product of two vectors,
$\nabla\cdot(\mathbf{a}\,\mathbf{b}^{\mathsf T}) = \mathbf{b}(\nabla\cdot\mathbf{a}) + (\mathbf{a}\cdot\nabla)\mathbf{b}.$
For a tensor field $\mathbf{T}$ of order k > 1, the tensor field $\nabla\cdot\mathbf{T}$ of order k − 1 is defined by the recursive relation
$(\nabla\cdot\mathbf{T})\cdot\mathbf{C} = \nabla\cdot(\mathbf{T}\cdot\mathbf{C})$
where $\mathbf{C}$ is an arbitrary constant vector.
Curl
In Cartesian coordinates, for $\mathbf{F} = F_x\,\mathbf{i} + F_y\,\mathbf{j} + F_z\,\mathbf{k}$ the curl is the vector field:
$\operatorname{curl}\mathbf{F} = \nabla\times\mathbf{F} = \left(\dfrac{\partial F_z}{\partial y} - \dfrac{\partial F_y}{\partial z}\right)\mathbf{i} + \left(\dfrac{\partial F_x}{\partial z} - \dfrac{\partial F_z}{\partial x}\right)\mathbf{j} + \left(\dfrac{\partial F_y}{\partial x} - \dfrac{\partial F_x}{\partial y}\right)\mathbf{k}$
where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively.
As the name implies the curl is a measure of how much nearby vectors te
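Two standard consequences of these definitions, curl(grad f) = 0 and div(curl F) = 0, can be machine-checked symbolically; a sketch using sympy's vector module:
from sympy import sin, cos, exp
from sympy.vector import CoordSys3D, gradient, curl, divergence

N = CoordSys3D("N")
x, y, z = N.x, N.y, N.z

f = sin(x) * exp(y) * z                        # arbitrary smooth scalar field
F = x*y*N.i + cos(y)*z*N.j + exp(z)*N.k        # arbitrary smooth vector field

print(curl(gradient(f)))       # zero vector: curl of a gradient vanishes
print(divergence(curl(F)))     # 0: divergence of a curl vanishes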
|
https://en.wikipedia.org/wiki/Phase-change%20incubator
|
The phase-change incubator is a low-cost, low-maintenance incubator that tests for microorganisms in water supplies. It uses small balls containing a chemical compound that, when heated and then kept insulated, will stay at 37 °C (approx. 99 °F) for 24 hours. This allows cultures to be tested without the need for a laboratory or an expensive portable incubator. Thus it is particularly useful for poor or remote communities. The phase-change incubator was developed in the late 1990s by Amy Smith, when she was a graduate student at MIT. Smith has also started a non-profit organization called A Drop in the Bucket to distribute the incubators and to train people on how to use them to test water quality. Her "Test Water Cheap" system could be used at remote locations to test for bacteria such as E. coli.
Embrace, an organization that originated at Stanford University, is applying a similar concept to design low-cost incubators for premature and low-birth-weight babies in developing countries.
See also
Appropriate technology
References
External links
Student's low-cost solution aids high-tech problem in Africa
Necessity Is the Mother of Invention
American inventions
Appropriate technology
Microbiology equipment
|
https://en.wikipedia.org/wiki/Email%20tracking
|
Email tracking is a method for monitoring whether an email message has been read by the intended recipient. Most tracking technologies use some form of digitally time-stamped record to reveal the exact time and date when an email was received or opened, as well as the IP address of the recipient.
Email tracking is useful when the sender wants to know whether the intended recipient actually received the email or clicked the links. However, due to the nature of the technology, email tracking cannot be considered an absolutely accurate indicator that a message was opened or read by the recipient.
Most email marketing software provides tracking features, sometimes in aggregate (e.g., click-through rate), and sometimes on an individual basis.
Read-receipts
Some email applications, such as Microsoft Office Outlook and Mozilla Thunderbird, employ a read-receipt tracking mechanism. The sender selects the receipt request option prior to sending the message, and then upon sending, each recipient has the option of notifying the sender that the message was received or read by the recipient.
However, requesting a receipt does not guarantee that one will be received, for several reasons. Not all email applications or services support sending read receipts, and users can usually disable the functionality if they so wish. Those that do support it are not necessarily compatible with or capable of recognizing requests from a different email service or application. Generally, read receipts are only useful within an organization where all mail users are using the same email service and application.
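For illustration, here is how a sender requests a read receipt at the message level with the standard MDN request header (Disposition-Notification-To, RFC 8098). The addresses are placeholders, and, as noted above, the recipient's client is free to ignore the request:
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Quarterly report"
# Ask the receiving client to send back a message disposition notification.
msg["Disposition-Notification-To"] = "sender@example.com"
msg.set_content("Please confirm you received this.")

print(msg)    # hand off to smtplib.SMTP(...).send_message(msg) to actually send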
Depending on the recipient's mail client and settings, they may be forced to click a notification button before they can move on with their work. Even though it is an opt-in process, a recipient might consider it inconvenient, discourteous, or invasive.
Read receipts are sent back to the sender's "inbox" as email messages, but the location may be changed depending on the software used and i
|
https://en.wikipedia.org/wiki/Council%20for%20Responsible%20Genetics
|
The Council for Responsible Genetics (CRG) was a nonprofit NGO with a focus on biotechnology.
History
The Council for Responsible Genetics was founded in 1983 in Cambridge, Massachusetts.
An early voice concerned about the social and ethical implications of modern genetic technologies, CRG organized a 1985 Congressional Briefing and a 1986 panel of the American Association for the Advancement of Science, both focusing on the potential dangers of genetically engineered biological weapons. Francis Boyle was asked to draft legislation setting limits on the use of genetic engineering, leading to the Biological Weapons Anti-Terrorism Act of 1989.
CRG was the first organization to advance a comprehensive, scientifically based position against human germline engineering. It was also the first to compile documented cases of genetic discrimination, laying the intellectual groundwork for the Genetic Information Nondiscrimination Act of 2008 (GINA).
The organization created both a Genetic Bill of Rights and a Citizen's Guide to Genetically Modified Food. Also notable are CRG's support for the "Safe Seeds Campaign" (for avoiding gene flow from genetically engineered to non-GE seed) and the organization of a US conference on Forensic DNA Databanks and Racial Disparities in the Criminal Justice System. In 2010 CRG led a successful campaign to roll back a controversial student genetic testing program at the University of California, Berkeley. In 2011, CRG led a campaign to successfully enact CalGINA in California, which extended genetic privacy and nondiscrimination protections to life, disability and long-term care insurance, mortgages, lending and other areas.
CRG issued five anthologies of commentaries:
Rights and Liberties in the Biotech Age edited by Sheldon Krimsky and Peter Shorett
Race and the Genetic Revolution: Science, Myth and Culture
Genetic Explanations: Sense and Nonsense edited by Krimsky and Jeremy Gruber
Biotechnology in our Lives edited by Krim
|
https://en.wikipedia.org/wiki/Online%20diary
|
An online diary or web diary, is a personal diary or journal that is published on the World Wide Web on a personal website or a diary-hosting website.
Overview
Online diaries have existed since at least 1994. As a community formed, these publications came to be almost exclusively known as online journals. Today they are almost exclusively called blogs, though some differentiate by calling them personal blogs. The running updates of online diarists combined with links inspired the term 'weblog' which was eventually contracted to form the word 'blog'.
In online diaries, people write about their day-to-day experiences, social commentary, complaints, poems, prose, illicit thoughts and any content that might be found in a traditional paper diary or journal. They often allow readers to contribute through comments or community posting.
Modern online diary platforms may allow the writer to make entries from a PC, tablet or smartphone. Writers might rate how they feel each day, invite someone to engage in a personal conversation or find counseling.
Early history
Online diaries soon caught the attention of the media with the publication of the book 24 Hours in Cyberspace (1996), which captured personal profiles of the people involved in early web pages. The earliest book-length scholarly discussion of online diaries is Philippe Lejeune's Cher écran ("Dear Screen").
In 1998, Simon Firth described in Salon magazine how many early online diarists were abandoning the form. Yet, he said, "While many of the movement's pioneers may be tired and disillusioned, the genre shows plenty of signs of life – of blossoming, even, into something remarkable: a new literary form that allows writers to connect with readers in an excitingly new way."
Formation of a community
As diarists (sometimes called escribitionists) began to learn from each other, several Webrings formed to connect various diaries and journals; the most popular was Open Pages, which started in July 1996 and had 537
|
https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Szekeres%20theorem
|
In mathematics, the Erdős–Szekeres theorem asserts that, given r, s, any sequence of distinct real numbers with length at least (r − 1)(s − 1) + 1 contains a monotonically increasing subsequence of length r or a monotonically decreasing subsequence of length s. The proof appeared in the same 1935 paper that mentions the Happy Ending problem.
It is a finitary result that makes precise one of the corollaries of Ramsey's theorem. While Ramsey's theorem makes it easy to prove that every infinite sequence of distinct real numbers contains a monotonically increasing infinite subsequence or a monotonically decreasing infinite subsequence, the result proved by Paul Erdős and George Szekeres goes further.
Example
For r = 3 and s = 2, the formula tells us that any permutation of three numbers has an increasing subsequence of length three or a decreasing subsequence of length two. Among the six permutations of the numbers 1,2,3:
1,2,3 has an increasing subsequence consisting of all three numbers
1,3,2 has a decreasing subsequence 3,2
2,1,3 has a decreasing subsequence 2,1
2,3,1 has two decreasing subsequences, 2,1 and 3,1
3,1,2 has two decreasing subsequences, 3,1 and 3,2
3,2,1 has three decreasing length-2 subsequences, 3,2, 3,1, and 2,1.
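The bound can be verified exhaustively at small sizes. A brute-force sketch for r = s = 3, where (r − 1)(s − 1) + 1 = 5:
from itertools import combinations, permutations

def has_monotone(seq, r, s):
    # Brute force: search for an increasing subsequence of length r
    # or a decreasing subsequence of length s (fine at these sizes).
    def contains(k, cmp):
        return any(
            all(cmp(seq[i], seq[j]) for i, j in zip(idx, idx[1:]))
            for idx in combinations(range(len(seq)), k)
        )
    return contains(r, lambda a, b: a < b) or contains(s, lambda a, b: a > b)

r = s = 3
n = (r - 1) * (s - 1) + 1    # = 5, the length the theorem requires
assert all(has_monotone(p, r, s) for p in permutations(range(n)))
print(f"every permutation of length {n} contains the required subsequence")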
Alternative interpretations
Geometric interpretation
One can interpret the positions of the numbers in a sequence as x-coordinates of points in the Euclidean plane, and the numbers themselves as y-coordinates; conversely, for any point set in the plane, the y-coordinates of the points, ordered by their x-coordinates, forms a sequence of numbers (unless two of the points have equal x-coordinates). With this translation between sequences and point sets, the Erdős–Szekeres theorem can be interpreted as stating that in any set of at least rs − r − s + 2 points we can find a polygonal path of either r − 1 positive-slope edges or s − 1 negative-slope edges. In particular (taking r = s), in any set of at least n points we can find a
|
https://en.wikipedia.org/wiki/Colure
|
Colure, in astronomy, is either of the two principal meridians of the celestial sphere.
Equinoctial colure
The equinoctial colure is the meridian or great circle of the celestial sphere which passes through the celestial poles and the two equinoxes: the first point of Aries and the first point of Libra.
Solstitial colure
The solstitial colure is the meridian or great circle of the celestial sphere which passes through the poles and the two solstices: the first point of Cancer and the first point of Capricorn. There are several stars closely aligned with the solstitial colure: Pi Herculis, Delta Aurigae, and Theta Scorpii. This makes the solstitial colure point towards the North Celestial Pole and Polaris.
See also
Celestial coordinate system
Ecliptic
Celestial sphere
Right ascension
Equinox
Solstice
References
Kaler, Jim. "Pi Aurigae." Pi Aurigae. N.p. 22 Feb. 2008. Web.
Astronomical coordinate systems
Solstices
|
https://en.wikipedia.org/wiki/Constant%20speed%20drive
|
A constant speed drive (CSD), also known as a constant speed generator, is a type of transmission that takes an input shaft rotating at a wide range of speeds and delivers this power to an output shaft that rotates at a constant speed, despite the varying input. They are used to drive mechanisms, typically electrical generators, that require a constant input speed.
The term is most commonly applied to hydraulic transmissions found on the accessory drives of gas turbine engines, such as aircraft jet engines. On modern aircraft, the CSD is often combined with a generator into a single unit known as an integrated drive generator (IDG).
Mechanism
CSDs are mainly used on airliner and military aircraft jet engines to drive the alternating current (AC) electrical generator. In order to produce the proper voltage at a constant AC frequency, usually three-phase 115 VAC at 400 Hz, an alternator needs to spin at a constant specific speed (typically 6,000 RPM for air-cooled generators). Since the jet engine gearbox speed varies from idle to full power, this creates the need for a constant speed drive (CSD). The CSD takes the variable speed output of the accessory drive gearbox and hydro-mechanically produces a constant output RPM.
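The quoted figures are mutually consistent, as a quick check shows (electrical frequency equals mechanical revolutions per second times the number of pole pairs):
rpm = 6000                     # typical air-cooled generator speed
rev_per_s = rpm / 60           # 100 mechanical revolutions per second
target_hz = 400                # required AC output frequency
pole_pairs = target_hz / rev_per_s
print(f"{pole_pairs:.0f} pole pairs, i.e. an {2 * pole_pairs:.0f}-pole machine")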
Different systems have been used to control the alternator speed. Modern designs are mostly hydrokinetic, but early designs often took advantage of the bleed air available from the engines. Some of these were mostly mechanically powered, with an air turbine to provide a vernier speed adjustment. Others were purely turbine-driven.
Integrated drive generator
On aircraft such as the Airbus A310, Airbus A320 family, Airbus A320neo, Airbus A330, Airbus A330neo, Airbus A340, Boeing 737 Next Generation, 747, 757, 767 and 777, an integrated drive generator (IDG) is used. This unit is simply a CSD and an oil cooled generator inside the same case. Troubleshooting is simplified as this unit is the line-replaceable electrical generation unit on the engine.
Ma
|
https://en.wikipedia.org/wiki/Bockstein%20homomorphism
|
In homological algebra, the Bockstein homomorphism, introduced by Meyer Bockstein, is a connecting homomorphism associated with a short exact sequence
$0 \to P \to Q \to R \to 0$
of abelian groups, when they are introduced as coefficients into a chain complex C, and which appears in the homology groups as a homomorphism reducing degree by one,
$\beta\colon H_i(C, R) \to H_{i-1}(C, P).$
To be more precise, C should be a complex of free, or at least torsion-free, abelian groups, and the homology is of the complexes formed by tensor product with C (some flat module condition should enter). The construction of β is by the usual argument (snake lemma).
A similar construction applies to cohomology groups, this time increasing degree by one. Thus we have
$\beta\colon H^i(C, R) \to H^{i+1}(C, P).$
The Bockstein homomorphism $\beta$ associated to the coefficient sequence
$0 \to \mathbb{Z}/p\mathbb{Z} \to \mathbb{Z}/p^2\mathbb{Z} \to \mathbb{Z}/p\mathbb{Z} \to 0$
is used as one of the generators of the Steenrod algebra. This Bockstein homomorphism has the following two properties:
$\beta\beta = 0$;
$\beta(a \cup b) = \beta(a) \cup b + (-1)^{\dim a}\, a \cup \beta(b)$;
in other words, it is a superderivation acting on the cohomology mod p of a space.
See also
Bockstein spectral sequence
References
Algebraic topology
Homological algebra
|
https://en.wikipedia.org/wiki/Cartan%20formula
|
In mathematics, Cartan formula can mean:
one in differential geometry: $\mathcal{L}_X = d \circ \iota_X + \iota_X \circ d$, where $\mathcal{L}_X$, $d$, and $\iota_X$ are the Lie derivative, exterior derivative, and interior product, respectively, acting on differential forms. See interior product for the detail. It is also called the Cartan homotopy formula or Cartan magic formula. This formula is named after Élie Cartan.
one in algebraic topology, which is one of the five axioms of the Steenrod algebra. It reads:
$Sq^n(x \smile y) = \sum_{i+j=n} Sq^i(x) \smile Sq^j(y)$.
See Steenrod algebra for the detail. The name derives from Henri Cartan, son of Élie.
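As a quick sanity check of the differential-geometric formula (a worked example, not taken from the article), let $\omega = y\,dx$ on $\mathbb{R}^2$ and $X = \partial_x$. Then $\iota_X \omega = y$, so $d(\iota_X \omega) = dy$; and $d\omega = dy \wedge dx$, so $\iota_X(d\omega) = -dy$. Hence
$\mathcal{L}_X \omega = d(\iota_X \omega) + \iota_X(d\omega) = dy - dy = 0,$
consistent with $y\,dx$ being invariant under translation in the $x$-direction.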
Footnotes
See also
List of things named after Élie Cartan
|
https://en.wikipedia.org/wiki/Complete%20intersection
|
In mathematics, an algebraic variety V in projective space is a complete intersection if the ideal of V is generated by exactly codim V elements. That is, if V has dimension m and lies in projective space $P^n$, there should exist n − m homogeneous polynomials:
$F_i(X_0, \ldots, X_n), \quad 1 \le i \le n - m,$
in the homogeneous coordinates $X_j$, which generate all other homogeneous polynomials that vanish on V.
Geometrically, each $F_i$ defines a hypersurface; the intersection of these hypersurfaces should be V. The intersection of the hypersurfaces will always have dimension at least m, assuming that the field of scalars is an algebraically closed field such as the complex numbers. The question is essentially, can we get the dimension down to m, with no extra points in the intersection? This condition is fairly hard to check as soon as the codimension n − m ≥ 2. When n − m = 1, V is automatically a hypersurface and there is nothing to prove.
Examples
Easy examples of complete intersections are given by hypersurfaces, which are defined by the vanishing locus of a single polynomial. For example,
$x_0^5 + x_1^5 + x_2^5 + x_3^5 + x_4^5 = 0$
gives an example of a quintic threefold. It can be difficult to find explicit examples of complete intersections of higher-dimensional varieties using two or more explicit polynomials (bestiary), but there is an explicit example of a 3-fold of type (2,4), given by the intersection of a quadric and a quartic hypersurface in $P^5$.
Non-examples
Twisted cubic
One method for constructing local complete intersections is to take a projective complete intersection variety and embed it into a higher-dimensional projective space. A classic example of this is the twisted cubic in $P^3$: it is a smooth local complete intersection, meaning in any chart it can be expressed as the vanishing locus of two polynomials, but globally it is expressed by the vanishing locus of more than two polynomials. We can construct it using the very ample line bundle $\mathcal{O}(3)$ over $P^1$ giving the embedding
$P^1 \to P^3$
by
$[s : t] \mapsto [s^3 : s^2 t : s t^2 : t^3].$
Note that $H^0(P^1, \mathcal{O}(3)) \cong \mathbb{C}^4$. If we let $[x : y : z : w]$ be the coordinates on $P^3$, the embedding gives the following relations:
$xw - yz = 0, \quad xz - y^2 = 0, \quad yw - z^2 = 0.$
Hence the twisted cubic is the projective scheme
$\operatorname{Proj}\left(\mathbb{C}[x, y, z, w] / (xw - yz,\ xz - y^2,\ yw - z^2)\right).$
Union of varieties differing in dim
|
https://en.wikipedia.org/wiki/Complete%20intersection%20ring
|
In commutative algebra, a complete intersection ring is a commutative ring similar to the coordinate rings of varieties that are complete intersections. Informally, they can be thought of roughly as the local rings that can be defined using the "minimum possible" number of relations.
For Noetherian local rings, there is the following chain of inclusions:
regular local rings ⊂ complete intersection rings ⊂ Gorenstein rings ⊂ Cohen–Macaulay rings.
Definition
A local complete intersection ring is a Noetherian local ring whose completion is the quotient of a regular local ring by an ideal generated by a regular sequence. Taking the completion is a minor technical complication caused by the fact that not all local rings are quotients of regular ones. For rings that are quotients of regular local rings, which covers most local rings that occur in algebraic geometry, it is not necessary to take completions in the definition.
There is an alternative intrinsic definition that does not depend on embedding the ring in a regular local ring.
If R is a Noetherian local ring with maximal ideal m, then the dimension of m/m2 is called the embedding dimension emb dim (R) of R. Define a graded algebra H(R) as the homology of the Koszul complex with respect to a minimal system of generators of m/m2; up to isomorphism this only depends on R and not on the choice of the generators of m. The dimension of H1(R) is denoted by ε1 and is called the first deviation of R; it vanishes if and only if R is regular.
A Noetherian local ring is called a complete intersection ring if its
embedding dimension is the sum of the dimension and the first deviation:
emb dim(R) = dim(R) + ε1(R).
There is also a recursive characterization of local complete intersection rings that can be used as a definition, as follows. Suppose that R is a complete Noetherian local ring. If R has dimension greater than 0 and x is an element in the maximal ideal that is not a zero divisor then R is a complete intersection ring if and only if R/(x) is. (If the maximal ideal consists entirely of zero divisors th
|
https://en.wikipedia.org/wiki/Insects%20on%20stamps
|
Many countries have featured insects on stamps. Insect related topics such as the mosquito eradication (anti malaria) programme of the 1960s as well as graphic designs based on insects have also appeared. Many stamps also feature butterflies.
Insects only started to appear on stamps much later than other larger and more attractive animals. The first postal stamp featuring a beetle was released in 1948 in Chile as a tribute to natural historian Claudio Gay. Since then, insects have become popular subjects in philately. Between 1953 and 1969, about 100 stamps featuring beetles were published worldwide. Most of the time, aesthetically attractive species are pictured, but some stamps also feature pests. In other instances, due to simplified drawing, it is hard to identify what species is depicted on the stamp.
See also
Topical stamp collecting
Stamp catalog
References
External links
Skap's Bug Stamps. Multi-imaged and very complete (to N)
Insect Stamps (in easy French)
Beetles on Stamps
Insects on Stamps
www.malariastamps.com - 5GB of images of malaria and mosquitoes on stamps and slogans from over 100 countries
Animals on stamps
Stamps
Insects in culture
|
https://en.wikipedia.org/wiki/List%20of%20textbooks%20in%20thermodynamics%20and%20statistical%20mechanics
|
A list of notable textbooks in thermodynamics and statistical mechanics, arranged by category and date.
Only or mainly thermodynamics
Both thermodynamics and statistical mechanics
Kittel, Charles; and Kroemer, Herbert, Thermal Physics. 2e (1980) New York: W.H. Freeman
2e (1988) Chichester: Wiley.
(1990) New York: Dover
Statistical mechanics
2e (1936) Cambridge: University Press; (1980) Cambridge University Press.
(1979) New York: Dover
Landau, L.D. and Lifshitz, E.M., Statistical Physics, Part 1, Vol. 5 of the Course of Theoretical Physics. 3e (1976) Translated by J.B. Sykes and M.J. Kearsley (1980) Oxford: Pergamon Press.
3e (1995) Oxford: Butterworth-Heinemann
2e (1987) New York: Wiley
2e (1988) Amsterdam: North-Holland. 2e (1991) Berlin: Springer Verlag.
(2005) New York: Dover
2e (2000) Sausalito, Calif.: University Science
2e (1998) Chichester: Wiley
Specialized topics
Kinetic theory
Lifshitz, E.M. and Pitaevskii, L.P., Physical Kinetics, Vol. 10 of the Course of Theoretical Physics (3rd Ed). Translated by J.B. Sykes and R.N. Franklin (1981) London: Pergamon.
Quantum statistical mechanics
Mathematics of statistical mechanics
Khinchin, Aleksandr Ya., Mathematical Foundations of Statistical Mechanics. Translated by G. Gamow (1949) New York: Dover
Reissued (1974), (1989); (1999) Singapore: World Scientific
(1984) Cambridge: University Press. 2e (2004) Cambridge: University Press
Miscellaneous
(available online here)
Historical
Boltzmann, Ludwig, Lectures on Gas Theory (1896, 1898). Translated by Stephen G. Brush (1964) Berkeley: University of California Press; (1995) New York: Dover
Translated by J. Kestin (1956) New York: Academic Press.
Ehrenfest, Paul and Tatiana, The Conceptual Foundations of the Statistical Approach in Mechanics, German Encyclopedia of Mathematical Sciences. Translated by Michael J. Moravcsik (1959) Ithaca: Cornell University Press; (1990) New York: Dover
See also
List of textbooks on classical mechanics and quantum mechanics
List of textbooks in electromagnetism
List of books on general relativity
Further reading
References
External links
Statistical Mechanics and Thermodynamics Texts Clark University curriculum development project
Lists of science textbooks
Mathematics-related lists
Physics-related lists
|
https://en.wikipedia.org/wiki/Timing%20mark
|
A timing mark is an indicator used for setting the timing of the ignition system of an engine, typically found on the crankshaft pulley or the flywheel. These components have the largest radius of any part rotating at crankshaft speed, and are therefore the place where marks at one-degree intervals are farthest apart.
On older engines it is common to set the ignition timing using a timing light, which flashes in time with the ignition system (and hence engine rotation). Shining the light on the timing marks makes them appear stationary due to the stroboscopic effect. The ignition timing can then be adjusted to fire at the correct point in the engine's rotation, typically a few degrees before top dead centre and advancing with increasing engine speed. The timing can be adjusted by loosening and slightly rotating the distributor in its seat.
Modern engines usually use a crank sensor directly connected to the engine management system.
The term can also be used to describe the tick marks along the length of an optical mark recognition sheet, used to confirm the location of the sheet as it passes through the reader. See, for example, U.S. Patent 3,218,439 (filed 1964, granted 1965), which refers to a timing track down the left side of the form, and U.S. Patent 3,267,258 (filed 1963, granted 1966), which refers to a column of timing marks on the right side of the form.
The term can also be used to describe the timing patterns used in some barcodes, such as PostBar, Data Matrix, Aztec Code, etc.
References
Ignition systems
Synchronization
Engine technology
|
https://en.wikipedia.org/wiki/GIO
|
GIO is a computer bus standard developed by SGI and used in a variety of their products in the 1990s as their primary expansion system. GIO was similar in concept to competing standards such as NuBus or (later) PCI, but saw little use outside SGI and severely limited the devices available on their platform as a result. Most devices using GIO were SGI's own graphics cards, although a number of cards supporting high-speed data access such as Fibre Channel and FDDI were available from third parties. Later SGI machines use the XIO bus, which is laid out as a computer network as opposed to a bus.
Description
Like most buses of the era, GIO was a 32-bit address and data multiplexed bus that was normally clocked at 25 or 33 MHz. This meant that the bus used the same path for addressing and data, thus normally requiring three cycles to transfer a single 32-bit value; one cycle to send the address, the next to send the data and then another to read or write it. This limited the bus to a maximum throughput of about 16 Mbyte/s at 33 MHz for these sorts of small transfers. However, the system also included a long-burst read/write mode that allowed continual transfers of up to 4 kilobytes of data (the fundamental page size in R3000-based SGI machines); using this mode dramatically increased the throughput to 132 Mbyte/s (32 bits per cycle * 33 MHz). GIO also included a "real time" interrupt allowing devices to interrupt these long transfers if needed. Bus arbitration was controlled by the Processor Interface Controller (PIC) in the original R3000-based SGI Indigo systems.
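The arithmetic behind these figures can be sketched as follows (a back-of-the-envelope estimate only; real small transfers also pay arbitration and turnaround overhead, which is presumably why the quoted figure of about 16 Mbyte/s is lower than the three-cycle ideal):

# Idealized GIO throughput: bytes per word times clock, divided by
# cycles needed per 32-bit word.
clock_hz = 33e6
word_bytes = 4  # 32-bit bus

def throughput_mbyte_s(cycles_per_word):
    return clock_hz * word_bytes / cycles_per_word / 1e6

print(f"burst mode, 1 cycle/word:  {throughput_mbyte_s(1):.0f} Mbyte/s")  # ~132
print(f"3-cycle single transfers: {throughput_mbyte_s(3):.0f} Mbyte/s")   # ~44 ideal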
Physically, GIO used a 96-pin connector and fairly small cards 6.44 inches (16.3576 cm) long by 3.375 inches (8.5725 cm) wide. In the Indigo series, the cards were aligned vertically above each other within the case, as opposed to the more common arrangement where the cards lie at right angles to the motherboard. This led to a "tall and skinny" case design. Since the cards were "above" each other in-line, i
|
https://en.wikipedia.org/wiki/Microsoft%20Transaction%20Server
|
Microsoft Transaction Server (MTS) was software that provided services to Component Object Model (COM) software components, to make it easier to create large distributed applications. The major services provided by MTS were automated transaction management, instance management (or just-in-time activation) and role-based security. MTS is considered to be the first major software to implement aspect-oriented programming.
MTS was first offered in the Windows NT 4.0 Option Pack. In Windows 2000, MTS was enhanced and better integrated with the operating system and COM, and was renamed COM+. COM+ added object pooling, loosely-coupled events and user-defined simple transactions (compensating resource managers) to the features of MTS.
COM+ is still provided with Windows Server 2003 and Windows Server 2008, and the Microsoft .NET Framework provides a wrapper for COM+ in the EnterpriseServices namespace. The Windows Communication Foundation (WCF) provides a way of calling COM+ applications with web services. However, COM+ is based on COM, and Microsoft's strategic software architecture is now web services and .NET, not COM. There are pure .NET-based alternatives for many of the features provided by COM+, and in the long term it is likely COM+ will be phased out.
Architecture
A basic MTS architecture comprises:
the MTS Executive (mtxex.dll)
the Factory Wrappers and Context Wrappers for each component
the MTS Server Component
MTS clients
auxiliary systems like:
COM runtime services
the Service Control Manager (SCM)
the Microsoft Distributed Transaction Coordinator (MS-DTC)
the Microsoft Message Queue (MSMQ)
the COM-Transaction Integrator (COM-TI)
etc.
COM components that run under the control of the MTS Executive are called MTS components. In COM+, they are referred to as COM+ Applications. MTS components are in-process DLLs. MTS components are deployed and run in the MTS Executive which manages them. As with other COM components, an object implementing the IC
|
https://en.wikipedia.org/wiki/Adjunction%20formula
|
In mathematics, especially in algebraic geometry and the theory of complex manifolds, the adjunction formula relates the canonical bundle of a variety and a hypersurface inside that variety. It is often used to deduce facts about varieties embedded in well-behaved spaces such as projective space or to prove theorems by induction.
Adjunction for smooth varieties
Formula for a smooth subvariety
Let X be a smooth algebraic variety or smooth complex manifold and Y be a smooth subvariety of X. Denote the inclusion map by i and the ideal sheaf of Y in X by I. The conormal exact sequence for i is
0 → I/I^2 → i*Ω_X → Ω_Y → 0,
where Ω denotes a cotangent bundle. The determinant of this exact sequence is a natural isomorphism
ω_Y = i*ω_X ⊗ det(I/I^2)^∨,
where ^∨ denotes the dual of a line bundle.
The particular case of a smooth divisor
Suppose that D is a smooth divisor on X. Its normal bundle extends to a line bundle O(D) on X, and the ideal sheaf of D corresponds to its dual O(−D). The conormal bundle I/I^2 is O(−D)|_D, which, combined with the formula above, gives
ω_D = (ω_X ⊗ O(D))|_D.
In terms of canonical classes, this says that
K_D = (K_X + D)|_D.
Both of these two formulas are called the adjunction formula.
Examples
Degree d hypersurfaces
Given a smooth degree d hypersurface X ⊂ Pn we can compute its canonical and anti-canonical bundles using the adjunction formula. This reads as
ω_X = (ω_Pn ⊗ O(d))|_X,
which is isomorphic to O(d − n − 1)|_X.
Complete intersections
For a smooth complete intersection X ⊂ Pn of degrees (d1, d2), the conormal bundle I/I^2 is isomorphic to O(−d1)|_X ⊕ O(−d2)|_X, so the determinant bundle is O(−d1 − d2)|_X and its dual is O(d1 + d2)|_X, showing
ω_X = O(−n − 1)|_X ⊗ O(d1 + d2)|_X = O(d1 + d2 − n − 1)|_X.
This generalizes in the same fashion for all complete intersections.
Curves in a quadric surface
P1 × P1 embeds into P3 as a quadric surface given by the vanishing locus of a quadratic polynomial coming from a non-singular symmetric matrix. We can then restrict our attention to curves on Y = P1 × P1. We can compute the cotangent bundle of Y using the direct sum of the cotangent bundles on each P1, so it is O(−2, 0) ⊕ O(0, −2). Then, the canonical sheaf is given by O(−2, −2), which can be found using the decomposition of wedges of direct sums of vector bundles. Then, usi
|
https://en.wikipedia.org/wiki/Concept%20drift
|
In predictive analytics, data science, machine learning and related fields, concept drift or drift is an evolution of data that invalidates the data model. It happens when the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes. Drift detection and drift adaptation are of paramount importance in the fields that involve dynamically changing data and data models.
Predictive model decay
In machine learning and predictive analytics this drift phenomenon is called concept drift. In machine learning, common elements of a data model are statistical properties such as the probability distribution of the actual data. If these deviate from the statistical properties of the training data set, then the learned predictions may become invalid if the drift is not addressed.
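One common way to check for this kind of deviation (an illustration, not a method prescribed by the article) is a two-sample test between the training-time and live distributions of a feature:

# Flag drift when a feature's live distribution differs significantly
# from its training distribution (Kolmogorov–Smirnov two-sample test).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time feature values
live = rng.normal(loc=0.4, scale=1.0, size=5000)   # live data whose mean has drifted

stat, p_value = ks_2samp(train, live)
print(f"KS statistic = {stat:.3f}, p = {p_value:.2g}")
if p_value < 0.01:
    print("distribution shift detected -> consider retraining")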
Data configuration decay
Another important area is software engineering, where three types of data drift affecting data fidelity may be recognized. Changes in the software environment ("infrastructure drift") may invalidate software infrastructure configuration. "Structural drift" happens when the data schema changes, which may invalidate databases. "Semantic drift" is changes in the meaning of data while the structure does not change. In many cases this may happen in complicated applications when many independent developers introduce changes without proper awareness of the effects of their changes in other areas of the software system.
For many application systems, the nature of data on which they operate are subject to changes for various reasons, e.g., due to changes in business model, system updates, or switching the platform on which the system operates.
In the case of cloud computing, infrastructure drift that may affect the applications running on cloud may be caused by the updates of cloud software.
There are several types of detrimental effects o
|
https://en.wikipedia.org/wiki/Disk%20mirroring
|
In data storage, disk mirroring is the replication of logical disk volumes onto separate physical hard disks in real time to ensure continuous availability. It is most commonly used in RAID 1. A mirrored volume is a complete logical representation of separate volume copies.
In a disaster recovery context, mirroring data over long distance is referred to as storage replication. Depending on the technologies used, replication can be performed synchronously, asynchronously, semi-synchronously, or point-in-time. Replication is enabled via microcode on the disk array controller or via server software. It is typically a proprietary solution, not compatible between various data storage device vendors.
Mirroring is typically only synchronous. Synchronous writing typically achieves a recovery point objective (RPO) of zero lost data. Asynchronous replication can achieve an RPO of just a few seconds while the remaining methodologies provide an RPO of a few minutes to perhaps several hours.
Disk mirroring differs from file shadowing that operates on the file level, and disk snapshots where data images are never re-synced with their origins.
Overview
Typically, mirroring is provided in either hardware solutions such as disk arrays, or in software within the operating system (such as Linux mdadm and device mapper). Additionally, file systems like Btrfs or ZFS provide integrated data mirroring. There are additional benefits from Btrfs and ZFS, which maintain both data and metadata integrity checksums, making themselves capable of detecting bad copies of blocks, and using mirrored data to pull up data from correct blocks.
There are several scenarios for what happens when a disk fails. In a hot swap system, in the event of a disk failure, the system itself typically diagnoses a disk failure and signals a failure. Sophisticated systems may automatically activate a hot standby disk and use the remaining active disk to copy live data onto this disk. Alternatively, a new disk is
|
https://en.wikipedia.org/wiki/Peetre%20theorem
|
In mathematics, the (linear) Peetre theorem, named after Jaak Peetre, is a result of functional analysis that gives a characterisation of differential operators in terms of their effect on generalized function spaces, and without mentioning differentiation in explicit terms. The Peetre theorem is an example of a finite order theorem in which a function or a functor, defined in a very general way, can in fact be shown to be a polynomial because of some extraneous condition or symmetry imposed upon it.
This article treats two forms of the Peetre theorem. The first is the original version which, although quite useful in its own right, is actually too general for most applications.
The original Peetre theorem
Let M be a smooth manifold and let E and F be two vector bundles on M. Let
Γ∞(E) and Γ∞(F)
be the spaces of smooth sections of E and F. An operator
D : Γ∞(E) → Γ∞(F)
is a morphism of sheaves which is linear on sections such that the support of D is non-increasing: supp Ds ⊆ supp s for every smooth section s of E. The original Peetre theorem asserts that, for every point p in M, there is a neighborhood U of p and an integer k (depending on U) such that D is a differential operator of order k over U. This means that D factors through a linear mapping iD from the k-jet of sections of E into the space of smooth sections of F:
D = iD ∘ jk,
where
jk : Γ∞(E) → Γ∞(Jk E)
is the k-jet operator and
iD : Jk E → F
is a linear mapping of vector bundles.
Proof
The problem is invariant under local diffeomorphism, so it is sufficient to prove it when M is an open set in Rn and E and F are trivial bundles. At this point, it relies primarily on two lemmas:
Lemma 1. If the hypotheses of the theorem are satisfied, then for every x∈M and C > 0, there exists a neighborhood V of x and a positive integer k such that for any y∈V\{x} and for any section s of E whose k-jet vanishes at y (jks(y)=0), we have |Ds(y)|<C.
Lemma 2. The first lemma is sufficient to prove the theorem.
We begin with the proof of Lemma 1.
Suppose the lemma is false. Then there
|
https://en.wikipedia.org/wiki/International%20Article%20Number
|
The International Article Number (also known as European Article Number or EAN) is a standard describing a barcode symbology and numbering system used in global trade to identify a specific retail product type, in a specific packaging configuration, from a specific manufacturer. The standard has been subsumed in the Global Trade Item Number standard from the GS1 organization; the same numbers can be referred to as GTINs and can be encoded in other barcode symbologies defined by GS1. EAN barcodes are used worldwide for lookup at retail point of sale, but can also be used as numbers for other purposes such as wholesale ordering or accounting. These barcodes only represent the digits 0–9, unlike some other barcode symbologies which can represent additional characters.
The most commonly used EAN standard is the thirteen-digit EAN-13, a superset of the original 12-digit Universal Product Code (UPC-A) standard developed in 1970 by George J. Laurer. An EAN-13 number includes a 3-digit GS1 prefix (indicating country of registration or special type of product). A prefix with a first digit of "0" indicates a 12-digit UPC-A code follows. A prefix with first two digits of "45" or "49" indicates a Japanese Article Number (JAN) follows.
The less commonly used 8-digit EAN-8 barcode was introduced for use on small packages, where EAN-13 would be too large. 2-digit EAN-2 and 5-digit EAN-5 are supplemental barcodes, placed on the right-hand side of EAN-13 or UPC. These are generally used in periodicals, like magazines and books, to indicate the current year's issue number and in weighed products like food, to indicate the manufacturer's suggested retail price.
Composition
The 13-digit EAN-13 number consists of four components:
GS1 prefix – 3 digits
Manufacturer code – variable length
Product code – variable length
Check digit
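The check digit is chosen so that a weighted sum over all thirteen digits is a multiple of 10, with weights alternating 1 and 3 across the first twelve positions (standard GS1 practice, stated here since the excerpt is cut off before reaching it). A minimal sketch, with an invented number:

# Compute the EAN-13 check digit from the first twelve digits.
def ean13_check_digit(first12: str) -> int:
    assert len(first12) == 12 and first12.isdigit()
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

prefix, manufacturer, product = "400", "63861", "3900"  # invented example
body = prefix + manufacturer + product                  # 12 digits
print(body + str(ean13_check_digit(body)))              # full EAN-13 number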
GS1 prefix
The first three digits of the EAN-13 (GS1 Prefix) usually identify the GS1 Member Organization which the manufacturer has joined (not ne
|
https://en.wikipedia.org/wiki/Order%20dimension
|
In mathematics, the dimension of a partially ordered set (poset) is the smallest number of total orders the intersection of which gives rise to the partial order.
This concept is also sometimes called the order dimension or the Dushnik–Miller dimension of the partial order.
Dushnik and Miller (1941) first studied order dimension; for a more detailed treatment of this subject than provided here, see Trotter (1992).
Formal definition
The dimension of a finite poset P is the least integer t for which there exists a family
R = (<1, <2, …, <t)
of linear extensions of P so that, for every x and y in P, x precedes y in P if and only if it precedes y in all of the linear extensions. That is,
x <P y if and only if x <i y for every i = 1, …, t.
An alternative definition of order dimension is the minimal number of total orders such that P embeds into their product with componentwise ordering, i.e. x ≤ y in P if and only if xi ≤ yi for all i.
Realizers
A family R = (<1, …, <t) of linear orders on X is called a realizer of a poset P = (X, <P) if
<P = <1 ∩ <2 ∩ ⋯ ∩ <t,
which is to say that for any x and y in X,
x <P y precisely when x <1 y, x <2 y, ..., and x <t y.
Thus, an equivalent definition of the dimension of a poset P is "the least cardinality of a realizer of P."
It can be shown that any nonempty family R of linear extensions is a realizer of a finite partially ordered set P if and only if, for every critical pair (x,y) of P, y <i x for some order
<i in R.
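The realizer condition can be checked directly; here is a minimal sketch on an invented four-element "diamond" poset (a < b, a < c, b < d, c < d, with b and c incomparable), showing its dimension is at most 2:

# A family of total orders realizes a poset iff x <_P y exactly when
# x precedes y in every order of the family.
from itertools import combinations

def is_realizer(elements, relation, orders):
    """relation: set of pairs (x, y) with x <_P y; orders: element sequences."""
    pos = [{x: i for i, x in enumerate(order)} for order in orders]
    for x, y in combinations(elements, 2):
        for u, v in ((x, y), (y, x)):
            in_all = all(p[u] < p[v] for p in pos)
            if ((u, v) in relation) != in_all:
                return False
    return True

elements = "abcd"
relation = {("a","b"), ("a","c"), ("b","d"), ("c","d"), ("a","d")}
orders = [list("abcd"), list("acbd")]  # two linear extensions
print(is_realizer(elements, relation, orders))  # True -> dimension <= 2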
Example
Let n be a positive integer, and let P be the partial order on the elements ai and bi (for 1 ≤ i ≤ n) in which ai ≤ bj whenever i ≠ j, but no other pairs are comparable. In particular, ai and bi are incomparable in P; P can be viewed as an oriented form of a crown graph. The illustration shows an ordering of this type for n = 4.
Then, for each i, any realizer must contain a linear order that begins with all the aj except ai (in some order), then includes bi, then ai, and ends with all the remaining bj. This is so because if there were a realizer that didn't include such an order, then the intersection of that realizer's orders would have ai preceding bi, wh
|
https://en.wikipedia.org/wiki/List%20of%20automation%20protocols
|
This is a list of communication protocols used for the automation of processes (industrial or otherwise), such as for building automation, power-system automation, automatic meter reading, and vehicular automation.
Process automation protocols
AS-i – Actuator-sensor interface, a low level 2-wire bus establishing power and communications to basic digital and analog devices
BSAP – Bristol Standard Asynchronous Protocol, developed by Bristol Babcock Inc.
CC-Link Industrial Networks – Supported by the CLPA
CIP (Common Industrial Protocol) – can be treated as application layer common to DeviceNet, CompoNet, ControlNet and EtherNet/IP
ControlNet – an implementation of CIP, originally by Allen-Bradley
DeviceNet – an implementation of CIP, originally by Allen-Bradley
DF-1 - used by Allen-Bradley ControlLogix, CompactLogix, PLC-5, SLC-500, and MicroLogix class devices
DNP3 - a protocol used to communicate by industrial control and utility SCADA systems
DirectNet – Koyo / Automation Direct proprietary, yet documented PLC interface
EtherCAT
Ethernet Global Data (EGD) – GE Fanuc PLCs (see also SRTP)
EtherNet/IP – IP stands for "Industrial Protocol". An implementation of CIP, originally created by Rockwell Automation
Ethernet Powerlink – an open protocol managed by the Ethernet POWERLINK Standardization Group (EPSG).
FINS, Omron's protocol for communication over several networks, including Ethernet.
FOUNDATION fieldbus – H1 & HSE
HART Protocol
HostLink Protocol, Omron's protocol for communication over serial links.
Interbus, Phoenix Contact's protocol for communication over serial links, now part of PROFINET IO
MECHATROLINK – open protocol originally developed by Yaskawa, supported by the MMA
MelsecNet, and MelsecNet II, /B, and /H, supported by Mitsubishi Electric.
Modbus PEMEX
Modbus Plus
Modbus RTU or ASCII or TCP
MPI – Multi Point Interface
OSGP – The Open Smart Grid Protocol, a widely used protocol for smart grid devices built on ISO/IEC 14908.1
OpenADR – Open Automated D
|
https://en.wikipedia.org/wiki/Schnyder%27s%20theorem
|
In graph theory, Schnyder's theorem is a characterization of planar graphs in terms
of the order dimension of their incidence posets. It is named after Walter Schnyder, who published its proof in 1989.
The incidence poset P(G) of an undirected graph G with vertex set V and edge set E is the partially ordered set of height 2 that has V ∪ E as its elements. In this partial order, there is an order relation v < e when v is a vertex, e is an edge, and v is one of the two endpoints of e.
The order dimension of a partial order is the smallest number of total orderings whose intersection is the given partial order; such a set of orderings is called a realizer of the partial order.
Schnyder's theorem states that a graph G is planar if and only if the order dimension of P(G) is at most three.
Extensions
This theorem has been generalized by Brightwell and Trotter to a tight bound on the dimension of the height-three partially ordered sets formed analogously from the vertices, edges and faces of a convex polyhedron, or more generally of an embedded planar graph: in both cases, the order dimension of the poset is at most four. However, this result cannot be generalized to higher-dimensional convex polytopes, as there exist four-dimensional polytopes whose face lattices have unbounded order dimension.
Even more generally, for abstract simplicial complexes, the order dimension of the face poset of the complex is at most 1 + d, where d is the minimum dimension of a Euclidean space in which the complex has a geometric realization.
Other graphs
As Schnyder observes, the incidence poset of a graph G has order dimension two if and only if the graph is a path or a subgraph of a path. For, when an incidence poset has order dimension two, its only possible realizer consists of two total orders that (when restricted to the graph's vertices) are the reverse of each other. Any other two orders would have an intersection that includes an order relation between two vertices, which is not allowed for incidence posets. For these two o
|
https://en.wikipedia.org/wiki/Degree%20of%20anonymity
|
In anonymity networks (e.g., Tor, Crowds, Mixmaster, I2P, etc.), it is important to be able to measure quantitatively the guarantee that is given to the system. The degree of anonymity d is a device that was proposed at the 2002 Privacy Enhancing Technology (PET) conference. Two papers put forth the idea of using entropy as the basis for formally measuring anonymity: "Towards an Information Theoretic Metric for Anonymity", and "Towards Measuring Anonymity". The ideas presented are very similar with minor differences in the final definition of d.
Background
Anonymity networks have been developed, and many have introduced methods of proving the anonymity guarantees that are possible; originally, with simple Chaum mixes and pool mixes, the size of the set of users was seen as the security that the system could provide to a user. This had a number of problems: intuitively, if the network is international then it is unlikely that a message that contains only Urdu came from the United States, and vice versa. Information like this, and methods like the predecessor attack and intersection attack, help an attacker increase the probability that a particular user sent the message.
Example With Pool Mixes
As an example, consider a network in which users (senders) communicate with servers (receivers) through a cascade of pool mixes. Since a pool mix waits for a cap of incoming messages before sending any of them on, an observer who sees a server receive a message when only one of the links into the final mix can have carried a message knows which sender it must have come from. This is in no way reflected in that sender's anonymity set, but should be taken into account in the analysis of the network.
Degree of Anonymity
The degree of anonymity takes into account the probability associated with each user; it begins by defining the entropy of the system (here is where the papers differ slightly, but only in notation,
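In the standard formulation from these two papers, the entropy H(X) of the attacker's probability distribution over the N possible senders is compared with its maximum value H_M = log2 N, and the degree of anonymity is d = H(X)/H_M. A minimal sketch, with invented probabilities:

# Entropy-based degree of anonymity: 1.0 means the attacker learns nothing,
# values below 1.0 mean the attacker has gained information.
import math

def degree_of_anonymity(probs):
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    h_max = math.log2(len(probs))
    return h / h_max

print(degree_of_anonymity([0.25, 0.25, 0.25, 0.25]))  # 1.0 (perfect anonymity)
print(degree_of_anonymity([0.7, 0.1, 0.1, 0.1]))      # ~0.68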
|
https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Steenrod%20axioms
|
In mathematics, specifically in algebraic topology, the Eilenberg–Steenrod axioms are properties that homology theories of topological spaces have in common. The quintessential example of a homology theory satisfying the axioms is singular homology, developed by Samuel Eilenberg and Norman Steenrod.
One can define a homology theory as a sequence of functors satisfying the Eilenberg–Steenrod axioms. The axiomatic approach, which was developed in 1945, allows one to prove results, such as the Mayer–Vietoris sequence, that are common to all homology theories satisfying the axioms.
If one omits the dimension axiom (described below), then the remaining axioms define what is called an extraordinary homology theory. Extraordinary cohomology theories first arose in K-theory and cobordism.
Formal definition
The Eilenberg–Steenrod axioms apply to a sequence of functors Hn from the category of pairs (X, A) of topological spaces to the category of abelian groups, together with a natural transformation ∂: Hn(X, A) → Hn−1(A) called the boundary map (here Hn−1(A) is a shorthand for Hn−1(A, ∅)). The axioms are:
Homotopy: Homotopic maps induce the same map in homology. That is, if f: (X, A) → (Y, B) is homotopic to g: (X, A) → (Y, B), then their induced homomorphisms are the same.
Excision: If (X, A) is a pair and U is a subset of A such that the closure of U is contained in the interior of A, then the inclusion map i: (X − U, A − U) → (X, A) induces an isomorphism in homology.
Dimension: Let P be the one-point space; then Hn(P) = 0 for all n ≠ 0.
Additivity: If X = ⊔α Xα, the disjoint union of a family of topological spaces Xα, then Hn(X) ≅ ⊕α Hn(Xα).
Exactness: Each pair (X, A) induces a long exact sequence in homology, via the inclusions i: A → X and j: (X, ∅) → (X, A):
⋯ → Hn(A) → Hn(X) → Hn(X, A) → Hn−1(A) → ⋯
If P is the one point space, then is called the coefficient group. For example, singular homology (taken with integer coefficients, as is most common) has as coefficients the integers.
Consequences
Some facts about homology groups can be derived directly from the axioms, such as the fact that homotopically equivalent spaces have isomorphic homology groups.
The homology of some
|
https://en.wikipedia.org/wiki/Solid-state%20lighting
|
Solid-state lighting (SSL) is a type of lighting that uses semiconductor light-emitting diodes (LEDs), organic light-emitting diodes (OLED), or polymer light-emitting diodes (PLED) as sources of illumination rather than electrical filaments, plasma (used in arc lamps such as fluorescent lamps), or gas.
Solid state electroluminescence is used in SSL, as opposed to incandescent bulbs (which use thermal radiation) or fluorescent tubes. Compared to incandescent lighting, SSL creates visible light with reduced heat generation and less energy dissipation. Most common "white LEDs" convert blue light from a solid-state device to an (approximate) white light spectrum using photoluminescence, the same principle used in conventional fluorescent tubes.
The typically small mass of a solid-state electronic lighting device provides for greater resistance to shock and vibration compared to brittle glass tubes/bulbs and long, thin filament wires. They also eliminate filament evaporation, potentially increasing the life span of the illumination device.
Solid-state lighting is often used in traffic lights and is also used in modern vehicle lights, street and parking lot lights, train marker lights, building exteriors, remote controls etc. Controlling the light emission of LEDs may be done most effectively by using the principles of nonimaging optics. Solid-state lighting has made significant advances in industry. In the entertainment lighting industry, standard incandescent tungsten-halogen lamps are being replaced by solid-state lighting fixtures.
See also
L Prize
LED lamp
List of light sources
Smart lighting
References
Further reading
Kho, Mu-Jeong, Javed, T., Mark, R., Maier, E., and David, C. (2008) 'Final Report: OLED Solid State Lighting: Kodak European Research' MOTI (Management of Technology and Innovation) Project, Judge Business School of the University of Cambridge and Kodak European Research, Final Report presented on 4 March 2008 at Kodak European Research at C
|
https://en.wikipedia.org/wiki/American%20Society%20for%20Engineering%20Education
|
The American Society for Engineering Education (ASEE) is a non-profit member association, founded in 1893, dedicated to promoting and improving engineering and engineering technology education. The purpose of ASEE is the advancement of education in all of its functions which pertain to engineering and allied branches of science and technology, including the processes of teaching and learning, counseling, research, extension services and public relations. ASEE administers the engineering technology honor society Tau Alpha Pi.
History
A full reading of the history of ASEE can be found in a 1993 centennial article in the Journal of Engineering Education.
Founded initially as the Society for the Promotion of Engineering Education (SPEE) in 1893, the society was created at a time of great growth in American higher education. In 1862, Congress passed the Morrill Land-Grant Act, which provided money for states to establish public institutions of higher education. These institutions focused on providing practical skills, especially "for the benefit of Agriculture and the Mechanic Arts". As a result of increasingly available higher education, more Americans started entering the workforce with advanced training in applied fields of knowledge. However, they often lacked grounding in the science and engineering principles underlying this practical knowledge.
After a generation of students had passed through these new public universities, professors of engineering began to question whether they should adopt a more rigorous approach to teaching the fundamentals of their field. Ultimately, they concluded that engineering curricula should stress fundamental scientific and mathematical principles, not hands-on apprenticeship experiences. To organize support for this approach to engineering education, SPEE was formed in the midst of the 1893 Chicago World’s Fair. Known as the World's Columbian Exposition, this event heralded the promise of science and engineering by introducing ma
|
https://en.wikipedia.org/wiki/Feigenbaum%20function
|
In the study of dynamical systems the term Feigenbaum function has been used to describe two different functions introduced by the physicist Mitchell Feigenbaum:
the solution to the Feigenbaum-Cvitanović functional equation; and
the scaling function that described the covers of the attractor of the logistic map
Feigenbaum-Cvitanović functional equation
This functional equation arises in the study of one-dimensional maps that, as a function of a parameter, go through a period-doubling cascade. Discovered by Mitchell Feigenbaum and Predrag Cvitanović, the equation is the mathematical expression of the universality of period doubling. It specifies a function g and a parameter α by the relation
g(x) = −α·g(g(x/α))
with the initial conditions g(0) = 1 and g′(0) = 0. For a particular form of solution with a quadratic dependence of the solution near x = 0, the value α = 2.5029… is one of the Feigenbaum constants.
The power series of g is approximately
g(x) ≈ 1 − 1.52763·x^2 + 0.10482·x^4 + 0.02671·x^6 + ⋯
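With the truncated series above, the functional equation can be checked numerically; the residual is small but nonzero because the series is cut off (a minimal sketch):

# Residual of g(x) = -alpha * g(g(x/alpha)) for the truncated series.
alpha = 2.5029  # Feigenbaum's alpha

def g(x):
    return 1 - 1.52763 * x**2 + 0.10482 * x**4 + 0.02671 * x**6

xs = [i / 100 for i in range(101)]  # sample points in [0, 1]
residual = max(abs(g(x) + alpha * g(g(x / alpha))) for x in xs)
print(f"max residual on [0, 1]: {residual:.3f}")  # ~0.01, reflecting truncation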
Renormalization
The Feigenbaum function can be derived by a renormalization argument.
The Feigenbaum function satisfies
g(x) = lim (n → ∞) α^n · f^(2^n)(x/α^n),
where f^(m) denotes the m-fold iterate of f, for any map f on the real line at the onset of chaos.
Scaling function
The Feigenbaum scaling function provides a complete description of the attractor of the logistic map at the end of the period-doubling cascade. The attractor is a Cantor set, and just as the middle-third Cantor set, it can be covered by a finite set of segments, all bigger than a minimal size dn. For a fixed dn the set of segments forms a cover Δn of the attractor. The ratio of segments from two consecutive covers, Δn and Δn+1 can be arranged to approximate a function σ, the Feigenbaum scaling function.
See also
Logistic map
Presentation function
Notes
Bibliography
Bound as Order in Chaos, Proceedings of the International Conference on Order and Chaos held at the Center for Nonlinear Studies, Los Alamos, New Mexico 87545,USA 24–28 May 1982, Eds. David Campbell, Harvey Rose; North-Holland Amsterdam .
Chaos theory
Dynamical systems
|
https://en.wikipedia.org/wiki/NWLink
|
NWLink is Microsoft's implementation of Novell's IPX/SPX protocols. NWLink includes an implementation of NetBIOS atop IPX/SPX.
NWLink packages data to be compatible with client/server services on NetWare Networks. However, NWLink does not provide access to NetWare File and Print Services. To access the File and Print Services the Client Service for NetWare needs to be installed.
NWLink connects NetWare servers through the Gateway Service for NetWare or Client Service for NetWare and provides the transport protocol that connects Windows operating systems to IPX/SPX NetWare networks and compatible operating systems. NWLink supports NetBIOS and Windows Sockets application programming interfaces (API).
NWLink protocols are as follows:
SPX/SPXII
IPX
Service Advertising Protocol (SAP)
Routing Information Protocol (RIP)
NetBIOS
Forwarder
NWLink also provides the following functionalities:
Runs other communication protocol stacks, such as Transmission Control Protocol/Internet Protocol (TCP/IP)
Uses multiple frame types for network adapter binding
Using NWLink IPX/SPX/NetBIOS
NWLink IPX/SPX/NetBIOS Compatible Transport is Microsoft's implementation of the Novell IPX/SPX (Internetwork Packet Exchange/Sequenced Packet Exchange) protocol stack. The Windows XP implementation of the IPX/SPX protocol stack adds NetBIOS support.
The main function of NWLink is to act as a transport protocol to route packets through internetworks. By itself, the NWLink protocol does not allow you to access the data across the network. If you want to access NetWare File and Print Services, you must install NWLink and Client Services for NetWare (software that works at the upper layers of the OSI model to allow access to file and print services).
One advantage of using NWLink is that it is easy to install and configure.
Configuring NWLink IPX/SPX
The only options that are configured for NWLink are the internal network number and the frame type. Normally, you leave both settings at their
|
https://en.wikipedia.org/wiki/Weierstrass%20point
|
In mathematics, a Weierstrass point P on a nonsingular algebraic curve C defined over the complex numbers is a point such that there are more functions on C, with their poles restricted to P only, than would be predicted by the Riemann–Roch theorem.
The concept is named after Karl Weierstrass.
Consider the vector spaces
L(0), L(P), L(2P), L(3P), …,
where L(nP) is the space of meromorphic functions on C whose order at P is at least −n and with no other poles. We know three things: the dimension is at least 1, because of the constant functions on C; it is non-decreasing; and from the Riemann–Roch theorem the dimension eventually increments by exactly 1 as we move to the right. In fact if g is the genus of C, the dimension from the n-th term is known to be
ℓ(nP) = n − g + 1, for n ≥ 2g − 1.
Our knowledge of the sequence is therefore
1, ?, ?, …, ?, g, g + 1, g + 2, ….
What we know about the ? entries is that they can increment by at most 1 each time (this is a simple argument: L(nP)/L((n − 1)P) has dimension at most 1, because if f and h have the same order of pole at P, then f − ch will have a pole of lower order if the constant c is chosen to cancel the leading term). There are 2g − 2 question marks here, so the cases g = 0 or g = 1 need no further discussion and do not give rise to Weierstrass points.
Assume therefore g ≥ 2. There will be g − 1 steps up, and g steps where there is no increment. A non-Weierstrass point of C occurs whenever the increments are all as far to the right as possible: i.e. the sequence looks like
1, 1, …, 1, 2, 3, 4, …, g, g + 1, ….
Any other case is a Weierstrass point. A Weierstrass gap for P is a value of n such that no function on C has exactly an n-fold pole at P only. The gap sequence is
1, 2, …, g
for a non-Weierstrass point. For a Weierstrass point it contains at least one higher number. (The Weierstrass gap theorem or Lückensatz is the statement that there must be g gaps.)
For hyperelliptic curves, for example, we may have a function x with a double pole at P only. Its powers have poles of order 4, 6 and so on. Therefore, such a P has the gap sequence
1, 3, 5, …, 2g − 1.
In general if the gap sequence is
a, b, c, …,
the weight of the Weierstrass point is
(a − 1) + (b − 2) + (c − 3) + ⋯.
This is introduced
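A minimal sketch computing this weight for the hyperelliptic gap sequence above (the closed form g(g − 1)/2 follows from the arithmetic):

# Weight of a Weierstrass point from its gap sequence:
# (a - 1) + (b - 2) + (c - 3) + ...
def weight(gaps):
    return sum(gap - k for k, gap in enumerate(gaps, start=1))

g = 4
hyperelliptic_gaps = [2 * k - 1 for k in range(1, g + 1)]  # [1, 3, 5, 7]
print(weight(hyperelliptic_gaps))  # 0 + 1 + 2 + 3 = 6 = g*(g-1)/2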
|
https://en.wikipedia.org/wiki/European%20embedded%20value
|
The European embedded value (EEV) is an effort by the CFO Forum to standardize the calculation of the embedded value. For this purpose the CFO Forum has released guidelines how embedded value should be calculated.
There is a lot of subjectivity involved in calculating the value of a life insurer. Insurance contracts are long-term contracts, so the value of the company now is dependent on how each of those contracts end up performing. Profit is made if the policyholder does not die, for example, and just contributes premiums over many years. Losses are possible for policies where the insured dies soon after signing the contract. And profitability is also affected by whether (and when) a policy might terminate early.
An actuary calculates an embedded value by making certain assumptions about life expectancy, persistency, investment conditions, and so on - thus making an estimate of what the company is worth now. But if each person has a different opinion on how things will turn out, you could expect a range of inconsistent estimates of the worth of the company. With this range of approaches, it is very difficult to compare EV calculations between companies.
The CFO Forum was formed to consider general issues relevant to measuring the value of insurance companies. The EEV was the output of this forum, and allows greater consistency in such calculations, making them more useful.
Types
EEV can be "real world" or "market consistent". The former takes the best estimate for parameters that are available, whereas the latter uses a slightly constrained set of parameters which are close to best estimate, but which produce results which match market-related hedge costs.
Real-world EEV usually uses a risk discount rate made up of the risk-free rate plus a risk margin which reflects the weighted average cost of capital and Beta from the CAPM model. Using company-level economic models clearly reflects a top-down approach to determining the risk discount rate.
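As a minimal sketch of that CAPM-style construction (all numbers invented for illustration; an actual valuation involves many more inputs):

# "Real world" EEV risk discount rate built from CAPM components.
risk_free = 0.04            # risk-free rate
equity_risk_premium = 0.05  # market risk premium
beta = 0.9                  # insurer's equity beta

risk_discount_rate = risk_free + beta * equity_risk_premium
print(f"risk discount rate = {risk_discount_rate:.1%}")  # 8.5%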
Market-cons
|
https://en.wikipedia.org/wiki/Kokee%20Ditch
|
The Kōkee Ditch is an irrigation canal on the island of Kauai.
In 1923, construction began on the Kōkee Ditch system to open the mauka hills to sugar cane production.
By 1926, the Kōkee Ditch was completed, diverting water from Mohihi Stream and the headwaters of the Waimea River in the Alakai Swamp at an altitude of about 3400 feet. About one-fourth of the Kōkee Ditch supply irrigated the highland sugar cane fields below Puu Ōpae reservoir on Niu Ridge, and the balance irrigated the highland fields east of Kōkee Road.
Canals in Hawaii
Geography of Kauai
Irrigation projects
Irrigation in the United States
Canals opened in 1926
|
https://en.wikipedia.org/wiki/Azumaya%20algebra
|
In mathematics, an Azumaya algebra is a generalization of central simple algebras to R-algebras where R need not be a field. Such a notion was introduced in a 1951 paper of Goro Azumaya, for the case where R is a commutative local ring. The notion was developed further in ring theory, and in algebraic geometry, where Alexander Grothendieck made it the basis for his geometric theory of the Brauer group in Bourbaki seminars from 1964–65. There are now several points of access to the basic definitions.
Over a ring
An Azumaya algebra
over a commutative ring R is an R-algebra A obeying any of the following equivalent conditions:
There exists an R-algebra B such that the tensor product of R-algebras A ⊗R B is Morita equivalent to R.
The R-algebra A ⊗R A^op is Morita equivalent to R, where A^op is the opposite algebra of A.
The center of A is R, and A is separable.
A is finitely generated, faithful, and projective as an R-module, and the tensor product A ⊗R A^op is isomorphic to EndR(A) via the map sending a ⊗ b to the endomorphism x ↦ axb.
Examples over a field
Over a field k, Azumaya algebras are completely classified by the Artin–Wedderburn theorem since they are the same as central simple algebras. These are algebras isomorphic to the matrix ring Mn(D) for some division algebra D over k whose center is just k. For example, quaternion algebras provide examples of central simple algebras.
Examples over local rings
Given a local commutative ring R with maximal ideal m, an R-algebra A is Azumaya if and only if A is free of positive finite rank as an R-module, and A/mA is a central simple algebra over the residue field R/m; hence all examples come from central simple algebras over R/m.
Cyclic algebras
There is a class of Azumaya algebras called cyclic algebras which generate all similarity classes of Azumaya algebras over a field K, hence all elements in the Brauer group Br(K) (defined below). Given a finite cyclic Galois field extension L/K of degree n, for every b ∈ K* and any generator σ ∈ Gal(L/K) there is a twisted polynomial ring L[x; σ], also denoted A(σ, b), generated by an element x such that
x^n = b
and the
|
https://en.wikipedia.org/wiki/Hierarchical%20state%20routing
|
Hierarchical state routing (HSR), proposed in Scalable Routing Strategies for Ad Hoc Wireless Networks by Iwata et al. (1999), is a typical example of a hierarchical routing protocol.
HSR maintains a hierarchical topology, where elected clusterheads at the lowest level become members of the next higher level. On the higher level, superclusters are formed, and so on. Nodes which want to communicate to a node outside of their cluster ask their clusterhead to forward their packet to the next level, until a clusterhead of the other node is in the same cluster. The packet then travels down to the destination node.
Furthermore, HSR proposes to cluster nodes in a logical way instead of in a geographical way: members of the same company or in the same battlegroup are clustered together, assuming they will communicate much within the logical cluster.
HSR does not specify how a cluster is to be formed.
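A toy sketch of the forwarding idea described above (the topology and node names are invented; real HSR also involves clusterhead election and hierarchical addressing, which are not modeled here):

# A packet climbs the clusterhead hierarchy from the source until it
# reaches a clusterhead shared with the destination, then descends.
parent = {  # node -> its clusterhead at the next level up
    "a": "ch1", "b": "ch1", "c": "ch2", "d": "ch2",
    "ch1": "top", "ch2": "top",
}

def chain(node):
    """Return the node's chain of clusterheads up to the root."""
    out = [node]
    while out[-1] in parent:
        out.append(parent[out[-1]])
    return out

def route(src, dst):
    up, down = chain(src), chain(dst)
    shared = next(h for h in up if h in down)  # lowest common clusterhead
    return up[:up.index(shared)] + list(reversed(down[:down.index(shared) + 1]))

print(route("a", "d"))  # ['a', 'ch1', 'top', 'ch2', 'd']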
Routing algorithms
|
https://en.wikipedia.org/wiki/Instantaneous%20phase%20and%20frequency
|
Instantaneous phase and frequency are important concepts in signal processing that occur in the context of the representation and analysis of time-varying functions. The instantaneous phase (also known as local phase or simply phase) of a complex-valued function s(t) is the real-valued function
φ(t) = arg[s(t)],
where arg is the complex argument function.
The instantaneous frequency is the temporal rate of change of the instantaneous phase.
And for a real-valued function s(t), it is determined from the function's analytic representation, sa(t):
φ(t) = arg[sa(t)] = arg[s(t) + j·ŝ(t)],
where ŝ(t) represents the Hilbert transform of s(t).
When φ(t) is constrained to its principal value, either the interval or , it is called wrapped phase. Otherwise it is called unwrapped phase, which is a continuous function of argument t, assuming sa(t) is a continuous function of t. Unless otherwise indicated, the continuous form should be inferred.
Examples
Example 1
s(t) = A·cos(ωt + θ), φ(t) = ωt + θ, where ω > 0.
In this simple sinusoidal example, the constant θ is also commonly referred to as phase or phase offset. φ(t) is a function of time; θ is not. In the next example, we also see that the phase offset of a real-valued sinusoid is ambiguous unless a reference (sin or cos) is specified. φ(t) is unambiguously defined.
Example 2
s(t) = A·sin(ωt) = A·cos(ωt − π/2), φ(t) = ωt − π/2, where ω > 0.
In both examples the local maxima of s(t) correspond to φ(t) = 2πN for integer values of N. This has applications in the field of computer vision.
Formulations
Instantaneous angular frequency is defined as
ω(t) = dφ(t)/dt,
and instantaneous (ordinary) frequency is defined as
f(t) = (1/2π)·dφ(t)/dt,
where φ(t) must be the unwrapped phase; otherwise, if φ(t) is wrapped, discontinuities in φ(t) will result in Dirac delta impulses in f(t).
The inverse operation, which always unwraps phase, is
φ(t) = ∫ ω(τ) dτ, integrated up to time t.
This instantaneous frequency, ω(t), can be derived directly from the real and imaginary parts of sa(t), instead of the complex arg without concern of phase unwrapping.
m1 and m2 are the integer multiples of 2π necessary to add to unwrap the phase. At values of time, t, whe
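These definitions can be exercised numerically; a minimal sketch using scipy's hilbert for the analytic representation (the chirp signal is invented for illustration):

# Instantaneous phase/frequency of a chirp via the analytic signal.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
s = np.cos(2 * np.pi * (10 * t + 5 * t**2))  # chirp: f(t) = 10 + 10*t Hz

sa = hilbert(s)                      # analytic representation sa(t)
phase = np.unwrap(np.angle(sa))      # unwrapped instantaneous phase
freq = np.gradient(phase, t) / (2 * np.pi)  # instantaneous frequency, Hz

print(freq[100], freq[900])          # ≈ 11 Hz and ≈ 19 Hz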
|
https://en.wikipedia.org/wiki/Scheinerman%27s%20conjecture
|
In mathematics, Scheinerman's conjecture, now a theorem, states that every planar graph is the intersection graph of a set of line segments in the plane. This conjecture was formulated by E. R. Scheinerman in his Ph.D. thesis (1984), following earlier results that every planar graph could be represented as the intersection graph of a set of simple curves in the plane (Ehrlich, Even & Tarjan 1976). It was proven by Jérémie Chalopin and Daniel Gonçalves in 2009.
For instance, a planar graph G may be represented as the intersection graph of a suitable set of segments, with each vertex of G represented by a straight line segment and each edge of G represented by an intersection point between two segments.
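To make the correspondence concrete, here is a minimal sketch (invented coordinates, not from the article) that builds the intersection graph of a set of segments and checks it against a target graph, here the triangle K3:

# Build the intersection graph of line segments using the standard
# orientation (cross-product) test for segment crossing.
from itertools import combinations

def orient(p, q, r):
    """Sign of the cross product (q - p) x (r - p)."""
    return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

def segments_cross(s1, s2):
    """True if closed segments s1 and s2 intersect (general position)."""
    a, b = s1; c, d = s2
    return (orient(a, b, c) * orient(a, b, d) <= 0 and
            orient(c, d, a) * orient(c, d, b) <= 0)

# Three pairwise-crossing segments represent K3.
segs = {1: ((0, 0), (4, 2)), 2: ((0, 2), (4, 0)), 3: ((1, -1), (2, 3))}
edges = {frozenset((u, v)) for u, v in combinations(segs, 2)
         if segments_cross(segs[u], segs[v])}
print(edges == {frozenset((1, 2)), frozenset((1, 3)), frozenset((2, 3))})  # True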
Scheinerman also conjectured that segments with only three directions would be sufficient to represent 3-colorable graphs, and conjectured that analogously every planar graph could be represented using four directions. If a graph is represented with segments having only k directions
and no two segments belong to the same line, then the graph can be colored using k colors, one color for each direction. Therefore, if every planar graph can be represented in this way with only four directions,
then the four color theorem follows.
Hartman, Newman and Ziv and, independently, de Fraysseix, Ossona de Mendez and Pach proved that every bipartite planar graph can be represented as an intersection graph of horizontal and vertical line segments. De Castro et al. proved that every triangle-free planar graph can be represented as an intersection graph of line segments having only three directions; this result implies Grötzsch's theorem that triangle-free planar graphs can be colored with three colors. De Fraysseix and Ossona de Mendez proved that if a planar graph G can be 4-colored in such a way that no separating cycle uses all four colors, then G has a representation as an intersection graph of segments.
Chalopin, Gonçalves and Ochem proved that planar graphs are in 1-STRING, the class of intersection graphs of simple curves in the plane that intersect each other in at most one crossing point per pair. This class is intermediate between the intersectio
|
https://en.wikipedia.org/wiki/Damien%20Doligez
|
Damien Doligez is a French academic and programmer. He is best known for his role as a developer of the OCaml system, especially its garbage collector. He is a research scientist (chargé de recherche) at the French government research institution INRIA.
Activities
In 1990, Doligez and Xavier Leroy built an implementation of Caml (called Caml Light) based on a bytecode interpreter with a fast, sequential garbage collector, and began to extend it with support for concurrency. In 1996, Doligez was part of the team that built the first version of OCaml, and has been a core maintainer of the language since (as of April 2023).
In 1994, Hal Finney issued a challenge on the cypherpunks mailing list to read an encrypted SSLv2 session. Doligez used spare computers at Inria, ENS and École polytechnique to break it after scanning half the key space in 8 days. He came in a close second in the competition, with the winning team announcing their result just two hours earlier.
Since 2006, Doligez has co-developed the Zenon theorem prover for first-order classic logic with equality. Zenon is the engine that drives the Focalize programming environment which can design and develop certified programs.
The environment is based on a functional language with some object-oriented features, allowing programmers to write the formal specification and the
proofs of their code within the same setting. Proof generation is assisted using Zenon and results are formally machine checked using the Coq proof checker.
In 2008, Doligez worked with Leslie Lamport and others to build the TLA+ proof manager which supports the incremental development and checking of hierarchically structured computer-assisted proofs. The proof manager project remains actively maintained and developed as of 2022.
References
External links
Damien Doligez's home page
Computer programmers
French computer scientists
Living people
Year of birth missing (living people)
Place of birth missing (living people)
|
https://en.wikipedia.org/wiki/Whitney%27s%20planarity%20criterion
|
In mathematics, Whitney's planarity criterion is a matroid-theoretic characterization of planar graphs, named after Hassler Whitney. It states that a graph G is planar if and only if its graphic matroid is also cographic (that is, it is the dual matroid of another graphic matroid).
In purely graph-theoretic terms, this criterion can be stated as follows: There must be another (dual) graph G'=(V',E') and a bijective correspondence between the edges E' and the edges E of the original graph G, such that a subset T of E forms a spanning tree of G if and only if the edges corresponding to the complementary subset E-T form a spanning tree of G'.
Algebraic duals
An equivalent form of Whitney's criterion is that a graph G is planar if and only if it has a dual graph whose graphic matroid is dual to the graphic matroid of G.
A graph whose graphic matroid is dual to the graphic matroid of G is known as an algebraic dual of G. Thus, Whitney's planarity criterion can be expressed succinctly as: a graph is planar if and only if it has an algebraic dual.
Topological duals
If a graph is embedded into a topological surface such as the plane, in such a way that every face of the embedding is a topological disk, then the dual graph of the embedding is defined as the graph (or in some cases multigraph) H that has a vertex for every face of the embedding, and an edge for every adjacency between a pair of faces.
According to Whitney's criterion, the following conditions are equivalent:
The surface on which the embedding exists is topologically equivalent to the plane, sphere, or punctured plane
H is an algebraic dual of G
Every simple cycle in G corresponds to a minimal cut in H, and vice versa
Every simple cycle in H corresponds to a minimal cut in G, and vice versa
Every spanning tree in G corresponds to the complement of a spanning tree in H, and vice versa.
It is possible to define dual graphs of graphs embedded on nonplanar surfaces such as the torus, but these duals do not ge
|
https://en.wikipedia.org/wiki/Operad
|
In mathematics, an operad is a structure that consists of abstract operations, each one having a fixed finite number of inputs (arguments) and one output, as well as a specification of how to compose these operations. Given an operad , one defines an algebra over to be a set together with concrete operations on this set which behave just like the abstract operations of . For instance, there is a Lie operad such that the algebras over are precisely the Lie algebras; in a sense abstractly encodes the operations that are common to all Lie algebras. An operad is to its algebras as a group is to its group representations.
History
Operads originate in algebraic topology; they were introduced to characterize iterated loop spaces by J. Michael Boardman and Rainer M. Vogt in 1968 and by J. Peter May in 1972.
Martin Markl, Steve Shnider, and Jim Stasheff write in their book on operads:
"The name operad and the formal definition appear first in the early 1970's in J. Peter May's "The Geometry of Iterated Loop Spaces", but a year or more earlier, Boardman and Vogt described the same concept under the name categories of operators in standard form, inspired by PROPs and PACTs of Adams and Mac Lane. In fact, there is an abundance of prehistory. Weibel [Wei] points out that the concept first arose a century ago in A.N. Whitehead's "A Treatise on Universal Algebra", published in 1898."
The word "operad" was created by May as a portmanteau of "operations" and "monad" (and also because his mother was an opera singer).
Interest in operads was considerably renewed in the early 90s when, based on early insights of Maxim Kontsevich, Victor Ginzburg and Mikhail Kapranov discovered that some duality phenomena in rational homotopy theory could be explained using Koszul duality of operads. Operads have since found many applications, such as in deformation quantization of Poisson manifolds, the Deligne conjecture, or graph homology in the work of Maxim Kontsevich and Thomas Willwac
|
https://en.wikipedia.org/wiki/Inverse%20search
|
Inverse search (also called "reverse search") is a feature of some non-interactive typesetting programs, such as LaTeX and GNU LilyPond. These programs read an abstract, textual definition of a document as input and convert it into a graphical format such as DVI or PDF. In a windowing system, this typically means that the source code is entered in one editor window, and the resulting output is viewed in a different output window. Inverse search means that a graphical object in the output window works as a hyperlink that returns the user to the line and column in the editor where the clicked object was defined. The inverse search feature is particularly useful during proofreading.
Implementations
In TeX and LaTeX, the package srcltx provides an inverse search feature through DVI output files (e.g., with yap or Xdvi), while vpe, pdfsync and SyncTeX provide similar functionality for PDF output, among other techniques. The Comparison of TeX editors has a column on support of inverse search; most of them provide it nowadays.
GNU LilyPond provides an inverse search feature through PDF output files since version 2.6. The program calls this feature Point-and-click.
Many integrated development environments for programming use inverse search to display compilation error messages, and during debugging when a breakpoint is hit.
References
Bibliography
Jérôme Laurens, "Direct and reverse synchronization with SyncTeX", TUGboat 29(3), 2008, pp. 365–371, PDF (532 KB) — including an overview of synchronization techniques with TeX
External links
How to set up inverse search with xdvi
Software development
|
https://en.wikipedia.org/wiki/Boolean%20expression
|
In computer science, a Boolean expression is an expression used in programming languages that produces a Boolean value when evaluated. A Boolean value is either true or false. A Boolean expression may be composed of a combination of the Boolean constants true or false, Boolean-typed variables, Boolean-valued operators, and Boolean-valued functions.
Boolean expressions correspond to propositional formulas in logic and are a special case of Boolean circuits.
Boolean operators
Most programming languages have the Boolean operators OR, AND and NOT; in C and some languages inspired by it, these are represented by "||" (double pipe character), "&&" (double ampersand) and "!" (exclamation point) respectively, while the corresponding bitwise operations are represented by "|", "&" and "~" (tilde). In the mathematical literature the symbols used are often "+" (plus), "·" (dot) and overbar, or "∨" (vel), "∧" (et) and "¬" (not) or "′" (prime).
Some languages, e.g., Perl and Ruby, have two sets of Boolean operators, with identical functions but different precedence. Typically these languages use and, or and not for the lower precedence operators.
Some programming languages derived from PL/I have a bit string type and use BIT(1) rather than a separate Boolean type. In those languages the same operators serve for Boolean operations and bitwise operations. The languages represent OR, AND, NOT and EXCLUSIVE OR by "|", "&", "¬" (prefix) and "¬" (infix).
Short-circuit operators
Some programming languages, e.g., Ada, have short-circuit Boolean operators. These operators use a lazy evaluation, that is, if the value of the expression can be determined from the left hand Boolean expression then they do not evaluate the right hand Boolean expression. As a result, there may be side effects that only occur for one value of the left hand operand.
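A sketch of this behavior in Python, whose and, or and not follow the lower-precedence convention mentioned above; the helper function exists only to make the side effect visible:

```python
def right_side():
    print("right side evaluated")   # a visible side effect
    return True

x = 0

# The left operand already decides the result, so right_side() never runs.
result = (x != 0) and right_side()
print(result)   # False; nothing was printed by right_side

# The left operand is True, so "and" must evaluate the right operand.
result = (x == 0) and right_side()  # prints "right side evaluated"
print(result)   # True
```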
Examples
The expression 5 > 3 is evaluated as true.
The expression 3 > 5 is evaluated as false.
5 >= 3 and 3 <= 5 are equivalent Boolean expressions, both of which are evaluated as true.
|
https://en.wikipedia.org/wiki/Arctic%E2%80%93alpine
|
An Arctic–alpine taxon is one whose natural distribution includes the Arctic and more southerly mountain ranges, particularly the Alps. The presence of identical or similar taxa in both the tundra of the far north, and high mountain ranges much further south is testament to the similar environmental conditions found in the two locations. Arctic–alpine plants, for instance, must be adapted to the low temperatures, extremes of temperature, strong winds and short growing season; they are therefore typically low-growing and often form mats or cushions to reduce water loss through evapotranspiration.
It is often assumed that an organism which currently has an Arctic–alpine distribution was, during colder periods of the Earth's history (such as during the Pleistocene glaciations), widespread across the area between the Arctic and the Alps. This is known from pollen records to be true for Dryas octopetala, for instance. In other cases, the disjunct distribution may be the result of long-distance dispersal.
Examples of Arctic–alpine plants include:
Arabis alpina
Betula nana
Draba incana
Dryas octopetala
Gagea serotina (syn. Lloydia serotina)
Loiseleuria procumbens
Micranthes stellaris
Oxyria digyna
Ranunculus glacialis
Salix herbacea
Saussurea alpina
Saxifraga oppositifolia
Silene acaulis
Thalictrum alpinum
Veronica alpina
References
Biogeography
|
https://en.wikipedia.org/wiki/Join%20%28topology%29
|
In topology, a field of mathematics, the join of two topological spaces A and B, often denoted by A ∗ B or A ⋆ B, is a topological space formed by taking the disjoint union of the two spaces, and attaching line segments joining every point in A to every point in B. The join of a space A with itself is denoted by A⋆A = A^{⋆2}. The join is defined in slightly different ways in different contexts.
Geometric sets
If A and B are subsets of the Euclidean space R^n, then
A ⋆ B = { t·a + (1 − t)·b : a ∈ A, b ∈ B, t ∈ [0, 1] },
that is, the set of all line segments between a point in A and a point in B.
Some authors restrict the definition to subsets that are joinable: any two different line segments, connecting a point of A to a point of B, meet in at most a common endpoint (that is, they do not intersect in their interior). Every two subsets can be made "joinable": for example, if A is in R^m and B is in R^n, then A × {0^n} × {0} and {0^m} × B × {1} are joinable in R^(m+n+1). The figure above shows an example for m = n = 1, where A and B are line segments.
Examples
The join of two simplices is a simplex: the join of an n-dimensional and an m-dimensional simplex is an (m+n+1)-dimensional simplex. Some special cases are:
The join of two disjoint points is an interval (m=n=0).
The join of a point and an interval is a triangle (m=0, n=1).
The join of two line segments is homeomorphic to a solid tetrahedron or disphenoid, illustrated in the figure above right (m=n=1).
The join of a point and an (n-1)-dimensional simplex is an n-dimensional simplex.
The join of a point and a polygon (or any polytope) is a pyramid; for example, the join of a point and a square is a square pyramid, and the join of a point and a cube is a cubic pyramid.
The join of a point and a circle is a cone, and the join of a point and a sphere is a hypercone.
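The m = n = 1 case above can be checked numerically. The following sketch (assuming numpy and scipy are available) realizes two joinable line segments in R^3, in the coordinates used in the geometric definition, and confirms that their join is a solid tetrahedron:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Segment A along the first coordinate at height t = 0,
# segment B along the second coordinate at height t = 1.
A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 1.0]])

hull = ConvexHull(np.vstack([A, B]))   # union of all segments from A to B
print(len(hull.vertices))              # 4 vertices -> a 3-simplex
print(hull.volume > 0)                 # True: the join is solid
```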
Topological spaces
If A and B are any topological spaces, then
A ⋆ B = A ⊔ (A × [0, 1] × B) ⊔ B / ∼,
where the cylinder A × [0, 1] × B is attached to the original spaces A and B along the natural projections of the faces of the cylinder:
(a, 0, b) ∼ a,  (a, 1, b) ∼ b.
Usually it is implicitly assumed that A and B are non-empty, in which case the definition is often phrased a bit differently.
|
https://en.wikipedia.org/wiki/Bicinchoninic%20acid%20assay
|
The bicinchoninic acid assay (BCA assay), also known as the Smith assay, after its inventor, Paul K. Smith at the Pierce Chemical Company, now part of Thermo Fisher Scientific, is a biochemical assay for determining the total concentration of protein in a solution (0.5 μg/mL to 1.5 mg/mL), similar to the Lowry protein assay, the Bradford protein assay, or the biuret reagent. The total protein concentration is exhibited by a color change of the sample solution from green to purple in proportion to protein concentration, which can then be measured using colorimetric techniques. The BCA assay was patented by Pierce Chemical Company in 1989, and the patent expired in 2006.
Mechanism
A stock BCA solution contains the following ingredients in a highly alkaline solution with a pH of 11.25: bicinchoninic acid, sodium carbonate, sodium bicarbonate, sodium tartrate, and copper(II) sulfate pentahydrate.
The BCA assay primarily relies on two reactions. First, the peptide bonds in protein reduce Cu²⁺ ions from the copper(II) sulfate to Cu¹⁺ (a temperature-dependent reaction). The amount of Cu²⁺ reduced is proportional to the amount of protein present in the solution. Next, two molecules of bicinchoninic acid chelate with each Cu¹⁺ ion, forming a purple-colored complex that strongly absorbs light at a wavelength of 562 nm.
The bicinchoninic acid–Cu¹⁺ complex is influenced in protein samples by the presence of cysteine/cystine, tyrosine, and tryptophan side chains. At higher temperatures (37 to 60 °C), peptide bonds assist in the formation of the reaction complex. Incubating the BCA assay at higher temperatures is recommended as a way to increase assay sensitivity while minimizing the variances caused by unequal amino acid composition.
The amount of protein present in a solution can be quantified by measuring the absorption spectra and comparing with protein solutions of known concentration.
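A minimal sketch (assuming numpy; the absorbance values are illustrative, not measured data) of the usual workflow: fit a standard curve of absorbance at 562 nm against known concentrations, then invert it for an unknown sample:

```python
import numpy as np

standards_ug_ml = np.array([0, 125, 250, 500, 1000, 2000])    # known standards
absorbance_562 = np.array([0.05, 0.15, 0.26, 0.48, 0.95, 1.85])  # illustrative

# Linear fit A = m*c + b over the assay's working range
m, b = np.polyfit(standards_ug_ml, absorbance_562, 1)

unknown_absorbance = 0.62
concentration = (unknown_absorbance - b) / m   # invert the standard curve
print(f"estimated protein concentration: {concentration:.0f} ug/mL")
```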
Limitations
The BCA assay is largely incompatible with reducing agents and metal chelators, although tra
|
https://en.wikipedia.org/wiki/Monkey%20testing
|
In software testing, monkey testing is a technique where the user tests the application or system by providing random inputs and checking the behavior, or seeing whether the application or system will crash. Monkey testing is usually implemented as random, automated unit tests.
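A minimal sketch of such a random, automated unit test in Python; parse_date is a hypothetical function under test, and the only check is that random input never crashes it in an undocumented way:

```python
import random
import string

def parse_date(text):          # hypothetical function under test
    month, day = text.split("/")
    return int(month), int(day)

random.seed(0)
for _ in range(1000):
    junk = "".join(random.choice(string.printable) for _ in range(8))
    try:
        parse_date(junk)
    except ValueError:
        pass                   # an expected, documented failure mode
    # Any other exception (or a hang) is a bug the monkey has found.
```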
While the source of the name "monkey" is uncertain, it is believed by some that the name has to do with the infinite monkey theorem, which states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare. Some others believe that the name comes from the classic Mac OS application "The Monkey" developed by Steve Capps prior to 1983. It used journaling hooks to feed random events into Mac programs, and was used to test for bugs in MacPaint.
Monkey testing is also included in Android Studio as part of the standard testing tools for stress testing.
Types of monkey testing
Monkey testing can be categorized into smart monkey tests or dumb monkey tests.
Smart monkey tests
Smart monkeys are usually identified by the following characteristics:
Have a brief idea about the application or system
Know its own location, where it can go and where it has been
Know its own capability and the system's capability
Focus to break the system
Report bugs they found
Some smart monkeys are also referred to as brilliant monkeys, which perform testing as per user's behavior and can estimate the probability of certain bugs.
Dumb monkey tests
Dumb monkeys, also known as "ignorant monkeys", are usually identified by the following characteristics:
Have no knowledge about the application or system
Don't know if their input or behavior is valid or invalid
Don't know their or the system's capabilities, nor the flow of the application
Can find fewer bugs than smart monkeys, but can also find important bugs that are hard to catch by smart monkeys
Advantages and disadvantages
Advantages
Mo
|
https://en.wikipedia.org/wiki/Isovanillin
|
Isovanillin is a phenolic aldehyde, an organic compound and isomer of vanillin. It is a selective inhibitor of aldehyde oxidase. It is not a substrate of that enzyme, and is metabolized by aldehyde dehydrogenase into isovanillic acid, which could make it a candidate drug for use in alcohol aversion therapy. Isovanillin can be used as a precursor in the chemical total synthesis of morphine. The proposed metabolism of isovanillin (and vanillin) in rat has been described in literature, and is part of the WikiPathways machine readable pathway collection.
See also
Vanillin
2-Hydroxy-5-methoxybenzaldehyde
ortho-Vanillin
2-Hydroxy-4-methoxybenzaldehyde
References
Hydroxybenzaldehydes
Flavors
Perfume ingredients
Phenol ethers
|
https://en.wikipedia.org/wiki/Opposite%20ring
|
In mathematics, specifically abstract algebra, the opposite of a ring is another ring with the same elements and addition operation, but with the multiplication performed in the reverse order. More explicitly, the opposite of a ring (R, +, ·) is the ring (R, +, ∗) whose multiplication ∗ is defined by a ∗ b = b · a for all a, b in R. The opposite ring can be used to define multimodules, a generalization of bimodules. Opposite rings also help clarify the relationship between left and right modules.
Monoids, groups, rings, and algebras can all be viewed as categories with a single object. The construction of the opposite category generalizes the opposite group, opposite ring, etc.
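As a concrete sketch of the definition (assuming numpy), the ring of 2×2 real matrices with multiplication reversed is its opposite ring, and the transpose identity (AB)^T = B^T A^T exhibits transposition as an antiautomorphism, which is why matrix rings are self-opposite:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 1]])

opp = lambda X, Y: Y @ X   # multiplication in the opposite ring

# Transposition turns products in R into products in R^op:
assert np.array_equal((A @ B).T, opp(A.T, B.T))
print(opp(A, B))           # generally differs from A @ B
```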
Relation to automorphisms and antiautomorphisms
In this section the symbol for multiplication in the opposite ring is changed from asterisk to diamond, to avoid confusion with some unary operation.
A ring having an isomorphic opposite ring is called a self-opposite ring, the name indicating that R^op is essentially the same ring as R.
All commutative rings are self-opposite.
Let us define the antiisomorphism
ι: R → R^op, where ι(x) = x for x ∈ R.
It is indeed an antiisomorphism, since ι(x · y) = x · y = ι(y) ⋄ ι(x).
The antiisomorphism ι can be defined generally for semigroups, monoids, groups, rings, rngs, and algebras. In the case of rings (and rngs) we obtain the general equivalence:
A ring is self-opposite if and only if it has at least one antiautomorphism.
Proof:
⇒: Let R be self-opposite. If f: R → R^op is an isomorphism, then ι⁻¹ ∘ f, being a composition of an isomorphism and an antiisomorphism, is an antiisomorphism from R to itself, hence an antiautomorphism.
⇐: If g is an antiautomorphism of R, then ι ∘ g: R → R^op is an isomorphism, as a composition of two antiisomorphisms. So R is self-opposite.
If R is self-opposite and its group of automorphisms Aut(R) is finite, then the number of antiautomorphisms equals the number of automorphisms.
Proof: By the assumption and the above equivalence there exist antiautomorphisms. If we pick one of them and denote it by ψ, then the map α ↦ ψ ∘ α, where α runs over Aut(R), is clearly injective but also surjective, since every antiautomorphism φ can be written as ψ ∘ (ψ⁻¹ ∘ φ), where ψ⁻¹ ∘ φ is an automorphism.
|
https://en.wikipedia.org/wiki/Central%20England%20temperature
|
The Central England Temperature (CET) record is a meteorological dataset originally published by Professor Gordon Manley in 1953 and subsequently extended and updated in 1974, following many decades of painstaking work. The monthly mean surface air temperatures, for the Midlands region of England, are given (in degrees Celsius) from the year 1659 to the present.
This record represents the longest series of monthly temperature observations in existence. It is a valuable dataset for meteorologists and climate scientists. It is monthly from 1659, and a daily version has been produced from 1772. The monthly means from November 1722 onwards are given to a precision of 0.1 °C. The earliest years of the series, from 1659 to October 1722 inclusive, for the most part only have monthly means given to the nearest degree or half a degree, though there is a small 'window' of 0.1 degree precision from 1699 to 1706 inclusive. This reflects the number, accuracy, reliability and geographical spread of the temperature records that were available for the years in question.
Data quality
Although best efforts have been made by Manley and subsequent researchers to quality-control the series, there are data problems in the early years, with some non-instrumental data used. These problems account for the lower precision to which the early monthly means were quoted by Manley. Parker et al. (1992) addressed this by not using data prior to 1772, since their daily series required more accurate data than did the original series of monthly means. Before 1722, instrumental records do not overlap, and Manley used a non-instrumental series from Utrecht, compiled by Labrijn (1945), to make the monthly central England temperature (CET) series complete.
For a period early in the 21st century there were two versions of the series: the "official" version maintained by the Hadley Centre in Exeter, and a version that was maintained by the late Philip Eden which he argued was more consistent with the se
|
https://en.wikipedia.org/wiki/Pores%20of%20Kohn
|
The pores of Kohn (also known as interalveolar connections or alveolar pores) are discrete holes in the walls of adjacent alveoli. Cuboidal type II alveolar cells, which produce surfactant, usually form part of the aperture.
Etymology
The pores of Kohn take their name from the German physician and pathologist Hans Nathan Kohn (1866–1935) who first described them in 1893.
Development
They are absent in human newborns. They develop at 3–4 years of age along with canals of Lambert during the process of thinning of alveolar septa.
Function
The pores allow the passage of other materials such as fluid and bacteria, which is an important mechanism of the spread of infection in lobar pneumonia and of the spread of fibrin in the grey hepatisation phase of recovery from it. They also equalize the pressure in adjacent alveoli and, combined with increased distribution of surfactant, thus play an important role in preventing collapse of the lung.
Unlike adults, in children these inter-alveolar connections are poorly developed which aids in limiting the spread of infection. This is thought to contribute to round pneumonia.
References
Lung anatomy
|
https://en.wikipedia.org/wiki/Ryo%20Kawasaki
|
Ryo Kawasaki was a Japanese jazz fusion guitarist, composer and band leader, best known as one of the first musicians to develop and popularise the fusion genre and for helping to develop the guitar synthesizer in collaboration with Roland Corporation and Korg. His album Ryo Kawasaki and the Golden Dragon Live was one of the first all-digital recordings and he created the Kawasaki Synthesizer for the Commodore 64. During the 1960s, he played with various Japanese jazz groups and also formed his own bands. In the early 1970s, he moved to New York City, where he settled and worked with Gil Evans, Elvin Jones, Chico Hamilton, Ted Curson, and Joanne Brackeen, amongst others. In the mid-1980s, Kawasaki drifted out of performing music in favour of writing music software for computers. He also produced several techno dance singles, formed his own record company called Satellites Records, and later returned to jazz-fusion in 1991.
Life
Early life (1947–1968)
Ryo Kawasaki was born on February 25, 1947, in Kōenji, Tokyo, while Japan was still struggling and recovering from the early post World War II period. His father, Torao Kawasaki, was a Japanese diplomat who had worked for The Japanese Ministry of Foreign Affairs since 1919. Torao worked at several Japanese consulates and embassies, including San Francisco, Honolulu, Fengtian (then capital of Manchuria, now Shenyang in China), Shanghai, and Beijing while active as an English teacher and translator for official diplomatic conferences. Ryo's mother, Hiroko, was also multilingual, and spoke German, Russian, English, and Chinese aside from her native tongue Japanese. Hiroko grew up in Manchuria and then met Torao in Shanghai. Torao was already 58 years old when Ryo was born as an only child.
Kawasaki's mother encouraged him to take piano and ballet lessons; he took voice lessons and solfège at age four and violin lessons at five, and he was reading music before elementary school. As a grade-schooler, he began a lifelong fascination with
|
https://en.wikipedia.org/wiki/Jovan%20Karamata
|
Jovan Karamata (; February 1, 1902 – August 14, 1967) was a Serbian mathematician. He is remembered for contributions to analysis, in particular, the Tauberian theory and the theory of slowly varying functions. Considered to be among the most influential Serbian mathematicians of the 20th century, Karamata was one of the founders of the Mathematical Institute of the Serbian Academy of Sciences and Arts, established in 1946.
Life
Jovan Karamata was born in Zagreb on February 1, 1902, into a family descended from merchants based in the city of Zemun, which was then in Austria-Hungary and is now in Serbia. Being of Aromanian origin, the family traced its roots back to Pyrgoi, Eordaia, West Macedonia (his father Ioannis Karamatas was the president of the "Greek Community of Zemun"); Aromanians mainly lived and still live in the area of modern Greece. Its business affairs on the borders of the Austro-Hungarian and Ottoman empires were very well known. In 1914, he finished most of his primary school in Zemun, but because of constant warfare on the borderlands, Karamata's father sent him, together with his brothers and his sister, to Switzerland for their own safety. In Lausanne, in 1920, he finished primary school oriented towards mathematics and sciences. In the same year he enrolled at the Engineering Faculty of Belgrade University and, after several years, moved to the mathematics section of the Faculty of Philosophy, where he graduated in 1925.
He spent the years 1927–1928 in Paris, as a fellow of the Rockefeller Foundation, and in 1928 he became Assistant for Mathematics at the Faculty of Philosophy of Belgrade University. In 1930 he became Assistant Professor, in 1937 Associate Professor and, after the end of World War II, in 1950 he became Full Professor. In 1951 he was elected Full Professor at the University of Geneva. In 1933 he became a member of Yugoslav Academy of Sciences and Arts, Czech Royal Society in 1936, and Serbian Royal Academy in 1939 as well as a fellow of Serbian
|
https://en.wikipedia.org/wiki/B%C3%A9zout%20domain
|
In mathematics, a Bézout domain is a form of a Prüfer domain. It is an integral domain in which the sum of two principal ideals is again a principal ideal. This means that for every pair of elements a Bézout identity holds, and that every finitely generated ideal is principal. Any principal ideal domain (PID) is a Bézout domain, but a Bézout domain need not be a Noetherian ring, so it could have non-finitely generated ideals (which obviously excludes being a PID); if so, it is not a unique factorization domain (UFD), but still is a GCD domain. The theory of Bézout domains retains many of the properties of PIDs, without requiring the Noetherian property. Bézout domains are named after the French mathematician Étienne Bézout.
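The Bézout identity that gives these domains their name can be computed explicitly in the PID Z. A minimal sketch (standard library only) of the extended Euclidean algorithm, which exhibits g = gcd(a, b) as an element of the ideal (a) + (b), so that (a) + (b) = (g):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

a, b = 240, 46
g, x, y = extended_gcd(a, b)
print(g, x, y)              # 2 -9 47
assert a * x + b * y == g   # the Bezout identity: (240) + (46) = (2)
```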
Examples
All PIDs are Bézout domains.
Examples of Bézout domains that are not PIDs include the ring of entire functions (functions holomorphic on the whole complex plane) and the ring of all algebraic integers. In case of entire functions, the only irreducible elements are functions associated to a polynomial function of degree 1, so an element has a factorization only if it has finitely many zeroes. In the case of the algebraic integers there are no irreducible elements at all, since for any algebraic integer its square root (for instance) is also an algebraic integer. This shows in both cases that the ring is not a UFD, and so certainly not a PID.
Valuation rings are Bézout domains. Any non-Noetherian valuation ring is an example of a non-noetherian Bézout domain.
The following general construction produces a Bézout domain S that is not a UFD from any Bézout domain R that is not a field, for instance from a PID; the case R = Z is the basic example to have in mind. Let F be the field of fractions of R, and put S = R + XF[X], the subring of polynomials in F[X] with constant term in R. This ring is not Noetherian, since an element like X with zero constant term can be divided indefinitely by noninvertible elements of R, which are still noninvertible in S, and
|
https://en.wikipedia.org/wiki/RNA-binding%20protein
|
RNA-binding proteins (often abbreviated as RBPs) are proteins that bind to double-stranded or single-stranded RNA in cells and participate in forming ribonucleoprotein complexes.
RBPs contain various structural motifs, such as RNA recognition motif (RRM), dsRNA binding domain, zinc finger and others.
They are cytoplasmic and nuclear proteins. However, since most mature RNA is exported from the nucleus relatively quickly, most RBPs in the nucleus exist as complexes of protein and pre-mRNA called heterogeneous ribonucleoprotein particles (hnRNPs).
RBPs have crucial roles in various cellular processes such as cellular function, transport and localization. They especially play a major role in post-transcriptional control of RNAs, such as splicing, polyadenylation, mRNA stabilization, mRNA localization and translation. Eukaryotic cells express diverse RBPs with unique RNA-binding activity and protein–protein interactions. According to the Eukaryotic RBP Database (EuRBPDB), there are 2961 genes encoding RBPs in humans. During evolution, the diversity of RBPs greatly increased with the increase in the number of introns. Diversity enabled eukaryotic cells to utilize RNA exons in various arrangements, giving rise to a unique RNP (ribonucleoprotein) for each RNA. Although RBPs have a crucial role in post-transcriptional regulation of gene expression, relatively few RBPs have been studied systematically. It has now become clear that RNA–RBP interactions play important roles in many biological processes among organisms.
Structure
Many RBPs have modular structures and are composed of multiple repeats of just a few specific basic domains that often have limited sequences. Different RBPs contain these sequences arranged in varying combinations. A specific protein's recognition of a specific RNA has evolved through the rearrangement of these few basic domains. Each basic domain recognizes RNA, but many of these proteins require multiple copies of one of the many common domains to function.
|
https://en.wikipedia.org/wiki/Barry%20Simon
|
Barry Martin Simon (born 16 April 1946) is an American mathematical physicist and was the IBM professor of Mathematics and Theoretical Physics at Caltech, known for his prolific contributions in spectral theory, functional analysis, and nonrelativistic quantum mechanics (particularly Schrödinger operators), including the connections to atomic and molecular physics. He has authored more than 400 publications on mathematics and physics.
His work has focused on broad areas of mathematical physics and analysis covering: quantum field theory, statistical mechanics, Brownian motion, random matrix theory, general nonrelativistic quantum mechanics (including N-body systems and resonances), nonrelativistic quantum mechanics in electric and magnetic fields, the semi-classical limit, the singular continuous spectrum, random and ergodic Schrödinger operators, orthogonal polynomials, and non-selfadjoint spectral theory.
Early life
Barry Simon's mother was a school teacher, his father was an accountant. Simon attended James Madison High School in Brooklyn.
Career
During his high school years, Simon started attending college courses for highly gifted pupils at Columbia University. In 1962, Simon won an MAA mathematics competition. The New York Times reported that, in order to receive full credit for a faultless test result, he had to make a submission to the MAA; in this submission he proved that one of the problems posed in the test was ambiguous.
In 1962, Simon entered Harvard with a stipend. He became a Putnam Fellow in 1965 at 19 years old. He received his AB in 1966 from Harvard College and his PhD in Physics at Princeton University in 1970, supervised by Arthur Strong Wightman. His dissertation dealt with Quantum mechanics for Hamiltonians defined as quadratic forms.
Following his doctoral studies, Simon took a professorship at Princeton for several years, often working with colleague Elliott H. Lieb on the Thomas-Fermi Theory and Hartree-Fock Theory of atoms in addition
|
https://en.wikipedia.org/wiki/Subacute%20sclerosing%20panencephalitis
|
Subacute sclerosing panencephalitis (SSPE), also known as Dawson disease, is a rare form of progressive brain inflammation caused by a persistent infection with the measles virus. The condition primarily affects children, teens, and young adults. It has been estimated that about 2 in 10,000 people who get measles will eventually develop SSPE. However, a 2016 study estimated that the rate for unvaccinated infants under 15 months was as high as 1 in 609. No cure for SSPE exists, and the condition is almost always fatal. SSPE should not be confused with acute disseminated encephalomyelitis, which can also be caused by the measles virus, but has a very different timing and course.
SSPE is caused by the wild-type virus, not by vaccine strains.
Signs and symptoms
SSPE is characterized by a history of primary measles infection, followed by an asymptomatic period that lasts 7 years on average but can range from 1 month to 27 years. After the asymptomatic period, progressive neurological deterioration occurs, characterized by behavior change, intellectual problems, myoclonic seizures, blindness, ataxia, and eventually death.
Stages of Progression
Symptoms progress through the following 4 stages:
Stage 1: There may be personality changes, mood swings, or depression. Fever, headache, and memory loss may also be present. This stage may last up to 6 months.
Stage 2: This stage may involve jerking, muscle spasms, seizures, loss of vision, and dementia.
Stage 3: Jerking movements are replaced by writhing (twisting) movements and rigidity. At this stage, complications may result in blindness or death.
Stage 4: Progressive loss of consciousness into a persistent vegetative state, which may be preceded by or concomitant with paralysis, occurs in the final stage, during which breathing, heart rate, and blood pressure are affected. Death usually occurs as a result of fever, heart failure, or the brain’s inability to control the autonomic nervous system.
Pathogenesis
A large num
|
https://en.wikipedia.org/wiki/Nakai%20conjecture
|
In mathematics, the Nakai conjecture is an unproven characterization of smooth algebraic varieties, conjectured by Japanese mathematician Yoshikazu Nakai in 1961.
It states that if V is a complex algebraic variety, such that its ring of differential operators is generated by the derivations it contains, then V is a smooth variety. The converse statement, that smooth algebraic varieties have rings of differential operators that are generated by their derivations, is a result of Alexander Grothendieck.
The Nakai conjecture is known to be true for algebraic curves and Stanley–Reisner rings. A proof of the conjecture would also establish the Zariski–Lipman conjecture, for a complex variety V with coordinate ring R. This conjecture states that if the derivations of R are a free module over R, then V is smooth.
References
Algebraic geometry
Singularity theory
Conjectures
Unsolved problems in geometry
|
https://en.wikipedia.org/wiki/Universal%20remote
|
A universal remote is a remote control that can be programmed to operate various brands of one or more types of consumer electronics devices. Low-end universal remotes can only control a set number of devices determined by their manufacturer, while mid- and high-end universal remotes allow the user to program in new control codes to the remote. Many remotes sold with various electronics include universal remote capabilities for other types of devices, which allows the remote to control other devices beyond the device it came with. For example, a VCR remote may be programmed to operate various brands of televisions.
History
On May 30, 1985, Philips introduced the first universal remote (U.S. Pat. #4774511) under the Magnavox brand name.
In 1985, Robin Rumbolt, William "Russ" McIntyre, and Larry Goodson with North American Philips Consumer Electronics (Magnavox, Sylvania, and Philco) developed the first universal remote control.
In 1987, the first programmable universal remote control was released. It was called the "CORE" and was created by CL 9, a startup founded by Steve Wozniak, the inventor of the Apple I and Apple II computers.
In March 1987, Steve Ciarcia published an article in Byte magazine entitled "Build a Trainable Infrared Master Controller", describing a universal remote with the ability to upload the settings to a computer. This device had macro capabilities.
Layout and features
Most universal remotes share a number of basic design elements:
A power button, as well as a switch or series of buttons to select which device the remote is controlling at the moment. A typical selection includes TV, VCR, DVD, and CBL/SAT, along with other devices that sometimes include DVRs, audio equipment or home automation devices.
Channel and volume up/down selectors (sometimes marked with + and - signs).
A numeric keypad for entering channel numbers and some other purposes such as time and date entry.
A set button (sometimes recessed to avoid accidental pressing
|
https://en.wikipedia.org/wiki/Ehud%20Hrushovski
|
Ehud Hrushovski (; born 30 September 1959) is a mathematical logician. He is a Merton Professor of Mathematical Logic at the University of Oxford and a Fellow of Merton College, Oxford. He was also Professor of Mathematics at the Hebrew University of Jerusalem.
Early life and education
Hrushovski's father, Benjamin Harshav (Hebrew: בנימין הרשב, né Hruszowski; 1928–2015), was a literary theorist, a Yiddish and Hebrew poet and a translator, professor at Yale University and Tel Aviv University in comparative literature. Ehud Hrushovski earned his PhD from the University of California, Berkeley in 1986 under Leo Harrington; his dissertation was titled Contributions to Stable Model Theory. He was a professor of mathematics at the Massachusetts Institute of Technology until 1994, when he became a professor at the Hebrew University of Jerusalem. Hrushovski moved in 2017 to the University of Oxford, where he is the Merton Professor of Mathematical Logic.
Career
Hrushovski is well known for several fundamental contributions to model theory, in particular in the branch that has become known as geometric model theory, and its applications. His PhD thesis revolutionized stable model theory (a part of model theory arising from the stability theory introduced by Saharon Shelah). Shortly afterwards he found counterexamples to the Trichotomy Conjecture of Boris Zilber and his method of proof has become well known as Hrushovski constructions and found many other applications since.
One of his most famous results is his proof of the geometric Mordell–Lang conjecture in all characteristics using model theory in 1996. This deep proof was a landmark in logic and geometry. He has had many other famous and notable results in model theory and its applications to geometry, algebra, and combinatorics.
Honours and awards
He was an invited speaker at the 1990 International Congress of Mathematicians and a plenary speaker at the 1998 ICM. He is a recipient of the Erdős Prize of the Israel
|
https://en.wikipedia.org/wiki/Cray%20MTA-2
|
The Cray MTA-2 is a shared-memory MIMD computer marketed by Cray Inc. It is an unusual design based on the Tera computer designed by Tera Computer Company. The original Tera computer (also known as the MTA) turned out to be nearly unmanufacturable due to its aggressive packaging and circuit technology. The MTA-2 was an attempt to correct these problems while maintaining essentially the same processor architecture respun in one silicon ASIC, down from some 26 gallium arsenide ASICs in the original MTA; and while regressing the network design from a 4-D torus topology to a less efficient but more scalable Cayley graph topology. The name Cray was added to the second version after Tera Computer Company bought the remains of the Cray Research division of Silicon Graphics in 2000 and renamed itself Cray Inc.
The MTA-2 was not a commercial success, with only one moderately-sized 40-processor system ("Boomer") being sold to the United States Naval Research Laboratory in 2002, and one 4-processor system sold to the Electronic Navigation Research Institute (ENRI) in Japan.
The MTA computers pioneered several technologies, presumably to be used in future Cray Inc. products:
A simple, whole-machine-oriented programming model.
Hardware-based multithreading.
Low-overhead thread synchronization.
See also
Cray MTA
Heterogeneous Element Processor
References
External links
Utrecht University HPCG - Cray MTA-2 page
Mta-2
Supercomputers
|
https://en.wikipedia.org/wiki/Hadwiger%E2%80%93Nelson%20problem
|
In geometric graph theory, the Hadwiger–Nelson problem, named after Hugo Hadwiger and Edward Nelson, asks for the minimum number of colors required to color the plane such that no two points at distance 1 from each other have the same color. The answer is unknown, but has been narrowed down to one of the numbers 5, 6 or 7. The correct value may depend on the choice of axioms for set theory.
Relation to finite graphs
The question can be phrased in graph-theoretic terms as follows. Let G be the unit distance graph of the plane: an infinite graph with all points of the plane as vertices and with an edge between two vertices if and only if the distance between the two points is 1. The Hadwiger–Nelson problem is to find the chromatic number of G. As a consequence, the problem is often called "finding the chromatic number of the plane". By the de Bruijn–Erdős theorem, a result of de Bruijn and Erdős (1951), the problem is equivalent (under the assumption of the axiom of choice) to that of finding the largest possible chromatic number of a finite unit distance graph.
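On the finite side of this reduction, the sketch below (standard library only) brute-forces colorings of the Moser spindle, a unit-distance graph with 7 vertices and 11 edges; it admits no proper 3-coloring, which gives the lower bound of 4 for the plane:

```python
from itertools import product

# Moser spindle: two rhombi (pairs of equilateral triangles) sharing
# apex 0, with their far tips 3 and 6 at unit distance.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),   # first rhombus, tip 3
         (0, 4), (0, 5), (4, 5), (4, 6), (5, 6),   # second rhombus, tip 6
         (3, 6)]                                   # tips joined

def colorable(k):
    """True if some assignment of k colors to the 7 vertices is proper."""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(k), repeat=7))

print(colorable(3), colorable(4))   # False True
```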
History
According to Jensen and Toft (1995), the problem was first formulated by Nelson in 1950, and first published by Gardner (1960). Hadwiger (1945) had earlier published a related result, showing that any cover of the plane by five congruent closed sets contains a unit distance in one of the sets, and he also mentioned the problem in a later paper (Hadwiger 1961). Soifer (2008) discusses the problem and its history extensively.
One application of the problem connects it to the Beckman–Quarles theorem, according to which any mapping of the Euclidean plane (or any higher dimensional space) to itself that preserves unit distances must be an isometry, preserving all distances. Finite colorings of these spaces can be used to construct mappings from them to higher-dimensional spaces that preserve distances but are not isometries. For instance, the Euclidean plane can be mapped to a six-dimensional space by coloring it with seven colors so that no two points at distance one have the same color, and then mapping
|
https://en.wikipedia.org/wiki/Zariski%20geometry
|
In mathematics, a Zariski geometry consists of an abstract structure introduced by Ehud Hrushovski and Boris Zilber, in order to give a characterisation of the Zariski topology on an algebraic curve, and all its powers. The Zariski topology on a product of algebraic varieties is very rarely the product topology, but richer in closed sets defined by equations that mix two sets of variables. The result described here gives that idea a very definite meaning, applying to projective curves and compact Riemann surfaces in particular.
Definition
A Zariski geometry consists of a set X and a topological structure on each of the sets
X, X2, X3, ...
satisfying certain axioms.
(N) Each of the Xn is a Noetherian topological space, of dimension at most n.
Some standard terminology for Noetherian spaces will now be assumed.
(A) In each Xn, the subsets defined by equality in an n-tuple are closed. The mappings
Xm → Xn
defined by projecting out certain coordinates and setting others as constants are all continuous.
(B) For a projection
p: Xm → Xn
and an irreducible closed subset Y of Xm, p(Y) lies between its closure Z and Z \ Z′, where Z′ is a proper closed subset of Z. (This is quantifier elimination, at an abstract level.)
(C) X is irreducible.
(D) There is a uniform bound on the number of elements of a fiber in a projection of any closed set in Xm, other than the cases where the fiber is X.
(E) A closed irreducible subset of Xm, of dimension r, when intersected with a diagonal subset in which s coordinates are set equal, has all components of dimension at least r − s + 1.
The further condition required is called very ample (cf. very ample line bundle). It is assumed there is an irreducible closed subset P of some Xm, and an irreducible closed subset Q of P× X2, with the following properties:
(I) Given pairs (x, y), (x′, y′) in X2, for some t in P, the set of (t, u, v) in Q includes (t, x, y) but not (t, x′, y′)
(J) For t outside a proper closed subset of P, the set of (x, y) in X
|
https://en.wikipedia.org/wiki/Switch%20virtual%20interface
|
A switch virtual interface (SVI) represents a logical layer-3 interface on a switch.
VLANs divide broadcast domains in a LAN environment. Whenever hosts in one VLAN need to communicate with hosts in another VLAN, the traffic must be routed between them; this is known as inter-VLAN routing. On layer-3 switches it is accomplished by the creation of layer-3 interfaces (SVIs).
An SVI, or VLAN interface, is a virtual routed interface that connects a VLAN on the device to the layer-3 router engine on the same device. Only one VLAN interface can be associated with a given VLAN, and a VLAN interface needs to be configured for a VLAN only when there is a need to route between VLANs or to provide IP host connectivity to the device through a virtual routing and forwarding (VRF) instance that is not the management VRF. When VLAN interface creation is enabled, a switch creates a VLAN interface for the default VLAN (VLAN 1) to permit remote switch administration.
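A minimal sketch of inter-VLAN routing with two SVIs, in Cisco IOS-style syntax (addresses are illustrative; exact commands vary by platform and software release):

```
! Enable the layer-3 routing engine on the switch
ip routing

! SVI acting as the default gateway for hosts in VLAN 10
interface Vlan10
 ip address 192.168.10.1 255.255.255.0
 no shutdown

! SVI acting as the default gateway for hosts in VLAN 20
interface Vlan20
 ip address 192.168.20.1 255.255.255.0
 no shutdown
```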
SVIs are generally configured for a VLAN for the following reasons:
Allow traffic to be routed between VLANs by providing a default gateway for the VLAN.
Provide fallback bridging (if required for non-routable protocols).
Provide Layer 3 IP connectivity to the switch.
Support bridging configurations and routing protocols.
Access Layer - 'Routed Access' Configuration (in lieu of Spanning Tree)
SVIs advantages include:
Much faster than router-on-a-stick, because everything is hardware-switched and routed.
No need for external links from the switch to the router for routing.
Not limited to one link. Layer 2 EtherChannels can be used between the switches to get more bandwidth.
Latency is much lower, because it does not need to leave the switch
An SVI can also be known as a Routed VLAN Interface (RVI) by some vendors.
References
Cisco Systems, Configure InterVLAN Routing on Layer 3 Switches
Cisco Systems, Configuring SVI
Cisco Systems, 2006,
|
https://en.wikipedia.org/wiki/Natural%20food
|
Natural food and all-natural food are terms in food labeling and marketing with several definitions, often implying foods that are not manufactured by processing. In some countries like the United Kingdom, the term "natural" is defined and regulated; in others, such as the United States, the term natural is not enforced for food labels, although there is USDA regulation of organic labeling.
The term is assumed to describe foods having ingredients that are intrinsic to an unprocessed food.
Diverse definitions
While almost all foodstuffs are derived from the natural products of plants and animals, 'natural foods' are often assumed to be foods that are not processed, or do not contain any food additives, or do not contain particular additives such as hormones, antibiotics, sweeteners, food colors, preservatives, or flavorings that were not originally in the food. In one survey, most respondents (63%) showed a preference for products labeled "natural" over their unmarked counterparts, based on the common belief (held by 86% of polled consumers) that the term "natural" indicates that the food does not contain any artificial ingredients.
The term is variously misused on labels and in advertisements. The international Food and Agriculture Organization's Codex Alimentarius does not recognize the term 'natural' but does have a standard for organic foods.
History
The idea of eating "natural foods" was promoted by cookbook writers in the United States during the 1970s, with cookbooks emphasizing "natural," "health" and "whole" foods in opposition to processed foods, which were considered bad for health. In 1971, Eleanor Levitt authored The Wonderful World of Natural Food Cookery, which dismissed processed foods such as readymade dinners, cookie mixes, and cold cuts as being full of preservatives and other "chemical poisons."
Jean Hewitt authored the New York Times Natural Foods Cookbook, an influential cookbook on the use of natural foods. Hewitt suggested that before larg
|
https://en.wikipedia.org/wiki/Open%20Vulnerability%20and%20Assessment%20Language
|
Open Vulnerability and Assessment Language (OVAL) is an international, information security, community standard to promote open and publicly available security content, and to standardize the transfer of this information across the entire spectrum of security tools and services. OVAL includes a language used to encode system details, and an assortment of content repositories held throughout the community. The language standardizes the three main steps of the assessment process:
representing configuration information of systems for testing;
analyzing the system for the presence of the specified machine state (vulnerability, configuration, patch state, etc.); and
reporting the results of this assessment.
The repositories are collections of publicly available and open content that utilize the language.
The OVAL community has developed three schemas written in Extensible Markup Language (XML) to serve as the framework and vocabulary of the OVAL Language. These schemas correspond to the three steps of the assessment process: an OVAL System Characteristics schema for representing system information, an OVAL Definition schema for expressing a specific machine state, and an OVAL Results schema for reporting the results of an assessment.
Content written in the OVAL Language is located in one of the many repositories found within the community. One such repository, known as the OVAL Repository, is hosted by The MITRE Corporation. It is the central meeting place for the OVAL Community to discuss, analyze, store, and disseminate OVAL Definitions. Each definition in the OVAL Repository determines whether a specified software vulnerability, configuration issue, program, or patch is present on a system.
The information security community contributes to the development of OVAL by participating in the creation of the OVAL Language on the OVAL Developers Forum and by writing definitions for the OVAL Repository through the OVAL Community Forum. An OVAL Board consisting of repr
|
https://en.wikipedia.org/wiki/Diskless%20Remote%20Boot%20in%20Linux
|
DRBL (Diskless Remote Boot in Linux) is an NFS/NIS server providing a diskless or systemless environment for client machines.
It could be used for
cloning machines with the built-in Clonezilla software,
providing for a network installation of Linux distributions like Fedora, Debian, etc.,
providing machines via PXE boot (or similar means) with a small operating system (e.g., DSL, Puppy Linux, FreeDOS).
Providing a DRBL server
Installation on a machine running a supported Linux distribution via installation script,
Live CD.
Installation is possible on a machine with Debian, Ubuntu, Mandriva, Red Hat Linux, Fedora, CentOS or SuSE already installed. Unlike LTSP, it uses distributed hardware resources and makes it possible for clients to fully access local hardware, thus making it feasible to use server machines with less power. It also includes Clonezilla, a partitioning and disk cloning utility similar to Symantec Ghost.
DRBL comes under the terms of the GNU GPL license so providing the user with the ability to customize it.
Features
DRBL excels in two main categories.
Disk Cloning
Clonezilla (packaged with DRBL) uses Partimage to avoid copying free space, and gzip to compress Hard Disk images. The stored image can then be restored to multiple machines simultaneously using multicast packets, thus greatly reducing the time it takes to image large numbers of computers. The DRBL Live CD allows you to do all of this without actually installing anything on any of the machines, by simply booting one machine (the server) from the CD, and PXE booting the rest of the machines.
Diskless node
A diskless node is an excellent way to make use of old hardware. Using old hardware as thin clients is a good solution, but has some disadvantages that a diskless node can make up for.
Streaming audio/video - A terminal server must decompress, recompress, and send video over the network to the client. A diskless node does all decompression locally, and can make use of a
|
https://en.wikipedia.org/wiki/Default-free%20zone
|
In Internet routing, the default-free zone (DFZ) is the collection of all Internet autonomous systems (AS) that do not require a default route to route a packet to any destination. Conceptually, DFZ routers have a "complete" Border Gateway Protocol table, sometimes referred to as the Internet routing table, global routing table or global BGP table. However, internet routing changes rapidly and the widespread use of route filtering ensures that no router has a complete view of all routes. Any routing table created would look different from the perspective of different routers, even if a stable view could be achieved.
Highly connected Autonomous Systems and routers
The Weekly Routing Reports used by the ISP community come from the Asia-Pacific Network Information Centre (APNIC) router in Tokyo, which is a well-connected router that has as good a view of the Internet as any other single router. For serious routing research, however, routing information will be captured at multiple well-connected sites, including high-traffic ISPs (see the "skitter core" below).
As of May 12, 2014, there were 494,105 routes seen by the APNIC router. These came from 46,795 autonomous systems, of which only 172 were transit-only and 35,787 were stub/origin-only. 6,087 autonomous systems provided some level of transit.
The Idea of an "Internet core"
The term "default-free zone" is sometimes confused with an "Internet core" or Internet backbone, but there has been no true "core" since before the Border Gateway Protocol (BGP) was introduced. In pre-BGP days, when the Exterior Gateway Protocol (EGP) was the exterior routing protocol, it indeed could be assumed there was a single Internet core.
That concept, however, has been obsolete for a long time. At best, today's definition of the Internet core is statistical, with the "skitter core" being some number of AS with the greatest traffic according to the CAIDA measurements, previously made with its measuring tool called "skitter". The C
|
https://en.wikipedia.org/wiki/Low-definition%20television
|
Low-definition television (LDTV) refers to TV systems that have a lower screen resolution than standard-definition television systems. The term is usually used in reference to digital television, in particular when broadcasting at the same (or similar) resolution as low-definition analog television systems. Mobile DTV systems usually transmit in low definition, as do all slow-scan television systems.
Sources
The Video CD format uses a progressive scan LDTV signal (352×240 or 352×288), which is half the vertical and horizontal resolution of full-bandwidth SDTV. However, most players will internally upscale VCD material to 480/576 lines for playback, as this is both more widely compatible and gives a better overall appearance. No motion information is lost due to this process, as VCD video is not high-motion and only plays back at 25 or 30 frames per second, and the resultant display is comparable to consumer-grade VHS video playback.
For the first few years of its existence, YouTube offered only one low-definition resolution: 256×144 (144p) at 30–50 fps or less. It later extended first to widescreen 426×240, then to gradually higher resolutions. Once the video service had become well established and had been acquired by Google, it had access to Google's radically improved storage space and transmission bandwidth, and could rely on a good proportion of its users having high-speed internet connections. The overall effect of the early years was reminiscent of early online video streaming attempts using RealVideo or similar services, where 160×120 at single-figure framerates was deemed acceptable to cater to those whose network connections could not even sufficiently deliver 240p content.
Video games
Older video game consoles and home computers often generated a technically compliant analog 525-line NTSC or 625-line PAL signal, but only sent one field type rather than alternating between the two. This created a 262- or 312-line progressive scan signal (with half the vertical resol
|
https://en.wikipedia.org/wiki/Grothendieck%E2%80%93Katz%20p-curvature%20conjecture
|
In mathematics, the Grothendieck–Katz p-curvature conjecture is a local-global principle for linear ordinary differential equations, related to differential Galois theory and in a loose sense analogous to the result in the Chebotarev density theorem considered as the polynomial case. It is a conjecture of Alexander Grothendieck from the late 1960s, and apparently not published by him in any form.
The general case remains unsolved, despite recent progress; it has been linked to geometric investigations involving algebraic foliations.
Formulation
In a simplest possible statement the conjecture can be stated in its essentials for a vector system written as
dv/dz = A(z)v
for a vector v of size n, and an n×n matrix A of algebraic functions with algebraic number coefficients. The question is to give a criterion for when there is a full set of algebraic function solutions, meaning a fundamental matrix (i.e. n vector solutions put into a block matrix). For example, a classical question was for the hypergeometric equation: when does it have a pair of algebraic solutions, in terms of its parameters? The answer is known classically as Schwarz's list. In monodromy terms, the question is of identifying the cases of finite monodromy group.
By reformulation and passing to a larger system, the essential case is for rational functions in A and rational number coefficients. Then a necessary condition is that for almost all prime numbers p, the system defined by reduction modulo p should also have a full set of algebraic solutions, over the finite field with p elements.
Grothendieck's conjecture is that these necessary conditions, for almost all p, should be sufficient. The connection with p-curvature is that the mod p condition stated is the same as saying the p-curvature, formed by a recurrence operation on A, is zero; so another way to say it is that p-curvature of 0 for almost all p implies enough algebraic solutions of the original equation.
Katz's formulation for the Galois group
Nichol
|
https://en.wikipedia.org/wiki/Norton%20amplifier
|
A Norton amplifier or current differencing amplifier (CDA) is an electronic amplifier with two low-impedance current inputs and one low-impedance voltage output, where the output voltage is proportional to the difference between the two input currents. A Norton amplifier is a current-controlled voltage source (CCVS) controlled by the difference of two input currents.
The Norton amplifier can be regarded as the dual of the operational transconductance amplifier (OTA) which takes a differential voltage input and provides a high impedance current output. The OTA has a gain measured in units of transconductance (siemens) whereas the Norton amplifier has a gain measured in units of transimpedance (ohms).
A commercial example of this circuit is the LM3900 quad operational amplifier and its high-speed cousin the LM359 (400 MHz gain-bandwidth product).
The LM3900 was introduced in the mid-1970s, and was designed to be an easy-to-use single-supply op amp with input bias currents (~30 nA) comparable to other bipolar op-amps of the time period (LM741, LM324), while having rail-to-rail output and a much higher gain-bandwidth product (2.5 MHz). The LM3900 was popular with designers of analog synthesizers. The LM359 was introduced in the early 1990s as a video-capable amplifier providing high amplification at video frequencies (10 MHz).
See also
Current differencing transconductance amplifier, current difference input and differential current output
Current-feedback operational amplifier, single-ended current input and voltage output.
References
Bibliography
Carr, Joseph, Linear Integrated Circuits, Newnes, 1996 .
Bali, S.P., Linear Integrated Circuits, Tata McGraw-Hill Education, 2008 .
Terrell, David, Op Amps: Design, Application, and Troubleshooting, Newnes, 1996 .
T. M. Frederiksen, W. F. Davis and D. W. Zobel, A new current-differencing single-supply operational amplifier, in IEEE Journal of Solid-State Circuits, vol. 6, no. 6, pp. 340-347, Dec. 1971, doi: 10.1109/
|
https://en.wikipedia.org/wiki/Akira%20Haraguchi
|
Akira Haraguchi (born 1946, Miyagi Prefecture) is a retired Japanese engineer known for memorizing and reciting digits of pi.
Memorization of pi
Haraguchi holds the current unofficial world record (100,000 digits), set in 16 hours starting at 9:00 a.m. on October 3, 2006. He equaled his previous record of 83,500 digits by nightfall and then continued until stopping with digit number 100,000 at 1:28 a.m. (16:28 GMT) on October 4, 2006. The event was filmed in a public hall in Kisarazu, east of Tokyo, where he had five-minute breaks every two hours to eat onigiri to keep up his energy levels. Even his trips to the toilet were filmed to prove that the exercise was legitimate.
His previous world record of 109,836 digits was set on July 12, 2005.
On Pi Day, 2015, he claimed to be able to recite 111,701 digits.
Despite Haraguchi's efforts and detailed documentation, Guinness World Records has not yet accepted any of his records.
Haraguchi views the memorization of pi as "the religion of the universe", and as an expression of his lifelong quest for eternal truth.
Haraguchi's mnemonic system
Haraguchi uses a system he developed, which assigns kana symbols to numbers, allowing pi to be memorized as a collection of stories. A similar scheme was devised earlier by Lewis Carroll, who assigned letters of the alphabet to numbers and created stories to memorize them; Carroll's system preceded Haraguchi's.
Example
0 => can be substituted by o, ra, ri, ru, re, ro, wo, on or oh;
1 => can be substituted by a, i, u, e, hi, bi, pi, an, ah, hy, hyan, bya or byan;
The same is done for each number from 2 through 9.
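To illustrate how such a substitution table turns digits into readable text, here is a minimal sketch. Only the rows for 0 and 1 come from the description above; the enumeration function and everything else are hypothetical:

```python
# Hypothetical sketch of a Haraguchi-style digit-to-kana substitution.
# Only the 0 and 1 rows are from the article; the rest is illustrative.
SUBSTITUTIONS = {
    "0": ["o", "ra", "ri", "ru", "re", "ro", "wo", "on", "oh"],
    "1": ["a", "i", "u", "e", "hi", "bi", "pi", "an", "ah", "hy", "hyan", "bya", "byan"],
    # ... rows for 2 through 9 would follow in the full system
}

def encodings(digits):
    """Yield every symbol sequence that spells out the given digit string."""
    if not digits:
        yield ""
        return
    for symbol in SUBSTITUTIONS[digits[0]]:
        for rest in encodings(digits[1:]):
            yield symbol + rest

# "10" can be read, for example, as "hiro"; the mnemonist picks readings
# that form words, and strings the words into memorable stories.
print(list(encodings("10"))[:5])
```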
References
External links
BBC News, Asia-Pacific
Pi World Ranking List
Memory world records
Pi-related people
Japanese engineers
People from Miyagi Prefecture
Living people
1946 births
Hitachi people
|
https://en.wikipedia.org/wiki/High-water%20mark%20%28computer%20security%29
|
In the fields of physical security and information security, the high-water mark for access control was introduced by Clark Weissmann in 1969. It pre-dates the Bell–LaPadula security model, whose first volume appeared in 1972.
Under high-water mark, any object whose label is below the user's security level can be opened; however, the object is relabeled to reflect the highest security level currently open, hence the name.
The practical effect of the high-water mark was a gradual movement of all objects towards the highest security level in the system. If user A is writing a CONFIDENTIAL document, and checks the unclassified dictionary, the dictionary becomes CONFIDENTIAL. Then, when user B is writing a SECRET report and checks the spelling of a word, the dictionary becomes SECRET. Finally, if user C is assigned to assemble the daily intelligence briefing at the TOP SECRET level, reference to the dictionary makes the dictionary TOP SECRET, too.
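A minimal sketch of this relabeling rule, using an assumed four-level hierarchy (the class and function names are illustrative, not from any particular system):

```python
LEVELS = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]
RANK = {name: i for i, name in enumerate(LEVELS)}

class Resource:
    def __init__(self, name, level):
        self.name, self.level = name, level

def open_resource(obj, session_level):
    """High-water mark: a session may open objects at or below its level,
    but the object is relabeled to the highest level under which it is opened."""
    if RANK[obj.level] > RANK[session_level]:
        raise PermissionError(f"{obj.name} is labeled above the session level")
    obj.level = max(obj.level, session_level, key=RANK.get)

dictionary = Resource("dictionary", "UNCLASSIFIED")
open_resource(dictionary, "CONFIDENTIAL")  # user A writes a CONFIDENTIAL document
open_resource(dictionary, "SECRET")        # user B checks spelling at SECRET
open_resource(dictionary, "TOP SECRET")    # user C assembles the daily briefing
print(dictionary.level)                    # -> TOP SECRET
```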
Low-water mark
Low-water mark is an extension of the Biba Model. The Biba model enforces no-write-up and no-read-down rules, exactly the opposite of the rules in the Bell–LaPadula model. In the low-water mark variant, reading down is permitted, but after the read the subject's label is degraded to the object's label. It can be classified among the floating-label security models.
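The dual rule can be sketched the same way: here it is the subject's label that floats, downward, on each read (again, the names and levels are illustrative assumptions):

```python
INTEGRITY = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

class Subject:
    def __init__(self, label):
        self.label = label

def read_down(subject, object_label):
    """Low-water mark: reading down is permitted, but the subject's
    integrity label is degraded to the label of the object it read."""
    if INTEGRITY[object_label] < INTEGRITY[subject.label]:
        subject.label = object_label  # float the subject's label down

proc = Subject("HIGH")
read_down(proc, "LOW")  # the read is allowed...
print(proc.label)       # -> LOW: ...but the subject is now low-integrity
```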
See also
Watermark (data synchronization)
References
Computer security models
|
https://en.wikipedia.org/wiki/Systems%20biomedicine
|
Systems biomedicine, also called systems biomedical science, is the application of systems biology to the understanding and modulation of developmental and pathological processes in humans, and in animal and cellular models. Whereas systems biology aims at modeling exhaustive networks of interactions (with the long-term goal of, for example, creating a comprehensive computational model of the cell), mainly at the intracellular level, systems biomedicine emphasizes the multilevel, hierarchical nature of its models (molecule, organelle, cell, tissue, organ, individual/genotype, environmental factor, population, ecosystem), discovering and selecting the key factors at each level and integrating them into models that reveal the global, emergent behavior of the biological process under consideration.
Such an approach is favorable when executing all the experiments needed to establish exhaustive models is constrained by time and expense (e.g., in animal models) or by basic ethics (e.g., human experimentation).
In 1992, a paper on systems biomedicine by Kamada T. was published (November–December), and an article on systems medicine and pharmacology by Zeng B.J. appeared (April) in the same period.
In 2009, the first collective book on systems biomedicine was edited by Edison T. Liu and Douglas A. Lauffenburger.
In October 2008, one of the first research groups uniquely devoted to systems biomedicine was established at the European Institute of Oncology. One of the first research centers specializing in systems biomedicine, the Luxembourg Centre for Systems Biomedicine, an interdisciplinary center of the University of Luxembourg, was founded by Rudi Balling. The first centre devoted to spatial issues in systems biomedicine was recently established at Oregon Health and Science University.
The first peer-reviewed journal on this topic, Systems Biomedicine, was recently established by Landes Bioscience.
See also
Systems biology
Systems med
|
https://en.wikipedia.org/wiki/Icyball
|
Icyball is a name given to two early refrigerators, one made by Australian Sir Edward Hallstrom in 1923, and the other design patented by David Forbes Keith of Toronto (filed 1927, granted 1929) and manufactured by American Powel Crosley Jr., who bought the rights to the device. Both devices are unusual in that they require no electricity for cooling: they can run for a day on a cup of kerosene, giving rural users without electricity the benefits of refrigeration.
Operation (Crosley Icyball)
The Crosley Icyball is an example of a gas-absorption refrigerator, of the kind found today in recreational vehicles and campervans. Unlike most refrigerators, the Icyball has no moving parts and, instead of operating continuously, is cycled manually. Typically it is charged in the morning for about 1.5 hours and then provides cooling throughout the heat of the day.
Absorption refrigerators and the more common mechanical refrigerators both cool by the evaporation of refrigerant. (Evaporation of a liquid causes cooling; for example, sweat evaporating from the skin cools it, while the reverse process, condensation, releases heat.) In absorption refrigerators, the buildup of pressure due to evaporation of refrigerant is relieved not by suction at the inlet of a compressor, but by absorption into an absorptive medium (water, in the case of the Icyball).
The Icyball system moves heat from the refrigerated cabinet to the warmer room by using ammonia as the refrigerant. It consists of two metal balls: a hot ball, which in the fully charged state contains the absorber (water), and a cold ball containing liquid ammonia. These are joined by a pipe in the shape of an inverted U, which allows ammonia gas to move in either direction.
After approximately a day's use (varying depending on load), the Icyball stops cooling, and needs recharging. The Icyball is removed from the refrigerated cabinet, and the cold ball, from which all the ammonia has evaporated during the previous
|
https://en.wikipedia.org/wiki/Principal%20ideal%20theorem
|
In mathematics, the principal ideal theorem of class field theory, a branch of algebraic number theory, says that extending ideals gives a mapping from the class group of an algebraic number field to the class group of its Hilbert class field which sends all ideal classes to the class of a principal ideal. The phenomenon has also been called principalization, or sometimes capitulation.
Formal statement
For any algebraic number field K and any ideal I of the ring of integers of K, if L is the Hilbert class field of K, then

IO_L

is a principal ideal αO_L, for O_L the ring of integers of L and some element α in it.
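Equivalently, in terms of class groups: extension of ideals induces a homomorphism, and the theorem says that this homomorphism is trivial. A sketch of this standard restatement:

```latex
% Extension of ideals induces a map on class groups; the principal ideal
% theorem says every ideal class becomes principal ("capitulates") in L:
\iota \colon \mathrm{Cl}(K) \longrightarrow \mathrm{Cl}(L),
\qquad [I] \longmapsto [I\mathcal{O}_L] = 1 .
```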
History
The principal ideal theorem was conjectured by David Hilbert, and was the last remaining aspect of his program on class fields to be completed, in 1929.
Emil Artin reduced the principal ideal theorem to a question about finite abelian groups: he showed that it would follow if the transfer from a finite group to its derived subgroup is trivial. This result was proved by Philipp Furtwängler (1929).
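In symbols, the group-theoretic statement is the following standard formulation (sketched here), where G′ denotes the derived subgroup of G and G′′ the derived subgroup of G′:

```latex
% Artin's group-theoretic form (proved by Furtwängler): for every finite
% group G, the transfer (Verlagerung) homomorphism
\mathrm{Ver} \colon G/G' \longrightarrow G'/G''
% is the trivial homomorphism.
```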
References
Ideals (ring theory)
Group theory
Homological algebra
Theorems in algebraic number theory
|
https://en.wikipedia.org/wiki/Airespace
|
Airespace, Inc., formerly Black Storm Networks, was a networking hardware company founded in 2001 that manufactured wireless access points and controllers. The company developed the AP-controller model for fast deployment and the Lightweight Access Point Protocol (LWAPP), the precursor to the CAPWAP protocol.
Corporate history
Airespace was founded in 2001 by Pat Calhoun, Bob Friday, Bob O'Hara, and Ajay Mishra. The company was venture backed by Storm Ventures, Norwest Venture Partners and Battery Ventures. In 2003, it entered into an agreement to provide OEM equipment to NEC. In 2004 it signed an agreement with Alcatel and Nortel to provide equipment to the two companies on an OEM basis.
Airespace was first to market with integrated location tracking. Within a year and a half, the company grew into the market leader in enterprise Wi-Fi.
Cisco Systems acquired Airespace in 2005 for $450 million; this was one of 13 acquisitions Cisco made that year and the largest up to that point. Airespace products were merged into the Cisco Aironet product line.
References
2001 establishments in California
2005 disestablishments in California
2005 mergers and acquisitions
American companies established in 2001
American companies disestablished in 2005
Cisco Systems acquisitions
Computer companies established in 2001
Computer companies disestablished in 2005
Defunct computer companies of the United States
Defunct computer hardware companies
Defunct networking companies
Networking hardware companies
|
https://en.wikipedia.org/wiki/Grammar-based%20code
|
Grammar-based codes or grammar-based compression are compression algorithms based on the idea of constructing a context-free grammar (CFG) for the string to be compressed. Examples include universal lossless data compression algorithms. To compress a data sequence x, a grammar-based code transforms x into a context-free grammar G.
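For instance, a highly repetitive string admits a grammar much smaller than the string itself. The following two-rule straight-line grammar (an illustrative example, not from the references) generates exactly one string:

```latex
% A straight-line grammar generating exactly the string "abcabcabc":
S \to A\,A\,A, \qquad A \to abc
% Encoding these two rules costs less than the raw string as the
% repetition count grows.
```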
The problem of finding a smallest grammar for an input sequence (the smallest grammar problem) is known to be NP-hard, so many grammar-transform algorithms have been proposed from theoretical and practical viewpoints.
Generally, the produced grammar is further compressed by statistical encoders like arithmetic coding.
Examples and characteristics
The class of grammar-based codes is very broad. It includes block codes, the multilevel pattern matching (MPM) algorithm, variations of the incremental parsing Lempel-Ziv code, and many other new universal lossless compression algorithms.
Grammar-based codes are universal in the sense that they can achieve asymptotically the entropy rate of any stationary, ergodic source with a finite alphabet.
Practical algorithms
Compression programs for the following algorithms are available via the external links.
Sequitur is a classical grammar compression algorithm that sequentially translates an input text into a CFG, and then the produced CFG is encoded by an arithmetic coder.
Re-Pair is a greedy algorithm using the strategy of most-frequent-first substitution; see the sketch after this list. Its compression performance is strong, although its main-memory requirement is very large.
GLZA, which constructs a grammar that may be reducible, i.e., contain repeats, where the entropy-coding cost of "spelling out" the repeats is less than the cost of creating and entropy-coding a rule to capture them. (In general, the compression-optimal SLG is not irreducible, and the smallest grammar problem is different from the actual SLG compression problem.)
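As referenced in the Re-Pair item above, here is a minimal sketch of the most-frequent-first substitution strategy. It is illustrative only: the real Re-Pair algorithm uses specialized data structures to run in linear time, and this sketch omits the final entropy-coding stage:

```python
from collections import Counter

def re_pair(text):
    """Illustrative Re-Pair-style grammar transform: repeatedly replace the
    most frequent adjacent pair of symbols with a fresh nonterminal."""
    seq = list(text)
    rules = {}  # nonterminal -> the pair of symbols it expands to
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break  # no adjacent pair repeats; the grammar is fully built
        nt = f"R{len(rules)}"
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):  # greedy left-to-right replacement of the pair
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

start, rules = re_pair("abcabcabc")
print(start)  # -> ['R2', 'R1'] (tie-breaking follows first-seen order)
print(rules)  # -> {'R0': ('a', 'b'), 'R1': ('R0', 'c'), 'R2': ('R1', 'R1')}
```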
See also
Dictionary coder
Grammar induction
Straight-line grammar
References
External links
GLZA discuss
|