https://en.wikipedia.org/wiki/Tameness%20theorem
In mathematics, the tameness theorem states that every complete hyperbolic 3-manifold with finitely generated fundamental group is topologically tame, in other words homeomorphic to the interior of a compact 3-manifold. The tameness theorem was conjectured by Albert Marden. It was proved by Ian Agol and, independently, by Danny Calegari and David Gabai. It is one of the fundamental properties of geometrically infinite hyperbolic 3-manifolds, together with the density theorem for Kleinian groups and the ending lamination theorem. It also implies the Ahlfors measure conjecture. History Topological tameness may be viewed as a property of the ends of the manifold, namely, having a local product structure. An analogous statement is well known in two dimensions, that is, for surfaces. However, as the example of the Alexander horned sphere shows, there are wild embeddings among 3-manifolds, so this property is not automatic. The conjecture was raised in the form of a question by Albert Marden, who proved that any geometrically finite hyperbolic 3-manifold is topologically tame. The conjecture was also called the Marden conjecture or the tame ends conjecture. There had been steady progress in understanding tameness before the conjecture was resolved. Partial results had been obtained by Thurston, Brock, Bromberg, Canary, Evans, Minsky, and Ohshika. An important sufficient condition for tameness in terms of splittings of the fundamental group had been obtained by Bonahon. The conjecture was proved in 2004 by Ian Agol and, independently, by Danny Calegari and David Gabai. Agol's proof relies on the use of manifolds of pinched negative curvature and on Canary's trick of "diskbusting", which allows one to replace a compressible end with an incompressible end, for which the conjecture had already been proved. The Calegari–Gabai proof is centered on the existence of certain closed, non-positively curved surfaces that they call "shrinkwrapped". See also Tame topology
https://en.wikipedia.org/wiki/Energy%20landscape
An energy landscape is a mapping of possible states of a system. The concept is frequently used in physics, chemistry, and biochemistry, e.g. to describe all possible conformations of a molecular entity, or the spatial positions of interacting molecules in a system, or parameters and their corresponding energy levels, typically Gibbs free energy. Geometrically, the energy landscape is the graph of the energy function across the configuration space of the system. The term is also used more generally in geometric approaches to mathematical optimization, when the domain of the loss function is the parameter space of some system. Applications The term is useful when examining protein folding; while a protein can theoretically exist in a nearly infinite number of conformations along its energy landscape, in reality proteins fold (or "relax") into secondary and tertiary structures that possess the lowest possible free energy. The key concept in the energy landscape approach to protein folding is the folding funnel hypothesis. In catalysis, when designing new catalysts or refining existing ones, energy landscapes are considered to avoid low-energy or high-energy intermediates that could halt the reaction or demand excessive energy to reach the final products. In glassing models, the local minima of an energy landscape correspond to metastable low-temperature states of a thermodynamic system. In machine learning, artificial neural networks may be analyzed using analogous approaches. For example, a neural network may be able to fit the training set perfectly, corresponding to a global minimum of zero loss, while nevertheless overfitting the model ("learning the noise" or "memorizing the training set"). Understanding when this happens can be studied using the geometry of the corresponding energy landscape. Formal definition Mathematically, an energy landscape is a continuous function f : X → ℝ associating each physical state x ∈ X with an energy f(x), where X is a topological space. In the continuou
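As a toy illustration of the formal definition just given, the following minimal Python sketch (the double-well energy function, grid resolution, and variable names are illustrative assumptions, not from the article) evaluates an energy function over a discretized two-parameter configuration space, reports the global minimum, and flags the local minima that would trap a simple descent.

import numpy as np

# Toy energy landscape: a double-well potential over a 2-D configuration space.
# E(x, y) = (x^2 - 1)^2 + y^2 has two global minima at (x, y) = (+1, 0) and (-1, 0).
def energy(x, y):
    return (x**2 - 1)**2 + y**2

# Discretize the configuration space and evaluate the landscape on a grid.
xs = np.linspace(-2.0, 2.0, 401)
ys = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(xs, ys, indexing="ij")
E = energy(X, Y)

# Global minimum of the sampled landscape.
i, j = np.unravel_index(np.argmin(E), E.shape)
print(f"global minimum ~ E={E[i, j]:.4f} at (x, y)=({xs[i]:.2f}, {ys[j]:.2f})")

# Local minima: interior grid points lower than all four axis neighbours.
interior = E[1:-1, 1:-1]
is_min = ((interior < E[:-2, 1:-1]) & (interior < E[2:, 1:-1]) &
          (interior < E[1:-1, :-2]) & (interior < E[1:-1, 2:]))
for a, b in zip(*np.where(is_min)):
    print(f"local minimum near (x, y)=({xs[a + 1]:.2f}, {ys[b + 1]:.2f})")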
https://en.wikipedia.org/wiki/Transfer%20matrix
In applied mathematics, the transfer matrix is a formulation in terms of a block-Toeplitz matrix of the two-scale equation, which characterizes refinable functions. Refinable functions play an important role in wavelet theory and finite element theory. For the mask h, which is a vector with component indexes from a to b, the transfer matrix of h, here called T_h, is defined as the square matrix whose entry in row j and column k is h_{2j−k}, with j and k running from a to b (entries whose index 2j−k falls outside the support of h are zero). The effect of T_h can be expressed in terms of the downsampling operator "↓": T_h · x = ↓(h ∗ x). Properties See also Hurwitz determinant References (contains proofs of the above properties) Wavelets Numerical analysis
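The following minimal sketch (the mask values, index range, and function name are arbitrary assumptions, not from the article) builds the matrix with entries h_{2j−k} described above for a mask supported on indexes a..b, and checks numerically that applying it to a vector agrees with downsampling the convolution h ∗ x.

import numpy as np

def transfer_matrix(h, a):
    """Matrix T with T[j-a, k-a] = h_{2j-k} for j, k in a..b, where b = a + len(h) - 1."""
    n = len(h)
    b = a + n - 1
    T = np.zeros((n, n))
    for j in range(a, b + 1):
        for k in range(a, b + 1):
            idx = 2 * j - k
            if a <= idx <= b:           # entries outside the support of h are zero
                T[j - a, k - a] = h[idx - a]
    return T

# Arbitrary example mask h on indexes 0..3 and a test vector x on the same index range.
a = 0
h = np.array([0.25, 0.75, 0.75, 0.25])
x = np.array([1.0, -2.0, 0.5, 3.0])

T = transfer_matrix(h, a)

# Downsampling the convolution: (h * x) lives on indexes 2a..2b; keep the even indexes 2j for j = a..b.
full = np.convolve(h, x)     # values of (h * x) at indexes 2a, 2a+1, ..., 2b
downsampled = full[::2]      # values at indexes 2a, 2a+2, ..., 2b

print(np.allclose(T @ x, downsampled))   # True: T_h · x = ↓(h ∗ x)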
https://en.wikipedia.org/wiki/Anderson%27s%20rule
Anderson's rule is used for the construction of energy band diagrams of the heterojunction between two semiconductor materials. Anderson's rule states that when constructing an energy band diagram, the vacuum levels of the two semiconductors on either side of the heterojunction should be aligned (at the same energy). It is also referred to as the electron affinity rule, and is closely related to the Schottky–Mott rule for metal–semiconductor junctions. Anderson's rule was first described by R. L. Anderson in 1960. Constructing energy band diagrams Once the vacuum levels are aligned it is possible to use the electron affinity and band gap values for each semiconductor to calculate the conduction band and valence band offsets. The electron affinity (usually given the symbol χ in solid state physics) gives the energy difference between the lower edge of the conduction band and the vacuum level of the semiconductor. The band gap (usually given the symbol E_g) gives the energy difference between the lower edge of the conduction band and the upper edge of the valence band. Each semiconductor has different electron affinity and band gap values. For semiconductor alloys it may be necessary to use Vegard's law to calculate these values. Once the relative positions of the conduction and valence bands for both semiconductors are known, Anderson's rule allows the calculation of the band offsets of both the valence band (ΔE_V) and the conduction band (ΔE_C). After applying Anderson's rule and discovering the bands' alignment at the junction, Poisson's equation can then be used to calculate the shape of the band bending in the two semiconductors. Example: straddling gap Consider a heterojunction between semiconductor 1 and semiconductor 2. Suppose the conduction band of semiconductor 2 is closer to the vacuum level than that of semiconductor 1. The conduction band offset would then be given by the difference in electron affinity (energy from the lower edge of the conduction band to the vacuum level)
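A minimal numeric sketch of the calculation described above; the material parameters below (loosely GaAs-like and AlAs-like values) and the helper name are illustrative assumptions, not data from the article. Band edges are measured from the aligned vacuum level, so the offsets follow directly from the electron affinities χ and band gaps E_g.

# Anderson's rule: align the vacuum levels, then read off the band-edge offsets.
def band_offsets(chi1, eg1, chi2, eg2):
    """Return (dEc, dEv) in eV for semiconductor 1 relative to semiconductor 2."""
    # Energies relative to the common vacuum level (0 eV), increasing upward.
    ec1, ec2 = -chi1, -chi2                # conduction band minima
    ev1, ev2 = -chi1 - eg1, -chi2 - eg2    # valence band maxima
    return ec1 - ec2, ev1 - ev2

# Illustrative, roughly GaAs (1) / AlAs (2) parameters in eV (assumed values).
chi1, eg1 = 4.07, 1.42
chi2, eg2 = 3.50, 2.16

dEc, dEv = band_offsets(chi1, eg1, chi2, eg2)
print(f"conduction band offset dEc = {dEc:+.2f} eV")   # about -0.57 eV (band 1 lies lower)
print(f"valence band offset    dEv = {dEv:+.2f} eV")   # about +0.17 eV
# Consistency check: dEv - dEc equals the band-gap difference Eg2 - Eg1.
assert abs((dEv - dEc) - (eg2 - eg1)) < 1e-9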
https://en.wikipedia.org/wiki/Brace%20notation
In several programming languages, such as Perl, brace notation is a faster way to extract bytes from a string variable. In pseudocode An example of brace notation using pseudocode which would extract the 82nd character from the string is: a_byte = a_string{82} The equivalent of this using a hypothetical function 'MID' is: a_byte = MID(a_string, 82, 1) In C In C, strings are normally represented as a character array rather than an actual string data type. The fact a string is really an array of characters means that referring to a string would mean referring to the first element in an array. Hence in C, the following is a legitimate example of brace notation: #include <stdio.h> #include <string.h> #include <stdlib.h> int main(int argc, char* argv[]) { char* a_string = "Test"; printf("%c", a_string[0]); // Would print "T" printf("%c", a_string[1]); // Would print "e" printf("%c", a_string[2]); // Would print "s" printf("%c", a_string[3]); // Would print "t" printf("%c", a_string[4]); // Would print the 'null' character (ASCII 0) for end of string return(0); } Note that each of a_string[n] would have a 'char' data type while a_string itself would return a pointer to the first element in the a_string character array. In C# C# handles brace notation differently. A string is a primitive type that returns a char when encountered with brace notation: String var = "Hello World"; char h = var[0]; char e = var[1]; String hehe = h.ToString() + e.ToString(); // string "he" hehe += hehe; // string "hehe" To change the char type to a string in C#, use the method ToString(). This allows joining individual characters with the addition symbol + which acts as a concatenation symbol when dealing with strings. In Python In Python, strings are immutable, so it's hard to modify an existing string, but it's easy to extract and concatenate strings to each other: Extracting characters is even easier: >>> var = 'hello world' >>> var[0] # Return the first c
https://en.wikipedia.org/wiki/Pile%20driver
A pile driver is a heavy-duty tool used to drive piles into soil to build piers, bridges, cofferdams, and other "pole" supported structures, and patterns of pilings as part of permanent deep foundations for buildings or other structures. Pilings may be made of wood, solid steel, or tubular steel (often later filled with concrete), and may be driven entirely underwater/underground, or remain partially aboveground as elements of a finished structure. The term "pile driver" is also used to describe members of the construction crew associated with the task, also colloquially known as "pile bucks". The most common form of pile driver uses a heavy weight situated between vertical guides placed above a pile. The weight is raised by some motive power (which may include hydraulics, steam, diesel, electrical motor, or manual labor). At its apex the weight is released, impacting the pile and driving it into the ground. History There are a number of claims to the invention of the pile driver. A mechanically sound drawing of a pile driver appeared as early as 1475 in Francesco di Giorgio Martini's treatise Trattato di Architectura. Also, several other prominent inventors—James Nasmyth (son of Alexander Nasmyth), who invented a steam-powered pile driver in 1845, watchmaker James Valoué, Count Giovan Battista Gazzola, and Leonardo da Vinci—have all been credited with inventing the device. However, there is evidence that a comparable device was used in the construction of Crannogs at Oakbank and Loch Tay in Scotland as early as 5000 years ago. In 1801 John Rennie came up with a steam pile driver in Britain. Otis Tufts is credited with inventing the steam pile driver in the United States. Types Ancient pile driving equipment used human or animal labor to lift weights, usually by means of pulleys, then dropping the weight onto the upper end of the pile. Modern piledriving equipment variously uses hydraulics, steam, diesel, or electric power to raise the weight and guide the p
https://en.wikipedia.org/wiki/CTQ%20tree
CTQ trees (critical-to-quality trees) are the key measurable characteristics of a product or process whose performance standards or specification limits must be met in order to satisfy the customer. They align improvement or design efforts with customer requirements. CTQs are used to decompose broad customer requirements into more easily quantified elements. CTQ trees are often used as part of Six Sigma methodology to help prioritize such requirements. CTQs represent the product or service characteristics as defined by the customer/user. Customers may be surveyed to elicit quality, service and performance data. They may include upper and lower specification limits or any other factors. A CTQ must be an actionable, quantitative business specification. CTQs reflect the expressed needs of the customer. The CTQ practitioner converts them to measurable terms using tools such as DFMEA. Services and products are typically not monolithic. They must be decomposed into constituent elements (tasks in the cases of services). See also Business process Design for Six Sigma Total quality management Total productive maintenance External links Six Sigma CTQ References Rath & Strong Management Consultants, Six Sigma Pocket Guide, p. 18. George, Michael L., Lean Six Sigma, p. 111. Business terms Quality management Design for X Engineering failures Reliability engineering Systems engineering Software quality
https://en.wikipedia.org/wiki/XQuery%20and%20XPath%20Data%20Model
The XQuery and XPath Data Model (XDM) is the data model shared by the XPath 2.0, XSLT 2.0, XQuery, and XForms programming languages. It is defined in a W3C recommendation. Originally, it was based on the XPath 1.0 data model which in turn is based on the XML Information Set. The XDM consists of flat sequences of zero or more items which can be typed or untyped, and are either atomic values or XML nodes (of seven kinds: document, element, attribute, text, namespace, processing instruction, and comment). Instances of the XDM can optionally be XML schema-validated. References External links IBM: XQuery and XPath data model Data modeling XML data access
https://en.wikipedia.org/wiki/Grant%20Olney
Grant Olney Passmore (born October 18, 1983) is a singer-songwriter who has recorded on the Asian Man Records label. He is considered part of the New Weird America movement along with David Dondero, Devendra Banhart, Bright Eyes, and CocoRosie. His latest full-length album, Hypnosis for Happiness, was released in July 2013 on the Friendly Police UK label. His previous full-length album, Brokedown Gospel, was released on the Asian Man Records label in July 2004. He also releases music under the pseudonym Scout You Devil and as part of the songwriting duo Olney Clark. Alongside his music, Passmore is also a mathematician and theoretical computer scientist, formerly a student at the University of Texas at Austin, the Mathematical Research Institute in the Netherlands, and the University of Edinburgh, where he earned his PhD. He is a Life Member of Clare Hall, University of Cambridge and is cofounder of the artificial intelligence company Imandra Inc. (formerly known as Aesthetic Integration) which produces technology for the formal verification of algorithms. He was paired with artist Hito Steyerl in the 2016 Rhizome Seven on Seven. As a young child and early teenager, Passmore was involved in the development of the online Bulletin Board system scene, and under the name skaboy he was the author of many applications of importance to the Bulletin Board System community, including the Infusion Bulletin Board System, Empathy Image Editor, Avenger Packer Pro, and Impulse Tracker Tosser. Passmore was head programmer for ACiD Productions while working on many of these applications. Personal life Passmore married Barbara Galletly in 2014. They have three children. Discography Albums Hypnosis for Happiness – Grant Olney – (2013 · Friendly Police UK) Olney Clark – Olney Clark – (2010 · Friendly Police UK) Let Love Be (single) – Grant Olney – (2006 · Asian Man Records) Brokedown Gospel – Grant Olney – (2004 · Asian Man Records) Sweet Wine – Grant Olney – (2003 · MyAutomat
https://en.wikipedia.org/wiki/Proximity%20marketing
Proximity marketing is the localized wireless distribution of advertising content associated with a particular place. Transmissions can be received by individuals in that location who wish to receive them and have the necessary equipment to do so. Distribution may be via a traditional localized broadcast, or more commonly is specifically targeted to devices known to be in a particular area. The location of a device may be determined by: A cellular phone being in a particular cell A Bluetooth- or Wi-Fi-enabled device being within range of a transmitter. An Internet enabled device with GPS enabling it to request localized content from Internet servers. A NFC enabled phone can read a RFID chip on a product or media and launch localized content from internet servers. Communications may be further targeted to specific groups within a given location, for example content in tourist hot spots may only be distributed to devices registered outside the local area. Communications may be both time and place specific, e.g. content at a conference venue may depend on the event in progress. Uses of proximity marketing include distribution of media at concerts, information (weblinks on local facilities), gaming and social applications, and advertising. Bluetooth-based systems Bluetooth, a short-range wireless system supported by many mobile devices, is one transmission medium used for proximity marketing. The process of Bluetooth-based proximity marketing involves setting up Bluetooth "broadcasting" equipment at a particular location and then sending information which can be text, images, audio or video to Bluetooth enabled devices within range of the broadcast server. These devices are often referred to as beacons. Other standard data exchange formats such as vCard can also be used. This form of proximity marketing is also referred to as close range marketing. It used to be the case that due to security fears, or a desire to save battery life, many users keep their Bluet
https://en.wikipedia.org/wiki/Wadge%20hierarchy
In descriptive set theory, within mathematics, Wadge degrees are levels of complexity for sets of reals. Sets are compared by continuous reductions. The Wadge hierarchy is the structure of Wadge degrees. These concepts are named after William W. Wadge. Wadge degrees Suppose A and B are subsets of Baire space ω^ω. Then A is Wadge reducible to B, written A ≤_W B, if there is a continuous function f on ω^ω with A = f^−1[B]. The Wadge order is the preorder or quasiorder on the subsets of Baire space given by Wadge reducibility. Equivalence classes of sets under this preorder are called Wadge degrees; the degree of a set A is denoted by [A]_W. The set of Wadge degrees ordered by the Wadge order is called the Wadge hierarchy. Properties of Wadge degrees include their consistency with measures of complexity stated in terms of definability. For example, if A ≤_W B and B is a countable intersection of open sets, then so is A. The same works for all levels of the Borel hierarchy and the difference hierarchy. The Wadge hierarchy plays an important role in models of the axiom of determinacy. Further interest in Wadge degrees comes from computer science, where some papers have suggested Wadge degrees are relevant to algorithmic complexity. Wadge's lemma states that under the axiom of determinacy (AD), for any two subsets A and B of Baire space, A ≤_W B or B ≤_W ω^ω∖A. The assertion that the Wadge lemma holds for sets in Γ is the semilinear ordering principle for Γ, or SLO(Γ). Any such semilinear ordering defines a linear order on the equivalence classes modulo complements. Wadge's lemma can be applied locally to any pointclass Γ, for example the Borel sets, Δ^1_n sets, Σ^1_n sets, or Π^1_n sets. It follows from determinacy of differences of sets in Γ. Since Borel determinacy is proved in ZFC, ZFC implies Wadge's lemma for Borel sets. Wadge's lemma is similar to the cone lemma from computability theory. Wadge's lemma via Wadge and Lipschitz games The Wadge game is a simple infinite game discovered by William Wadge (pronounced "wage"). It is used to investigate the notion of co
https://en.wikipedia.org/wiki/Five%20whys
Five whys (or 5 whys) is an iterative interrogative technique used to explore the cause-and-effect relationships underlying a particular problem. The primary goal of the technique is to determine the root cause of a defect or problem by repeating the question "Why?" five times. The answer to the fifth why should reveal the root cause of the problem. The technique was described by Taiichi Ohno at Toyota Motor Corporation. Others at Toyota and elsewhere have criticized the five whys technique for various reasons. Example An example of a problem is: the vehicle will not start. Why? – The battery is dead. Why? – The alternator is not functioning. Why? – The alternator belt has broken. Why? – The alternator belt was well beyond its useful service life and not replaced. Why? – The vehicle was not maintained according to the recommended service schedule. (A root cause) The questioning for this example could be taken further to a sixth, seventh, or higher level, but five iterations of asking why is generally sufficient to get to a root cause. The key is to encourage the troubleshooter to avoid assumptions and logic traps and instead trace the chain of causality in direct increments from the effect through any layers of abstraction to a root cause that still has some connection to the original problem. In this example, the fifth "why" suggests a broken process or an alterable behavior, which is indicative of reaching the root-cause level. The last answer points to a process. This is one of the most important aspects of the five whys approach – the real root cause should point toward a process that is not working well or does not exist. Untrained facilitators will often observe that answers seem to point towards classical answers such as not enough time, not enough investments, or not enough resources. These answers may be true, but they are out of our control. Therefore, instead of asking "why?", ask "why did the process fail?" History The technique was originally
https://en.wikipedia.org/wiki/Write%20buffer
A write buffer is a type of data buffer that can be used to hold data being written from the cache to main memory or to the next cache in the memory hierarchy to improve performance and reduce latency. It is used in certain CPU cache architectures like Intel's x86 and AMD64. In multi-core systems, write buffers destroy sequential consistency. Some software disciplines, like C11's data-race-freedom, are sufficient to regain a sequentially consistent view of memory. A variation of write-through caching is called buffered write-through. Use of a write buffer in this manner frees the cache to service read requests while the write is taking place. It is especially useful for very slow main memory in that subsequent reads are able to proceed without waiting for long main memory latency. When the write buffer is full (i.e. all buffer entries are occupied), subsequent writes still have to wait until slots are freed. Subsequent reads could be served from the write buffer. To further mitigate this stall, one optimization called write buffer merge may be implemented. Write buffer merge combines writes that have consecutive destination addresses into one buffer entry. Otherwise, they would occupy separate entries which increases the chance of pipeline stall. A victim buffer is a type of write buffer that stores dirty evicted lines in write-back caches so that they get written back to main memory. Besides reducing pipeline stall by not waiting for dirty lines to write back as a simple write buffer does, a victim buffer may also serve as a temporary backup storage when subsequent cache accesses exhibit locality, requesting those recently evicted lines, which are still in the victim buffer. The store buffer was invented by IBM during Project ACS between 1964 and 1968, but it was first implemented in commercial products in the 1990s. Notes References Computer memory
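To make the merging optimization concrete, here is a small simulation sketch (purely illustrative; the 8-byte entry granularity, buffer depth, and class names are assumptions, not details from the article). Writes whose addresses fall into the same aligned block are coalesced into one buffer entry, so they occupy a single slot instead of several.

from collections import OrderedDict

class MergingWriteBuffer:
    """Toy write buffer that merges byte writes landing in the same aligned 8-byte block."""
    def __init__(self, num_entries=4, block_size=8):
        self.num_entries = num_entries
        self.block_size = block_size
        self.entries = OrderedDict()   # block base address -> {offset: value}

    def write(self, addr, value):
        base = addr - (addr % self.block_size)
        if base in self.entries:                       # merge into an existing entry
            self.entries[base][addr - base] = value
            return "merged"
        if len(self.entries) == self.num_entries:      # buffer full: a real CPU would stall here
            self.drain_one()
        self.entries[base] = {addr - base: value}
        return "new entry"

    def drain_one(self):
        base, data = self.entries.popitem(last=False)  # oldest entry written back to memory
        print(f"drain block 0x{base:x}: {data}")

buf = MergingWriteBuffer()
for addr, val in [(0x100, 1), (0x101, 2), (0x104, 3), (0x200, 4)]:
    print(hex(addr), buf.write(addr, val))
# 0x100, 0x101 and 0x104 share one entry (block 0x100); 0x200 takes a second entry.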
https://en.wikipedia.org/wiki/Flower
A flower, sometimes known as a bloom or blossom, is the reproductive structure found in flowering plants (plants of the division Angiospermae). Flowers produce gametophytes, which in flowering plants consist of a few haploid cells which produce gametes. The "male" gametophyte, which produces non-motile sperm, is enclosed within pollen grains; the "female" gametophyte is contained within the ovule. When pollen from the anther of a flower is deposited on the stigma, this is called pollination. Some flowers may self-pollinate, producing seed using pollen from the same flower or a different flower of the same plant, but others have mechanisms to prevent self-pollination and rely on cross-pollination, when pollen is transferred from the anther of one flower to the stigma of another flower on a different individual of the same species. Self-pollination happens in flowers where the stamen and carpel mature at the same time, and are positioned so that the pollen can land on the flower's stigma. This pollination does not require an investment from the plant to provide nectar and pollen as food for pollinators. Some flowers produce diaspores without fertilization (parthenocarpy). Flowers contain sporangia and are the site where gametophytes develop. Most flowering plants depend on animals, such as bees, moths, and butterflies, to transfer their pollen between different flowers, and have evolved to attract these pollinators by various strategies, including brightly colored, conspicuous petals, attractive scents, and the production of nectar, a food source for pollinators. In this way, many flowering plants have co-evolved with pollinators to be mutually dependent on services they provide to one another—in the plant's case, a means of reproduction; in the pollinator's case, a source of food. After fertilization, the ovary of the flower develops into fruit containing seeds. Flowers have long been appreciated by humans for their beauty and pleasant scents, and also hold cultu
https://en.wikipedia.org/wiki/Classification%20theorem
In mathematics, a classification theorem answers the classification problem "What are the objects of a given type, up to some equivalence?". It gives a non-redundant enumeration: each object is equivalent to exactly one class. A few issues related to classification are the following. The equivalence problem is "given two objects, determine if they are equivalent". A complete set of invariants, together with which invariants are realizable, solves the classification problem, and is often a step in solving it. A computable complete set of invariants (together with which invariants are realizable) solves both the classification problem and the equivalence problem. A canonical form solves the classification problem, and is more data: it not only classifies every class, but provides a distinguished (canonical) element of each class. There exist many classification theorems in mathematics, as described below. Geometry Classification of Euclidean plane isometries Classification theorems of surfaces Classification of two-dimensional closed manifolds Enriques–Kodaira classification of algebraic surfaces (complex dimension two, real dimension four) Nielsen–Thurston classification which characterizes homeomorphisms of a compact surface Thurston's eight model geometries, and the geometrization conjecture Berger classification Classification of Riemannian symmetric spaces Classification of 3-dimensional lens spaces Classification of manifolds Algebra Classification of finite simple groups Classification of abelian groups Classification of finitely generated abelian groups Classification of rank 3 permutation groups Classification of 2-transitive permutation groups Artin–Wedderburn theorem — a classification theorem for semisimple rings Classification of Clifford algebras Classification of low-dimensional real Lie algebras Classification of simple Lie algebras and groups Classification of simple complex Lie algebras Classification of simple real Lie algebras Classification of centerless simple Lie gro
https://en.wikipedia.org/wiki/Plurix
Plurix is a Unix-like operating system developed in Brazil in the early 1980s. Overview Plurix was developed at the Federal University of Rio de Janeiro (UFRJ), at the Electronic Computing Center (NCE). The NCE researchers, after returning from postgraduate courses in the USA, attempted to license the UNIX source code from AT&T in the late 1970s without success. In 1982, due to AT&T refusing to license the code, a development team led by Newton Faller decided to initiate the development of an alternative system, called Plurix, using as a reference UNIX Version 7, the most recent version at the time, which they had running on an old Motorola computer system. In 1985, the Plurix system was up and running on the Pegasus 32-X, a shared-memory, multi-processor computer also designed at NCE. Plurix was licensed to some Brazilian companies in 1988. Two other Brazilian universities also developed their own UNIX systems: Universidade Federal de Minas Gerais (UFMG) developed the DCC-IX operating system, and the University of São Paulo (USP) developed the REAL operating system in 1987. The NCE/UFRJ also offered technical courses on OS design and implementation to local computer companies, some of which later produced their own proprietary UNIX systems. In fact, these Brazilian companies first created an organization of companies interested in UNIX (called API) and tried to license UNIX from AT&T. Their attempts were frustrated at the end of 1986, when AT&T canceled negotiations with API. Some of these companies, EDISA, COBRA, and SOFTEC, invested in the development of their own systems, EDIX, SOX and ANALIX, respectively. AT&T License When AT&T finally licensed their code to Brazilian companies, the majority of them decided to drop their local development, use the licensed code, and just "localize" the system for their purposes. COBRA and NCE/UFRJ kept developing, and tried to convince the Brazilian government to prohibit the further entrance of AT&T UNIX into Brazil, since the
https://en.wikipedia.org/wiki/Optical%20manufacturing%20and%20testing
Optical manufacturing and testing spans an enormous range of manufacturing procedures and optical test configurations. The manufacture of a conventional spherical lens typically begins with the generation of the optic's rough shape by grinding a glass blank. This can be done, for example, with ring tools. Next, the lens surface is polished to its final form. Typically this is done by lapping—rotating and rubbing the rough lens surface against a tool with the desired surface shape, with a mixture of abrasives and fluid in between. Typically a carved pitch tool is used to polish the surface of a lens. The mixture of abrasive is called slurry and it is typically made from cerium or zirconium oxide in water with lubricants added to facilitate pitch tool movement without sticking to the lens. The particle size in the slurry is adjusted to get the desired shape and finish. During polishing, the lens may be tested to confirm that the desired shape is being produced, and to ensure that the final shape has the correct form to within the allowed precision. The deviation of an optical surface from the correct shape is typically expressed in fractions of a wavelength, for some convenient wavelength of light (perhaps the wavelength at which the lens is to be used, or a visible wavelength for which a source is available). Inexpensive lenses may have deviations of form as large as several wavelengths (λ, 2λ, etc.). More typical industrial lenses would have deviations no larger than a quarter wavelength (λ/4). Precision lenses for use in applications such as lasers, interferometers, and holography have surfaces with a tenth of a wavelength (λ/10) tolerance or better. In addition to surface profile, a lens must meet requirements for surface quality (scratches, pits, specks, etc.) and accuracy of dimensions. Fabrication techniques Glass blank manufacturing Batch mixing Casting techniques Annealing schedules and equipment Physical characterization techniques Index of ref
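As a quick worked example of the wavelength-fraction tolerances mentioned above (the 632.8 nm HeNe test wavelength below is an illustrative assumption, not from the article), the following converts the quoted surface-form tolerances into nanometres.

# Surface-form tolerance expressed as a fraction of a test wavelength.
test_wavelength_nm = 632.8   # HeNe laser line, a common interferometer source (assumed here)

for label, fraction in [("several wavelengths (2λ)", 2.0),
                        ("industrial quality (λ/4)", 0.25),
                        ("precision optics (λ/10)", 0.10)]:
    tolerance_nm = fraction * test_wavelength_nm
    print(f"{label:26s} -> allowed surface deviation ≈ {tolerance_nm:7.1f} nm")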
https://en.wikipedia.org/wiki/Elementary%20divisors
In algebra, the elementary divisors of a module over a principal ideal domain (PID) occur in one form of the structure theorem for finitely generated modules over a principal ideal domain. If R is a PID and M a finitely generated R-module, then M is isomorphic to a finite direct sum of the form M ≅ R^r ⊕ R/(q_1) ⊕ ⋯ ⊕ R/(q_k), where the (q_i) are nonzero primary ideals. The list of primary ideals is unique up to order (but a given ideal may be present more than once, so the list represents a multiset of primary ideals); the elements q_i are unique only up to associatedness, and are called the elementary divisors. Note that in a PID, the nonzero primary ideals are powers of prime ideals, so the elementary divisors can be written as powers p^a of irreducible elements. The nonnegative integer r is called the free rank or Betti number of the module M. The module is determined up to isomorphism by specifying its free rank r, and, for each class of associated irreducible elements p and each positive integer a, the number of times that p^a occurs among the elementary divisors. The elementary divisors can be obtained from the list of invariant factors of the module by decomposing each of them as far as possible into pairwise relatively prime (non-unit) factors, which will be powers of irreducible elements. This decomposition corresponds to maximally decomposing each submodule corresponding to an invariant factor by using the Chinese remainder theorem for R. Conversely, knowing the multiset L of elementary divisors, the invariant factors can be found, starting from the final one (which is a multiple of all others), as follows. For each irreducible element p such that some power p^a occurs in L, take the highest such power, removing it from L, and multiply these powers together for all (classes of associated) p to give the final invariant factor; as long as L is non-empty, repeat to find the invariant factors before it. See also Invariant factors References Chap.11, p.182. Chap. III.7, p.153 of Module theory
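A small sketch of the two conversions described in the last paragraph, for the case R = ℤ (finitely generated abelian groups); the example values and helper names are my own illustrative choices, not from the article.

from collections import defaultdict
from sympy import factorint

def elementary_divisors(invariant_factors):
    """Split each invariant factor into prime powers (the elementary divisors)."""
    divs = []
    for d in invariant_factors:
        for p, a in factorint(d).items():
            divs.append(p ** a)
    return sorted(divs)

def invariant_factors_from(elem_divisors):
    """Recombine prime-power elementary divisors into invariant factors,
    building the largest factor first (it is a multiple of all the others)."""
    by_prime = defaultdict(list)
    for q in elem_divisors:
        p = list(factorint(q))[0]          # the single prime dividing the prime power q
        by_prime[p].append(q)
    for p in by_prime:
        by_prime[p].sort(reverse=True)
    factors = []
    while any(by_prime.values()):
        f = 1
        for p in by_prime:
            if by_prime[p]:
                f *= by_prime[p].pop(0)    # take the highest remaining power of each prime
        factors.append(f)
    return sorted(factors)                 # listed so that each factor divides the next

print(elementary_divisors([12, 60]))           # [3, 3, 4, 4, 5], i.e. 2^2, 2^2, 3, 3, 5
print(invariant_factors_from([3, 3, 4, 4, 5])) # [12, 60]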
https://en.wikipedia.org/wiki/Geometric%20modeling
Geometric modeling is a branch of applied mathematics and computational geometry that studies methods and algorithms for the mathematical description of shapes. The shapes studied in geometric modeling are mostly two- or three-dimensional (solid figures), although many of its tools and principles can be applied to sets of any finite dimension. Today most geometric modeling is done with computers and for computer-based applications. Two-dimensional models are important in computer typography and technical drawing. Three-dimensional models are central to computer-aided design and manufacturing (CAD/CAM), and widely used in many applied technical fields such as civil and mechanical engineering, architecture, geology and medical image processing. Geometric models are usually distinguished from procedural and object-oriented models, which define the shape implicitly by an opaque algorithm that generates its appearance. They are also contrasted with digital images and volumetric models which represent the shape as a subset of a fine regular partition of space; and with fractal models that give an infinitely recursive definition of the shape. However, these distinctions are often blurred: for instance, a digital image can be interpreted as a collection of colored squares; and geometric shapes such as circles are defined by implicit mathematical equations. Also, a fractal model yields a parametric or implicit model when its recursive definition is truncated to a finite depth. Notable awards of the area are the John A. Gregory Memorial Award and the Bézier award. See also 2D geometric modeling Architectural geometry Computational conformal geometry Computational topology Computer-aided engineering Computer-aided manufacturing Digital geometry Geometric modeling kernel List of interactive geometry software Parametric equation Parametric surface Solid modeling Space partitioning References Further reading General textbooks: This book is out of print
https://en.wikipedia.org/wiki/Apollonius%27s%20theorem
In geometry, Apollonius's theorem is a theorem relating the length of a median of a triangle to the lengths of its sides. It states that "the sum of the squares of any two sides of any triangle equals twice the square on half the third side, together with twice the square on the median bisecting the third side". Specifically, in any triangle ABC, if AD is a median, then AB² + AC² = 2(AD² + BD²). It is a special case of Stewart's theorem. For an isosceles triangle with AB = AC, the median AD is perpendicular to BC and the theorem reduces to the Pythagorean theorem for triangle ABD (or triangle ACD). From the fact that the diagonals of a parallelogram bisect each other, the theorem is equivalent to the parallelogram law. The theorem is named for the ancient Greek mathematician Apollonius of Perga. Proof The theorem can be proved as a special case of Stewart's theorem, or can be proved using vectors (see parallelogram law). The following is an independent proof using the law of cosines. Let the triangle have sides a, b, c with a median d drawn to side a. Let m be the length of the segments of a formed by the median, so m is half of a. Let the angles formed between a and d be θ and θ′, where θ includes b and θ′ includes c. Then θ′ is the supplement of θ and cos θ′ = −cos θ. The law of cosines for θ and θ′ states that b² = m² + d² − 2dm cos θ and c² = m² + d² − 2dm cos θ′ = m² + d² + 2dm cos θ. Add the first and third equations to obtain b² + c² = 2(m² + d²), as required. See also References External links David B. Surowski: Advanced High-School Mathematics. p. 27 Euclidean geometry Articles containing proofs Theorems about triangles
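A quick numeric check of the identity just stated, using an arbitrary triangle placed on coordinates (the coordinates are my own illustrative choice):

import math

# Arbitrary triangle ABC; D is the midpoint of BC, so AD is the median to BC.
A, B, C = (1.0, 5.0), (-2.0, 0.0), (6.0, 1.0)
D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

lhs = dist(A, B) ** 2 + dist(A, C) ** 2          # AB^2 + AC^2
rhs = 2 * (dist(A, D) ** 2 + dist(B, D) ** 2)    # 2(AD^2 + BD^2)
print(lhs, rhs, math.isclose(lhs, rhs))          # both sides equal 75.0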
https://en.wikipedia.org/wiki/Jan%20L.%20A.%20van%20de%20Snepscheut
Johannes Lambertus Adriana van de Snepscheut (12 September 1953 – 23 February 1994) was a computer scientist and educator. He was a student of Martin Rem and Edsger Dijkstra. At the time of his death he was the executive officer of the computer science department at the California Institute of Technology. He was also developing an editor for proving theorems called "Proxac". In the early morning hours of February 23, 1994, van de Snepscheut attacked his sleeping wife, Terre, with an axe. He then set their house on fire, and died as it burned around him. Terre and their three children escaped their burning home. Bibliography Jan L. A. Van De Snepscheut, Gerrit A. Slavenburg, Introducing the notion of processes to hardware, ACM SIGARCH Computer Architecture News, April 1979. Jan L. A. Van De Snepscheut, Trace Theory and VLSI Design, Lecture Notes in Computer Science, Volume 200, Springer, 1985. Jan L. A. Van De Snepscheut, What computing is all about. Springer, 1993. References External links Article based on the back story to these events 1953 births 1994 deaths Van De Snepscheut, Jan L. A. Dutch computer scientists Eindhoven University of Technology alumni People from Oosterhout Software engineering researchers Academic staff of the University of Groningen People from La Cañada Flintridge, California
https://en.wikipedia.org/wiki/History%20of%20penicillin
The history of penicillin follows observations and discoveries of evidence of antibiotic activity of the mould Penicillium that led to the development of penicillins that became the first widely used antibiotics. Following the production of a relatively pure compound in 1942, penicillin was the first naturally-derived antibiotic. Ancient societies used moulds to treat infections, and in the following centuries many people observed the inhibition of bacterial growth by moulds. While working at St Mary's Hospital in London in 1928, Scottish physician Alexander Fleming was the first to experimentally determine that a Penicillium mould secretes an antibacterial substance, which he named "penicillin". The mould was found to be a variant of Penicillium notatum (now called Penicillium rubens), a contaminant of a bacterial culture in his laboratory. The work on penicillin at St Mary's ended in 1929. In 1939, a team of scientists at the Sir William Dunn School of Pathology at the University of Oxford, led by Howard Florey that included Edward Abraham, Ernst Chain, Mary Ethel Florey, Norman Heatley and Margaret Jennings, began researching penicillin. They developed a method for cultivating the mould and extracting, purifying and storing penicillin from it, together with an assay for measuring its purity. They carried out experiments with animals to determine penicillin's safety and effectiveness before conducting clinical trials and field tests. They derived its chemical structure and determined how it works. The private sector and the United States Department of Agriculture located and produced new strains and developed mass production techniques. During the Second World War penicillin became an important part of the Allied war effort, saving thousands of lives. Alexander Fleming, Howard Florey and Ernst Chain shared the 1945 Nobel Prize in Physiology or Medicine for the discovery and development of penicillin. After the end of the war in 1945, penicillin became widely a
https://en.wikipedia.org/wiki/Power%20module
A power module or power electronic module provides the physical containment for several power components, usually power semiconductor devices. These power semiconductors (so-called dies) are typically soldered or sintered on a power electronic substrate that carries the power semiconductors, provides electrical and thermal contact and electrical insulation where needed. Compared to discrete power semiconductors in plastic housings as TO-247 or TO-220, power packages provide a higher power density and are in many cases more reliable. Module Topologies Besides modules that contain a single power electronic switch (as MOSFET, IGBT, BJT, Thyristor, GTO or JFET) or diode, classical power modules contain multiple semiconductor dies that are connected to form an electrical circuit of a certain structure, called topology. Modules also contain other components such as ceramic capacitors to minimize switching voltage overshoots and NTC thermistors to monitor the module's substrate temperature. Examples of broadly available topologies implemented in modules are: switch (MOSFET, IGBT), with antiparallel Diode; bridge rectifier containing four (1-phase) or six (3-phase) diodes half bridge (inverter leg, with two switches and their corresponding antiparallel diodes) H-Bridge (four switches and the corresponding antiparallel diodes) boost or power factor correction (one (or two) switches with one (or two) high frequency rectifying diodes) ANPFC (power factor correction leg with two switches and their corresponding antiparallel diodes and four high frequency rectifying diodes) three level NPC (I-Type) (multilevel inverter leg with four switches and their corresponding antiparallel diodes) three level MNPC (T-Type) (multilevel inverter leg with four switches and their corresponding antiparallel diodes) three level ANPC (multilevel inverter leg with six switches and their corresponding antiparallel diodes) three level H6.5 - (consisting of six switches (four fast IGBTs/tw
https://en.wikipedia.org/wiki/Wireless%20site%20survey
A wireless site survey, sometimes called an RF (Radio Frequency) site survey or wireless survey, is the process of planning and designing a wireless network, to provide a wireless solution that will deliver the required wireless coverage, data rates, network capacity, roaming capability and quality of service (QoS). The survey usually involves a site visit to test for RF interference, and to identify optimum installation locations for access points. This requires analysis of building floor plans, inspection of the facility, and use of site survey tools. Interviews with IT management and the end users of the wireless network are also important to determine the design parameters for the wireless network. As part of the wireless site survey, the effective range boundary is set, which defines the area over which the signal levels needed to support the intended application are available. This involves determining the minimum signal-to-noise ratio (SNR) needed to support performance requirements. Wireless site survey can also mean the walk-testing, auditing, analysis or diagnosis of an existing wireless network, particularly one which is not providing the level of service required. Wireless site survey process Wireless site surveys are typically conducted using computer software that collects and analyses WLAN metrics and/or RF spectrum characteristics. Before a survey, a floor plan or site map is imported into a site survey application and calibrated to set scale. During a survey, a surveyor walks the facility with a portable computer that continuously records the data. The surveyor either marks the current position on the floor plan manually, by clicking on the floor plan, or uses a GPS receiver that automatically marks the current position if the survey is conducted outdoors. After a survey, data analysis is performed and survey results are documented in site survey reports generated by the application. All these data collection, analysis, and visualization tasks are highly automated
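To illustrate the signal-to-noise check described above, here is a small sketch (the RSSI and noise figures, the 25 dB SNR requirement, and the survey-point names are invented for illustration) that flags which surveyed locations fall inside the effective range boundary.

# Minimum SNR needed for the intended application (assumed requirement, in dB).
MIN_SNR_DB = 25.0

# Surveyed points: (location, received signal strength in dBm, noise floor in dBm).
survey_points = [
    ("lobby",        -52.0, -92.0),
    ("conference A", -61.0, -90.0),
    ("warehouse",    -74.0, -88.0),
]

for location, signal_dbm, noise_dbm in survey_points:
    snr_db = signal_dbm - noise_dbm          # SNR in dB is the difference of the two levels
    ok = snr_db >= MIN_SNR_DB
    print(f"{location:12s} SNR = {snr_db:4.1f} dB -> {'inside' if ok else 'outside'} range boundary")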
https://en.wikipedia.org/wiki/Contactless%20smart%20card
A contactless smart card is a contactless credential whose dimensions are credit card size. Its embedded integrated circuits can store (and sometimes process) data and communicate with a terminal via NFC. Commonplace uses include transit tickets, bank cards and passports. There are two broad categories of contactless smart cards. Memory cards contain non-volatile memory storage components, and perhaps some specific security logic. Contactless smart cards contain read-only RFID called CSN (Card Serial Number) or UID, and a re-writeable smart card microchip that can be transcribed via radio waves. Overview A contactless smart card is characterized as follows: Dimensions are normally credit card size. The ID-1 of ISO/IEC 7810 standard defines them as 85.60 × 53.98 × 0.76 mm (3.370 × 2.125 × 0.030 in). Contains a security system with tamper-resistant properties (e.g. a secure cryptoprocessor, secure file system, human-readable features) and is capable of providing security services (e.g. confidentiality of information in the memory). Assets managed by way of a central administration systems, or applications, which receive or interchange information with the card, such as card hotlisting and updates for application data. Card data is transferred via radio waves to the central administration system through card read-write devices, such as point of sales devices, doorway access control readers, ticket readers, ATMs, USB-connected desktop readers, etc. Benefits Contactless smart cards can be used for identification, authentication, and data storage. They also provide a means of effecting business transactions in a flexible, secure, standard way with minimal human intervention. History Contactless smart cards were first used for electronic ticketing in 1995 in Seoul, South Korea. Since then, smart cards with contactless interfaces have been increasingly popular for payment and ticketing applications such as mass transit. Globally, contactless fare collection is bei
https://en.wikipedia.org/wiki/Linear%20energy%20transfer
In dosimetry, linear energy transfer (LET) is the amount of energy that an ionizing particle transfers to the material traversed per unit distance. It describes the action of radiation into matter. It is identical to the retarding force acting on a charged ionizing particle travelling through the matter. By definition, LET is a positive quantity. LET depends on the nature of the radiation as well as on the material traversed. A high LET will slow down the radiation more quickly, generally making shielding more effective and preventing deep penetration. On the other hand, the higher concentration of deposited energy can cause more severe damage to any microscopic structures near the particle track. If a microscopic defect can cause larger-scale failure, as is the case in biological cells and microelectronics, the LET helps explain why radiation damage is sometimes disproportionate to the absorbed dose. Dosimetry attempts to factor in this effect with radiation weighting factors. Linear energy transfer is closely related to stopping power, since both equal the retarding force. The unrestricted linear energy transfer is identical to linear electronic stopping power, as discussed below. But the stopping power and LET concepts are different in the respect that total stopping power has the nuclear stopping power component, and this component does not cause electronic excitations. Hence nuclear stopping power is not contained in LET. The appropriate SI unit for LET is the newton, but it is most typically expressed in units of kiloelectronvolts per micrometre (keV/μm) or megaelectronvolts per centimetre (MeV/cm). While medical physicists and radiobiologists usually speak of linear energy transfer, most non-medical physicists talk about stopping power. Restricted and unrestricted LET The secondary electrons produced during the process of ionization by the primary charged particle are conventionally called delta rays, if their energy is large enough so that they thems
https://en.wikipedia.org/wiki/Virtual%20Physiological%20Human
The Virtual Physiological Human (VPH) is a European initiative that focuses on a methodological and technological framework that, once established, will enable collaborative investigation of the human body as a single complex system. The collective framework will make it possible to share resources and observations formed by institutions and organizations, creating disparate but integrated computer models of the mechanical, physical and biochemical functions of a living human body. VPH is a framework which aims to be descriptive, integrative and predictive. Clapworthy et al. state that the framework should be descriptive by allowing laboratory and healthcare observations around the world "to be collected, catalogued, organized, shared and combined in any possible way." It should be integrative by enabling those observations to be collaboratively analyzed by related professionals in order to create "systemic hypotheses." Finally, it should be predictive by encouraging interconnections between extensible and scalable predictive models and "systemic networks that solidify those systemic hypotheses" while allowing observational comparison. The framework is formed by large collections of anatomical, physiological, and pathological data stored in digital format, typically by predictive simulations developed from these collections and by services intended to support researchers in the creation and maintenance of these models, as well as in the creation of end-user technologies to be used in the clinical practice. VPH models aim to integrate physiological processes across different length and time scales (multi-scale modelling). These models make possible the combination of patient-specific data with population-based representations. The objective is to develop a systemic approach which avoids a reductionist approach and seeks not to subdivide biological systems in any particular way by dimensional scale (body, organ, tissue, cells, molecules), by scientific discipline (
https://en.wikipedia.org/wiki/Physiome
The physiome of an individual's or species' physiological state is the description of its functional behavior. The physiome describes the physiological dynamics of the normal intact organism and is built upon information and structure (genome, proteome, and morphome). The term comes from "physio-" (nature) and "-ome" (as a whole). The concept of a physiome project was presented to the International Union of Physiological Sciences (IUPS) by its Commission on Bioengineering in Physiology in 1993. A workshop on designing the Physiome Project was held in 1997. At its world congress in 2001, the IUPS designated the project as a major focus for the next decade. The project is led by the Physiome Commission of the IUPS. Other research initiatives related to the physiome include: The EuroPhysiome Initiative The NSR Physiome Project of the National Simulation Resource (NSR) at the University of Washington, supporting the IUPS Physiome Project The Wellcome Trust Heart Physiome Project, a collaboration between the University of Auckland and the University of Oxford, part of the wider IUPS Physiome Project See also Physiomics Living Human Project Virtual Physiological Human Virtual Physiological Rat Cytome Human Genome Project List of omics topics in biology Cardiophysics References External links National Resource for Cell Analysis and Modeling (NRCAM) Physiology Biophysics
https://en.wikipedia.org/wiki/Retrotransposon%20marker
Retrotransposon markers are components of DNA which are used as cladistic markers. They assist in determining the common ancestry, or not, of related taxa. The "presence" of a given retrotransposon in related taxa suggests their orthologous integration, a derived condition acquired via a common ancestry, while the "absence" of particular elements indicates the plesiomorphic condition prior to integration in more distant taxa. The use of presence/absence analyses to reconstruct the systematic biology of mammals depends on the availability of retrotransposons that were actively integrating before the divergence of a particular species. Details The analysis of SINEs – Short INterspersed Elements – LINEs – Long INterspersed Elements – or truncated LTRs – Long Terminal Repeats – as molecular cladistic markers represents a particularly interesting complement to DNA sequence and morphological data. The reason for this is that retrotransposons are assumed to represent powerful noise-poor synapomorphies. The target sites are relatively unspecific so that the chance of an independent integration of exactly the same element into one specific site in different taxa is not large and may even be negligible over evolutionary time scales. Retrotransposon integrations are currently assumed to be irreversible events; this might change since no eminent biological mechanisms have yet been described for the precise re-excision of class I transposons, but see van de Lagemaat et al. (2005). A clear differentiation between ancestral and derived character state at the respective locus thus becomes possible as the absence of the introduced sequence can be with high confidence considered ancestral. In combination, the low incidence of homoplasy together with a clear character polarity make retrotransposon integration markers ideal tools for determining the common ancestry of taxa by a shared derived transpositional event. The "presence" of a given retrotransposon in related taxa suggests
https://en.wikipedia.org/wiki/Molecular%20beacon
Molecular beacons, or molecular beacon probes, are oligonucleotide hybridization probes that can report the presence of specific nucleic acids in homogenous solutions. Molecular beacons are hairpin-shaped molecules with an internally quenched fluorophore whose fluorescence is restored when they bind to a target nucleic acid sequence. This is a novel non-radioactive method for detecting specific sequences of nucleic acids. They are useful in situations where it is either not possible or desirable to isolate the probe-target hybrids from an excess of the hybridization probes. Molecular beacon probes A typical molecular beacon probe is 25 nucleotides long. The middle 15 nucleotides are complementary to the target DNA or RNA and do not base pair with one another, while the five nucleotides at each terminus are complementary to each other rather than to the target DNA. A typical molecular beacon structure can be divided in 4 parts: 1) loop, an 18–30 base pair region of the molecular beacon that is complementary to the target sequence; 2) stem formed by the attachment to both termini of the loop of two short (5 to 7 nucleotide residues) oligonucleotides that are complementary to each other; 3) 5' fluorophore at the 5' end of the molecular beacon, a fluorescent dye is covalently attached; 4) 3' quencher (non fluorescent) dye that is covalently attached to the 3' end of the molecular beacon. When the beacon is in closed loop shape, the quencher resides in proximity to the fluorophore, which results in quenching the fluorescent emission of the latter. If the nucleic acid to be detected is complementary to the strand in the loop, the event of hybridization occurs. The duplex formed between the nucleic acid and the loop is more stable than that of the stem because the former duplex involves more base pairs. This causes the separation of the stem and hence of the fluorophore and the quencher. Once the fluorophore is no longer next to the quencher, illumination of the hybrid
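To make the four-part layout above concrete, here is a toy design sketch (the target sequence, the 5-nucleotide stem arm, and the helper names are invented; real probe design must also consider melting temperatures and secondary structure). The loop is the reverse complement of the target, and the two stem arms are reverse complements of each other, so the free probe folds into a hairpin.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def design_beacon(target, stem_arm="GCGAG"):
    """Return a 5'->3' beacon sequence: stem arm + loop (reverse complement of target) + closing arm."""
    loop = reverse_complement(target)            # hybridizes to the target sequence
    closing_arm = reverse_complement(stem_arm)   # pairs with the 5' arm to close the hairpin
    return stem_arm + loop + closing_arm         # fluorophore at the 5' end, quencher at the 3' end

target = "ATGGCTAGCTGGAGT"    # invented 15-nt target sequence
beacon = design_beacon(target)
print(beacon)
print(len(beacon), "nt total: 5 (stem) + 15 (loop) + 5 (stem)")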
https://en.wikipedia.org/wiki/CER-200
CER (Digital Electronic Computer) model 200 is an early digital computer developed by Mihajlo Pupin Institute (Serbia) in 1966. See also CER Computers Mihajlo Pupin Institute History of computer hardware in the SFRY One-of-a-kind computers CER computers
https://en.wikipedia.org/wiki/Sun%20Cloud
Sun Cloud was an on-demand Cloud computing service operated by Sun Microsystems prior to its acquisition by Oracle Corporation. The Sun Cloud Compute Utility provided access to a substantial computing resource over the Internet for US$1 per CPU-hour. It was launched as Sun Grid in March 2006. It was based on and supported open source technologies such as Solaris 10, Sun Grid Engine, and the Java platform. Sun Cloud delivered enterprise computing power and resources over the Internet, enabling developers, researchers, scientists and businesses to optimize performance, speed time to results, and accelerate innovation without investment in IT infrastructure. In early 2010 Oracle announced it was discontinuing the Sun Cloud project. Since Sunday, March 7, 2010, the network.com web site has been inaccessible. Suitable applications A typical application that could run on the Compute Utility fit the following parameters: must be self-contained runs on the Solaris 10 Operating System (OS) is implemented with standard object libraries included with the Solaris 10 OS or user libraries packaged with the executable all executable code must be available on the Compute Utility at time of execution runs to completion under control of shell scripts (no requirement for interactive access) has a total maximum size of applications and data that does not exceed 10 gigabytes can be packaged for upload to Sun Cloud as one or more ZIP files of 300 megabytes or smaller Resources, jobs, and runs Resources are collections of files that contain the user's data and executable. Jobs are a Compute Utility concept that define the elements of the unit of work that is submitted to the Sun Cloud Compute Utility. The major elements of a job include the name of the shell script controlling program execution, required arguments to the shell script, and a list of resources that must be in place for the job to run. A run is a specific instantiation of a Job description submitted to the Su
https://en.wikipedia.org/wiki/HRS-100
HRS-100, ХРС-100, GVS-100 or ГВС-100, (see Ref.#1, #2, #3 and #4) (, , ) was a third generation hybrid computer developed by Mihajlo Pupin Institute (Serbia, then SFR Yugoslavia) and engineers from USSR in the period from 1968 to 1971. Three systems HRS-100 were deployed in Academy of Sciences of USSR in Moscow and Novosibirsk (Akademgorodok) in 1971 and 1978. More production was contemplated for use in Czechoslovakia and German Democratic Republic (DDR), but that was not realised. HRS-100 was invented and developed to study the dynamical systems in real and accelerated scale time and for efficient solving of wide array of scientific tasks at the institutes of the A.S. of USSR (in the fields: Aerospace-nautics, Energetics, Control engineering, Microelectronics, Telecommunications, Bio-medical investigations, Chemical industry etc.). Overview HRS-100 was composed of: Digital computer: central processor 16 kilowords of 0.9 μs 36-bit magnetic core primary memory, expandable to 64 kilowords. secondary disk storage peripheral devices (teleprinters, punched tape reader/punchers, parallel printers and punched card readers). multiple Analog computer modules Interconnection devices multiple analog and digital Peripheral devices Central processing unit HRS-100 has a 32-bit TTL MSI processor with following capabilities: four basic arithmetic operations are implemented in hardware for both fixed point and floating point operations Addressing modes: immediate/literal, absolute/direct, relative, unlimited-depth multi-level memory indirect and relative-indirect 7 index registers and dedicated "index arithmetic" hardware 32 interrupt "channels" (10 from within the CPU, 10 from peripherals and 12 from interconnection devices and analog computer) Primary memory Primary memory was made up of 0.9 μs cycle time magnetic core modules. Each 36-bit word is organized as follows: 32 data bits 1 parity bit 3 program protection bits specifying which program (Operating Sys
https://en.wikipedia.org/wiki/TIM-100
The TIM-100 was a PTT teller microcomputer developed by the Mihajlo Pupin Institute (Serbia) in 1985 (Ref. #1). It was based on Intel 80x86 microprocessors and VLSI circuitry. It supported up to 8 MB of RAM, and external storage was provided by 5.25-inch or 3.5-inch floppy disks (Ref. #2, #3 and #4). Its multi-user, multitasking operating systems were the real-time NRT and TRANOS (developed by the PTT office). See also Mihajlo Pupin Institute History of computer hardware in the SFRY Microcomputers References 1. D. Milicevic, D. Starcevic, D. Hristovic: "Architecture and Applications of the TIM Computers", Primenjena nauka, No. 14, pp. 23–30, Belgrade, May 1988. (in Serbian) 2. Dragoljub Milicevic, Dusan Hristovic (Ed.): RACUNARI TIM (TIM Computers), Naucna Knjiga, Belgrade, 1990. 3. Dusan Hristovic: "Computing Technology in Serbia", Phlogiston, No. 18/19, pp. 89–105, Museum MNT-SANU, Belgrade, 2010/2011. (in Serbian) 4. D. B. Vujaklija, N. Markovic (Ed.): 50 Years of Computing in Serbia - Hronika digitalnih decenija, pp. 37–44 and 75–86, PC Press, Belgrade, 2011. (in Serbian). Fig. 4: TIM designers from the Mihajlo Pupin Institute, Belgrade (Wikimedia). Mihajlo Pupin Institute IBM PC compatibles Computing by computer model
https://en.wikipedia.org/wiki/Biomarker
In biomedical contexts, a biomarker, or biological marker, is a measurable indicator of some biological state or condition. Biomarkers are often measured and evaluated using blood, urine, or soft tissues to examine normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention. Biomarkers are used in many scientific fields. Medicine Biomarkers used in the medical field, are a part of a relatively new clinical toolset categorized by their clinical applications. The four main classes are molecular, physiologic, histologic and radiographic biomarkers. All four types of biomarkers have a clinical role in narrowing or guiding treatment decisions and follow a sub-categorization of being either predictive, prognostic, or diagnostic. Predictive Predictive molecular, cellular, or imaging biomarkers that pass validation can serve as a method of predicting clinical outcomes. Predictive biomarkers are used to help optimize ideal treatments, and often indicate the likelihood of benefiting from a specific therapy. For example, molecular biomarkers situated at the interface of pathology-specific molecular process architecture and drug mechanism of action promise capturing aspects allowing assessment of an individual treatment response. This offers a dual approach to both seeing trends in retrospective studies and using biomarkers to predict outcomes. For example, in metastatic colorectal cancer predictive biomarkers can serve as a way of evaluating and improving patient survival rates and in the individual case by case scenario, they can serve as a way of sparing patients from needless toxicity that arises from cancer treatment plans. Common examples of predictive biomarkers are genes such as ER, PR and HER2/neu in breast cancer, BCR-ABL fusion protein in chronic myeloid leukaemia, c-KIT mutations in GIST tumours and EGFR1 mutations in NSCLC. Diagnostic Diagnostic biomarkers that meet a burden of proof can serve a role in narrowi
https://en.wikipedia.org/wiki/Biomarker%20%28medicine%29
In medicine, a biomarker is a measurable indicator of the severity or presence of some disease state. It may be defined as a "cellular, biochemical or molecular alteration in cells, tissues or fluids that can be measured and evaluated to indicate normal biological processes, pathogenic processes, or pharmacological responses to a therapeutic intervention." More generally a biomarker is anything that can be used as an indicator of a particular disease state or some other physiological state of an organism. According to the WHO, the indicator may be chemical, physical, or biological in nature - and the measurement may be functional, physiological, biochemical, cellular, or molecular. A biomarker can be a substance that is introduced into an organism as a means to examine organ function or other aspects of health. For example, rubidium chloride is used in isotopic labeling to evaluate perfusion of heart muscle. It can also be a substance whose detection indicates a particular disease state, for example, the presence of an antibody may indicate an infection. More specifically, a biomarker indicates a change in expression or state of a protein that correlates with the risk or progression of a disease, or with the susceptibility of the disease to a given treatment. Biomarkers can be characteristic biological properties or molecules that can be detected and measured in parts of the body like the blood or tissue. They may indicate either normal or diseased processes in the body. Biomarkers can be specific cells, molecules, or genes, gene products, enzymes, or hormones. Complex organ functions or general characteristic changes in biological structures can also serve as biomarkers. Although the term biomarker is relatively new, biomarkers have been used in pre-clinical research and clinical diagnosis for a considerable time. For example, body temperature is a well-known biomarker for fever. Blood pressure is used to determine the risk of stroke. It is also widely known that
https://en.wikipedia.org/wiki/IBM%203270%20PC
The IBM 3270 PC (IBM System Unit 5271), released in October 1983, is an IBM PC XT containing additional hardware that, in combination with software, can emulate the behaviour of an IBM 3270 terminal. It can therefore be used both as a standalone computer, and as a terminal to a mainframe. IBM later released the 3270 AT (IBM System Unit 5273), which is a similar design based on the IBM PC AT. They also released high-end graphics versions of the 3270 PC in both XT and AT variants. The XT-based versions are called 3270 PC/G and 3270 PC/GX and they use a different System Unit 5371, while their AT counterparts (PC AT/G and PC AT/GX) have System Unit 5373. Technology The additional hardware occupies nearly all the free expansion slots in the computer. It includes a video card which occupies 1-3 ISA slots (depending on what level of graphics support is required), and supports CGA and MDA video modes. The display resolution is 720×350, either on the matching 14-inch color monitor (model 5272) or in monochrome on an MDA monitor. A further expansion card intercepts scancodes from the 122-key 3270 keyboard, translating them into XT scancodes which are then sent to the normal keyboard connector. This keyboard, officially called the 5271 Keyboard Element, weighs 9.3 pounds. The final additional card (a 3278 emulator) provides the communication interface to the host mainframe. Models 3270 PC (System Unit 5271) - original 3270 PC, initially offered in three different Models numbered 2, 4, and 6. Model 2 has non-expandable memory of 256 KB and a single floppy drive. Model 4 has expandable memory, a second floppy drive, and a parallel port. Model 6 replaces one of the floppy drives with a 10 MB hard disk. Model 6 had a retail price of at its launch (with 512KB RAM), not including display, cables and software; a working configuration with an additional 192KB RAM, color display (model 5272) and the basic cabling and software (but without support for host/mainframe-side graph
https://en.wikipedia.org/wiki/Human%20error
Human error is an action that has been done but that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits". Human error has been cited as a primary cause or contributing factor in disasters and accidents in industries as diverse as nuclear power (e.g., the Three Mile Island accident), aviation, space exploration (e.g., the Space Shuttle Challenger disaster and Space Shuttle Columbia disaster), and medicine. Prevention of human error is generally seen as a major contributor to reliability and safety of (complex) systems. Human error is one of the many contributing causes of risk events. Definition Human error refers to something having been done that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits". In short, it is a deviation from intention, expectation or desirability. Logically, human actions can fail to achieve their goal in two different ways: the actions can go as planned, but the plan can be inadequate (leading to mistakes); or, the plan can be satisfactory, but the performance can be deficient (leading to slips and lapses). However, a mere failure is not an error if there had been no plan to accomplish something in particular. Performance Human error and performance are two sides of the same coin: "human error" mechanisms are the same as "human performance" mechanisms; performance is categorized as 'error' only in hindsight: therefore actions later termed "human error" are actually part of the ordinary spectrum of human behaviour. The study of absent-mindedness in everyday life provides ample documentation and categorization of such aspects of behavior. While human error is firmly entrenched in the classical approaches to accident investigation and risk assessment, it has no role in newer approaches such as resilience engineering. Categories There are many w
https://en.wikipedia.org/wiki/Non-cellular%20life
Non-cellular life, also known as acellular life, is life that exists without a cellular structure for at least part of its life cycle. Historically, most definitions of life postulated that an organism must be composed of one or more cells, but this is no longer considered necessary, and modern criteria allow for forms of life based on other structural arrangements. The primary candidates for non-cellular life are viruses. Some biologists consider viruses to be organisms, but others do not. Their primary objection is that no known viruses are capable of autonomous reproduction; they must rely on cells to copy them. Viruses as non-cellular life The nature of viruses was unclear for many years following their discovery as pathogens. They were described as poisons or toxins at first, then as "infectious proteins", but with advances in microbiology it became clear that they also possessed genetic material, a defined structure, and the ability to spontaneously assemble from their constituent parts. This spurred extensive debate as to whether they should be regarded as fundamentally organic or inorganic — as very small biological organisms or very large biochemical molecules — and since the 1950s many scientists have thought of viruses as existing at the border between chemistry and life; a gray area between living and nonliving. Viral replication and self-assembly has implications for the study of the origin of life, as it lends further credence to the hypotheses that cells and viruses could have started as a pool of replicators where selfish genetic information was parasitizing on producers in RNA world, as two strategies to survive, gained in response to environmental conditions, or as self-assembling organic molecules. Viroids Viroids are the smallest infectious pathogens known to biologists, consisting solely of short strands of circular, single-stranded RNA without protein coats. They are mostly plant pathogens and some are animal pathogens, from which some ar
https://en.wikipedia.org/wiki/Packet%20Clearing%20House
Packet Clearing House (PCH) is the international nonprofit organization responsible for providing operational support and security to critical internet infrastructure, including Internet exchange points and the core of the domain name system. The organization also works in the areas of cybersecurity coordination, regulatory policy and Internet governance. Overview Packet Clearing House (PCH) was formed in 1994 by Chris Alan and Mark Kent to provide efficient regional and local network interconnection alternatives for the West Coast of the United States. It has grown to become a leading proponent of neutral independent network interconnection and provider of route-servers at major exchange points worldwide. PCH provides equipment, training, data, and operational support to organizations and individual researchers seeking to improve the quality, robustness, and Internet accessibility. , major PCH projects include Building and supporting nearly half of the world's approximately 700 Internet exchange points (IXPs), and maintaining the canonical index of Internet exchange points, with data going back to 1994; Operating the world's largest anycast Domain Name System (DNS) server platform, including two root nameservers, more than 400 top-level domains (TLDs) including the country-code domains of more than 130 countries, and the Quad9 recursive resolver; Operating the only FIPS 140-2 Level 4 global TLD DNSSEC key management and signing infrastructure, with facilities in Singapore, Zurich, and San Jose; Implementing network research data collection initiatives in more than 100 countries; Publishing original research and policy guidance in the areas of telecommunications regulation, including the 2011 and 2016 Interconnection Surveys, country reports such as those for Canada in 2012 and 2016 and Paraguay in 2012, and a survey of critical infrastructure experts for the GCSC; and Developing and presenting educational materials to foster a better under
https://en.wikipedia.org/wiki/INOC-DBA
The INOC-DBA (Inter-Network Operations Center Dial-By-ASN) hotline phone system is a global voice telephony network that connects the network operations centers and security incident response teams of critical Internet infrastructure providers such as backbone carriers, Internet service providers, and Internet exchanges as well as critical individuals within the policy, regulatory, Internet governance, security and vendor communities. It was built by Packet Clearing House in 2001, was publicly announced at NANOG in October of 2002, and the secretariat function was transferred from PCH to the Brazilian CERT in 2015. INOC-DBA is a closed system, ensuring secure and authenticated communications, and uses a combination of redundant directory services and direct peer-to-peer communications between stations to create a resilient, high-survivability network. It carries both routine operational traffic and emergency-response traffic. The INOC-DBA network uses IETF-standard SIP Voice over IP protocols to ensure interoperability between thousands of users across more than 2,800 NOCs and CERTs, which use dozens of different varieties of station and switch devices. It was the first production implementation of inter-carrier SIP telephony, when voice over IP had previously consisted exclusively of H.323 gateway-to-gateway call transport. INOC-DBA became the first telephone network of any kind to provide service on all seven continents when Michael Holstine of Raytheon Polar Services installed terminals at the South Pole Station in March of 2001. References External links INOC-DBA directory. A directory of the subset of INOC-DBA participants who choose to be publicly listed. Sobre INOC-DBA. A Portuguese introduction to the INOC-DBA system, maintained by NIC-BR, the Brazilian national Internet registry. INOC-DBA technical discussion mailing list archives INOC-DBA announcement presentation in PDF format INOC-DBA announcement demonstration in QuickTime format The initial
https://en.wikipedia.org/wiki/Green%27s%20matrix
In mathematics, and in particular ordinary differential equations, a Green's matrix helps to determine a particular solution to a first-order inhomogeneous linear system of ODEs. The concept is named after George Green. For instance, consider where is a vector and is an matrix function of , which is continuous for , where is some interval. Now let be linearly independent solutions to the homogeneous equation and arrange them in columns to form a fundamental matrix: Now is an matrix solution of . This fundamental matrix will provide the homogeneous solution, and if added to a particular solution will give the general solution to the inhomogeneous equation. Let be the general solution. Now, This implies or where is an arbitrary constant vector. Now the general solution is The first term is the homogeneous solution and the second term is the particular solution. Now define the Green's matrix The particular solution can now be written External links An example of solving an inhomogeneous system of linear ODEs and finding a Green's matrix from www.exampleproblems.com. Ordinary differential equations Matrices
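For reference, a sketch of the standard construction in conventional notation (the symbol choices here are illustrative, not fixed by the text above): the system is
\[ \dot{x}(t) = A(t)\,x(t) + g(t), \]
with x(t) an n-vector and A(t) an n × n matrix continuous on an interval I. If x_1, \dots, x_n are linearly independent solutions of the homogeneous equation \dot{x} = A(t)x, arranged as the columns of the fundamental matrix X(t), then the substitution x(t) = X(t)\,v(t) gives X(t)\,\dot{v}(t) = g(t), so
\[ v(t) = c + \int_{t_0}^{t} X^{-1}(s)\,g(s)\,ds \]
for an arbitrary constant vector c, and hence
\[ x(t) = X(t)\,c + \int_{t_0}^{t} X(t)\,X^{-1}(s)\,g(s)\,ds. \]
With the Green's matrix defined as G(t, s) = X(t)\,X^{-1}(s), the particular solution is x_p(t) = \int_{t_0}^{t} G(t, s)\,g(s)\,ds.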
https://en.wikipedia.org/wiki/Body%20composition
In physical fitness, body composition refers to quantifying the different components (or "compartments") of a human body. The selection of compartments varies by model but may include fat, bone, water, and muscle. Two people of the same gender, height, and body weight may have completely different body types as a consequence of having different body compositions. This may be explained by a person having low or high body fat, dense muscles, or big bones. Compartment models Body composition models typically use between 2 and 6 compartments to describe the body. Common models include: 2 compartment: Fat mass (FM) and fat-free mass (FFM) 3 compartment: Fat mass (FM), water, and fat-free dry mass 4 compartment: Fat mass (FM), water, protein, and mineral 5 compartment: Fat mass (FM), water, protein, bone mineral content, and non-osseous mineral content 6 compartment: Fat mass (FM), water, protein, bone mineral content, non-osseous mineral content, and glycogen As a rule, the compartments must sum to the body weight. The proportion of each compartment as a percent is often reported, found by dividing the compartment weight by the body weight. Individual compartments may be estimated based on population averages or measured directly or indirectly. Many measurement methods exist with varying levels of accuracy. Typically, the higher compartment models are more accurate, as they require more data and thus account for more variation across individuals. The four compartment model is considered the reference model for assessment of body composition as it is robust to most variation and each of its components can be measured directly. Measurement methods A wide variety of body composition measurement methods exist. The "gold standard" measurement technique for the 4-compartment model consists of a weight measurement, body density measurement using hydrostatic weighing or air displacement plethysmography, total body water calculation using isotope dilution analysis, a
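As a minimal illustration of reporting compartments as percentages of body weight (the function and the numbers below are made up for the example, not taken from any study):

```python
def compartment_percentages(compartments, body_weight_kg):
    """Express each compartment mass (kg) as a percentage of body weight.

    The model requires the compartment masses to sum to body weight,
    so that is checked (with a small tolerance for rounding).
    """
    total = sum(compartments.values())
    assert abs(total - body_weight_kg) < 0.5, "compartments must sum to body weight"
    return {name: 100.0 * mass / body_weight_kg for name, mass in compartments.items()}

# illustrative (made-up) numbers for a 70 kg person, 4-compartment model
print(compartment_percentages(
    {"fat": 14.0, "water": 42.0, "protein": 10.5, "mineral": 3.5}, 70.0))
# {'fat': 20.0, 'water': 60.0, 'protein': 15.0, 'mineral': 5.0}
```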
https://en.wikipedia.org/wiki/Longest%20increasing%20subsequence
In computer science, the longest increasing subsequence problem aims to find a subsequence of a given sequence in which the subsequence's elements are sorted in an ascending order and in which the subsequence is as long as possible. This subsequence is not necessarily contiguous or unique. The longest increasing subsequences are studied in the context of various disciplines related to mathematics, including algorithmics, random matrix theory, representation theory, and physics. The longest increasing subsequence problem is solvable in time where denotes the length of the input sequence. Example In the first 16 terms of the binary Van der Corput sequence 0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15 one of the longest increasing subsequences is 0, 2, 6, 9, 11, 15. This subsequence has length six; the input sequence has no seven-member increasing subsequences. The longest increasing subsequence in this example is not the only solution: for instance, 0, 4, 6, 9, 11, 15 0, 2, 6, 9, 13, 15 0, 4, 6, 9, 13, 15 are other increasing subsequences of equal length in the same input sequence. Relations to other algorithmic problems The longest increasing subsequence problem is closely related to the longest common subsequence problem, which has a quadratic time dynamic programming solution: the longest increasing subsequence of a sequence is the longest common subsequence of and where is the result of sorting However, for the special case in which the input is a permutation of the integers this approach can be made much more efficient, leading to time bounds of the form The largest clique in a permutation graph corresponds to the longest decreasing subsequence of the permutation that defines the graph (assuming the original non-permuted sequence is sorted from lowest value to highest). Similarly, the maximum independent set in a permutation graph corresponds to the longest non-decreasing subsequence. Therefore, longest increasing subsequence algorithms can be
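A short sketch of the O(n log n) patience-sorting approach (a generic illustration in Python, not a reference implementation from any particular source):

```python
import bisect

def longest_increasing_subsequence(seq):
    """Return one longest strictly increasing subsequence of seq in O(n log n)."""
    tails = []                 # tails[k]: smallest tail value of an increasing subsequence of length k+1
    tails_idx = []             # index into seq of that tail value
    prev = [-1] * len(seq)     # predecessor links, used to reconstruct the answer
    for i, x in enumerate(seq):
        k = bisect.bisect_left(tails, x)   # length of the subsequence that x will extend
        if k == len(tails):
            tails.append(x)
            tails_idx.append(i)
        else:
            tails[k] = x
            tails_idx[k] = i
        prev[i] = tails_idx[k - 1] if k > 0 else -1
    # walk the predecessor links back from the end of the longest subsequence found
    result, i = [], tails_idx[-1] if tails_idx else -1
    while i != -1:
        result.append(seq[i])
        i = prev[i]
    return result[::-1]

print(longest_increasing_subsequence(
    [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]))
# [0, 2, 6, 9, 11, 15], one of the length-six answers listed above
```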
https://en.wikipedia.org/wiki/Product%20category
In the mathematical field of category theory, the product of two categories C and D, denoted and called a product category, is an extension of the concept of the Cartesian product of two sets. Product categories are used to define bifunctors and multifunctors. Definition The product category has: as objects: pairs of objects , where A is an object of C and B of D; as arrows from to : pairs of arrows , where is an arrow of C and is an arrow of D; as composition, component-wise composition from the contributing categories: ; as identities, pairs of identities from the contributing categories: 1(A, B) = (1A, 1B). Relation to other categorical concepts For small categories, this is the same as the action on objects of the categorical product in the category Cat. A functor whose domain is a product category is known as a bifunctor. An important example is the Hom functor, which has the product of the opposite of some category with the original category as domain: Hom : Cop × C → Set. Generalization to several arguments Just as the binary Cartesian product is readily generalized to an n-ary Cartesian product, binary product of two categories can be generalized, completely analogously, to a product of n categories. The product operation on categories is commutative and associative, up to isomorphism, and so this generalization brings nothing new from a theoretical point of view. References Definition 1.6.5 in Category theory
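Written out in standard notation (the letters are chosen here for illustration), the component-wise data above read:
\[ \mathrm{Hom}_{C \times D}\big((A,B),(A',B')\big) \;=\; \mathrm{Hom}_{C}(A,A') \times \mathrm{Hom}_{D}(B,B'), \]
\[ (f',g') \circ (f,g) \;=\; (f' \circ f,\; g' \circ g), \qquad 1_{(A,B)} = (1_A, 1_B). \]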
https://en.wikipedia.org/wiki/Fever%20of%20unknown%20origin
Fever of unknown origin (FUO) refers to a condition in which the patient has an elevated temperature (fever) but, despite investigations by a physician, no explanation is found. If the cause is found it is usually a diagnosis of exclusion, eliminating all possibilities until only the correct explanation remains. Causes Worldwide, infection is the leading cause of FUO with prevalence varying by country and geographic region. Extrapulmonary tuberculosis is the most frequent cause of FUO. Drug-induced hyperthermia, as the sole symptom of an adverse drug reaction, should always be considered. Disseminated granulomatoses such as tuberculosis, histoplasmosis, coccidioidomycosis, blastomycosis and sarcoidosis are associated with FUO. Lymphomas are the most common cause of FUO in adults. Thromboembolic disease (i.e. pulmonary embolism, deep venous thrombosis) occasionally shows fever. Although infrequent, its potentially lethal consequences warrant evaluation of this cause. Endocarditis, although uncommon, is possible. Bartonella infections are also known to cause fever of unknown origin. Human herpes viruses are a common cause of fever of unknown origin with one study showing Cytomegalovirus, Epstein–Barr virus, human herpesvirus 6 (HHV-6), human herpesvirus 7 (HHV-7) being present in 15%, 10%, 14% and 4.8% respectively with 10% of people presenting with co-infection (infection with two or more human herpes viruses). Infectious mononucleosis, most commonly caused by EBV, may present as a fever of unknown origin. Other symptoms of infectious mononucleosis vary with age with middle aged adults and the elderly more likely to have a longer duration of fever and leukopenia, and younger adults and adolescents more likely to have splenomegaly, pharyngitis and lymphadenopathy. Endemic mycoses such as histoplasmosis, blastomycosis, coccidiomycosis and paracoccidioidomycosis can cause a fever of unknown origin in immunocompromised as well as immunocompetent people. These endemic
https://en.wikipedia.org/wiki/Data%20General%20AOS
Data General AOS (an abbreviation for Advanced Operating System) was the name of a family of operating systems for Data General 16-bit Eclipse C, M, and S minicomputers, followed by AOS/VS and AOS/RT32 (1980) and later AOS/VS II (1988) for the 32-bit Eclipse MV line. Overview AOS/VS exploited the 8-ring protection architecture of the Eclipse MV hardware with ring 7 being the least privileged and ring 0 being the most privileged. The AOS/VS kernel ran in ring 0 and used ring-1 addresses for data structures related to virtual address translations. Ring 2 was unused and reserved for future use by the kernel. The Agent, which performed much of the system call validation for the AOS/VS kernel, as well as some I/O buffering and many compatibility functions, ran in ring 3 of each process. Ring 4 was used by various D.G. products such as the INFOS II DBMS. Rings 5 and 6 were reserved for use by user programs but rarely used except for large software such as the MV/UX inner-ring emulator and Oracle which used ring 5. All user programs ran in ring 7. The AOS software was far more advanced than competing PDP-11 operating systems. 16-bit AOS applications ran natively under AOS/VS and AOS/VS II on the 32-bit Eclipse MV line. AOS/VS (Advanced Operating System/Virtual Storage) was the most commonly used DG software product, and included a command-line interpreter (CLI) allowing for complex scripting, DUMP/LOAD, and other custom components. The 16-bit version of the CLI is famous for including an Easter egg meant to honor Xyzzy (which was pronounced "magic"). This was the internal code name of what externally became known as the AOS/VS 32-bit operating system. A user typing in the command "xyzzy" would get back a response from the CLI of "Nothing Happens". When a 32-bit version of the CLI became available under AOS/VS II, the same command instead reported "Twice As Much Happens". A modified version of System V.2 Unix called MV/UX hosted under AOS/VS was also available. A modi
https://en.wikipedia.org/wiki/Ricochet%20%28Internet%20service%29
Ricochet was one of the first wireless Internet access services in the United States, before Wi-Fi, 3G, and other technologies were available to the general public. It was developed and first offered by Metricom Incorporated, which shut down in 2001. The service was originally known as the Micro Cellular Data Network, or MCDN, gaining the Ricochet name when the service was launched to the public. History Metricom was founded in 1985, initially selling radios to electric, gas, oil, and water industrial customers. The company was founded by Dr. David M. Elliott and Paul Baran. Paul Allen took a controlling stake in Metricom in 1997. Service began in 1994 in Cupertino, California, and was deployed throughout Silicon Valley (the northern part of Santa Clara Valley) by 1995, the rest of the San Francisco Bay Area by 1996, and to other cities throughout the end of the 1990s. By this time, the service was operating at roughly the speed of a 56 kbit/s dialup modem. Ricochet introduced a higher-speed 128 kbit/s, service in 1999, however, monthly fees for this service were more than double those for the original service. At its height in early 2001, Ricochet service was available in many areas, including Atlanta, Baltimore, and Dallas. Over 51,000 subscribers paid for the service. In July 2001, however, Ricochet's owner, Metricom, filed for Chapter 11 bankruptcy and shut down its service. Like many companies during the dot-com boom, Metricom had spent more money than it took in and concentrated on a nationwide rollout and marketing instead of developing select markets. Ricochet was reportedly officially utilized in the immediate disaster recovery situation of the September 11, 2001 terrorist attacks, partially operated by former employees as volunteers, when even cell phone networks were overloaded. Aftermath After bankruptcy, in November 2001, Aerie Networks, a Denver-based broadband firm, purchased the assets of the company at a liquidation sale. Service was restored t
https://en.wikipedia.org/wiki/182%20%28number%29
182 (one hundred [and] eighty-two) is the natural number following 181 and preceding 183. In mathematics 182 is an even number 182 is a composite number, as it is a positive integer with a positive divisor other than one or itself 182 is a deficient number, as the sum of its proper divisors, 154, is less than 182 182 is a member of the Mian–Chowla sequence: 1, 2, 4, 8, 13, 21, 31, 45, 66, 81, 97, 123, 148, 182 182 is a nontotient number, as there is no integer with exactly 182 coprimes below it 182 is an odious number 182 is a pronic number, oblong number or heteromecic number, a number which is the product of two consecutive integers (13 × 14) 182 is a repdigit in the D'ni numeral system (77), and in base 9 (222) 182 is a sphenic number, the product of three prime factors 182 is a square-free number 182 is an Ulam number Divisors of 182: 1, 2, 7, 13, 14, 26, 91, 182 In astronomy 182 Elsa is a S-type main belt asteroid OGLE-TR-182 is a star in the constellation Carina In the military JDS Ise (DDH-182), a Hyūga-class helicopter destroyer of the Japan Maritime Self-Defense Force The United States Air Force 182d Airlift Wing unit at Greater Peoria Regional Airport, Peoria, Illinois was a United States Navy troop transport during World War II was a United States Navy yacht during World War I was a United States Navy Alamosa-class cargo ship during World War II was a United States Navy during World War II was a United States Navy during World War II was a United States Navy following World War I 182nd Fighter Squadron, Texas Air National Guard unit of the Texas Air National Guard 182nd Infantry Regiment, now known as the 182nd Cavalry Squadron (RSTA), is the oldest combat regiment in the United States Army 182nd Battalion, Canadian Expeditionary Force during World War I In music Blink-182, an American pop punk band Blink-182 (album), their 2003 eponymous album In transportation Alfa Romeo 182 Formula One car 182nd–183rd Street
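A throwaway check of a few of the arithmetic facts above (plain Python, nothing assumed beyond the standard library):

```python
n = 182
divisors = [d for d in range(1, n + 1) if n % d == 0]
print(divisors)               # [1, 2, 7, 13, 14, 26, 91, 182]
print(sum(divisors) - n)      # 154 < 182, so 182 is deficient
print(13 * 14 == n)           # True: pronic, the product of two consecutive integers
print(2 * 7 * 13 == n)        # True: sphenic, the product of three distinct primes
```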
https://en.wikipedia.org/wiki/IBM%20BladeCenter
The IBM BladeCenter was IBM's blade server architecture, until it was replaced by Flex System in 2012. The x86 division was later sold to Lenovo in 2014. History Introduced in 2002, based on engineering work started in 1999, the IBM eServer BladeCenter was relatively late to the blade server market. It differed from prior offerings in that it offered a range of x86 Intel server processors and input/output (I/O) options. The naming was changed to IBM BladeCenter in 2005. In February 2006, IBM introduced the BladeCenter H with switch capabilities for 10 Gigabit Ethernet and InfiniBand 4X. A web site called Blade.org was available for the blade computing community through about 2009. In 2012, the replacement Flex System was introduced. Enclosures IBM BladeCenter (E) The original IBM BladeCenter was later marketed as BladeCenter E. Power supplies have been upgraded through the life of the chassis from the original 1200 to 1400, 1800, 2000 and 2320 watt. The BladeCenter (E) was co-developed by IBM and Intel and included: 14 blade slots in 7U Shared Media tray with Optical drive, floppy drive and USB 1.1 port 1 (upgradable to 2) Management modules Two slots for Gigabit Ethernet switches (can also have optical or copper pass-through) Two slots for optional switch or pass-through modules, can have additional Ethernet, Fibre Channel, InfiniBand or Myrinet 2000 functions Power: Two (upgradable to four) power supplies, C19/C20 connectors Two redundant high-speed blowers IBM BladeCenter T BladeCenter T is the telecommunications company version of the original BladeCenter, available with either AC or DC (48 V) power. Has 8 blade slots in 8U, but uses the same switches and blades as the regular BladeCenter E. To keep NEBS Level 3 / ETSI compliant special Network Equipment-Building System (NEBS) compliant blades are available. IBM BladeCenter H Upgraded BladeCenter design with high-speed fabric options, announced in 2006. Backwards compatible with older BladeCente
https://en.wikipedia.org/wiki/Aerodynamic%20center
In aerodynamics, the torques or moments acting on an airfoil moving through a fluid can be accounted for by the net lift and net drag applied at some point on the airfoil, and a separate net pitching moment about that point whose magnitude varies with the choice of where the lift is chosen to be applied. The aerodynamic center is the point at which the pitching moment coefficient for the airfoil does not vary with lift coefficient (i.e. angle of attack), making analysis simpler. where is the aircraft lift coefficient. The lift and drag forces can be applied at a single point, the center of pressure, about which they exert zero torque. However, the location of the center of pressure moves significantly with a change in angle of attack and is thus impractical for aerodynamic analysis. Instead the aerodynamic center is used and as a result the incremental lift and drag due to change in angle of attack acting at this point is sufficient to describe the aerodynamic forces acting on the given body. Theory Within the assumptions embodied in thin airfoil theory, the aerodynamic center is located at the quarter-chord (25% chord position) on a symmetric airfoil while it is close but not exactly equal to the quarter-chord point on a cambered airfoil. From thin airfoil theory: where is the section lift coefficient, is the angle of attack in radian, measured relative to the chord line. where is the moment taken at quarter-chord point and is a constant. Differentiating with respect to angle of attack For symmetrical airfoils , so the aerodynamic center is at 25% of chord. But for cambered airfoils the aerodynamic center can be slightly less than 25% of the chord from the leading edge, which depends on the slope of the moment coefficient, . These results obtained are calculated using the thin airfoil theory so the use of the results are warranted only when the assumptions of thin airfoil theory are realistic. In precision experimentation with real airfoils and ad
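In the usual notation (the symbols are chosen here for illustration, under the small-angle, lift-dominated approximation), the defining property is
\[ \left. \frac{\partial C_m}{\partial C_L} \right|_{x = x_{ac}} = 0 . \]
Thin airfoil theory gives, for a symmetric airfoil, c_l = 2\pi\alpha and c_{m,c/4} = 0; for a cambered airfoil c_{m,c/4} is a constant independent of \alpha. Taking moments about a reference point x_{ref} measured aft of the leading edge (nose-up positive), C_{m,ref} = C_{m,ac} + C_L (x_{ref} - x_{ac})/c, so the aerodynamic center can be located from
\[ \frac{x_{ac}}{c} = \frac{x_{ref}}{c} - \frac{dC_{m,ref}}{dC_L} . \]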
https://en.wikipedia.org/wiki/Pitching%20moment
In aerodynamics, the pitching moment on an airfoil is the moment (or torque) produced by the aerodynamic force on the airfoil if that aerodynamic force is considered to be applied, not at the center of pressure, but at the aerodynamic center of the airfoil. The pitching moment on the wing of an airplane is part of the total moment that must be balanced using the lift on the horizontal stabilizer. More generally, a pitching moment is any moment acting on the pitch axis of a moving body. The lift on an airfoil is a distributed force that can be said to act at a point called the center of pressure. However, as angle of attack changes on a cambered airfoil, there is movement of the center of pressure forward and aft. This makes analysis difficult when attempting to use the concept of the center of pressure. One of the remarkable properties of a cambered airfoil is that, even though the center of pressure moves forward and aft, if the lift is imagined to act at a point called the aerodynamic center, the moment of the lift force changes in proportion to the square of the airspeed. If the moment is divided by the dynamic pressure, the area and chord of the airfoil, the result is known as the pitching moment coefficient. This coefficient changes only a little over the operating range of angle of attack of the airfoil. The moment coefficient for a whole airplane is not the same as that of its wing. The figure on the right shows the variation of moment with AoA for a stable airplane. The negative slope for positive α indicates stability in pitch. The combination of the two concepts of aerodynamic center and pitching moment coefficient make it relatively simple to analyse some of the flight characteristics of an aircraft. Measurement The aerodynamic center of an airfoil is usually close to 25% of the chord behind the leading edge of the airfoil. When making tests on a model airfoil, such as in a wind-tunnel, if the force sensor is not aligned with the quarter-chord
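Written out with the conventional symbols (chosen here for illustration), the normalisation described above is
\[ C_m = \frac{M}{q\,S\,c}, \qquad q = \tfrac{1}{2}\rho V^2, \]
where M is the pitching moment, q the dynamic pressure, \rho the air density, V the airspeed, S the reference area and c the chord; because C_m is roughly constant about the aerodynamic center, the moment itself grows with V^2, as stated above.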
https://en.wikipedia.org/wiki/RoboWar
RoboWar is an open-source video game in which the player programs onscreen icon-like robots to battle each other with animation and sound effects. The syntax of the language in which the robots are programmed is a relatively simple stack-based one, based largely on IF, THEN, and simply-defined variables. 25 RoboWar tournaments were held in the past between 1990 until roughly 2003, when tournaments became intermittent and many of the major coders moved on. All robots from all tournaments are available on the RoboWar website. The RoboWar programming language, RoboTalk, is a stack-oriented programming language and is similar in structure to FORTH. Programming features RoboWar for the Macintosh was notable among the genre of autonomous robot programming games for the powerful programming model it exposed to the gamer. By the early 1990s, RoboWar included an integrated debugger that permitted stepping through code and setting breakpoints. Later editions of the RoboTalk language used by the robots (a cognate of the HyperTalk language for Apple's HyperCard) included support for interrupts as well. History RoboWar was originally released as a closed source shareware game in 1990 by David Harris for the Apple Macintosh platform. The source code has since been released and implementations are now also available for Microsoft Windows. It was based upon the same concepts as the 1981 Apple II game RobotWar. Initially tournaments were run by David Harris himself, but were eventually run by Eric Foley. See also Core War RobotWar Crobots Robot Battle References External links RoboWar 5 - Home of the recent Microsoft Windows version and original Macintosh version RoboWar project page at SourceForge.net RoboWarX - An implementation written in C# JSRoboWar - Runs in a HTML5-compatible web browser RoboWar on the Programming Games Wiki Classic Mac OS games Open-source video games Programming games Windows games Artificial life models Programming contests 1990 video ga
https://en.wikipedia.org/wiki/Computational%20problem
In theoretical computer science, a computational problem is a problem that may be solved by an algorithm. For example, the problem of factoring "Given a positive integer n, find a nontrivial prime factor of n." is a computational problem. A computational problem can be viewed as a set of instances or cases together with a, possibly empty, set of solutions for every instance/case. For example, in the factoring problem, the instances are the integers n, and solutions are prime numbers p that are the nontrivial prime factors of n. Computational problems are one of the main objects of study in theoretical computer science. The field of computational complexity theory attempts to determine the amount of resources (computational complexity) solving a given problem will require and explain why some problems are intractable or undecidable. Computational problems belong to complexity classes that define broadly the resources (e.g. time, space/memory, energy, circuit depth) it takes to compute (solve) them with various abstract machines. For example, the complexity classes P, problems that consume polynomial time for deterministic classical machines BPP, problems that consume polynomial time for probabilistic classical machines (e.g. computers with random number generators) BQP, problems that consume polynomial time for probabilistic quantum machines. Both instances and solutions are represented by binary strings, namely elements of {0, 1}*. For example, natural numbers are usually represented as binary strings using binary encoding. This is important since the complexity is expressed as a function of the length of the input representation. Types Decision problem A decision problem is a computational problem where the answer for every instance is either yes or no. An example of a decision problem is primality testing: "Given a positive integer n, determine if n is prime." A decision problem is typically represented as the set of all instances for which the answer
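As a toy illustration of the primality decision problem above (deliberately naive trial division; the names are made up for this sketch):

```python
def is_prime(n: int) -> bool:
    """Decision problem PRIMES: given instance n, answer yes (True) or no (False)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Instances are conventionally encoded as binary strings; the problem can then be
# identified with the set of encodings whose answer is "yes".
primes_language_sample = sorted(
    (bin(n)[2:] for n in range(2, 32) if is_prime(n)), key=lambda s: int(s, 2))
print(primes_language_sample)   # ['10', '11', '101', '111', '1011', ...]
```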
https://en.wikipedia.org/wiki/Water%E2%80%93cement%20ratio
The water–cement ratio (w/c ratio, or water-to-cement ratio, sometimes also called the Water-Cement Factor, ) is the ratio of the mass of water () to the mass of cement () used in a concrete mix: The typical values of this ratio = are generally comprised in the interval 0.40 and 0.60. The water-cement ratio of the fresh concrete mix is one of the main, if not the most important, factors determining the quality and properties of hardened concrete, as it directly affects the concrete porosity, and a good concrete is always a concrete as compact and as dense as possible. A good concrete must be therefore prepared with as little water as possible, but with enough water to hydrate the cement minerals and to properly handle it. A lower ratio leads to higher strength and durability, but may make the mix more difficult to work with and form. Workability can be resolved with the use of plasticizers or super-plasticizers. A higher ratio gives a too fluid concrete mix resulting in a too porous hardened concrete of poor quality. Often, the concept also refers to the ratio of water to cementitious materials, w/cm. Cementitious materials include cement and supplementary cementitious materials such as ground granulated blast-furnace slag (GGBFS), fly ash (FA), silica fume (SF), rice husk ash (RHA), metakaolin (MK), and natural pozzolans. Most of supplementary cementitious materials (SCM) are byproducts of other industries presenting interesting hydraulic binding properties. After reaction with alkalis (GGBFS activation) and portlandite (), they also form calcium silicate hydrates (C-S-H), the "gluing phase" present in the hardened cement paste. These additional C-S-H are filling the concrete porosity and thus contribute to strengthen concrete. SCMs also help reducing the clinker content in concrete and therefore saving energy and minimizing costs, while recycling industrial wastes otherwise aimed to landfill. The effect of the water-to-cement (w/c) ratio onto the mechan
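Written out (the mass symbols are a notational choice for this illustration):
\[ w/c = \frac{m_{\text{water}}}{m_{\text{cement}}}, \qquad w/cm = \frac{m_{\text{water}}}{m_{\text{cement}} + m_{\text{SCM}}}, \]
where m_{SCM} is the combined mass of the supplementary cementitious materials; as noted above, typical w/c values fall between about 0.40 and 0.60.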
https://en.wikipedia.org/wiki/Gardner%E2%80%93Salinas%20braille%20codes
The Gardner–Salinas braille codes are a method of encoding mathematical and scientific notation linearly using braille cells for tactile reading by the visually impaired. The most common form of Gardner–Salinas braille is the 8-cell variety, commonly called GS8. There is also a corresponding 6-cell form called GS6. The codes were developed as a replacement for Nemeth Braille by John A. Gardner, a physicist at Oregon State University, and Norberto Salinas, an Argentinian mathematician. The Gardner–Salinas braille codes are an example of a compact human-readable markup language. The syntax is based on the LaTeX system for scientific typesetting. Table of Gardner–Salinas 8-dot (GS8) braille The set of lower-case letters, the period, comma, semicolon, colon, exclamation mark, apostrophe, and opening and closing double quotes are the same as in Grade-2 English Braille. Digits Apart from 0, this is the same as the Antoine notation used in French and Luxembourgish Braille. Upper-case letters GS8 upper-case letters are indicated by the same cell as standard English braille (and GS8) lower-case letters, with dot #7 added. Compare Luxembourgish Braille. Greek letters Dot 8 is added to the letter forms of International Greek Braille to derive Greek letters: Characters differing from English Braille ASCII symbols and mathematical operators Text symbols Math and science symbols Markup * Encodes the fraction-slash for the single adjacent digits/letters as numerator and denominator. * Used for any > 1 digit radicand. ** Used for markup to represent inkprint text. Typeface indicators Shape symbols Set theory References Blind physicist creates better Braille — a CNN news item, November 9, 1995 The world of blind mathematicians — article in Notices of the AMS, November 2002 Braille symbols 8-dot braille scripts Mathematical notation
https://en.wikipedia.org/wiki/187%20%28number%29
187 (one hundred [and] eighty-seven) is the natural number following 186 and preceding 188. In mathematics There are 187 ways of forming a sum of positive integers that adds to 11, counting two sums as equivalent when they are cyclic permutations of each other. There are also 187 unordered triples of 5-bit binary numbers whose bitwise exclusive or is zero. Per Miller's rules, the triakis tetrahedron produces 187 distinct stellations. It is the smallest Catalan solid, dual to the truncated tetrahedron, which only has 9 distinct stellations. In other fields There are 187 chapters in the Hebrew Torah. See also 187 (disambiguation) References Integers
https://en.wikipedia.org/wiki/Food%20fortification
Food fortification or enrichment is the process of adding micronutrients (essential trace elements and vitamins) to food. It can be carried out by food manufacturers, or by governments as a public health policy which aims to reduce the number of people with dietary deficiencies within a population. The predominant diet within a region can lack particular nutrients due to the local soil or from inherent deficiencies within the staple foods; the addition of micronutrients to staples and condiments can prevent large-scale deficiency diseases in these cases. As defined by the World Health Organization (WHO) and the Food and Agricultural Organization of the United Nations (FAO), fortification refers to "the practice of deliberately increasing the content of an essential micronutrient, i.e. vitamins and minerals (including trace elements) in a food, to improve the nutritional quality of the food supply and to provide a public health benefit with minimal risk to health", whereas enrichment is defined as "synonymous with fortification and refers to the addition of micronutrients to a food which are lost during processing". Food fortification has been identified as the second strategy of four by the WHO and FAO to begin decreasing the incidence of nutrient deficiencies at the global level. As outlined by the FAO, the most commonly fortified foods are cereals and cereal-based products; milk and dairy products; fats and oils; accessory food items; tea and other beverages; and infant formulas. Undernutrition and nutrient deficiency is estimated globally to cause the deaths of between 3 and 5 million people per year. Types Fortification is present in common food items in two different ways: adding back and addition. Flour loses nutritional value due to the way grains are processed; Enriched Flour has iron, folic acid, niacin, riboflavin, and thiamine added back to it. Conversely, other fortified foods have micronutrients added to them that don't naturally occur in those subs
https://en.wikipedia.org/wiki/TV2Me
TV2Me is a device that allows TV viewers to watch their home's cable or satellite television programs on their own computers, mobile phones, television sets and projector screens anywhere in the world. "This technology gives users the ability to shift space, and to watch all the cable or satellite TV channels of any place they choose - live, in full motion, with unparalleled television-quality - on any Internet connected device." History TV2Me was invented by Ken Schaffer, who began working on it in 2001, when he was working overseas. His goal was to watch his favorite American shows through any kind of device from wherever he was. With a team of Turkish and Russian programmers he developed circuitry that allows the MPEG-4 encoder to operate more efficiently and to generate a better picture. Schaffer, who was known for having previously invented the Schaffer–Vega diversity system, the first practical wireless guitar and microphone system for major rock bands, and for developing satellite tracking systems that allowed U.S. agencies and universities to monitor internal television of the then Soviet Union, launched TV2Me on December, 2003. TV2Me introduced the concept of placeshifting and started an entire industry. Operation To set up TV2Me, the cable or satellite box and a broadband internet connection are plugged in the device. "The server requires an Internet connection with an upstream speed of 512 kb/s or higher." On the receiving end (for example the computer), any browser can be used to view in real-time or with a 6-second delay. The delayed mode uses the extra time to produce a slightly better picture. No additional software needs to be installed. "The "target" (receiving location) can be anywhere on earth − anywhere there's wired or wireless broadband. The viewer can use virtually any PC running Windows, Mac, Linux - even Solaris." Copyrights No copyright infringement has been set for this placeshifting device but this technology is problematic to many cop
https://en.wikipedia.org/wiki/Fermi%20point
The term Fermi point has two applications but refers to the same phenomenon (special relativity): Fermi point (quantum field theory) Fermi point (nanotechnology) In both applications, the symmetry between particles and anti-particles in weak interactions is violated: at this point the particle energy is zero. In nanotechnology this concept can be applied to electron behavior. An electron as a single particle is a fermion obeying the Pauli exclusion principle. Fermi point (quantum field theory) Fermionic systems that have a Fermi surface (FS) belong to a universality class in quantum field theory. Any collection of fermions with weak repulsive interactions belongs to this class. At the Fermi point, the breaking of the symmetry can be explained by assuming that a vortex or singularity will appear as a result of the spin of a Fermi particle (quasiparticle, fermion) in one dimension of the three-dimensional momentum space. Fermi point (nanoscience) The Fermi point is one particular electron state. The Fermi point refers to the combination of electron chirality and carbon nanotube diameter for which the nanotube becomes metallic. As the structure of a carbon nanotube determines the energy levels that the carbon's electrons may occupy, the structure affects macroscopic properties of the nanotube, most notably electrical and thermal conductivity. Flat graphite is a conductor except when rolled up into small cylinders. This circular structure inhibits the internal flow of electrons and the graphite becomes a semiconductor; a transition point forms between the valence band and conduction band. This point is called the Fermi point. If the diameter of the carbon nanotube is sufficiently great, the necessary transition phase disappears and the nanotube may be considered a conductor. See also Fermi energy Fermi surface Bandgap Notes Critical phenomena Nanoelectronics Condensed matter physics Quantum field theory Special relativity
https://en.wikipedia.org/wiki/William%20Poduska
John William Poduska Sr. is an American engineer and entrepreneur. He was a founder of Prime Computer, Apollo Computer, and Stellar Computer. Prior to that he headed the Electronics Research Lab at NASA's Cambridge, Massachusetts, facility and also worked at Honeywell. Poduska has been involved in a number of other high-tech startups. He also has served on the boards of Novell, Anadarko Petroleum, Anystream, Boston Ballet, Wang Center and the Boston Lyric Opera. Poduska was elected a member of the National Academy of Engineering in 1986 for technical and entrepreneurial leadership in computing, including development of Prime, the first virtual memory minicomputer, and Apollo, the first distributed, co-operating workstation. Education Poduska was born in Memphis, Tennessee. In 1955, he graduated from Central High School in Memphis. He went on to earn a S.B. and S.M. in electrical engineering, both in 1960, from MIT. He also earned a Sc.D. in EECS from MIT in 1962. Awards Recipient of the McDowell Award, National Academy of Engineering, 1986 References External links 1937 births American computer businesspeople American scientists Computer hardware engineers Living people MIT School of Engineering alumni
https://en.wikipedia.org/wiki/National%20Wind%20Institute
The National Wind Institute (NWI) at Texas Tech University (TTU) was established in December 2012 and is intended to serve as Texas Tech University's intellectual hub for interdisciplinary and transdisciplinary research, commercialization and education related to wind science, wind energy, wind engineering and wind hazard mitigation; it serves faculty affiliates, students, and external partners. In 2003, with support from the National Science Foundation, the first interdisciplinary Ph.D. program dedicated to wind science and engineering was developed. Later, the Texas Wind Energy Institute (TWEI), partly funded by the Texas Workforce Commission, was established as a partnership between TTU and Texas State Technical College designed to develop education and career pathways to meet the workforce and educational needs of the expanding wind energy industry. In an effort to streamline and to promote synergy, both WiSE and TWEI have now been integrated to form the National Wind Institute. NWI organizes and administers large multi-dimensional TTU wind-related research projects and serves as the contact point for major project sponsors and other external partners. History The Wind Science and Engineering (WiSE) Research Center was established in 1970 as the Institute for Disaster Research, following the F5 Lubbock tornado that caused 26 fatalities and over $100 million in damage. In the aftermath of the tornado, the WiSE center developed the first comprehensive wind engineering report of its kind. In 2006, the Enhanced Fujita scale was developed at TTU to update the original Fujita scale that was first introduced in 1971.
https://en.wikipedia.org/wiki/Leigh%20Canham
Leigh Canham is a British scientist who has pioneered the optoelectronic and biomedical applications of porous silicon. Leigh Canham graduated from University College London in 1979 with a BSc in Physics and completed his PhD at King's College London in 1983. His early work in this area took place at the Royal Signals and Radar Establishment in Malvern, Worcestershire. Canham and his colleagues showed that electrochemically etched silicon could be made porous. This porous material could emit visible light when a current was passed through it (electroluminescence). Later the group demonstrated the biocompatibility of porous silicon. Canham now works as Chief Scientific Officer of psiMedica (part of pSiVida). According to the pSiVida web site, Canham is the most cited author on porous silicon. In a study of most cited physicists up to 1997 Canham ranked at 771. Bibliography Selected papers Porous silicon-based scaffolds for tissue engineering and other biomedical applications, Jeffery L. Coffer, Melanie A. Whitehead, Dattatri K. Nagesha, Priyabrata Mukherjee, Giridhar Akkaraju, Mihaela Totolici, Roghieh S. Saffie, Leigh T. Canham, Physica Status Solidi A Vol. 202, Issue 8, Pages 1451 - 1455 Gaining light from silicon, Leigh Canham, Nature vol. 408, pp. 411 – 412 (2000) Progress towards silicon optoelectronics using porous silicon technology, L. T. Canham, T. I. Cox, A. Loni and A. J. Simons, Applied Surface Science, Volume 102, Pages 436-441 (1996) Porous silicon multilayer optical waveguides, A. Loni, L. T. Canham, M. G. Berger, R. Arens-Fischer, H. Munder, H. Luth, H. F. Arrand and T. M. Benson, Thin Solid Films, Vol. 276, Issues 1-2, pages 143-146 (1996) The origin of efficient luminescence in highly porous silicon, K. J. Nash, P. D. J. Calcott, L. T. Canham, M. J. Kane and D. Brumhead, J. of Luminescence, Volumes 60-61, Pages 297-301 (1994) Electronic quality of vapour phase epitaxial Si grown at reduced temperature, W. Y. Leong, L. T. Canham, I. M. Yo
https://en.wikipedia.org/wiki/Autocorrelation%20technique
The autocorrelation technique is a method for estimating the dominating frequency in a complex signal, as well as its variance. Specifically, it calculates the first two moments of the power spectrum, namely the mean and variance. It is also known as the pulse-pair algorithm in radar theory. The algorithm is both computationally faster and significantly more accurate than the Fourier transform, since the resolution is not limited by the number of samples used. Derivation The autocorrelation of lag 1 can be expressed using the inverse Fourier transform of the power spectrum S(ω): R(1) = ∫ S(ω) e^{iω} dω. If we model the power spectrum as a single frequency, S(ω) = δ(ω − ω0), this becomes R(1) = e^{iω0}, where it is apparent that the phase of R(1) equals the signal frequency. Implementation The mean frequency is calculated from the phase of the autocorrelation with lag one, evaluated over a signal consisting of N samples. The spectral variance is calculated from the magnitude of the lag-one autocorrelation relative to the lag-zero autocorrelation (the signal power). Applications Estimation of blood velocity and turbulence in color flow imaging used in medical ultrasonography. Estimation of target velocity in pulse-Doppler radar External links A covariance approach to spectral moment estimation, Miller et al., IEEE Transactions on Information Theory. Doppler Radar Meteorological Observations Doppler Radar Theory. Autocorrelation technique described on p. 2-11 Real-Time Two-Dimensional Blood Flow Imaging Using an Autocorrelation Technique, by Chihiro Kasai, Koroku Namekawa, Akira Koyano, and Ryozo Omoto, IEEE Transactions on Sonics and Ultrasonics, Vol. SU-32, No. 3, May 1985. Radar theory Signal processing Autocorrelation
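As a rough illustration of the estimator described above, here is a Python sketch using NumPy (the spectral-width expression follows a common pulse-pair convention and is an assumption, not taken from the article):

```python
import numpy as np

def pulse_pair(signal, sample_rate):
    """Estimate mean frequency and a spectral-width measure from the lag-1 autocorrelation."""
    x = np.asarray(signal, dtype=complex)
    # Lag-0 and lag-1 autocorrelation estimates over the N available samples.
    r0 = np.mean(np.abs(x) ** 2)
    r1 = np.mean(np.conj(x[:-1]) * x[1:])
    # The phase of R(1) gives the mean angular frequency per sample.
    mean_freq = np.angle(r1) * sample_rate / (2 * np.pi)
    # A common width estimate based on the ratio |R(1)| / R(0).
    variance = (sample_rate / (2 * np.pi)) ** 2 * 2.0 * (1.0 - np.abs(r1) / r0)
    return mean_freq, variance

# Example: a complex exponential at 50 Hz sampled at 1 kHz.
fs = 1000.0
t = np.arange(256) / fs
iq = np.exp(2j * np.pi * 50.0 * t)
print(pulse_pair(iq, fs))  # mean frequency close to 50 Hz, width close to zero
```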
https://en.wikipedia.org/wiki/MU%20puzzle
The MU puzzle is a puzzle stated by Douglas Hofstadter and found in Gödel, Escher, Bach involving a simple formal system called "MIU". Hofstadter's motivation is to contrast reasoning within a formal system (i.e., deriving theorems) against reasoning about the formal system itself. MIU is an example of a Post canonical system and can be reformulated as a string rewriting system. The puzzle Suppose there are the symbols M, I, and U which can be combined to produce strings of symbols. The MU puzzle asks one to start with the "axiomatic" string MI and transform it into the string MU using in each step one of the following transformation rules: 1. xI → xIU (add a U to the end of any string ending in I), for example MI to MIU; 2. Mx → Mxx (double the string after the M), for example MIU to MIUIU; 3. xIIIy → xUy (replace any III with a U), for example MUIIIU to MUUU; 4. xUUy → xy (remove any UU), for example MUUU to MU. Solution The puzzle cannot be solved: it is impossible to change the string MI into MU by repeatedly applying the given rules. In other words, MU is not a theorem of the MIU formal system. To prove this, one must step "outside" the formal system itself. In order to prove assertions like this, it is often beneficial to look for an invariant; that is, some quantity or property that doesn't change while applying the rules. In this case, one can look at the total number of I's in a string. Only the second and third rules change this number. In particular, rule two will double it while rule three will reduce it by 3. Now, the invariant property is that the number of I's is not divisible by 3: In the beginning, the number of I's is 1, which is not divisible by 3. Doubling a number that is not divisible by 3 does not make it divisible by 3. Subtracting 3 from a number that is not divisible
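A short Python sketch, not from the book, that enumerates derivable strings and checks the divisibility-by-3 invariant on the count of I's:

```python
def successors(s):
    """All strings reachable from s in one step of the MIU rules."""
    out = set()
    if s.endswith("I"):                     # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):                   # Rule 2: Mx -> Mxx
        out.add("M" + 2 * s[1:])
    for i in range(len(s) - 2):             # Rule 3: xIIIy -> xUy
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):             # Rule 4: xUUy -> xy
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

# Breadth-first search from the axiom "MI": the I-count modulo 3 never becomes 0,
# so "MU" (which has zero I's) is never produced.
frontier, seen = {"MI"}, {"MI"}
for _ in range(6):
    frontier = {t for s in frontier for t in successors(s)} - seen
    seen |= frontier
assert all(s.count("I") % 3 != 0 for s in seen)
assert "MU" not in seen
```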
https://en.wikipedia.org/wiki/Zoophyte
A zoophyte (animal-plant) is an obsolete term for an organism thought to be intermediate between animals and plants, or an animal with plant-like attributes or appearance. In the 19th century they were reclassified as Radiata, which included various taxa, a term superseded by Coelenterata, referring more narrowly to the animal phyla Cnidaria (coral animals, true jellies, sea anemones, sea pens, and their allies), sponges, and Ctenophora (comb jellies). A group of strange creatures that exist somewhere on, or between, the boundaries of the plant and animal kingdoms was the subject of considerable debate in the eighteenth century. Some naturalists believed that they were a blend of plant and animal; other naturalists considered them to be entirely either plant or animal (such as sea anemones). Ancient and medieval to early modern era In Eastern cultures such as Ancient China, fungi were classified as plants in the Traditional Chinese Medicine texts, and cordyceps, and in particular Ophiocordyceps sinensis, were considered zoophytes. Zoophytes are common in medieval- and Renaissance-era herbals, notable examples including the Tartar Lamb, a legendary plant which grew sheep as fruit. Zoophytes appeared in many influential early medical texts, such as Dioscorides's De Materia Medica and subsequent adaptations and commentaries on that work, notably Mattioli's Discorsi. Zoophytes are frequently seen as medieval attempts to explain the origins of exotic, unknown plants with strange properties (such as cotton, in the case of the Tartar Lamb). Reports of zoophytes continued into the seventeenth century and were commented on by many influential thinkers of the time period, including Francis Bacon. It was not until 1646 that claims of zoophytes began to be concretely refuted, and skepticism towards claims of zoophytes mounted throughout the seventeenth and eighteenth centuries. 18th to 19th century, natural history As natural history and natural philosophy developed in the
https://en.wikipedia.org/wiki/Herv%C3%A9%20This
Hervé This (born 5 June 1955 in Suresnes, Hauts-de-Seine, sometimes named Hervé This-Benckhard, or Hervé This vo Kientza) is a French physical chemist who works for the Institut National de la Recherche Agronomique at AgroParisTech, in Paris, France. His main area of scientific research is molecular gastronomy, that is, the science of culinary phenomena (more precisely, looking for the mechanisms of phenomena occurring during culinary transformations). Career With the late Nicholas Kurti, he coined the scientific term "Molecular and Physical Gastronomy" in 1988, which he shortened to "Molecular Gastronomy" after Kurti's death in 1998. A graduate of ESPCI Paris, he obtained a Ph.D. from the Pierre and Marie Curie University, under the title "La gastronomie moléculaire et physique". He has written many scientific publications, as well as several books on the subject, which can be understood even by those who have little or no knowledge of chemistry, but so far only four have been translated into English. He also collaborates with the magazine Pour la Science (the French edition of Scientific American), the aim of which is to present scientific concepts to the general public. A member of the Académie d'agriculture de France since 2010, he was the president of its "Human Food" section for 9 years. In 2004, he was invited by the French Academy of Sciences to create the Foundation "Food Science & Culture", of which he was appointed the Scientific Director. The same year, he was asked to create the Institute for Advanced Studies of Taste ("Hautes Etudes du Goût") with the University of Reims Champagne-Ardenne, of which he is the President of the Educational Program. In 2011, he was appointed as a Consulting Professor of AgroParisTech, and he was also asked to create courses on science and technology at Sciences Po Paris. On 3 June 2014, he was asked to create the "International Center for Molecular Gastronomy AgroParisTech-Inrae", to which he was appointed director. The
https://en.wikipedia.org/wiki/Horseshoe%20lemma
In homological algebra, the horseshoe lemma, also called the simultaneous resolution theorem, is a statement relating resolutions of two objects A and C to resolutions of extensions of A by C. It says that if an object B is an extension of A by C, then a resolution of B can be built up inductively with the nth item in the resolution equal to the coproduct of the nth items in the resolutions of A and C. The name of the lemma comes from the shape of the diagram illustrating the lemma's hypothesis. Formal statement Let 𝒜 be an abelian category with enough projectives. If the column of a partially completed ("horseshoe"-shaped) diagram in 𝒜 is the short exact sequence exhibiting B as an extension of A by C, and the rows are projective resolutions of A and C respectively, then it can be completed to a commutative diagram where all columns are exact, the middle row is a projective resolution of B, and the nth term of the middle row is the coproduct of the nth terms of the outer rows for all n. If 𝒜 is an abelian category with enough injectives, the dual statement also holds. The lemma can be proved inductively. At each stage of the induction, the properties of projective objects are used to define maps in a projective resolution of B. Then the snake lemma is invoked to show that the simultaneous resolution constructed so far has exact rows. See also Nine lemma References Homological algebra Lemmas in category theory
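A minimal LaTeX sketch of the hypothesis and conclusion; the letters and the orientation of the short exact sequence are chosen for illustration rather than fixed by the article:

```latex
% Requires amsmath. Data: a short exact sequence and resolutions of the outer terms.
\[
  0 \longrightarrow A \longrightarrow B \longrightarrow C \longrightarrow 0
  \quad \text{exact in an abelian category with enough projectives,}
\]
\[
  P_\bullet(A) \longrightarrow A, \qquad P_\bullet(C) \longrightarrow C
  \quad \text{projective resolutions}
  \;\Longrightarrow\;
  \exists\, P_\bullet(B) \longrightarrow B
  \ \text{with}\ P_n(B) \cong P_n(A) \oplus P_n(C)\ \text{for all } n.
\]
```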
https://en.wikipedia.org/wiki/Monogenism
Monogenism or sometimes monogenesis is the theory of human origins which posits a common descent for all human races. The negation of monogenism is polygenism. This issue was hotly debated in the Western world in the nineteenth century, as the assumptions of scientific racism came under scrutiny both from religious groups and in the light of developments in the life sciences and human science. It was integral to the early conceptions of ethnology. Modern scientific views favor this theory, with the most widely accepted model for human origins being the "Out of Africa" theory. In the Abrahamic religions The belief that all humans are descended from Adam is central to traditional Judaism, Christianity and Islam. Christian monogenism played an important role in the development of an African-American literature on race, linked to theology rather than science, up to the time of Martin Delany and his Principia of Ethnology (1879). Scriptural ethnology is a term applied to debate and research on the biblical accounts, both of the early patriarchs and migration after Noah's Flood, to explain the diverse peoples of the world. Monogenism as a Bible-based theory required both the completeness of the narratives and the fullness of their power of explanation. These time-honored debates were sharpened by the rise of polygenist skeptical claims; when Louis Agassiz set out his polygenist views in 1847, they were opposed on biblical grounds by John Bachman, and by Thomas Smyth in his Unity of the Human Races. The debates also saw the participation of Delany, and George Washington Williams defended monogenesis as the starting point of his pioneer history of African-Americans. Environmentalist monogenism Environmentalist monogenism describes a theory current in the first half of the nineteenth century, in particular, according to which there was a single human origin, but that subsequent migration of groups of humans had subjected them to different environmental conditions. Envir
https://en.wikipedia.org/wiki/188%20%28number%29
188 (one hundred [and] eighty-eight) is the natural number following 187 and preceding 189. In mathematics There are 188 different four-element semigroups, and 188 ways a chess queen can move from one corner of a board to the opposite corner by a path that always moves closer to its goal. The sides and diagonals of a regular dodecagon form 188 equilateral triangles. In other fields The number 188 figures prominently in the film The Parallel Street (1962) by German experimental film director . The opening frame of the film is just an image of this number. See also The year AD 188 or 188 BC List of highways numbered 188 References Integers
https://en.wikipedia.org/wiki/MicrobeLibrary
MicrobeLibrary is a permanent collection of over 1400 original peer-reviewed resources for teaching undergraduate microbiology. It is provided by the American Society for Microbiology, Washington DC, United States. Contents include curriculum activities; images and animations; reviews of books, websites and other resources; and articles from Focus on Microbiology Education, Microbiology Education and Microbe. Around 40% of the materials are free to educators and students; the remainder require a subscription. The service is suspended, with the message: "Please check back with us in 2017". External links MicrobeLibrary Microbiology
https://en.wikipedia.org/wiki/Commutation%20cell
The commutation cell is the basic structure in power electronics. It is composed of two electronic switches (today, high-power semiconductors rather than mechanical switches). It was traditionally referred to as a chopper, but since switching power supplies became a major form of power conversion, this new term has become more popular. The purpose of the commutation cell is to "chop" DC power into a square-wave alternating current. This is done so that an inductor and a capacitor can be used in an LC circuit to change the voltage. This is, in theory, a lossless process; in practice, efficiencies above 80-90% are routinely achieved. The output is usually run through a filter to produce clean DC power. By controlling the on and off times (the duty cycle) of the switch in the commutation cell, the output voltage can be regulated. This basic principle is the core of most modern power supplies, from tiny DC-DC converters in portable devices to massive switching stations for high-voltage DC power transmission. Connection of two power elements A commutation cell connects two power elements, often referred to as sources, although they can either produce or absorb power. Certain requirements must be met when connecting power sources. The impossible configurations are listed in figure 1. They are basically: a voltage source cannot be shorted, as the short circuit would impose a zero voltage which would contradict the voltage generated by the source; in an identical way, a current source cannot be placed in an open circuit; two (or more) voltage sources cannot be connected in parallel, as each of them would try to impose the voltage on the circuit; two (or more) current sources cannot be connected in series, as each of them would try to impose the current in the loop. This applies to classical sources (battery, generator) as well as to capacitors and inductors: at a small time scale, a capacitor is identical to a voltage source and an inductor to a current source. Connecting two capacitors with dif
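A small Python sketch of the duty-cycle idea (illustrative values; it models an ideal buck-type cell whose filtered output is simply the average of the chopped waveform):

```python
import numpy as np

def chopped_waveform(v_in, duty, f_switch, t):
    """Ideal commutation cell output: v_in while the switch is on, 0 while it is off."""
    phase = (t * f_switch) % 1.0
    return np.where(phase < duty, v_in, 0.0)

v_in, duty, f_sw = 48.0, 0.35, 100e3           # 48 V input, 35% duty cycle, 100 kHz
t = np.linspace(0, 1e-3, 100_000)              # 1 ms of samples
square = chopped_waveform(v_in, duty, f_sw, t)

# An ideal LC filter passes only the average (DC) component of the square wave,
# so the regulated output is approximately duty * v_in.
print(square.mean())        # ~16.8 V
print(duty * v_in)          # 16.8 V
```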
https://en.wikipedia.org/wiki/Diagonal%20functor
In category theory, a branch of mathematics, the diagonal functor is the functor Δ : C → C × C given by Δ(a) = (a, a), which maps objects as well as morphisms (sending a morphism f to (f, f)). This functor can be employed to give a succinct alternate description of the product of objects within the category C: a product a × b is a universal arrow from Δ to (a, b). The arrow comprises the projection maps. More generally, given a small index category J, one may construct the functor category C^J, the objects of which are called diagrams. For each object a in C, there is a constant diagram Δ(a) that maps every object in J to a and every morphism in J to the identity morphism of a. The diagonal functor Δ : C → C^J assigns to each object a of C the diagram Δ(a), and to each morphism f : a → b in C the natural transformation Δ(f) in C^J (whose component at every object j of J is f). Thus, for example, in the case that J is a discrete category with two objects, the diagonal functor C → C × C is recovered. Diagonal functors provide a way to define limits and colimits of diagrams. Given a diagram F : J → C, a natural transformation Δ(a) → F (for some object a of C) is called a cone for F. These cones and their factorizations correspond precisely to the objects and morphisms of the comma category (Δ ↓ F), and a limit of F is a terminal object in (Δ ↓ F), i.e., a universal arrow Δ → F. Dually, a colimit of F is an initial object in the comma category (F ↓ Δ), i.e., a universal arrow F → Δ. If every functor from J to C has a limit (which will be the case if C is complete), then the operation of taking limits is itself a functor from C^J to C. The limit functor is the right-adjoint of the diagonal functor. Similarly, the colimit functor (which exists if the category is cocomplete) is the left-adjoint of the diagonal functor. For example, the diagonal functor C → C × C described above is the left-adjoint of the binary product functor and the right-adjoint of the binary coproduct functor. See also Diagram (category theory) Cone (category theory) Diagonal morphism References Category theory
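The two adjunctions can be summarized in a brief LaTeX sketch, with C, J and Δ as generic names:

```latex
% Requires amsmath. The diagonal functor sits between the colimit and limit functors.
\[
  \Delta : \mathcal{C} \to \mathcal{C}^{\mathcal{J}}, \qquad
  \operatorname{colim} \;\dashv\; \Delta \;\dashv\; \lim ,
\]
\[
  \operatorname{Hom}_{\mathcal{C}^{\mathcal{J}}}\bigl(\Delta(a), F\bigr) \;\cong\;
  \operatorname{Hom}_{\mathcal{C}}\bigl(a, \lim F\bigr), \qquad
  \operatorname{Hom}_{\mathcal{C}}\bigl(\operatorname{colim} F, a\bigr) \;\cong\;
  \operatorname{Hom}_{\mathcal{C}^{\mathcal{J}}}\bigl(F, \Delta(a)\bigr).
\]
```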
https://en.wikipedia.org/wiki/Waterfall%20plot
Waterfall plots are often used to show how two-dimensional information, typically a spectrum, changes over time or some other variable such as rotational speed. A three-dimensional spectral waterfall plot is a plot in which multiple curves of data, typically spectra, are displayed simultaneously. Typically the curves are staggered both across the screen and vertically, with "nearer" curves masking the ones behind. The result is a series of "mountain" shapes that appear to be side by side. Waterfall plots are also often used to depict spectrograms or cumulative spectral decay (CSD). Uses The results of spectral density estimation, showing the spectrum of the signal at successive intervals of time. The delayed response from a loudspeaker or listening room produced by impulse response testing or MLSSA. Spectra at different engine speeds when testing engines. See also Loudspeaker acoustics Loudspeaker measurement References External links Typical engine vibration waterfall Waterfall FFT Matlab script Plots (graphics) Audio engineering Broadcast engineering Sound production technology Sound recording Acoustics
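A minimal Python sketch of a spectral waterfall of this kind, assuming NumPy and Matplotlib and using a synthetic drifting tone:

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection on older Matplotlib)

fs = 1000.0                                  # sample rate in Hz
t = np.arange(0, 0.256, 1 / fs)              # 256 samples per time slice
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")

# One spectrum per time slice: a tone that drifts upward from 100 Hz.
for k in range(20):
    f0 = 100.0 + 10.0 * k
    x = np.sin(2 * np.pi * f0 * t) + 0.1 * np.random.randn(t.size)
    spec = np.abs(np.fft.rfft(x * np.hanning(t.size)))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    # Stagger successive curves along the "time slice" axis to get the mountain effect.
    ax.plot(freqs, np.full_like(freqs, k), spec)

ax.set_xlabel("Frequency (Hz)")
ax.set_ylabel("Time slice")
ax.set_zlabel("Magnitude")
plt.show()
```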
https://en.wikipedia.org/wiki/UNIVAC%20Series%2090
The Univac Series 90 is an obsolete family of mainframe class computer systems from UNIVAC first introduced in 1973. The low end family members included the 90/25, 90/30 and 90/40 that ran the OS/3 operating system. The intermediate members of the family were the 90/60 and 90/70, while the 90/80, announced in 1976, was the high end system. The 90/60 through 90/80 systems all ran the Univac’s virtual memory operating system, VS/9. The Series 90 systems were the replacement for the UNIVAC 9000 series of low end, mainframe systems marketed by Sperry Univac during the 1960s. The 9000 series systems were byte-addressable machines with an instruction set that was compatible with the IBM System/360. The family included the 9200, 9300, 9400, and 9480 systems. The 9200 and 9300 ran the Minimum Operating system. This system was loaded from cards, but thereafter also supported magnetic tape or magnetic disk for programs and data. The 9400 and 9480 ran a real memory operating system called OS/4. As Sperry moved into the 1970s, they expanded the 9000 family with the introduction of the 9700 system in 1971. They were also developing a new real memory operating system for the 9700 called OS/7. In January 1972, Sperry officially took over the RCA customer base, offering the Spectra 70 and RCA Series computers as the UNIVAC Series 70. They redesigned the 9700, adding virtual memory, and renamed the processor the 90/70. They cancelled development of OS/7 in favor of VS/9, a renamed RCA VMOS. A number of the RCA customers continued with Sperry, and the 90/60 and 90/70 would provide an upgrade path for the customers with 70/45, 70/46, RCA 2 and 3 systems. In 1976, Sperry added the 90/80 at the top end of the Series 90 Family, based on an RCA design, providing an upgrade path for the 70/60, 70/61, RCA 6 and 7 systems. The RCA base was very profitable for Sperry and Sperry was able to put together a string of 40 quarters of profit. Sperry also offered their own 1100 family of s
https://en.wikipedia.org/wiki/Jacobson%20ring
In algebra, a Hilbert ring or a Jacobson ring is a ring such that every prime ideal is an intersection of primitive ideals. For commutative rings primitive ideals are the same as maximal ideals so in this case a Jacobson ring is one in which every prime ideal is an intersection of maximal ideals. Jacobson rings were introduced independently by , who named them after Nathan Jacobson because of their relation to Jacobson radicals, and by , who named them Hilbert rings after David Hilbert because of their relation to Hilbert's Nullstellensatz. Jacobson rings and the Nullstellensatz Hilbert's Nullstellensatz of algebraic geometry is a special case of the statement that the polynomial ring in finitely many variables over a field is a Hilbert ring. A general form of the Nullstellensatz states that if R is a Jacobson ring, then so is any finitely generated R-algebra S. Moreover, the pullback of any maximal ideal J of S is a maximal ideal I of R, and S/J is a finite extension of the field R/I. In particular a morphism of finite type of Jacobson rings induces a morphism of the maximal spectrums of the rings. This explains why for algebraic varieties over fields it is often sufficient to work with the maximal ideals rather than with all prime ideals, as was done before the introduction of schemes. For more general rings such as local rings, it is no longer true that morphisms of rings induce morphisms of the maximal spectra, and the use of prime ideals rather than maximal ideals gives a cleaner theory. Examples Any field is a Jacobson ring. Any principal ideal domain or Dedekind domain with Jacobson radical zero is a Jacobson ring. In principal ideal domains and Dedekind domains, the nonzero prime ideals are already maximal, so the only thing to check is if the zero ideal is an intersection of maximal ideals. Asking for the Jacobson radical to be zero guarantees this. In principal ideal domains and Dedekind domains, the Jacobson radical vanishes if and only if there a
https://en.wikipedia.org/wiki/Australian%20Mathematics%20Competition
The Australian Mathematics Competition is a mathematics competition run by the Australian Maths Trust for students from year 3 up to year 12 in Australia, and their equivalent grades in other countries. Since its inception in 1976 in the Australian Capital Territory, the participation numbers have increased to around 600,000, with around 100,000 being from outside Australia, making it the world's largest mathematics competition. History The forerunner of the competition, first held in 1976, was open to students within the Australian Capital Territory, and attracted 1,200 entries. In 1976 and 1977 the outstanding entrants were awarded the Burroughs medal. In 1978, the competition became a nationwide event, and became known as the Australian Mathematics Competition for the Wales awards with 60,000 students from Australia and New Zealand participating. In 1983 the medals were renamed the Westpac awards following a change to the name of the title sponsor Westpac. Other sponsors since the inception of the competition have been the Canberra Mathematical Association and the University of Canberra (previously known as the Canberra College of Advanced Education). The competition has since spread to countries such as New Zealand, Singapore, Fiji, Tonga, Taiwan, China and Malaysia, which submit thousands of entries each. A French translation of the paper has been available since the current competition was established in 1978, with Chinese translation being made available to Hong Kong (Traditional Chinese Characters) and Taiwan (Traditional Chinese Characters) students in 2000. Large print and braille versions are also available. In 2004, the competition was expanded to allow two more divisions, one for year five and six students, and another for year three and four students. In 2005, students from 38 different countries entered the competition. Format The competition paper consists of twenty-five multiple-choice questions and five integer questions, which are ordered
https://en.wikipedia.org/wiki/Home%20construction
Home construction or residential construction is the process of constructing a house, apartment building, or similar residential building generally referred to as a 'home' when giving consideration to the people who might now or someday reside there. Beginning with simple pre-historic shelters, home construction techniques have evolved to produce the vast multitude of living accommodations available today. Different levels of wealth and power have warranted various sizes, luxuries, and even defenses in a "home". Environmental considerations and cultural influences have created an immensely diverse collection of architectural styles, creating a wide array of possible structures for homes. The cost of housing and access to it is often controlled by the modern realty trade, which frequently involves a degree of market speculation. The level of economic activity in the home-construction sector is reported as housing starts, though this is denominated in distinct habitation units rather than distinct construction efforts. 'Housing' is also the chosen term in the related concepts of housing tenure, affordable housing, and housing unit (aka dwelling). Four of the primary trades involved in home construction are carpenters, masons, electricians and plumbers, but there are many others as well. Global access to homes is not consistent around the world, with many economies not providing adequate support for the right to housing. Sustainable Development Goal 11 includes a goal to create "Adequate, safe, and affordable housing and basic services and upgrade slums". Based on current and expected global population growth, UN-Habitat projects that 96,000 new dwelling units will need to be built each day to meet global demand. An important part of meeting this global demand is upgrading and retrofitting existing buildings to provide adequate housing. History While homes may have originated in pre-history, there are many notable stages th
https://en.wikipedia.org/wiki/The%20Duel%3A%20Test%20Drive%20II
The Duel: Test Drive II is a 1989 racing video game developed by Distinctive Software and published by Accolade for Amiga, Amstrad CPC, Apple IIGS, Commodore 64, MS-DOS, MSX, ZX Spectrum, Atari ST, Sega Genesis and SNES. Gameplay Like the original Test Drive, the focus of The Duel is driving exotic cars through dangerous highways, evading traffic, and trying to escape police pursuits. While the first game in the series had the player simply racing for time in a single scenario, Test Drive II improves upon its predecessor by introducing varied scenery, and giving the player the option of racing against the clock or competing against a computer-controlled opponent. The player initially is given the opportunity to choose a car to drive and a level of difficulty, which in turn determines whether the car will use an automatic or manual transmission—the number of difficulty options varies between gaming platforms. Levels begin with the player's car (and the computer opponent, if selected) idling on a roadway. Primarily these are two to four lane public highways with many turns; each level is different, and they include obstacles such as bridges, cliffs, and tunnels in addition to the other cars already on the road. Each level also has one or more police cars along the course. The goal of each level is to reach the gas station at the end of the course in the least amount of time. Stopping at the gas station is not mandatory, and one could drive past it if inattentive. The consequence of not stopping is running out of gas, and thus losing a car (life). The player begins the game with five lives, one of which is lost each time the player crashes into something. The player is awarded a bonus life for completing a level without crashing or running out of gas. In addition to losing a life, crashing adds thirty seconds to the player's time. Cars could crash into other traffic or off-road obstacles such as trees or by falling off the cliff on one of the mountain levels. They c
https://en.wikipedia.org/wiki/Mobile%20media
Mobile media has been defined as "a personal, interactive, internet-enabled and user-controlled portable platform that provides for the exchange of and sharing of personal and non-personal information among users who are inter-connected." The notion of making media mobile can be traced back to the “first time someone thought to write on a tablet that could be lifted and hauled – rather than on a cave wall, a cliff face, a monument that usually was stuck in place, more or less forever”. In his book Cellphone, Paul Levinson refers to mobile media as “the media-in-motion business.” Since their inception, mobile phones as a means of communication have been a focus of great fascination as well as debate. In the book, Studying Mobile Media: Cultural Technologies, Mobile Communication, and the iPhone, Gerard Goggin notes how the ability of portable voice communication to provide ceaseless contact complicates the relationship between the public and private spheres of society. Lee Humphreys explains in her book that now, "more people in the world today have a mobile phone than have an Internet connection". The development of the portable telephone can be traced back to its use by the military in the late nineteenth century. By the 1930s, police cars in several major U.S. cities were equipped with one-way mobile radios. In 1931, the Galvin Manufacturing Corporation designed a mass-market two-way radio. This radio was named Motorola, which also became the new name for the company in 1947. In 1943, Motorola developed the first portable radiotelephone, the Walkie-Talkie, for use by the American forces during World War II. After the war, two-way radio technology was developed for civilian use. In 1946, AT&T and Southwestern Bell made available the first commercial mobile radiotelephone. This service allowed calls to be made from a fixed phone to a mobile one. "Many scholars have noted and praised the mobility of reading brought about the emergence of the book and the adv
https://en.wikipedia.org/wiki/Root%20cellar
A root cellar (American English), fruit cellar (Mid-Western American English) or earth cellar (British English) is a structure, usually underground or partially underground, used for storage of vegetables, fruits, nuts, or other foods. Its name reflects the traditional focus on root crops stored in an underground cellar, which is still often true; but the scope is wider, as a wide variety of foods can be stored for weeks to months, depending on the crop and conditions, and the structure may not always be underground. Root cellaring has been vitally important in various eras and places for winter food supply. Although present-day food distribution systems and refrigeration have rendered root cellars unnecessary for many people, they remain important for those who value self-sufficiency, whether by economic necessity or by choice and for personal satisfaction. Thus, they are popular among diverse audiences, including gardeners, organic farmers, DIY fans, homesteaders, anyone seeking some emergency preparedness (most extensively, preppers), subsistence farmers, and enthusiasts of local food, slow food, heirloom plants, and traditional culture. Function Root cellars are for keeping food supplies at controlled temperatures and steady humidity. Many crops keep longest just above freezing and at high humidity (90–95%), but the optimal temperature and humidity ranges vary by crop, and various crops keep well at temperatures further above near-freezing but below room temperature. A few crops keep better in low humidity. Root cellars keep food from freezing during the winter and keep food cool during the summer to prevent the spoiling and rotting of the roots, for example, potatoes, onions, garlic, carrots, parsnips, etc. Typically, a variety of vegetables is placed in the root cellar in the autumn after harvesting. A secondary use for the root cellar is as a place to store wine, beer, or other homemade alcoholic beverages. Vegetables stored in the
https://en.wikipedia.org/wiki/Abstract%20algebraic%20logic
In mathematical logic, abstract algebraic logic is the study of the algebraization of deductive systems arising as an abstraction of the well-known Lindenbaum–Tarski algebra, and how the resulting algebras are related to logical systems. History The archetypal association of this kind, one fundamental to the historical origins of algebraic logic and lying at the heart of all subsequently developed subtheories, is the association between the class of Boolean algebras and classical propositional calculus. This association was discovered by George Boole in the 1850s, and then further developed and refined by others, especially C. S. Peirce and Ernst Schröder, from the 1870s to the 1890s. This work culminated in Lindenbaum–Tarski algebras, devised by Alfred Tarski and his student Adolf Lindenbaum in the 1930s. Later, Tarski and his American students (whose ranks include Don Pigozzi) went on to discover cylindric algebra, whose representable instances algebraize all of classical first-order logic, and revived relation algebra, whose models include all well-known axiomatic set theories. Classical algebraic logic, which comprises all work in algebraic logic until about 1960, studied the properties of specific classes of algebras used to "algebraize" specific logical systems of particular interest to specific logical investigations. Generally, the algebra associated with a logical system was found to be a type of lattice, possibly enriched with one or more unary operations other than lattice complementation. Abstract algebraic logic is a modern subarea of algebraic logic that emerged in Poland during the 1950s and 60s with the work of Helena Rasiowa, Roman Sikorski, Jerzy Łoś, and Roman Suszko (to name but a few). It reached maturity in the 1980s with the seminal publications of the Polish logician Janusz Czelakowski, the Dutch logician Wim Blok and the American logician Don Pigozzi. The focus of abstract algebraic logic shifted from the study of specific classes of a
https://en.wikipedia.org/wiki/Genevestigator
Genevestigator is an application consisting of a gene expression database and tools to analyse the data. It exists in two versions, biomedical and plant, depending on the species covered by the underlying microarray, RNA-seq and single-cell RNA-sequencing data. It was started in January 2004 by scientists from ETH Zurich and is currently developed and commercialized by Nebion AG. Researchers and scientists from academia and industry use it to identify, characterize and validate novel drug targets and biomarkers, identify appropriate research models and in general to understand how gene expression changes with different treatments. Gene expression database The Genevestigator database comprises transcriptomic data from numerous public repositories including GEO and ArrayExpress, as well as renowned cancer research projects such as TCGA. Depending on the license agreement, it may also contain data from private gene expression studies. All data are manually curated, quality-controlled and enriched for sample and experiment descriptions derived from corresponding scientific publications. The number of species from which the samples are derived is constantly increasing. Currently, the biomedical version contains data from human, mouse, and rat used in biomedical research. Gene expression studies are from various research areas including oncology, immunology, neurology, dermatology and cardiovascular diseases. Samples comprise tissue biopsies and cell lines. The plant version (no longer available) contained both widely used model species, such as arabidopsis and medicago, and major crop species such as maize, rice, wheat and soybean. After the acquisition of Nebion AG by Immunai Inc. in July 2021, plant data began to be phased out as the biotech company prioritized its focus on biopharma data. As of 2023, the plant data is being maintained on a separate server for remaining users with a license to the plant version of Genevestigator. Gene expression tools More than 60,000 s
https://en.wikipedia.org/wiki/Connect%3ADirect
Connect:Direct—originally named Network Data Mover (NDM)— is a computer software product that transfers files between mainframe computers and/or midrange computers. It was developed for mainframes, with other platforms being added as the product grew. NDM was renamed to Connect:Direct in 1993, following the acquisition of Systems Center, Inc. by Sterling Software. In 1996, Sterling Software executed a public spinoff of a new entity called Sterling Commerce, which consisted of the Communications Software Group (the business unit responsible for marketing the Connect:Direct product and other file transfer products sourced from the pre-1993 Sterling Software (e.g. Connect:Mailbox)) and the Sterling EDI Network business. In 2000, SBC Communications acquired Sterling Commerce and held it until 2010. AT&T merged with SBC effective November 2005. In 2010, IBM completed the purchase of Sterling Commerce from AT&T. Technology Traditionally, Sterling Connect:Direct used IBM's Systems Network Architecture (SNA) via dedicated private lines between the parties involved to transfer the data. In the early 1990s TCP/IP support was added. Connect:Direct's primary advantage over FTP was that it made file transfers routine and reliable. IBM Sterling Connect:Direct is used within the financial services industry, government agencies and other large organizations that have multiple computing platforms: mainframes, midrange, Linux or Windows systems. In terms of speed, Connect:Direct typically performs slightly faster than FTP, reaching the maximum that the interconnecting link can support. If CPU cycles are available, Connect:Direct has several compression modes that can greatly enhance the throughput of the transfer, but care must be exercised in multi-processing environments as Connect:Direct can consume large amounts of processing cycles, impacting other workloads. Connect:Direct originally did not support encrypted and secure data transfers, however an add-on, Connect:Direc
https://en.wikipedia.org/wiki/Meta-learning%20%28computer%20science%29
Meta learning is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments. As of 2017, the term had not found a standard interpretation; however, the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn. Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its inductive bias. This means that it will only learn well if the bias matches the learning problem. A learning algorithm may perform very well in one domain, but not in the next. This poses strong restrictions on the use of machine learning or data mining techniques, since the relationship between the learning problem (often some kind of database) and the effectiveness of different learning algorithms is not yet understood. By using different kinds of metadata, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to learn, select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta learning approaches bear a strong resemblance to the critique of metaheuristics, a possibly related problem. A good analogy to meta-learning, and the inspiration for Jürgen Schmidhuber's early work (1987) and Yoshua Bengio et al.'s work (1991), considers that genetic evolution learns the learning procedure encoded in genes and executed in each individual's brain. In an open-ended hierarchical meta learning system using genetic programming, better evolutionary methods can be learned by meta evolution, which itself can be improved by meta meta evolution, etc. Definition A proposed definition for a meta learning system combines three
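A toy Python sketch of the metadata-driven selection idea, assuming scikit-learn; the meta-features, candidate learners and synthetic tasks are illustrative, not a standard meta-learning benchmark:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

candidates = {"logreg": LogisticRegression(max_iter=1000),
              "knn": KNeighborsClassifier(),
              "forest": RandomForestClassifier(n_estimators=50)}

def meta_features(X, y):
    """Simple metadata describing a learning problem."""
    return [X.shape[0], X.shape[1], len(np.unique(y)), float(np.std(X))]

# Build a meta-dataset: for each synthetic task, record its meta-features and
# which candidate algorithm scored best under cross-validation.
meta_X, meta_y = [], []
for seed in range(20):
    X, y = make_classification(n_samples=200, n_features=10,
                               n_informative=5, random_state=seed)
    scores = {name: cross_val_score(m, X, y, cv=3).mean()
              for name, m in candidates.items()}
    meta_X.append(meta_features(X, y))
    meta_y.append(max(scores, key=scores.get))

# The meta-learner maps problem metadata to a recommended algorithm.
meta_learner = RandomForestClassifier(n_estimators=100).fit(meta_X, meta_y)
X_new, y_new = make_classification(n_samples=300, n_features=10, random_state=99)
print(meta_learner.predict([meta_features(X_new, y_new)]))
```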
https://en.wikipedia.org/wiki/Hydride%20vapour-phase%20epitaxy
Hydride vapour-phase epitaxy (HVPE) is an epitaxial growth technique often employed to produce semiconductors such as GaN, GaAs, InP and their related compounds, in which hydrogen chloride is reacted at elevated temperature with the group-III metals to produce gaseous metal chlorides, which then react with ammonia to produce the group-III nitrides. Carrier gasses commonly used include ammonia, hydrogen and various chlorides. HVPE technology can significantly reduce the cost of production compared to the most common method of vapor deposition of organometallic compounds (MOCVD). Cost reduction is achieved by significantly reducing the consumption of NH3, cheaper source materials than in MOCVD, reducing the capital equipment costs, due to the high growth rate. Developed in the 1960s, it was the first epitaxial method used for the fabrication of single GaN crystals. Hydride vapour-phase epitaxy (HVPE) is the only III–V and III–N semiconductor crystal growth process working close to equilibrium. This means that the condensation reactions exhibit fast kinetics: one observes immediate reactivity to an increase of the vapour-phase supersaturation towards condensation. This property is due to the use of chloride vapour precursors GaCl and InCl, of which dechlorination frequency is high enough so that there is no kinetic delay. A wide range of growth rates, from 1 to 100 micrometers per hour, can then be set as a function of the vapour-phase supersaturation. Another HVPE feature is that growth is governed by surface kinetics: adsorption of gaseous precursors, decomposition of ad-species, desorption of decomposition products, surface diffusion towards kink sites. This property is of benefit when it comes to selective growth on patterned substrates for the synthesis of objects and structures exhibiting a 3D morphology. The morphology is only dependent on the intrinsic growth anisotropy of crystals. By setting experimental growth parameters of temperature and composition of
https://en.wikipedia.org/wiki/Comparison%20of%20VoIP%20software
This is a comparison of voice over IP (VoIP) software used to conduct telephone-like voice conversations across Internet Protocol (IP) based networks. For residential markets, voice over IP phone service is often cheaper than traditional public switched telephone network (PSTN) service and can remove geographic restrictions to telephone numbers, e.g., have a PSTN phone number in a New York area code ring in Tokyo. For businesses, VoIP obviates separate voice and data pipelines, channelling both types of traffic through the IP network while giving the telephony user a range of advanced abilities. Softphones are client devices for making and receiving voice and video calls over the IP network with the standard functions of most original telephones and usually allow integration with VoIP phones and USB phones instead of using a computer's microphone and speakers (or headset). Most softphone clients run on the open Session Initiation Protocol (SIP) supporting various codecs. Skype runs on a closed proprietary networking protocol but additional business telephone system (PBX) software can allow a SIP based telephone system to connect to the Skype network. Online chat programs now also incorporate voice and video communications. Other VoIP software applications include conferencing servers, intercom systems, virtual foreign exchange services (FXOs) and adapted telephony software which concurrently support VoIP and public switched telephone network (PSTN) like Interactive Voice Response (IVR) systems, dial in dictation, on hold and call recording servers. Some entries below are Web-based VoIP; most are standalone Desktop applications. Desktop applications Discontinued softphone service Mobile phones For mobile VoIP clients: Frameworks and libraries Server software Secure VoIP software VoIP software with client-to-client encryption The following table is an overview of those VoIP clients which (can) provide end-to-end encryption. VoIP software with client-to-
https://en.wikipedia.org/wiki/Leyland%20number
In number theory, a Leyland number is a number of the form x^y + y^x where x and y are integers greater than 1. They are named after the mathematician Paul Leyland. The first few Leyland numbers are 8, 17, 32, 54, 57, 100, 145, 177, 320, 368, 512, 593, 945, 1124. The requirement that x and y both be greater than 1 is important, since without it every positive integer would be a Leyland number of the form x^1 + 1^x. Also, because of the commutative property of addition, the condition x ≥ y is usually added to avoid double-covering the set of Leyland numbers (so we have 1 < y ≤ x). Leyland primes A Leyland prime is a Leyland number that is also a prime. The first such primes are: 17, 593, 32993, 2097593, 8589935681, 59604644783353249, 523347633027360537213687137, 43143988327398957279342419750374600193, ... corresponding to 3^2+2^3, 9^2+2^9, 15^2+2^15, 21^2+2^21, 33^2+2^33, 24^5+5^24, 56^3+3^56, 32^15+15^32. One can also fix the value of y and consider the sequence of x values that gives Leyland primes, for example x^2 + 2^x is prime for x = 3, 9, 15, 21, 33, 2007, 2127, 3759, ... By November 2012, the largest Leyland number that had been proven to be prime was 5122^6753 + 6753^5122 with 25050 digits. From January 2011 to April 2011, it was the largest prime whose primality was proved by elliptic curve primality proving. In December 2012, this was improved by proving the primality of the two numbers 3110^63 + 63^3110 (5596 digits) and 8656^2929 + 2929^8656 (30008 digits), the latter of which surpassed the previous record. There are many larger known probable primes such as 314738^9 + 9^314738, but it is hard to prove primality of large Leyland numbers. Paul Leyland writes on his website: "More recently still, it was realized that numbers of this form are ideal test cases for general purpose primality proving programs. They have a simple algebraic description but no obvious cyclotomic properties which special purpose algorithms can exploit." There is a project called XYYXF to factor composite
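A short Python sketch that reproduces the opening terms of the sequence and the first few Leyland primes (sympy's isprime is assumed to be available):

```python
from sympy import isprime

def leyland_numbers(limit):
    """All x^y + y^x with 1 < y <= x and value <= limit, in increasing order."""
    values = set()
    x = 2
    while x ** 2 + 2 ** x <= limit:           # smallest value for this x uses y = 2
        for y in range(2, x + 1):
            v = x ** y + y ** x
            if v <= limit:
                values.add(v)
        x += 1
    return sorted(values)

print(leyland_numbers(1200))                              # 8, 17, 32, 54, 57, 100, 145, ...
print([n for n in leyland_numbers(10**7) if isprime(n)])  # 17, 593, 32993, 2097593
```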
https://en.wikipedia.org/wiki/Telenet
Telenet was an American commercial packet-switched network which went into service in 1975. It was the first FCC-licensed public data network in the United States. Various commercial and government interests paid monthly fees for dedicated lines connecting their computers and local networks to this backbone network. Free public dialup access to Telenet, for those who wished to access these systems, was provided in hundreds of cities throughout the United States. The original founding company, Telenet Inc., was established by Bolt Beranek and Newman (BBN) and recruited Larry Roberts (former head of the ARPANet) as President of the company, and Barry Wessler. GTE acquired Telenet in 1979. It was later acquired by Sprint and called "Sprintnet". Sprint migrated customers from Telenet to the modern-day Sprintlink IP network, one of many networks composing today's Internet. Telenet had its first offices in downtown Washington, D.C., then moved to McLean, Virginia. It was acquired by GTE while in McLean, and then moved to offices in Reston, Virginia. History After establishing that commercial operation of "value added carriers" was legal in the U.S., Bolt Beranek and Newman (BBN), who were the private contractors for the ARPANET, set out to create a private sector version. In January 1975, Telenet Communications Corporation announced that they had acquired the necessary venture capital after a two-year quest, and on August 16 of the same year they began operating the first public data network. Coverage Originally, the public network had switching nodes in seven US cities: Washington, D.C. (network operations center as well as switching) Boston, Massachusetts New York, New York Chicago, Illinois Dallas, Texas San Francisco, California Los Angeles, California The switching nodes were fed by Telenet Access Controller (TAC) terminal concentrators both colocated and remote from the switches. By 1980, there were over 1000 switches in the public network. At that time, t
https://en.wikipedia.org/wiki/Caenogenesis
Caenogenesis (also variously spelled cenogenesis, kainogenesis, kenogenesis) is the introduction during embryonic development of characters or structure not present in the earlier evolutionary history of the strain or species, as opposed to palingenesis. Notable examples include the addition of the placenta in mammals. Caenogenesis constitutes a violation to Ernst Haeckel's biogenetic law and was explained by Haeckel as adaptation to the peculiar conditions of the organism's individual development. Other authorities, such as Wilhelm His, Sr., on the contrary saw embryonic differences as precursors to adult differences. See also Ontogeny Recapitulation theory References Bibliography Gould, S.J. 1977. Ontogeny and Phylogeny. Cambridge: Harvard University Press. Developmental biology
https://en.wikipedia.org/wiki/Zapple%20Monitor
The Zapple Monitor was a firmware-based product developed by Roger Amidon at Technical Design Laboratories (also known as TDL). TDL was based in Princeton, New Jersey, USA in the 1970s and early 1980s. The Zapple monitor was a primitive operating system which could be expanded and used as a Basic Input/Output System (BIOS) for 8080 and Z80 based computers. Much of the functionality of Zapple would find its way into applications like 'Debug' in MS-DOS. Zapple commands allowed a user to examine and modify memory and I/O, execute software (Goto or Call), and perform a variety of other operations. The program required little in the way of then-expensive Read Only Memory or RAM. An experienced user could use Zapple to test and debug code, verify hardware function, test memory, and so on. A typical command line would start with a letter such as 'X' (examine memory) followed by a hexadecimal word (the memory address 01AB) and [enter] or [space]. After this sequence the content of the memory location would be shown [FF] and the user could enter a hexadecimal byte [00] to replace the contents of the address, or hit [space] or [enter] to move to the next address [01AC]. An experienced user could enter a small program in this manner, entering machine language from memory. Because of the simple structure of the program, consisting of a vector table (one for each letter) and a small number of subroutines, and because the source code was readily available, adding or modifying Zapple was straightforward. The dominant operating system of the era, CP/M, required the computer manufacturer or hobbyist to develop a hardware-specific BIOS. Many users tested their BIOS subroutines using Zapple to verify that, for example, a floppy disk track-seek or read-sector command was functioning correctly, by extending Zapple to accommodate these operations in the hardware environment. The general structure of Zapple lives on in the code of many older programmers working on embedded systems a
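A rough, hypothetical Python sketch of the examine/deposit interaction described above; it is a simplification for illustration, not TDL source code:

```python
memory = bytearray(0x10000)      # 64 KiB of simulated RAM, all zeroed

def examine(start):
    """X-style command: step through memory, optionally depositing new bytes."""
    addr = start
    while True:
        entry = input(f"{addr:04X} {memory[addr]:02X} ").strip()
        if entry == "":              # space/enter: keep the byte, advance to the next address
            addr += 1
        elif entry.lower() == "q":   # exit key added for this sketch, not in the original command set
            break
        else:                        # a hexadecimal byte replaces the current contents
            memory[addr] = int(entry, 16) & 0xFF
            addr += 1

# examine(0x01AB)   # e.g. type "3E" then "0C" to hand-enter the 8080 instruction MVI A,0Ch
```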
https://en.wikipedia.org/wiki/Image%20gradient
An image gradient is a directional change in the intensity or color in an image. The gradient of the image is one of the fundamental building blocks in image processing. For example, the Canny edge detector uses image gradients for edge detection. In graphics software for digital image editing, the term gradient or color gradient is also used for a gradual blend of color which can be considered as an even gradation from low to high values, such as a smooth blend from white to black. Another name for this is color progression. Mathematically, the gradient of a two-variable function (here the image intensity function) at each image point is a 2D vector with the components given by the derivatives in the horizontal and vertical directions. At each image point, the gradient vector points in the direction of largest possible intensity increase, and the length of the gradient vector corresponds to the rate of change in that direction. Since the intensity function of a digital image is only known at discrete points, derivatives of this function cannot be defined unless we assume that there is an underlying continuous intensity function which has been sampled at the image points. With some additional assumptions, the derivative of the continuous intensity function can be computed as a function on the sampled intensity function, i.e., the digital image. Approximations of these derivative functions can be defined at varying degrees of accuracy. The most common way to approximate the image gradient is to convolve an image with a kernel, such as the Sobel operator or Prewitt operator. Image gradients are often utilized in maps and other visual representations of data in order to convey additional information. GIS tools use color progressions to indicate elevation and population density, among others. Computer vision In computer vision, image gradients can be used to extract information from images. Gradient images are created from the original image (generally
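A small Python sketch of kernel-based gradient estimation with Sobel kernels, assuming NumPy and SciPy and a synthetic test image:

```python
import numpy as np
from scipy.ndimage import correlate  # correlate applies the kernel without flipping it

# Synthetic image: dark on the left, bright on the right, so the gradient
# should point in the +x direction along the vertical edge.
img = np.zeros((64, 64), dtype=float)
img[:, 32:] = 1.0

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

gx = correlate(img, sobel_x)         # horizontal derivative estimate
gy = correlate(img, sobel_y)         # vertical derivative estimate

magnitude = np.hypot(gx, gy)         # gradient length at each pixel
direction = np.arctan2(gy, gx)       # gradient orientation in radians

print(magnitude.max())               # strongest response on the edge columns
print(direction[32, 31])             # ~0 rad: the gradient points toward +x
```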
https://en.wikipedia.org/wiki/Community%20Memory
Community Memory (CM) was the first public computerized bulletin board system. Established in 1973 in Berkeley, California, it used an SDS 940 timesharing system in San Francisco connected via a 110 baud link to a teleprinter at a record store in Berkeley to let users enter and retrieve messages. Individuals could place messages in the computer and then look through the memory for a specific notice. While initially conceived as an information and resource sharing network linking a variety of counter-cultural economic, educational, and social organizations with each other and the public, Community Memory was soon generalized to be an information flea market, by providing unmediated, two-way access to message databases through public computer terminals. Once the system became available, the users demonstrated that it was a general communications medium that could be used for art, literature, journalism, commerce, and social chatter. People Community Memory was created by Lee Felsenstein, Efrem Lipkin, Ken Colstad, Jude Milhon, and Mark Szpakowski, acting as The Community Memory Project within the Resource One computer center at Project One in San Francisco. This group of computer savvy friends and partners wanted to create a simple system that could function as a source of community information. Felsenstein took care of hardware, Lipkin software, and Szpakowski user interface and information husbandry. Community Memory in its first phase (1973–1975) was an experiment to see how people would react to using a computer to exchange information. At that time few people had any direct contact with computers. CM was conceived as a tool to help strengthen the Berkeley community. Their brochure states that "strong, free, non-hierarchical channels of communication--whether by computer and modem, pen and ink, telephone, or face-to-face--are the front line of reclaiming and revitalizing our communities." The creators and founders of Community Memory shared the values of north
https://en.wikipedia.org/wiki/Neural%20Audio%20Corporation
Neural Audio Corporation was an audio research company based in Kirkland, Washington. The company specialized in high-end audio research. It helped XM Satellite Radio launch its service using the Neural Codec Pre-Conditioner, which was designed to provide higher quality audio at lower bitrates. History The company was co-founded in 2000 by two audio engineers, Paul Hubert and Robert Reams. In 2009 the company was acquired by DTS Inc. for $15 million in cash. Products Neural was mostly known for its work in the field of audio processing and its "Neural Surround" sound format. ESPN, FOX, NBC, CBS, Sony, Universal, Warner Bros, THX, Yamaha, Pioneer Electronics, Ford, Honda, Nissan, Vivendi and SiriusXM were partners and customers in connection with sound for movies, broadcasting applications, music reproduction and video games. "Neural Surround" is a technology similar to MPEG Surround, where a 5.1 stream is downmixed into stereo and then recovered using cues encoded into the downmixed stereo. NPR participated in a trial of the "Neural Surround" technology in 2004, using the Harris NeuStar 5225. XM HD Surround was based on the same technology. Neural provided its "Codec Pre-Conditioner" in at least two types of devices, a "NeuStar UltraLink digital radio audio conditioner" built as a physical device and a "Neustar SW4.0" built as a piece of software on Windows XP. The software manual indicates that the pre-conditioner works by analyzing the noise in each frequency bin and masking it so that it does not exceed predefined limits and overwhelm a codec. Harris Broadcast acted as a redistributor of Neural technology. References External links Celebrity Voice Changer Mixing & Mastering Services Audio engineering
https://en.wikipedia.org/wiki/Portapak
A Portapak is a battery-powered, self-contained video tape analog recording system. Introduced to the market in 1967, it could be carried and operated by one person. Earlier television cameras were large and heavy, required a specialized vehicle for transportation, and were mounted on a pedestal. The Portapak made it possible to shoot and record video easily outside of the studio without requiring a crew. Although it recorded at a lower quality than television studio cameras, the Portapak was adopted by both professionals and amateurs as a new method of video recording. Before Portapak cameras, remote television news footage was routinely photographed on 16mm film and telecined for broadcast. The first Portapak system, the Sony DV-2400 Video Rover, was a two-piece set consisting of a black-and-white composite video camera and a separate record-only helical scan ½″ video tape recorder (VTR) unit. It required a Sony CV series VTR (such as the CV-2000) to play back the video. Following Sony’s introduction of the Video Rover, numerous other manufacturers sold their own versions of Portapak technology. Although it was light enough for a single person to carry and use, it was usually operated by a crew of two: one carrying and controlling the camera, and one carrying and operating the VTR. The DV-2400 was followed by the AV-3400/AVC-3400, which used the EIAJ-1 format and had 30-minute capacity as well as playback capability. Later Portapaks by Sony, JVC, and others used such formats as U-Matic videocassettes (with reduced-size 20-minute "U-Matic S" cassettes) and Betacam SP (for which a Portapak, unlike a camera-mounted deck, allowed the use of the larger "L" cassettes, for up to 90-minute recording time). The introduction of the Portapak had a great influence on the development of video art, guerrilla television, and activism. Video collectives such as TVTV and the Videofreex utilized Portapak technology to document countercultural movements apart from the
https://en.wikipedia.org/wiki/Transient%20response
In electrical engineering and mechanical engineering, a transient response is the response of a system to a change from an equilibrium or a steady state. The transient response is not necessarily tied to abrupt events but to any event that affects the equilibrium of the system. The impulse response and step response are transient responses to a specific input (an impulse and a step, respectively). In electrical engineering specifically, the transient response is the circuit’s temporary response that will die out with time. It is followed by the steady-state response, which is the behavior of the circuit a long time after an external excitation is applied. Damping The response can be classified as one of three types of damping that describe the output in relation to the steady-state response. Underdamped An underdamped response is one that oscillates within a decaying envelope. The more underdamped the system, the more it oscillates and the longer it takes to reach steady state. Here the damping ratio is always less than one. Critically damped A critically damped response is the response that reaches the steady-state value the fastest without being underdamped. It is related to critical points in the sense that it straddles the boundary of underdamped and overdamped responses. Here, the damping ratio is always equal to one. There should be no oscillation about the steady-state value in the ideal case. Overdamped An overdamped response is the response that does not oscillate about the steady-state value but takes longer to reach steady state than the critically damped case. Here the damping ratio is greater than one. Properties Transient response can be quantified with the following properties. Rise time Rise time refers to the time required for a signal to change from a specified low value to a specified high value. Typically, these values are 10% and 90% of the step height. Overshoot Overshoot occurs when a signal or function exceeds its target. It is often associated with ri
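For the standard second-order model, these damping regimes can be made concrete with a short sketch. The Python snippet below (an illustration only, with arbitrary natural frequency and damping ratios) uses scipy.signal to compute the step response of wn^2 / (s^2 + 2*zeta*wn*s + wn^2) for an underdamped, a critically damped, and an overdamped case, and reports the resulting overshoot.

```python
# Minimal illustration of under-, critically, and over-damped step responses
# for a standard second-order system wn^2 / (s^2 + 2*zeta*wn*s + wn^2).
# The natural frequency and damping ratios below are arbitrary example values.
import numpy as np
from scipy import signal

wn = 1.0  # natural frequency in rad/s (arbitrary)

for zeta, label in [(0.2, "underdamped"), (1.0, "critically damped"), (2.0, "overdamped")]:
    system = signal.TransferFunction([wn**2], [1.0, 2.0 * zeta * wn, wn**2])
    t, y = signal.step(system, T=np.linspace(0, 20, 2000))
    overshoot = max(y.max() - 1.0, 0.0) * 100.0  # percent above the final value of 1
    print(f"{label:>18}: peak = {y.max():.3f}, overshoot = {overshoot:.1f}%")
```

For the underdamped case the printed overshoot should be close to the closed-form value 100·exp(−πζ/√(1−ζ²)) percent, which for ζ = 0.2 is roughly 53%; the critically damped and overdamped cases show essentially zero overshoot.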
https://en.wikipedia.org/wiki/List%20of%20longest-living%20organisms
This is a list of the longest-living biological organisms: the individual(s) (or in some instances, clones) of a species with the longest natural maximum life spans. For a given species, such a designation may include: The oldest known individual(s) that are currently alive, with verified ages. Verified individual record holders, such as the longest-lived human, Jeanne Calment, or the longest-lived domestic cat, Creme Puff. The definition of "longest-living" used in this article considers only the observed or estimated length of an individual organism's natural lifespan – that is, the duration of time between its birth or conception, or the earliest emergence of its identity as an individual organism, and its death – and does not consider other conceivable interpretations of "longest-living", such as the length of time between the earliest appearance of a species in the fossil record and the present (the historical "age" of the species as a whole), the time between a species' first speciation and its extinction (the phylogenetic "lifespan" of the species), or the range of possible lifespans of a species' individuals. This list includes long-lived organisms that are currently still alive as well as those that are dead. Determining the length of an organism's natural lifespan is complicated by many problems of definition and interpretation, as well as by practical difficulties in reliably measuring age, particularly for extremely old organisms and for those that reproduce by asexual cloning. In many cases the ages listed below are estimates based on observed present-day growth rates, which may differ significantly from the growth rates experienced thousands of years ago. Identifying the longest-living organisms also depends on defining what constitutes an "individual" organism, which can be problematic, since many asexual organisms and clonal colonies defy one or both of the traditional colloquial definitions of individuality (having a distinct genotype and havin
https://en.wikipedia.org/wiki/Nominal%20analogue%20blanking
Nominal analogue blanking is the outermost part of the overscan of a standard definition digital television image. It consists of a gap of black (or nearly black) pixels at the left and right sides, which correspond to the end and start of the horizontal blanking interval: the front porch at the right side (the end of a line, before the sync pulse), and the back porch at the left side (the start of a line, after the sync pulse and before drawing the next line). Digital television ordinarily contains 720 pixels per line, but only 702 (PAL) to 704 (NTSC) of them contain picture content. The location is variable, since analogue equipment may shift the picture sideways by an unexpected amount or in an unexpected direction. The exact width is determined by taking the defined duration of the active line in PAL or NTSC and multiplying it by the 13.5 MHz pixel clock of digital SDTV. The PAL active line is exactly 52 μs, so it equates to exactly 702 pixels. Notably, screen shapes and aspect ratios were defined in an era of purely analogue broadcasting for TV. This means that any picture with nominal analogue blanking, whether it be 702, around 704, or less, will be, by definition, a 4:3 picture. Therefore, when cross-converting into a square-pixel environment (like MPEG-4 and its variants), this width must always scale to 768 (PAL) or 640 (NTSC). As a result, a full picture of 720x576 or 720x480 is slightly wider than 4:3. In fact, a purely digitally sourced SDTV image, with no analogue blanking, will be close to 788x576 or 655x480 once stretched to square pixels. Standard definition widescreen pictures were also defined in an analogue environment and must also be treated as such. This means that a purely digitally sourced widescreen SDTV image, with no analogue blanking, will be close to 1050x576 or 873x480. For details, see the technical specifications of overscan amounts. References ITU-R BT.601: Studio encoding parameters of digital television for stand
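The figures quoted above can be checked with a few lines of arithmetic. The sketch below multiplies the 52 μs PAL active line by the 13.5 MHz clock to recover 702 samples, and then uses the 702/704 active widths and the 768/640 square-pixel targets from the text to show why a full 720-sample line stretches to roughly 788x576 or 655x480.

```python
# Check of the figures quoted above: PAL active width from the 52 us line and
# the 13.5 MHz clock, then the square-pixel width of a full 720-sample line.
PIXEL_CLOCK_HZ = 13.5e6

pal_active_samples = 52e-6 * PIXEL_CLOCK_HZ  # 702.0
print(f"PAL active samples: {pal_active_samples:.0f}")

# A 4:3 picture in square pixels is 768x576 (PAL) or 640x480 (NTSC), so the
# 702/704 active samples map to 768/640 square pixels and the full 720-sample
# line comes out slightly wider than 4:3.
for name, active, square_width, lines in [("PAL", 702, 768, 576), ("NTSC", 704, 640, 480)]:
    full_width = 720 * square_width / active
    print(f"{name}: full 720-sample line ~ {full_width:.0f}x{lines} in square pixels")
```

Running this reproduces 702 active samples for PAL and approximately 788x576 and 655x480 for the stretched full-width frames, matching the figures in the text.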
https://en.wikipedia.org/wiki/Secretory%20protein
A secretory protein is any protein, whether endocrine or exocrine, that is secreted by a cell. Secretory proteins include many hormones, enzymes, toxins, and antimicrobial peptides. Secretory proteins are synthesized in the endoplasmic reticulum. Production The production of a secretory protein starts like that of any other protein. The mRNA is produced and transported to the cytosol, where it interacts with a free cytosolic ribosome. The part that is produced first, the N-terminus, contains a signal sequence consisting of 6 to 12 amino acids with hydrophobic side chains. This sequence is recognised by the signal recognition particle (SRP), a cytosolic ribonucleoprotein, which pauses the translation and aids in the transport of the mRNA-ribosome complex to an SRP receptor found in the membrane of the endoplasmic reticulum. When it arrives at the ER, the signal sequence is transferred to the translocon, a protein-conducting channel in the membrane that allows the newly synthesized polypeptide to be translocated to the ER lumen. The dissociation of SRP from the ribosome allows the translation of the secretory protein to resume. The signal sequence is removed and the translation continues while the produced chain moves through the translocon (cotranslational translocation). Modification After the production of the protein is completed, it interacts with several other proteins to gain its final state. Endoplasmic reticulum After translation, chaperone proteins within the ER ensure that the protein is folded correctly. If after a first attempt the folding is unsuccessful, a second folding is attempted. If this fails too, the protein is exported to the cytosol and labelled for destruction. Aside from the folding, a sugar chain is also added to the protein (glycosylation). After these changes, the protein is transported to the Golgi apparatus in a vesicle coated with the coat protein COPII. Golgi apparatus In the Golgi apparatus, the sugar chains are modified by adding or removing certain sugars. The secretory
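SRP recognition is a structural process, but the hydrophobic character of the signal sequence described above can be illustrated with a toy scan. The Python sketch below slides a window over the N-terminal residues of a protein sequence and flags a stretch that is mostly hydrophobic; the window length, residue set, threshold, and example sequences are arbitrary choices for the demonstration, and this is not a real signal-peptide predictor (dedicated tools such as SignalP exist for that).

```python
# Toy illustration of the hydrophobic N-terminal signal sequence described
# above: scan the first ~30 residues for a window of mostly hydrophobic
# amino acids. Window length, residue set, and threshold are arbitrary
# choices for the example; real signal-peptide prediction is far more subtle.
HYDROPHOBIC = set("AVLIMFWC")

def has_putative_signal_stretch(sequence, window=8, min_fraction=0.75, search_len=30):
    """Return True if some N-terminal window is mostly hydrophobic."""
    region = sequence[:search_len].upper()
    for start in range(0, max(len(region) - window + 1, 0)):
        frac = sum(res in HYDROPHOBIC for res in region[start:start + window]) / window
        if frac >= min_fraction:
            return True
    return False

# Example sequences for the demo: the first has a hydrophobic N-terminal core,
# the second is mostly charged/polar.
print(has_putative_signal_stretch("MKWVTFISLLFLFSSAYSRGV"))  # True
print(has_putative_signal_stretch("MDEKRNSTQEDGKKHQE"))      # False
```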
https://en.wikipedia.org/wiki/Megget%20Reservoir
Megget Reservoir is an impounding reservoir in the Megget valley in Ettrick Forest, in the Scottish Borders. The reservoir is held back by the largest earth dam in Scotland. The reservoir collects water from the Tweedsmuir Hills, which is then conveyed via underground pipelines and tunnels to Edinburgh. The pipelines are routed through the Manor Valley and the Meldon Hills, to Gladhouse Reservoir and Glencorse Reservoir in the Pentland Hills. These two reservoirs store the water until such time as it is required. Excess water which overflows from the reservoir is returned to the Megget Water, and thence into St. Mary's Loch. History The Megget Reservoir Scheme was first seriously considered in 1963. In 1974, the then water authority Lothian Regional Council applied for and received authority from the Secretary of State to proceed. Design was carried out by chartered civil engineers Robert H Cuthbertson & Partners on behalf of the water authority, and construction started in 1976. The dam which holds back the reservoir is an earth embankment with an asphaltic impermeable core. The reservoir was officially opened on 30 September 1983. It has a capacity of , and a maximum water level of above Ordnance Datum. The embankment is high and its crest is long. In 1983, Lothian Regional Council commissioned a short film, "A Different Valley", on the construction of the dam and associated works. A copy of this is held by the National Library of Scotland and can be viewed online. Cramalt Tower The site of Cramalt or Cramald tower or castle was covered by the water of the reservoir. The excavated foundations have been reconstructed near the shoreline of the reservoir. Cramalt Tower was used by James V when he came to hunt deer in the area in September 1529. His masons worked on the building in 1533. There were two towers. When James V came to hunt in September 1538, his servant John Tennent brought bedding from Linlithgow Palace and Malcolm Gourlay brought tents stored at Holyro