https://en.wikipedia.org/wiki/Christofilos%20effect
The Christofilos effect, sometimes known as the Argus effect, refers to the entrapment of electrons from nuclear weapons in the Earth's magnetic field. It was first predicted in 1957 by Nicholas Christofilos, who suggested the effect had defensive potential in a nuclear war, with so many beta particles becoming trapped that warheads flying through the region would experience huge electrical currents that would destroy their trigger electronics. The concept that a few friendly warheads could disrupt an enemy attack was so promising that a series of new nuclear tests was rushed into the US schedule before a testing moratorium came into effect in late 1958. These tests demonstrated that the effect was not nearly as strong as predicted, and not enough to damage a warhead. However, the effect is strong enough to be used to black out radar systems and disable satellites. Concept Electrons from nuclear explosions Among the types of energy released by a nuclear explosion are a large number of beta particles, or high energy electrons. These are primarily the result of beta decay within the debris from the fission portions of the bomb, which, in most designs, represents about 50% of the total yield. Because electrons are electrically charged, they induce electrical currents in surrounding atoms as they pass them at high speed. This causes the atoms to ionize while also causing the beta particles to slow down. In the lower atmosphere, this reaction is so powerful that the beta particles slow to thermal speeds within a few tens of meters at most. This is well within a typical nuclear explosion fireball, so the effect is too small to be seen. At high altitudes, the much less-dense atmosphere means the electrons are free to travel long distances. They have enough energy that they will not be recaptured by the proton that is created in the beta decay, so they can, in theory, last indefinitely. Mirror effect In 1951, as part of the first wave of research into fusion energy, U
https://en.wikipedia.org/wiki/Bent%27s%20rule
In chemistry, Bent's rule describes and explains the relationship between the orbital hybridization of central atoms in molecules and the electronegativities of substituents. The rule was stated by Henry A. Bent as follows: "Atomic s character concentrates in orbitals directed toward electropositive substituents." The chemical structure of a molecule is intimately related to its properties and reactivity. Valence bond theory proposes that molecular structures are due to covalent bonds between the atoms and that each bond consists of two overlapping and typically hybridised atomic orbitals. Traditionally, p-block elements in molecules are assumed to hybridise strictly as sp^n, where n is either 1, 2, or 3. In addition, the hybrid orbitals are all assumed to be equivalent (i.e. the sp^n orbitals have the same p character). Predictions using this approach are usually good, but they can be improved by allowing isovalent hybridization, in which the hybridised orbitals may have noninteger and unequal p character. Bent's rule provides a qualitative estimate as to how these hybridised orbitals should be constructed. Bent's rule is that in a molecule, a central atom bonded to multiple groups will hybridise so that orbitals with more s character are directed towards electropositive groups, while orbitals with more p character will be directed towards groups that are more electronegative. By removing the assumption that all hybrid orbitals are equivalent sp^n orbitals, better predictions and explanations of properties such as molecular geometry and bond strength can be obtained. Bent's rule has been proposed as an alternative to VSEPR theory as an elementary explanation for observed molecular geometries of simple molecules with the advantages of being more easily reconcilable with modern theories of bonding and having stronger experimental support. The validity of Bent's rule for 75 bond types between the main group elements was examined recently. For bonds with the larger atoms from the lower periods, trends in orbital hybridization depend strongly on both electr
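To illustrate how unequal p character translates into geometry, the standard valence-bond parametrization and Coulson's orthogonality relation are sketched below; these are textbook results added here for illustration, not quoted from the article.

```latex
% Hybrid orbital on the central atom, with hybridization parameter \lambda_i
% (fractional p character = \lambda_i^2 / (1 + \lambda_i^2)):
h_i = \frac{1}{\sqrt{1+\lambda_i^2}}\left(s + \lambda_i\, p_i\right)

% Orthogonality of two hybrids on the same atom (Coulson) links their
% p characters to the interorbital angle \theta_{ij}:
1 + \lambda_i \lambda_j \cos\theta_{ij} = 0

% Example: four equivalent sp^3 hybrids have \lambda_i^2 = 3, giving
% \cos\theta = -\tfrac{1}{3}, i.e. \theta \approx 109.5^\circ. Directing more
% s character (smaller \lambda_i) toward an electropositive substituent forces
% the remaining hybrids to take on more p character, as Bent's rule describes.
```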
https://en.wikipedia.org/wiki/The%20Human%20Zoo%20%28book%29
The Human Zoo is a book written by the British zoologist Desmond Morris, published in 1969. It is a follow-up to his earlier book The Naked Ape; both books examine how the biological nature of the human species has shaped the character of the cultures of the contemporary world. The Human Zoo examines the nature of civilised society, especially in the cities. Morris compares the human inhabitants of a city to the animal inhabitants of a zoo, which have their survival needs provided for, but at the cost of living in an unnatural environment. Humans in their cities, and animals in their zoos, both have food and shelter provided for them, and have considerable free time on their hands. But they have to live in an unnatural environment, and are both likely to have problems in developing healthy social relationships, both are liable to suffer from isolation and boredom, and both live in a limited amount of physical space. The book explains how the inhabitants of cities and zoos have invented ways to deal with these problems, and the consequences that follow when they fail at dealing with them. From this point of view, Morris examines why civilised society is the way it is. He offers explanations of the best and the worst features of civilised society. He examines the magnificent achievements of civilised society, the sublime explorations that make up science and the humanities, as well as the horrible behaviours of this same society such as war, slavery and rape. This book, and Morris's earlier book The Naked Ape, are two of the early works in the field of sociobiology, which have both contributed much to contemporary understandings of society. The Unabomber, Ted Kaczynski, was heavily influenced by The Human Zoo. Kaczynski’s concept of “surrogate activities” comes from Morris’s concept of “survival-substitute activities,” while Kaczynski's concept of “the power process” is based on Morris’s concept of “the Stimulus Struggle”, though he disagreed with Morris on the ex
https://en.wikipedia.org/wiki/Constant%20fraction%20discriminator
A constant fraction discriminator (CFD) is an electronic signal processing device, designed to mimic the mathematical operation of finding a maximum of a pulse by finding the zero of its slope. Some signals do not have a sharp maximum, but short rise times. Typical input signals for CFDs are pulses from plastic scintillation counters, such as those used for lifetime measurement in positron annihilation experiments. The scintillator pulses have identical rise times that are much longer than the desired temporal resolution. This forbids simple threshold triggering, which causes a dependence of the trigger time on the signal's peak height, an effect called time walk. Identical rise times and peak shapes permit triggering not on a fixed threshold but on a constant fraction of the total peak height, yielding trigger times independent from peak heights. From another point of view: a time-to-digital converter assigns timestamps. The time-to-digital converter needs fast rising edges with normed height. The plastic scintillation counter delivers fast rising edges with varying heights. Theoretically, the signal could be split into two parts. One part would be delayed and the other low-pass filtered, inverted and then used in a variable-gain amplifier to amplify the original signal to the desired height. Practically, it is difficult to achieve a high dynamic range for the variable-gain amplifier, and analog computers have problems with the inverse value. Principle of operation The incoming signal is split into three components. One component is delayed by a short time; it may be multiplied by a small factor to put emphasis on the leading edge of the pulse, and is connected to the noninverting input of a comparator. One component is connected to the inverting input of this comparator. One component is connected to the noninverting input of another comparator. A threshold value is connected to the inverting input of the other comparator. The output of both compara
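A minimal numerical sketch of the idea (the pulse model, fraction and delay below are assumptions, not values from the article): the pulse is compared against a delayed copy of itself minus an attenuated prompt copy, and the trigger is taken at the zero crossing of that difference, which does not depend on pulse height for pulses of identical shape.

```python
import numpy as np

def cfd_time(pulse, dt, fraction=0.3, delay=5):
    """Zero-crossing time (s) of the CFD signal s[n] = pulse[n-delay] - fraction*pulse[n]."""
    delayed = np.concatenate([np.zeros(delay), pulse[:-delay]])
    s = delayed - fraction * pulse
    onset = np.argmax(pulse > 0.1 * pulse.max())      # start searching after the pulse begins
    for n in range(onset, len(s) - 1):
        if s[n] < 0.0 <= s[n + 1]:
            # linear interpolation between samples for sub-sample timing
            return (n + (-s[n]) / (s[n + 1] - s[n])) * dt
    return None

dt = 1e-9                                             # 1 ns sampling (assumed)
t = np.arange(0, 200) * dt
shape = (1 - np.exp(-t / 5e-9)) * np.exp(-t / 40e-9)  # toy scintillator-like pulse
for amplitude in (0.2, 1.0):
    print(cfd_time(amplitude * shape, dt))            # same trigger time for both heights
```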
https://en.wikipedia.org/wiki/Compartmentalization%20of%20decay%20in%20trees
Compartmentalization of decay in trees (CODIT) is a concept created by plant pathologist Alex Shigo after studying wood-decay fungus patterns. Theoretical background In keeping with the theory of spontaneous generation, in which living things can develop from non-living things, scientists traditionally believed that tree decay led to fungal growth. With the advent of germ theory, however, German forester Robert Hartig in the late 19th century theorized the opposite was the case, and developed a new model for tree decay: when trees are wounded, fungi infect the wounds, and the result is decayed wood. Shigo expanded this theory to claim that when trees are wounded, they respond to the infected wood with both chemical and physical changes to limit the decay, which he called compartmentalization. Process According to CODIT, when a tree is wounded, cells undergo changes to form "walls" around the wound, slowing or preventing the spread of disease and decay to the rest of the tree. Wall 1. The first wall is formed by plugging up normally conductive vascular tissue above and below the wound. This tissue runs up and down the length of the stem, so plugging it slows the vertical spread of decay. Tissues are plugged in various ways, such as with tylosis, polyphenolic deposits, anti-fungal substances and (in conifers) by the closure of the bordered pits linking vessel cells. This wall is the weakest. Wall 2. The second wall is formed by the thick-walled, lignin-rich cells of the latewood growth ring interior and exterior to the wound, thus slowing the radial spread of decay. This wall is the second weakest, and is continuous except where intersected by ray cells (see next section). Wall 3. The third wall is formed by ray cells, which are groups of radiating cells oriented perpendicularly to the stem axis, dividing the stem into segments not entirely unlike the slices of a pie. These groups of cells are not continuous and vary in length, height and thickness, form
https://en.wikipedia.org/wiki/Spatial%20normalization
In neuroimaging, spatial normalization is an image processing step, more specifically an image registration method. Human brains differ in size and shape, and one goal of spatial normalization is to deform human brain scans so one location in one subject's brain scan corresponds to the same location in another subject's brain scan. It is often performed in research-based functional neuroimaging where one wants to find common brain activation across multiple human subjects. The brain scan can be obtained from magnetic resonance imaging (MRI) or positron emission tomography (PET) scanners. There are two steps in the spatial normalization process: Specification/estimation of warp-field Application of warp-field with resampling The estimation of the warp-field can be performed in one modality, e.g., MRI, and be applied in another modality, e.g., PET, if MRI and PET scans exist for the same subject and they are coregistered. Spatial normalization typically employs a 3-dimensional nonrigid transformation model (a "warp-field") for warping a brain scan to a template. The warp-field might be parametrized by basis functions such as cosines and polynomials. Diffeomorphisms as compositional transformations of coordinates Alternatively, many advanced methods for spatial normalization build on structure-preserving transformations, homeomorphisms and diffeomorphisms, since they carry smooth submanifolds smoothly during transformation. Diffeomorphisms are generated in the modern field of Computational Anatomy based on diffeomorphic flows, also called diffeomorphic mapping. However, such transformations via diffeomorphisms are not additive, although they form a group with function composition and act non-linearly on the images via group action. For this reason, flows which generalize the ideas of additive groups allow for generating large deformations that preserve topology, providing 1-1 and onto transformations. Computational methods for generating such transforma
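A small sketch of the second step, applying a warp-field by resampling (the deformation below is a made-up smooth displacement purely for illustration; in practice it would come from the estimation step against a template):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_warp(volume, warp):
    """volume: (X, Y, Z) array; warp: (3, X, Y, Z) displacement field in voxels.
    Returns the volume resampled at template coordinates plus the displacement."""
    grid = np.indices(volume.shape).astype(float)   # template coordinate grid
    sample_coords = grid + warp                     # where to pull intensities from
    return map_coordinates(volume, sample_coords, order=1, mode="nearest")

vol = np.random.rand(32, 32, 32)                    # stand-in for a brain scan
warp = 2.0 * np.sin(np.indices(vol.shape) / 8.0)    # toy smooth displacement field
warped = apply_warp(vol, warp)
```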
https://en.wikipedia.org/wiki/Amba%20%28condiment%29
Amba or anba (, but also mis-spelled عمبة, أمبة, همبة, , Syriac: ܐܡܒܵܐ)is a tangy mango pickle condiment of Baghdadi Jewish origin. It is typically made of pickled green mangoes, vinegar, salt, turmeric, chili and fenugreek. It is somewhat similar to savoury mango chutneys. Etymology Mangoes being native to South Asia, the name "amba" seems to have been borrowed, via Arabic, from the Marathi word āmbā (आंबा), which is in turn derived from the Sanskrit word āmra (आम्र, "mango"). History According to the legend, amba was developed in the 19th century by members of the Sassoon family of Bombay, India, who were Baghdadi Jews. Iraqi Jewish immigrants brought it to Israel in the 1950s as an accompaniment to their Shabbat morning meal. Variants Iraqi cuisine Amba is frequently used in Iraqi cuisine, especially as a spicy sauce to be added to fish dishes, falafel, kubbah, kebabs, and eggs. Saudi Arabian cuisine Amba is popular in the western part of the Arabian Peninsula, sold in sealed jars or by kilo. Eaten with bread as part of nawashef (a mixed platter of small plates containing different types of cheese, egg dishes, pickles, ful mudammas, falafel, mutabbag and offal) type meals at breakfast or dinner in the Hejaz. Indian cuisine Amba is similar to the South Asian pickle achar. Jewish cuisine The dish is found in Sephardi cuisine and Mizrahi cuisine. Amba has become very popular in Israel since its introduction to the country by Iraqi Jews in the 1950s and 1960s. Now one of the most common condiments in Israel, it is used as a condiment in sandwiches, as well as a topping for hummus and other mezzes. One difference with Israeli amba is that it is always made with unripe, green mangoes, which contribute to its more savory flavor as unripe mangoes taste less sweet. It is often served as a dressing on shawarma sandwiches, falafels, and usually on sabikh and as an optional topping on falafel, meorav yerushalmi, kebab and salads. In literature Amba is also mentione
https://en.wikipedia.org/wiki/Phase%20converter
A phase converter is a device that converts electric power provided as single phase to multiple phase or vice versa. The majority of phase converters are used to produce three-phase electric power from a single-phase source, thus allowing the operation of three-phase equipment at a site that only has single-phase electrical service. Phase converters are used where three-phase service is not available from the utility provider or is too costly to install. A utility provider will generally charge a higher fee for a three-phase service because of the extra equipment, including transformers, metering, and distribution wire required to complete a functional installation. Types of Phase Converters Three-phase induction motors may operate adequately on an unbalanced supply if not heavily loaded. This allows various imperfect techniques to be used. A single-phase motor can drive a three-phase generator, which will produce a high-quality three-phase source but at a high cost to the longevity of the system. While there are multiple phase conversion systems in place, the most common types are: Rotary phase converters constructed from a three-phase electric motor or generator "idler" and a simple on/off circuit. Rotary phase converters are known to drive up operating costs, due to the continued draw of power while idling that is not common in other phase converters. Rotary phase converters are considered a two-motor solution; one motor is not connected to a load and produces the three-phase power, the second motor driving the load runs on the power produced. A digital phase converter uses a rectifier and inverter to create a third leg of power, which is added to the two legs of the single-phase source to create three-phase power. Unlike a phase-converting VFD, it cannot vary the frequency and motor speed, since it generates only one leg. Digital phase converters use a Digital Signal Processor (DSP) to ensure the generated third leg matches the voltage and frequency of t
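As a toy illustration of what any converter must synthesize (not a description of a particular product; the frequency and voltage are assumed values): balanced three-phase power is three sinusoids displaced by 120 degrees, so the converter has to supply the missing phase relationship alongside the existing single-phase leg.

```python
import numpy as np

f = 60.0                          # line frequency in Hz (assumed)
v_peak = 170.0                    # peak of a ~120 V RMS single-phase leg (assumed)
t = np.linspace(0, 2 / f, 1000)   # two cycles

# Balanced three-phase set: each leg shifted by 120 degrees
legs = [v_peak * np.sin(2 * np.pi * f * t - k * 2 * np.pi / 3) for k in range(3)]

# In a balanced set, line-to-line voltage is sqrt(3) times the phase voltage
v_ab = legs[0] - legs[1]
print(round(v_ab.max() / v_peak, 3))   # ~1.732
```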
https://en.wikipedia.org/wiki/Nokia%20Business%20Center
Nokia Business Center (NBC) was a mobile email solution by Nokia, providing push e-mail and (through a paid-for client upgrade) calendar and contact availability to mobile devices. The server runs on Red Hat Enterprise Linux. It was discontinued in 2014. External links Press Release about support for IBM Lotus Notes and Domino addition to NBC Nokia services Mobile web
https://en.wikipedia.org/wiki/John%20Alexander%20Smith
John Alexander Smith (21 April 1863 – 19 December 1939) was a British idealist philosopher, who was the Jowett Lecturer of philosophy at Balliol College, Oxford from 1896 to 1910, and Waynflete Professor of Moral and Metaphysical Philosophy, carrying a Fellowship at Magdalen College in the same university, from 1910 to 1936. He was born in Dingwall and died in Oxford. Life and work Smith was educated at Inverness Academy, the Edinburgh Collegiate School, Edinburgh University (where he was Ferguson classical scholar in 1884), and at Balliol College, Oxford, to which he was admitted as Warner exhibitioner and honorary scholar in Hilary term 1884. His most visible accomplishments were his work with William David Ross on a 12-volume translation of Aristotle, and his Gifford Lectures for 1929–1931 on the Heritage of Idealism, which were never published. The 'Moral' tag in his Professorial title disappeared with R. G. Collingwood's appointment in 1936. Smith expressed some unease about the combination of 'moral' and 'metaphysical' in his inaugural lecture Knowing and Acting: The framer of the Chair's regulations, he remarks, describes the Professor's duties 'in a way which rather sets a problem than furnishes guidance. The Professor, he says, 'shall lecture and give instruction on the principles and history of Mental Philosophy, and on its connexion with Ethics.' He distinguishes two great departments of philosophical thought — so recognizedly different as already to be assigned for separate treatment to two other Professors in the University — and he enjoins that they shall be afresh discussed in their connexion with one another, yet with respect to their distinction. It can scarcely be his meaning that his Professor should attempt the invidious task of harmonising the possibly divergent accounts given of Logic by the Wykeham Professor and of Ethics by Whyte's Professor, of performing in public the higher synthesis of his colleagues' several contributions to philosoph
https://en.wikipedia.org/wiki/Network%20on%20a%20chip
A network on a chip or network-on-chip (NoC) is a network-based communications subsystem on an integrated circuit ("microchip"), most typically between modules in a system on a chip (SoC). The modules on the IC are typically semiconductor IP cores schematizing various functions of the computer system, and are designed to be modular in the sense of network science. The network on chip is a router-based packet switching network between SoC modules. NoC technology applies the theory and methods of computer networking to on-chip communication and brings notable improvements over conventional bus and crossbar communication architectures. Networks-on-chip come in many network topologies, many of which are still experimental as of 2018. In the 2000s, researchers started to propose on-chip interconnection in the form of packet-switching networks in order to address the scalability issues of bus-based design. Preceding research proposed designs that route data packets instead of wires. Then, the concept of "network on chip" was proposed in 2002. NoCs improve the scalability of systems-on-chip and the power efficiency of complex SoCs compared to other communication subsystem designs. They are an emerging technology, with projections for large growth in the near future as multicore computer architectures become more common. Structure NoCs can span synchronous and asynchronous clock domains, known as clock domain crossing, or use unclocked asynchronous logic. NoCs support globally asynchronous, locally synchronous electronics architectures, allowing each processor core or functional unit on the System-on-Chip to have its own clock domain. Architectures NoC architectures typically model sparse small-world networks (SWNs) and scale-free networks (SFNs) to limit the number, length, area and power consumption of interconnection wires and point-to-point connections. Topology The topology is the first fundamental aspect of NoC design
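A widely used (though by no means the only) NoC topology is a 2D mesh with dimension-ordered "XY" routing, in which a packet first travels along the X axis to the destination column and then along Y. The following sketch of that routing decision is illustrative; the mesh topology and routing policy are assumptions, not something the article prescribes.

```python
def xy_route(src, dst):
    """Dimension-ordered (XY) routing on a 2D mesh NoC.
    src, dst: (x, y) router coordinates. Returns the list of hops."""
    x, y = src
    path = [src]
    while x != dst[0]:                 # first correct the X coordinate
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                 # then correct the Y coordinate
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```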
https://en.wikipedia.org/wiki/Strong%20prime
In mathematics, a strong prime is a prime number with certain special properties. The definitions of strong primes are different in cryptography and number theory. Definition in number theory In number theory, a strong prime is a prime number that is greater than the arithmetic mean of the nearest prime above and below (in other words, it's closer to the following than to the preceding prime). Or to put it algebraically, writing the sequence of prime numbers as (p_1, p_2, p_3, ...) = (2, 3, 5, ...), p_n is a strong prime if p_n > (p_(n−1) + p_(n+1))/2. For example, 17 is the seventh prime: the sixth and eighth primes, 13 and 19, add up to 32, and half that is 16; 17 is greater than 16, so 17 is a strong prime. The first few strong primes are 11, 17, 29, 37, 41, 59, 67, 71, 79, 97, 101, 107, 127, 137, 149, 163, 179, 191, 197, 223, 227, 239, 251, 269, 277, 281, 307, 311, 331, 347, 367, 379, 397, 419, 431, 439, 457, 461, 479, 487, 499. In a twin prime pair (p, p + 2) with p > 5, p is always a strong prime, since 3 must divide p − 2, which cannot be prime. Definition in cryptography In cryptography, a prime number p is said to be "strong" if the following conditions are satisfied. p is sufficiently large to be useful in cryptography; typically this requires p to be too large for plausible computational resources to enable a cryptanalyst to factorise products of p with other strong primes. p − 1 has large prime factors. That is, p = a_1 q_1 + 1 for some integer a_1 and large prime q_1. q_1 − 1 has large prime factors. That is, q_1 = a_2 q_2 + 1 for some integer a_2 and large prime q_2. p + 1 has large prime factors. That is, p = a_3 q_3 − 1 for some integer a_3 and large prime q_3. It is possible for a prime to be a strong prime both in the cryptographic sense and the number theoretic sense. For the sake of illustration, 439351292910452432574786963588089477522344331 is a strong prime in the number theoretic sense because the arithmetic mean of its two neighboring primes is 62 less than the prime itself. Without the aid of a computer, this nu
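A short, self-contained check of the number-theoretic definition (a sketch; the helper names are mine, and trial division is used only because the numbers involved are tiny):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def prev_prime(n):
    n -= 1
    while not is_prime(n):
        n -= 1
    return n

def next_prime(n):
    n += 1
    while not is_prime(n):
        n += 1
    return n

def is_strong_prime(p):
    """Number-theoretic sense: p exceeds the mean of its neighbouring primes."""
    return is_prime(p) and 2 * p > prev_prime(p) + next_prime(p)

print(is_strong_prime(17))                              # True: 17 > (13 + 19) / 2
print([p for p in range(5, 100) if is_strong_prime(p)])
# [11, 17, 29, 37, 41, 59, 67, 71, 79, 97]
```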
https://en.wikipedia.org/wiki/Seismic%20Handler
Seismic Handler (SH) is an interactive analysis program for preferably continuous waveform data. It was developed at the Seismological Observatory Gräfenberg and is in use there for daily routine analysis of local and global seismic events. In its original form Seismic Handler was command-line based, but an interactive version is now available. Main features Reading traces from continuous data streams in Steim-compressed MiniSEED files. Additionally supported formats are event data from GSE, AH and Q (private format of SH) files. Zoom in and out traces in time and amplitude. Application of a set of standard filters (simulation filters and Butterworth filters) on broadband input traces. Reading phases on original or preprocessed traces. Determination of signal/noise ratio. Computation of teleseismic beam traces using array-beamforming or FK-algorithm, determination of slowness and back-azimuth of an incoming wavefront. Location of teleseismic events using global travel time tables based on array methods or relative travel times, determination of focal depth using depth phases. Location of regional and local events using the LocSAT program, flexible interface provided for integration of own location programs. Integration of own external programs (e.g. map display, phase diagrams). Displaying theoretical travel times. Determination of amplitudes and magnitudes (ml or mb and Ms). Saving analysis results into an output text file for further processing. Supported operating systems: Solaris and Linux. Additional features Rotation of 3-component seismograms. Particle motion diagrams. Vespagram-like trace summation. External links Seismic Handler official home page (SHM) Seismic Handler development site Seismological Observatory Gräfenberg (SZGRF)
https://en.wikipedia.org/wiki/Scalar%E2%80%93tensor%E2%80%93vector%20gravity
Scalar–tensor–vector gravity (STVG) is a modified theory of gravity developed by John Moffat, a researcher at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. The theory is also often referred to by the acronym MOG (MOdified Gravity). Overview Scalar–tensor–vector gravity theory, also known as MOdified Gravity (MOG), is based on an action principle and postulates the existence of a vector field, while elevating the three constants of the theory to scalar fields. In the weak-field approximation, STVG produces a Yukawa-like modification of the gravitational force due to a point source. Intuitively, this result can be described as follows: far from a source gravity is stronger than the Newtonian prediction, but at shorter distances, it is counteracted by a repulsive fifth force due to the vector field. STVG has been used successfully to explain galaxy rotation curves, the mass profiles of galaxy clusters, gravitational lensing in the Bullet Cluster, and cosmological observations without the need for dark matter. On a smaller scale, in the Solar System, STVG predicts no observable deviation from general relativity. The theory may also offer an explanation for the origin of inertia. Mathematical details STVG is formulated using the action principle. In the following discussion, a metric signature of (+, −, −, −) will be used; the speed of light is set to c = 1, and we are using the following definition for the Ricci tensor: R_μν = ∂_λ Γ^λ_μν − ∂_ν Γ^λ_μλ + Γ^λ_λσ Γ^σ_μν − Γ^λ_νσ Γ^σ_μλ. We begin with the Einstein–Hilbert Lagrangian: L_G = −(1/(16πG)) (R + 2Λ) √(−g), where R is the trace of the Ricci tensor, G is the gravitational constant, g is the determinant of the metric tensor g_μν, while Λ is the cosmological constant. We introduce the Maxwell–Proca Lagrangian for the STVG covector field φ_μ: L_φ = −(1/(4π)) ω [ (1/4) B^μν B_μν − (1/2) μ̃² φ_μ φ^μ + V_φ(φ) ] √(−g), where B_μν = ∂_μ φ_ν − ∂_ν φ_μ is the (coordinate-independent) exterior derivative of φ_μ, μ̃ is the mass of the vector field, ω characterizes the strength of the coupling between the fifth force and matter, and V_φ(φ) is a self-interaction potential. The three constants of the theory, G, μ̃ and ω, are promoted to
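In the weak-field, point-source limit the modification is usually quoted as a Yukawa-type correction to the Newtonian acceleration. The form below is a commonly cited expression reproduced here as an illustrative assumption of this note, not text from the article.

```latex
% Weak-field MOG/STVG acceleration around a point mass M (commonly quoted form):
a(r) = -\frac{G_N M}{r^2}\Big[\,1 + \alpha - \alpha\,(1 + \mu r)\,e^{-\mu r}\Big]
% G_N: Newton's constant; \alpha: strength of the enhanced attraction at large r;
% \mu: inverse range of the repulsive vector-field (fifth-force) contribution.
% For \mu r \ll 1 the bracket tends to 1 (Newtonian gravity recovered at short range);
% for \mu r \gg 1 gravity is enhanced by the factor (1 + \alpha), matching the
% intuitive description given above.
```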
https://en.wikipedia.org/wiki/Nerve%20conduction%20velocity
In neuroscience, nerve conduction velocity (CV) is the speed at which an electrochemical impulse propagates down a neural pathway. Conduction velocities are affected by a wide array of factors, which include age, sex, and various medical conditions. Nerve conduction studies allow for better diagnoses of various neuropathies, especially demyelinating diseases, as these conditions result in reduced or non-existent conduction velocities. CV is an important aspect of nerve conduction studies. Normal conduction velocities Ultimately, conduction velocities are specific to each individual and depend largely on an axon's diameter and the degree to which that axon is myelinated, but the majority of 'normal' individuals fall within defined ranges. Nerve impulses are extremely slow compared to the speed of electricity, where the electric field can propagate with a speed on the order of 50–99% of the speed of light; however, they are very fast compared to the speed of blood flow, with some myelinated neurons conducting at speeds up to 120 m/s (432 km/h or 268 mph). Different sensory receptors are innervated by different types of nerve fibers. Proprioceptors are innervated by type Ia, Ib and II sensory fibers, mechanoreceptors by type II and III sensory fibers, and nociceptors and thermoreceptors by type III and IV sensory fibers. Normal impulses in peripheral nerves of the legs travel at 40–45 m/s, and those in peripheral nerves of the arms at 50–65 m/s. Largely generalized, normal conduction velocities for any given nerve will be in the range of 50–60 m/s. Testing methods Nerve conduction studies Nerve conduction velocity is just one of many measurements commonly made during a nerve conduction study (NCS). The purpose of these studies is to determine whether nerve damage is present and how severe that damage may be. Nerve conduction studies are performed as follows: Two electrodes are attached to the subject's skin over the nerve being tested. Electrical impulses are sent through one elec
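Motor conduction velocity is commonly derived from stimulation at two sites along the same nerve: the distance between the sites divided by the difference in onset latencies. A small worked sketch (the numbers are invented purely for illustration):

```python
def conduction_velocity(distance_mm, proximal_latency_ms, distal_latency_ms):
    """Velocity in m/s from the distance between two stimulation sites (mm)
    and the onset latencies (ms) measured for each site."""
    return distance_mm / (proximal_latency_ms - distal_latency_ms)  # mm/ms equals m/s

# Example: sites 200 mm apart, latencies 7.5 ms (proximal) and 3.5 ms (distal)
print(conduction_velocity(200, 7.5, 3.5))   # 50.0 m/s, typical for an arm nerve
```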
https://en.wikipedia.org/wiki/International%20Shark%20Attack%20File
The International Shark Attack File is a global database of shark attacks. The file reportedly contains information on over 6,800 shark attacks spanning from the early 1500s to the present day, and includes detailed, often privileged, information including autopsy reports and photos. It is accessible only to scientists whose access is permitted by a review board. History The database originated when the Office of Naval Research formed the Shark Research Panel in June 1958, which funded it until 1967. This group comprised 34 renowned scientists with expertise in sharks, tasked with exploring research strategies to enhance protection for Navy personnel against shark attacks. The file was temporarily housed at the Mote Marine Laboratory in Sarasota, Florida. In the 1980s, it was transferred to the National Underwater Accident Data Center at the University of Rhode Island before it was transferred to the Florida Museum of Natural History at the University of Florida under the direction of George H. Burgess. It is currently under the direction of Dr. Gavin Naylor and members of the American Elasmobranch Society, which has assumed the task of preserving, expanding, and analyzing shark attack data.
https://en.wikipedia.org/wiki/Brachypodium%20distachyon
Brachypodium distachyon, commonly called purple false brome or stiff brome, is a grass species native to southern Europe, northern Africa and southwestern Asia east to India. It is related to the major cereal grain species wheat, barley, oats, maize, rice, rye, sorghum, and millet. It has many qualities that make it an excellent model organism for functional genomics research in temperate grasses, cereals, and dedicated biofuel crops such as switchgrass. These attributes include small genome (~270 Mbp) diploid accessions, a series of polyploid accessions, a small physical stature, self-fertility, a short lifecycle, simple growth requirements, and an efficient transformation system. The genome of Brachypodium distachyon (diploid inbred line Bd21) was sequenced and published in Nature in 2010. Model organism Although Brachypodium distachyon has little or no direct agricultural significance, it has several advantages as an experimental model organism for understanding the genetic, cellular and molecular biology of temperate grasses. The relatively small size of its genome makes it useful for genetic mapping and sequencing. In addition, only ~21% of the Brachypodium genome consists of repetitive elements, compared to 26% in rice and ~80% in wheat, further simplifying genetic mapping and sequencing. At about 272 million base pairs and with five chromosomes, it has a small genome for a grass species. Brachypodium distachyon's small size (15–20 cm) and rapid life cycle (eight to twelve weeks) are also advantageous for research purposes. For early-flowering accessions it can take as little as three weeks from germination to flower (under an appropriate inductive photoperiod). The small size of some accessions makes it convenient for cultivation in a small space. As a weed it grows easily without specialized growing conditions. Brachypodium is emerging as a powerful model with a growing research community. The International Brachypodium Initiative (IBI) held i
https://en.wikipedia.org/wiki/Biscuit%20Tortoni
Biscuit Tortoni is an ice cream made with eggs and heavy cream, often containing chopped cherries or topped with minced almonds or crumbled macaroons. It is believed to be named after an Italian café owner in Paris in the 18th century. The dish has appeared on restaurant menus in the United States since 1899, if not earlier. See also List of almond dishes
https://en.wikipedia.org/wiki/Peter%20B.%20Kronheimer
Peter Benedict Kronheimer (born 1963) is a British mathematician, known for his work on gauge theory and its applications to 3- and 4-dimensional topology. He is William Caspar Graustein Professor of Mathematics at Harvard University and former chair of the mathematics department. Education Kronheimer attended the City of London School. He completed his DPhil at Oxford University under the direction of Michael Atiyah. He has had a long association with Merton College, the oldest of the constituent colleges of Oxford University, being an undergraduate, graduate, and full fellow of the college. Career Kronheimer's early work was on gravitational instantons, in particular the classification of hyperkähler 4-manifolds with asymptotically locally Euclidean geometry (ALE spaces), leading to the papers "The construction of ALE spaces as hyper-Kähler quotients" and "A Torelli-type theorem for gravitational instantons." He and Hiraku Nakajima gave a construction of instantons on ALE spaces generalizing the Atiyah–Hitchin–Drinfeld–Manin construction. This construction identified these moduli spaces as moduli spaces for certain quivers (see "Yang-Mills instantons on ALE gravitational instantons.") He was the initial recipient of the Oberwolfach prize in 1998 on the basis of some of this work. Kronheimer has frequently collaborated with Tomasz Mrowka from the Massachusetts Institute of Technology. Their collaboration began at the Mathematical Research Institute of Oberwolfach, and their first work developed analogues of Simon Donaldson's invariants for 4-manifolds with a distinguished surface. They used the tools developed to prove a conjecture of John Milnor, that the four-ball genus of a (p, q)-torus knot is (p − 1)(q − 1)/2. They then went on to develop these tools further and established a structure theorem for Donaldson's polynomial invariants using Kronheimer–Mrowka basic classes. After the arrival of Seiberg–Witten theory their work on embedded surfaces culminated in a proof of th
https://en.wikipedia.org/wiki/Placentation
Placentation refers to the formation, type and structure, or arrangement of the placenta. The function of placentation is to transfer nutrients, respiratory gases, and water from maternal tissue to a growing embryo, and in some instances to remove waste from the embryo. Placentation is best known in live-bearing mammals (theria), but also occurs in some fish, reptiles, amphibians, a diversity of invertebrates, and flowering plants. In vertebrates, placentas have evolved more than 100 times independently, with the majority of these instances occurring in squamate reptiles. The placenta can be defined as an organ formed by the sustained apposition or fusion of fetal membranes and parental tissue for physiological exchange. This definition is modified from the original Mossman (1937) definition, which constrained placentation in animals to only those instances where it occurred in the uterus. In mammals In live bearing mammals, the placenta forms after the embryo implants into the wall of the uterus. The developing fetus is connected to the placenta via an umbilical cord. Mammalian placentas can be classified based on the number of tissues separating the maternal from the fetal blood. These include: endotheliochorial placentation In this type of placentation, the chorionic villi are in contact with the endothelium of maternal blood vessels. (e.g. in most carnivores like cats and dogs) epitheliochorial placentation Chorionic villi, growing into the apertures of uterine glands ( epithelium). (e.g. in ruminants, horses, whales, lower primates, dugongs) hemochorial placentation In hemochorial placentation maternal blood comes in direct contact with the fetal chorion, which it does not in the other two types. It may avail for more efficient transfer of nutrients etc., but is also more challenging for the systems of gestational immune tolerance to avoid rejection of the fetus. (e.g. in higher order primates, including humans, and also in rabbits, guinea pigs, mic
https://en.wikipedia.org/wiki/Human%20male%20sexuality
Human male sexuality encompasses a wide variety of feelings and behaviors. Men's feelings of attraction may be caused by various physical and social traits of their potential partner. Men's sexual behavior can be affected by many factors, including evolved predispositions, individual personality, upbringing, and culture. While most men are heterosexual, significant minorities are homosexual or varying degrees of bisexual. Sexual attraction Physical factors Research indicates that men tend to be attracted to young women with bodily symmetry. Facial symmetry, femininity, and averageness are also linked with attractiveness. Men typically find female breasts attractive and this holds true for a variety of cultures. A preference for lighter-skinned women has been documented across many cultures. Women with a relatively low waist-to-hip ratio (WHR) are considered more attractive. The exact ratio varies among cultures, depending on the WHR of the women in the local culture. In Western cultures, a WHR of 0.70 is preferred. Other possible physical factors of attraction include low body mass index, low waist circumference, longer legs, and greater lower back curvature. Preference for a slim or a plump body build is culturally variable, but in a predictable manner. In cultures where food is scarce, plumpness is associated with higher status and is more attractive, but the reverse is true in wealthy cultures. Men generally prefer their wives to be younger than they are, but by how much exactly varies between cultures. Older men prefer greater age differences, while teenage males prefer females slightly older than they are. The exact degree to which physical appearance is considered important in selecting a long-term mate varies between cultures. Non-physical factors When choosing long-term partners, both men and women desire those who are intelligent, kind, understanding, and healthy. They also show a preference for partners who have similar values, attitudes, personal
https://en.wikipedia.org/wiki/Windows%20Workflow%20Foundation
Windows Workflow Foundation (WF) is a Microsoft technology that provides an API, an in-process workflow engine, and a rehostable designer to implement long-running processes as workflows within .NET applications. The latest version of WF was released as part of the .NET Framework version 4.5 and is referred to as WF45. A workflow, as defined here, is a series of distinct programming steps or phases. Each step is modeled in WF as an Activity. The .NET Framework provides a library of activities (such as WriteLine, an activity that writes text to the console or other form of output). Custom activities can also be developed for additional functionality. Activities can be assembled visually into workflows using the Workflow Designer, a design surface that runs within Visual Studio. The designer can also be hosted in other applications. Encapsulating programming functionality into the activities allows the developer to create more manageable applications; each component of execution can be developed as a Common Language Runtime object whose execution will be managed by the workflow runtime. Workflow Foundation versions Workflow Foundation was first released in Version 3 of the .NET Framework, and primarily uses the System.Workflow.Activities, System.Workflow.ComponentModel, and System.Workflow.Runtime namespaces. Workflows in version 3 were created using either the Sequential model (in which activities are executed in order, with the completion of one activity leading to the next), or the State Machine model (in which activities are executed in response to external events). Microsoft SharePoint 2007 uses WF 3. In .NET Framework 3.5, messaging activities were introduced that integrated Workflow with Windows Communication Foundation (WCF). With the new ReceiveActivity, workflows could respond to incoming WCF messages. The new features of Workflow in version 3.5 use the System.ServiceModel namespace. Microsoft SharePoint 2010 uses WF 3.5. In .NET Framework 4, Windows W
https://en.wikipedia.org/wiki/Beryllide
Beryllide is an intermetallic compound of beryllium with other metals, e.g. zirconium, tantalum, titanium, nickel, or cobalt. Typical chemical formulae are Be12Ti and FeBe5. These are hard, metal-like materials that display properties distinct from the constituents, especially with regards to their resilience toward oxidation. Applications and potential applications Beryllides of cobalt and nickel have metallurgical importance as the precipitated phase in beryllium copper alloys. These materials are nonsparking, which allows them to be used in certain hazardous environments. In nuclear technology, beryllides are investigated as neutron multipliers. Unlike metallic Be, materials such as Be12Ti are more resistant to oxidation by water but retain the neutron-multiplying properties of the predominant isotope 9Be.
https://en.wikipedia.org/wiki/Thom%20conjecture
In mathematics, a smooth algebraic curve C in the complex projective plane, of degree d, has genus given by the genus–degree formula g = (d − 1)(d − 2)/2. The Thom conjecture, named after French mathematician René Thom, states that if Σ is any smoothly embedded connected curve representing the same class in homology as C, then the genus g(Σ) of Σ satisfies the inequality g(Σ) ≥ (d − 1)(d − 2)/2. In particular, C is known as a genus minimizing representative of its homology class. It was first proved by Peter Kronheimer and Tomasz Mrowka in October 1994, using the then-new Seiberg–Witten invariants. Assuming that the homology class has nonnegative self-intersection number, this was generalized to Kähler manifolds (an example being the complex projective plane) by John Morgan, Zoltán Szabó, and Clifford Taubes, also using the Seiberg–Witten invariants. There is at least one generalization of this conjecture, known as the symplectic Thom conjecture (which is now a theorem, as proved for example by Peter Ozsváth and Szabó in 2000). It states that a symplectic surface of a symplectic 4-manifold is genus minimizing within its homology class. This would imply the previous result because algebraic curves (complex dimension 1, real dimension 2) are symplectic surfaces within the complex projective plane, which is a symplectic 4-manifold. See also Adjunction formula Milnor conjecture (topology)
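For concreteness, the genus–degree formula evaluates to the following minimal genera for low-degree classes (a direct evaluation added here for illustration):

```latex
g(d) = \frac{(d-1)(d-2)}{2}, \qquad
g(1) = g(2) = 0, \quad g(3) = 1, \quad g(4) = 3, \quad g(5) = 6.
```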
https://en.wikipedia.org/wiki/Strings%20%28Unix%29
In computer software, strings is a program in Unix, Plan 9, Inferno, and Unix-like operating systems that finds and prints the strings of printable characters in files. The files can be regular text files or binary files such as executables. It can be used on object files and core dumps. strings is mainly useful for determining the contents of non-text files. Overview Strings are recognized by looking for sequences of at least 4 (by default) printable characters terminating in a NUL character (that is, null-terminated strings). Some implementations provide options for determining what is recognized as a printable character, which is useful for finding non-ASCII and wide character text. By default, it only prints the strings from the initialized and loaded sections of object files; for other types of files, it prints the strings from the whole file. This still does not make the behavior of cat and strings the same on regular text files: cat passes non-printable characters through to the terminal, while strings ignores them. It is part of the GNU Binary Utilities (binutils), and has been ported to other operating systems including Windows. Example Using strings to print sequences of characters that are at least 8 characters long (this command prints the system's BIOS information; should be run as root): dd if=/dev/mem bs=1k skip=768 count=256 2>/dev/null | strings -n 8 | less. Given a file file.txt whose lines are a, aa, aaa and aaaa, the command strings file.txt prints only aaaa, since the default minimum length is 4. See also Cat (Unix) Paste (Unix) GNU Debugger Strip (Unix)
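A minimal reimplementation of the core behaviour in Python (a sketch of the algorithm described above, not the actual binutils source): scan a file for runs of at least 4 printable ASCII characters.

```python
import sys

def strings(path, min_len=4):
    """Yield runs of at least min_len printable ASCII characters (0x20-0x7e) from a file."""
    run = bytearray()
    with open(path, "rb") as f:
        data = f.read()
    for byte in data:
        if 0x20 <= byte <= 0x7e:           # printable ASCII, including space
            run.append(byte)
        else:
            if len(run) >= min_len:
                yield run.decode("ascii")
            run.clear()
    if len(run) >= min_len:                # flush a run that reaches end of file
        yield run.decode("ascii")

if __name__ == "__main__":
    for s in strings(sys.argv[1]):
        print(s)
```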
https://en.wikipedia.org/wiki/Masu%20%28measurement%29
A masu was originally a square wooden box used to measure rice in Japan during the feudal period. In 1885 Japan signed the Convention du Mètre and in 1886 converted all of its traditional measures to the metric system. Masu existed in many sizes, typically covering the range from one gō (about 180 ml) to one to (about 18 L). The advent of modern rice cookers and a higher calorie diet in Japan has made them impractical for measuring portions of rice. Today masu are largely used for drinking sake. Drinking vessels are made from hinoki (Japanese cypress wood), as it imparts a special scent and flavor. The drinker sips from the corner of the box, which pours it into the mouth. Toasts are poured by stacking a pyramid of the guests' masu on a towel or cloth, with the toastmaker's masu on top. It is then overflowed until it fills all the masu beneath it. This symbolizes the generosity of the toaster to their friends and how they wish to share their happiness and good fortune with them. Sanjakumasu (measure equal to 3 shaku [54ml]) = Often used in bars to hold a 50ml shotglass, which is then filled to overflowing to make up the difference. If the shotglass is used for sake, it is served chilled or at room temperature. The sanjakumasu can also be used in the san san kudo wedding ceremony in the place of the sakazuki (sake dish). Goshakumasu (measure equal to 5 shaku [90ml]) = Holds a half gō measure. Hasshakumasu (measure equal to 8 shaku or 4/5 gō [144ml]) = The former standard masu size, probably because 8 is a lucky number. Ichigōmasu (measure equal to 1 gō [180ml]) = The modern standard masu size, equal to a measure of 1 gō (0.18039 L) or 10 shaku. Nigōhanmasu (measure equal to 2.5 gō [450ml]) = Holds a quarter shō measure. Gogōmasu (measure equal to 5 gō [900ml]) = Holds a half shō measure. Isshōmasu (measure equal to 1 shō or 10 gō [1.8L]) = Holds a full shō measure. A small, lidded form of masu is sold for serving pepper, salt, sugar, and other dry condiments at the table. See also Sake set
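The traditional sizes convert to metric directly from 1 gō = 0.18039 L (so 1 shaku = one tenth of that). A quick check of the figures quoted above, added here as a small illustrative calculation:

```python
GO_ML = 180.39            # 1 gō in millilitres; 1 shaku = GO_ML / 10

sizes_in_go = {
    "sanjakumasu": 0.3,   # 3 shaku
    "goshakumasu": 0.5,   # 5 shaku
    "hasshakumasu": 0.8,  # 8 shaku
    "ichigōmasu": 1.0,
    "nigōhanmasu": 2.5,
    "gogōmasu": 5.0,
    "isshōmasu": 10.0,    # 1 shō = 10 gō
}

for name, go in sizes_in_go.items():
    print(f"{name}: {go * GO_ML:.0f} ml")
# sanjakumasu: 54 ml ... isshōmasu: 1804 ml (about 1.8 L)
```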
https://en.wikipedia.org/wiki/Lattice%20energy
In chemistry, the lattice energy is the energy change upon formation of one mole of a crystalline ionic compound from its constituent ions, which are assumed to initially be in the gaseous state. It is a measure of the cohesive forces that bind ionic solids. The size of the lattice energy is connected to many other physical properties including solubility, hardness, and volatility. Since it generally cannot be measured directly, the lattice energy is usually deduced from experimental data via the Born–Haber cycle. Lattice energy and lattice enthalpy The concept of lattice energy was originally applied to the formation of compounds with structures like rocksalt (NaCl) and sphalerite (ZnS) where the ions occupy high-symmetry crystal lattice sites. In the case of NaCl, lattice energy is the energy change of the reaction Na+ (g) + Cl− (g) → NaCl (s) which amounts to −786 kJ/mol. Some chemistry textbooks as well as the widely used CRC Handbook of Chemistry and Physics define lattice energy with the opposite sign, i.e. as the energy required to convert the crystal into infinitely separated gaseous ions in vacuum, an endothermic process. Following this convention, the lattice energy of NaCl would be +786 kJ/mol. Both sign conventions are widely used. The relationship between the lattice energy and the lattice enthalpy at pressure p is given by the following equation: ΔU_lattice = ΔH_lattice − pΔV_m, where ΔU_lattice is the lattice energy (i.e., the molar internal energy change), ΔH_lattice is the lattice enthalpy, and ΔV_m the change of molar volume due to the formation of the lattice. Since the molar volume of the solid is much smaller than that of the gases, ΔV_m < 0. The formation of a crystal lattice from ions in vacuum must lower the internal energy due to the net attractive forces involved, and so ΔU_lattice < 0. The −pΔV_m term is positive but is relatively small at low pressures, and so the value of the lattice enthalpy is also negative (and exothermic). Theoretical treatments The lattice energy of an ionic compound depends strongly upo
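Theoretical estimates of this kind are often illustrated with the Born–Landé equation. The quick numerical sketch below for NaCl uses standard physical constants and typical textbook parameter values (the example itself is an addition, not taken from the article):

```python
import math

# Born–Landé estimate of the lattice energy of NaCl
N_A  = 6.022e23         # Avogadro constant, 1/mol
e    = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12        # vacuum permittivity, F/m
M    = 1.7476           # Madelung constant for the rock-salt structure
z_plus, z_minus = 1, 1  # charge numbers of Na+ and Cl-
r0   = 2.82e-10         # nearest-neighbour distance, m (typical value)
n    = 9.1              # Born exponent (typical value for NaCl)

E = -(N_A * M * z_plus * z_minus * e**2) / (4 * math.pi * eps0 * r0) * (1 - 1 / n)
print(f"{E / 1000:.0f} kJ/mol")   # roughly -770 kJ/mol, close to the -786 kJ/mol cycle value
```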
https://en.wikipedia.org/wiki/Chili%20sauce%20and%20paste
Chili sauce and chili paste are condiments prepared with chili peppers. Chili sauce may be hot, sweet or a combination thereof, and may differ from hot sauce in that many sweet or mild varieties exist, which is typically lacking in hot sauces. Several varieties of chili sauce include sugar in their preparation, such as the Thai sweet chili sauce and Filipino agre dulce, which adds sweetness to their flavor profile. Sometimes, chili sauces are prepared with red tomato as primary ingredients. Many chili sauces may have a thicker texture and viscosity when compared to that of hot sauces. Chili paste usually refers to a paste where the main ingredient is chili pepper. Some are used as a cooking ingredient, while others are used to season a dish after preparation. Some are fermented with beans, as in Chinese doubanjiang, and some are prepared with powdered fermented beans, as in Korean gochujang. There are different regional varieties of chili paste and also within the same cuisine. Chili sauces and pastes can be used as dipping sauces, cooking glazes and marinades. Many commercial varieties of mass-produced chili sauce and paste exist. Ingredients Ingredients typically include puréed or chopped chili peppers, vinegar, sugar and salt, that are cooked, which thickens the mixture. Additional ingredients may include, water, garlic, other foodstuffs, corn syrup, spices and seasonings. Some varieties use ripe red puréed tomato as the primary ingredient. Varieties East Asia China Chili oil is a distinctive Sichuan flavoring found mainly in cold dishes, as well as a few hot dishes. Chili oil is made by pouring hot oil onto a bowl of dried chilies, to which some Sichuan pepper is usually added. After steeping in hot oil for at least a few hours, the oil takes on the taste and fragrance of chili. The finer the chili is ground, the stronger the flavor (regional preferences vary; ground chili is usually used in western China, while whole dried chili is more common in nor
https://en.wikipedia.org/wiki/Ivar%20Otto%20Bendixson
Ivar Otto Bendixson (1 August 1861 – 29 November 1935) was a Swedish mathematician. Biography Bendixson was born on 1 August 1861 at Villa Bergshyddan, Djurgården, Oscar Parish, Stockholm, Sweden, to a middle-class family. His father Vilhelm Emanuel Bendixson was a merchant, and his mother was Tony Amelia Warburg. On completing secondary education in Stockholm, he obtained his school certificate on 25 May 1878. On 13 September 1878 he enrolled to the Royal Institute of Technology in Stockholm. In 1879 Bendixson went to Uppsala University and graduated with the equivalent of a Master's degree on 27 January 1881. Graduating from Uppsala, he went on to study at the newly opened Stockholm University College after which he was awarded a doctorate by Uppsala University on 29 May 1890. On 10 June 1890 Bendixson was appointed as a docent at Stockholm University College. He then worked as an assistant to the professor of mathematical analysis from 5 March 1891 until 31 May 1892. From 1892 until 1899 he taught at the Royal Institute of Technology and he also taught calculus and algebra at Stockholm University College. During this period he married Anna Helena Lind on 19 December 1887. Anna, who was about eighteen months older than Bendixson, was the daughter of the banker Johan Lind. In 1899 Bendixson substituted for the Professor of Pure Mathematics at the Royal Institute of Technology and then he was promoted to professor there on 26 January 1900. On June 16, 1905, he assumed the position of professor of higher mathematical analysis at Stockholm University College, and he served as its rector from 1911 until 1927. For his outstanding contributions, Bendixson received many honours including an honorary doctorate on 24 May 1907. Bendixson became more involved in politics as his career progressed. He was well known for his mild left-wing views and he put his beliefs into practice being head of a committee to help poor students. He served on many other committees and h
https://en.wikipedia.org/wiki/Samuel%20Francis%20Boys
Samuel Francis (Frank) Boys (20 December 1911 – 16 October 1972) was a British theoretical chemist. Education Boys was born in Pudsey, Yorkshire, England. He was educated at the Grammar School in Pudsey and then at Imperial College London. He graduated in Chemistry in 1932. He was awarded a PhD in 1937 from Cambridge for research conducted at Trinity College, supervised first by Martin Lowry, and then, after Lowry's 1936 death, by John Lennard-Jones. His thesis was "The Quantum Theory of Optical Rotation". Career In 1938, Boys was appointed an Assistant Lecturer in Mathematical Physics at Queen's University Belfast. He spent the whole of the Second World War working on explosives research with the Ministry of Supply at the Royal Arsenal, Woolwich, with Lennard-Jones as his supervisor. After the war, Boys accepted an ICI Fellowship at Imperial College, London. In 1949, he was appointed to a Lectureship in theoretical chemistry at the University of Cambridge. He remained at Cambridge until his death. He was only elected to a Cambridge College Fellowship at University College, now Wolfson College, Cambridge, shortly before his death. Boys is best known for the introduction of Gaussian orbitals into ab initio quantum chemistry. Almost all basis sets used in computational chemistry now employ these orbitals. Frank Boys was also one of the first scientists to use digital computers for calculations on polyatomic molecules. An International Conference, entitled "Molecular Quantum Mechanics: Methods and Applications" was held in memory of S. Francis Boys and in honour of Isaiah Shavitt in September 1995 at St Catharine's College, Cambridge. Awards and honours Boys was a member of the International Academy of Quantum Molecular Science. He was elected a Fellow of the Royal Society (FRS) in 1972, a few months before his death.
https://en.wikipedia.org/wiki/Freediving%20blackout
Freediving blackout, breath-hold blackout, or apnea blackout is a class of hypoxic blackout, a loss of consciousness caused by cerebral hypoxia towards the end of a breath-hold (freedive or dynamic apnea) dive, when the swimmer does not necessarily experience an urgent need to breathe and has no other obvious medical condition that might have caused it. It can be provoked by hyperventilating just before a dive, or as a consequence of the pressure reduction on ascent, or a combination of these. Victims are often established practitioners of breath-hold diving, are fit, strong swimmers and have not experienced problems before. Blackout may also be referred to as a syncope or fainting. Divers and swimmers who black out or grey out underwater during a dive will usually drown unless rescued and resuscitated within a short time. Freediving blackout has a high fatality rate, and mostly involves males younger than 40 years, but is generally avoidable. Risk cannot be quantified, but is clearly increased by any level of hyperventilation. Freediving blackout can occur on any dive profile: at constant depth, on an ascent from depth, or at the surface following ascent from depth, and may be described by a number of terms depending on the dive profile and depth at which consciousness is lost. Blackout during a shallow dive differs from blackout during ascent from a deep dive in that blackout during ascent is precipitated by depressurisation on ascent from depth while blackout in consistently shallow water is a consequence of hypocapnia following hyperventilation. Terminology Different types of freediving blackout have become known under a variety of names. In this article, constant pressure blackout and shallow water blackout refer to blackouts in shallow water following hyperventilation, and ascent blackout and deep water blackout refer to blackout on ascent from depth. Some free divers consider blackout on ascent to be a special condition or subset of sh
https://en.wikipedia.org/wiki/Computational%20electromagnetics
Computational electromagnetics (CEM), computational electrodynamics or electromagnetic modeling is the process of modeling the interaction of electromagnetic fields with physical objects and the environment using computers. It typically involves using computer programs to compute approximate solutions to Maxwell's equations to calculate antenna performance, electromagnetic compatibility, radar cross section and electromagnetic wave propagation when not in free space. A large subfield is antenna modeling computer programs, which calculate the radiation pattern and electrical properties of radio antennas, and are widely used to design antennas for specific applications. Background Several real-world electromagnetic problems like electromagnetic scattering, electromagnetic radiation, modeling of waveguides etc., are not analytically calculable, owing to the multitude of irregular geometries found in actual devices. Computational numerical techniques can overcome the inability to derive closed form solutions of Maxwell's equations under various constitutive relations of media, and boundary conditions. This makes computational electromagnetics (CEM) important to the design and modeling of antenna, radar, satellite and other communication systems, nanophotonic devices and high speed silicon electronics, medical imaging, cell-phone antenna design, among other applications. CEM typically solves the problem of computing the E (electric) and H (magnetic) fields across the problem domain (e.g., to calculate antenna radiation pattern for an arbitrarily shaped antenna structure). Power flow direction (Poynting vector), a waveguide's normal modes, media-generated wave dispersion, and scattering can also be computed from the E and H fields. CEM models may or may not assume symmetry, simplifying real world structures to idealized cylinders, spheres, and other regular geometrical objects. CEM models extensively make use of symmetry, and solve for reduced dimensionality
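One widely used CEM technique is the finite-difference time-domain (FDTD) method, which steps Maxwell's curl equations forward in time on a staggered grid. A minimal 1D sketch in normalized units follows (illustrative only; the grid size, source and free-space assumption are choices made for this example, not prescribed by the article):

```python
import numpy as np

# 1D FDTD in free space, normalized units with c = 1 and a Courant step of 1
nx, nt = 400, 150
ez = np.zeros(nx)          # electric field samples
hy = np.zeros(nx)          # magnetic field samples, staggered half a cell from ez

for t in range(nt):
    # Yee leapfrog: update H from the spatial difference of E, then E from H
    hy[:-1] += ez[1:] - ez[:-1]
    ez[1:]  += hy[1:] - hy[:-1]
    # soft Gaussian source injected in the middle of the grid
    ez[nx // 2] += np.exp(-((t - 30) / 10.0) ** 2)

print(ez.max())   # a pulse propagating outward from the source
```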
https://en.wikipedia.org/wiki/Boyce%E2%80%93Codd%20normal%20form
Boyce–Codd normal form (or BCNF or 3.5NF) is a normal form used in database normalization. It is a slightly stronger version of the third normal form (3NF). BCNF was developed in 1974 by Raymond F. Boyce and Edgar F. Codd to address certain types of anomalies not dealt with by 3NF as originally defined. If a relational schema is in BCNF then all redundancy based on functional dependency has been removed, although other types of redundancy may still exist. A relational schema R is in Boyce–Codd normal form if and only if for every one of its dependencies X → Y, at least one of the following conditions hold: X → Y is a trivial functional dependency (Y ⊆ X), X is a superkey for schema R. Note that if a relational schema is in BCNF, then it is in 3NF. 3NF table always meeting BCNF (Boyce–Codd normal form) Only in rare cases does a 3NF table not meet the requirements of BCNF. A 3NF table that does not have multiple overlapping candidate keys is guaranteed to be in BCNF. Depending on what its functional dependencies are, a 3NF table with two or more overlapping candidate keys may or may not be in BCNF. An example of a 3NF table that does not meet BCNF is: Each row in the table represents a court booking at a tennis club. That club has one hard court (Court 1) and one grass court (Court 2) A booking is defined by its Court and the period for which the Court is reserved Additionally, each booking has a Rate Type associated with it. There are four distinct rate types: SAVER, for Court 1 bookings made by members STANDARD, for Court 1 bookings made by non-members PREMIUM-A, for Court 2 bookings made by members PREMIUM-B, for Court 2 bookings made by non-members The table's superkeys are: S1 = {Court, Start time} S2 = {Court, End time} S3 = {Rate type, Start time} S4 = {Rate type, End time} S5 = {Court, Start time, End time} S6 = {Rate type, Start time, End time} S7 = {Court, Rate type, Start time} S8 = {Court, Rate type, End time} ST = {Court, Rate type
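The BCNF condition can be checked mechanically by computing attribute closures: a determinant X violates BCNF if its closure under the functional dependencies is not the full attribute set. The sketch below assumes a simplified dependency set for the booking example (including Rate type → Court, on the reading that each rate type applies to a single court); the attribute and function names are illustrative, not taken from the article.

```python
def closure(attrs, fds):
    """Closure of a set of attributes under functional dependencies.

    fds is a list of (lhs, rhs) pairs, each a frozenset of attribute names.
    """
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def is_superkey(attrs, all_attrs, fds):
    return closure(attrs, fds) == set(all_attrs)

# Simplified dependencies for the court-booking example (assumed for illustration):
attrs = {"Court", "Start", "End", "Rate"}
fds = [
    (frozenset({"Court", "Start"}), frozenset({"End", "Rate"})),
    (frozenset({"Rate"}), frozenset({"Court"})),   # each rate type belongs to one court
]

# BCNF check: every non-trivial dependency must have a superkey on its left side.
for lhs, rhs in fds:
    if not rhs <= lhs and not is_superkey(lhs, attrs, fds):
        print("BCNF violation:", set(lhs), "->", set(rhs))
```

Under these assumed dependencies the check reports Rate → Court as the violation, which would be resolved in practice by decomposing the table so that the rate-type/court association is stored separately.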
https://en.wikipedia.org/wiki/Cuv%C3%A9e
Cuvée () is a French wine term that derives from cuve, meaning vat or tank. Wine makers use the term cuvée with several different meanings, more or less based on the concept of a tank of wine put to some purpose. Wines Cuvée on wine labels generally denotes wine of a specific blend or batch. Since the term cuvée for this purpose is unregulated, and most wines have been stored in a vat or tank at some stage of production, the presence of the word cuvée on a label of an arbitrary producer is no guarantee of superior quality. However, discerning producers who market both regular blends and blends they call "cuvée..." usually reserve the word for special blends or selected vats of higher quality—at least in comparison to that producer's regular wines. Particularly terms like cuvée speciale, or tête de cuvée (the latter especially in Sauternes AOC) are supposed to indicate higher quality. In this context, higher-quality than ordinary cuvées are often referred to as reserve wines, while a cuvée lower in quality than the main one is a second wine. In some regions, cuvée specifically means a blend, i.e., a wine produced from a mixture of several grape varieties, rather than from a single variety. This is especially true outside France. In Champagne, and sometimes other regions, producing sparkling wines by the traditional method, cuvée also refers to the best grape juice from gentle pressing of the grapes. In Champagne, the cuvée is the first 2,050 litres of grape juice from 4,000 kg of grapes (a marc), while the following 500 litres are known as the taille (tail), and are expected to give wines of a coarser character. Many Champagne producers pride themselves on only using the cuvée in their wine. Other food and drink The term can also apply to beer, or to chocolate to refer to a batch that is blended by the manufacturers to produce a certain taste. Many lambics and gueuzes—sour beers with wine-like characteristics—are marketed as cuvée. When referring to beer, ale, or
https://en.wikipedia.org/wiki/Generalized%20Helmholtz%20theorem
The generalized Helmholtz theorem is the multi-dimensional generalization of the Helmholtz theorem which is valid only in one dimension. The generalized Helmholtz theorem reads as follows. Let be the canonical coordinates of a s-dimensional Hamiltonian system, and let be the Hamiltonian function, where , is the kinetic energy and is the potential energy which depends on a parameter . Let the hyper-surfaces of constant energy in the 2s-dimensional phase space of the system be metrically indecomposable and let denote time average. Define the quantities , , , , as follows: , , , Then: Remarks The thesis of this theorem of classical mechanics reads exactly as the heat theorem of thermodynamics. This fact shows that thermodynamic-like relations exist between certain mechanical quantities in multidimensional ergodic systems. This in turn allows to define the "thermodynamic state" of a multi-dimensional ergodic mechanical system, without the requirement that the system be composed of a large number of degrees of freedom. In particular the temperature is given by twice the time average of the kinetic energy per degree of freedom, and the entropy by the logarithm of the phase space volume enclosed by the constant energy surface (i.e. the so-called volume entropy).
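The displayed definitions in this entry did not survive extraction. As a hedged sketch consistent with the Remarks above (and with the usual statement of the theorem), the quantities can be written as follows, where s is the number of degrees of freedom, overbars denote time averages, and Φ is the phase-space volume enclosed by the surface of energy E; the notation here is an assumption, not the article's own.

```latex
T = \tfrac{2}{s}\,\overline{K}, \qquad
P = -\,\overline{\partial H/\partial V}, \qquad
\Phi(E,V) = \int_{H(p,q;V)\le E} d^{s}p\,d^{s}q, \qquad
S = \log \Phi(E,V),
\qquad\text{with}\qquad
dS = \frac{dE + P\,dV}{T}.
```

The last relation is the heat-theorem-like statement the Remarks refer to: the combination (dE + P dV)/T is an exact differential of the volume entropy S.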
https://en.wikipedia.org/wiki/Video%20game%20conversion
In video gaming parlance, a conversion is the production of a game on one computer or console that was originally written for another system. Over the years, video game conversion has taken form in a number of different ways, both in their style and the method in which they were converted. In the arcade video game industry, the term conversion has a different usage, in reference to game conversion kits for arcade cabinets. Types of conversions Direct conversions Direct conversions, also referred to as "straight conversions", are conversions in which the source code of the original game is used with relatively few modifications. Direct conversions were fairly rare until the second half of the 1990s. In the case of arcade conversions, this was because arcade systems were usually much more advanced than their contemporary home-based systems, which thus could not accurately recreate the speed, graphics, audio, and in some cases even the gameplay algorithms of arcade games. In the case of personal computer conversions, most games pre-1995 were produced in assembly language, and source-based conversions could not be reproduced on systems with other processors, rendering the original source code useless. Also, while most third-party developers had access to the original graphics and audio, they could not be faithfully reproduced on older home computers such as the ZX Spectrum and developers were forced to recreate the graphics and audio from scratch. In the early 2000s, source-based conversions of games became more feasible and one-to-one pixel perfect conversions became commonplace. Imitations/clones Imitations of popular arcade games were common, particularly in the early days of video gaming when copyright violations were treated less severely. While the game was fundamentally the same, the title, names, graphics and audio were usually changed to avoid legal challenges. Developers have created "clones" of their own games. Escape (now Westone) produced a clone of W
https://en.wikipedia.org/wiki/One-way%20compression%20function
In cryptography, a one-way compression function is a function that transforms two fixed-length inputs into a fixed-length output. The transformation is "one-way", meaning that it is difficult given a particular output to compute inputs which compress to that output. One-way compression functions are not related to conventional data compression algorithms, which instead can be inverted exactly (lossless compression) or approximately (lossy compression) to the original data. One-way compression functions are for instance used in the Merkle–Damgård construction inside cryptographic hash functions. One-way compression functions are often built from block ciphers. Some methods to turn any normal block cipher into a one-way compression function are Davies–Meyer, Matyas–Meyer–Oseas, Miyaguchi–Preneel (single-block-length compression functions) and MDC-2/Meyer–Schilling, MDC-4, Hirose (double-block-length compression functions). These methods are described in detail further down. (MDC-2 is also the name of a hash function patented by IBM.) Another method is 2BOW (or NBOW in general), which is a "high-rate multi-block-length hash function based on block ciphers" and typically achieves (asymptotic) rates between 1 and 2 independent of the hash size (only with small constant overhead). This method has not yet seen any serious security analysis, so should be handled with care. Compression A compression function mixes two fixed length inputs and produces a single fixed length output of the same size as one of the inputs. This can also be seen as that the compression function transforms one large fixed-length input into a shorter, fixed-length output. For instance, input A might be 128 bits, input B 128 bits and they are compressed together to a single output of 128 bits. This is equivalent to having a single 256-bit input compressed to a single output of 128 bits. Some compression functions do not compress by half, but instead by some other factor. For example, input A mi
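As an illustration of one of the constructions named above, the sketch below implements the Davies–Meyer mode: the message block is used as the cipher key, the chaining value as the plaintext, and the output is the ciphertext XORed with the chaining value. The "block cipher" here is a throwaway toy permutation, not a real cipher, and all names and constants are illustrative assumptions.

```python
def toy_block_cipher(key: int, block: int) -> int:
    """A stand-in 64-bit 'block cipher' for illustration only -- NOT secure."""
    x = block
    for r in range(8):
        x ^= key
        x = ((x << 7) | (x >> 57)) & (2**64 - 1)          # rotate left by 7 bits
        x = (x + ((key >> (r * 8)) & 0xFF) + r) & (2**64 - 1)
    return x

def davies_meyer(h_prev: int, msg_block: int) -> int:
    """Davies-Meyer: E_m(h) XOR h, with the message block as the key."""
    return toy_block_cipher(msg_block, h_prev) ^ h_prev

# Merkle-Damgard-style chaining over a few 64-bit message blocks:
h = 0x0123456789ABCDEF            # arbitrary IV for the sketch
for m in [0x1111, 0x2222, 0x3333]:
    h = davies_meyer(h, m)
print(hex(h))
```

The feed-forward XOR is essential: without it, anyone who knows the message block could simply decrypt the output and invert the step.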
https://en.wikipedia.org/wiki/Word%20problem%20%28mathematics%29
In computational mathematics, a word problem is the problem of deciding whether two given expressions are equivalent with respect to a set of rewriting identities. A prototypical example is the word problem for groups, but there are many other instances as well. A deep result of computational theory is that answering this question is in many important cases undecidable. Background and motivation In computer algebra one often wishes to encode mathematical expressions using an expression tree. But there are often multiple equivalent expression trees. The question naturally arises of whether there is an algorithm which, given as input two expressions, decides whether they represent the same element. Such an algorithm is called a solution to the word problem. For example, imagine that are symbols representing real numbers - then a relevant solution to the word problem would, given the input , produce the output EQUAL, and similarly produce NOT_EQUAL from . The most direct solution to a word problem takes the form of a normal form theorem and algorithm which maps every element in an equivalence class of expressions to a single encoding known as the normal form - the word problem is then solved by comparing these normal forms via syntactic equality. For example one might decide that is the normal form of , , and , and devise a transformation system to rewrite those expressions to that form, in the process proving that all equivalent expressions will be rewritten to the same normal form. But not all solutions to the word problem use a normal form theorem - there are algebraic properties which indirectly imply the existence of an algorithm. While the word problem asks whether two terms containing constants are equal, a proper extension of the word problem known as the unification problem asks whether two terms containing variables have instances that are equal, or in other words whether the equation has any solutions. As a common example, is a word problem in the
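A minimal sketch of the normal-form approach described above: a terminating, confluent rewrite system is applied until a fixed point is reached, and two expressions are declared equal exactly when their normal forms coincide. The toy presentation here (a commutative monoid on two generators with the single rule ba → ab) is an assumption chosen for illustration.

```python
def normal_form(word: str, rules: dict) -> str:
    """Repeatedly apply rewrite rules until no rule matches (a fixed point)."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules.items():
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
                break
    return word

# Toy presentation: the rule "ba" -> "ab" sorts every word over {a, b},
# so two words are equal in the commutative monoid iff their normal forms match.
rules = {"ba": "ab"}

def equal(u: str, v: str) -> bool:
    return normal_form(u, rules) == normal_form(v, rules)

print(equal("abab", "baab"))   # True: both normalize to "aabb"
print(equal("abab", "babb"))   # False: different letter counts
```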
https://en.wikipedia.org/wiki/Altova
Altova is a commercial software development company with headquarters in Beverly, MA, United States and Vienna, Austria, that produces integrated XML, JSON, database, UML, and data management software development tools. Company Altova was founded in 1992 as an XML development software company. Its software is used by more than 4 million users and more than 100,000 companies globally. The first product was XMLSpy, and around the year 2000, Altova began to develop new tools to augment XMLSpy and expand into new areas of software development. The CEO and president of Altova is Alexander Falk, who has explained that the development of Altova software has occurred through the inclusion of features most requested by the users of previous program incarnations. Falk is also the inventor behind Altova's patents. Altova software attempts to increase the efficiency of program use in order to reduce the amount of time needed for users to learn database software and other tasks such as query execution. Examples of Altova software includes the XML editor XMLSpy, and MapForce, a data mapping tool. Altova has also added XBRL capable programs to its XML software line, including development tools. In addition, they have included Web Services Description Language, project management and Unified Modeling Language capabilities to their software. Most recently, the company has introduced a mobile development environment called MobileTogether for developing cross-platform enterprise mobile solutions. At the beginning of 2014, the company claimed to have more than 4.6 million users of its software. Programs XMLSpy—XML editor for modeling, editing, transforming, and debugging XML technologies MapForce—any-to-any graphical data mapping, conversion, and integration tool MapForce FlexText—graphical utility for parsing flat files StyleVision—multipurpose visual XSLT stylesheet design, multi-channel publishing, and report building tool UModel—UML modeling tool DatabaseSpy—multi-database
https://en.wikipedia.org/wiki/Acoustic%20holography
Acoustic holography is a method for estimating the sound field near a sound source by measuring acoustic parameters away from the source by means of an array of pressure and/or particle velocity transducers. The measuring techniques included in acoustic holography are becoming increasingly popular in various fields, most notably those of transportation, vehicle and aircraft design, and noise, vibration, and harshness (NVH). The general idea of acoustic holography has led to different versions such as near-field acoustic holography (NAH) and statistically optimal near-field acoustic holography (SONAH). For audio rendering and production, Wave Field Synthesis and Higher Order Ambisonics are related technologies, respectively modelling the acoustic pressure field on a plane, or in a spherical volume.
https://en.wikipedia.org/wiki/Cryptovirology
Cryptovirology refers to the use of cryptography to devise particularly powerful malware, such as ransomware and asymmetric backdoors. Traditionally, cryptography and its applications are defensive in nature, and provide privacy, authentication, and security to users. Cryptovirology employs a twist on cryptography, showing that it can also be used offensively. It can be used to mount extortion based attacks that cause loss of access to information, loss of confidentiality, and information leakage, tasks which cryptography typically prevents. The field was born with the observation that public-key cryptography can be used to break the symmetry between what an antivirus analyst sees regarding malware and what the attacker sees. The antivirus analyst sees a public key contained in the malware, whereas the attacker sees the public key contained in the malware as well as the corresponding private key (outside the malware) since the attacker created the key pair for the attack. The public key allows the malware to perform trapdoor one-way operations on the victim's computer that only the attacker can undo. Overview The field encompasses covert malware attacks in which the attacker securely steals private information such as symmetric keys, private keys, PRNG state, and the victim's data. Examples of such covert attacks are asymmetric backdoors. An asymmetric backdoor is a backdoor (e.g., in a cryptosystem) that can be used only by the attacker, even after it is found. This contrasts with the traditional backdoor that is symmetric, i.e., anyone that finds it can use it. Kleptography, a subfield of cryptovirology, is the study of asymmetric backdoors in key generation algorithms, digital signature algorithms, key exchanges, pseudorandom number generators, encryption algorithms, and other cryptographic algorithms. The NIST Dual EC DRBG random bit generator has an asymmetric backdoor in it. The EC-DRBG algorithm utilizes the discrete-log kleptogram from kleptography, which
https://en.wikipedia.org/wiki/LongRun
LongRun and LongRun2 are power management technologies introduced by Transmeta. LongRun was introduced with the Crusoe processor, while LongRun2 was introduced with the Efficeon processor. LongRun2 has since been licensed to Fujitsu, NEC, Sony, Toshiba, and NVIDIA. LongRun automatically adjusted the processor, moving between higher performance but higher power, and lower power but lower performance. The goals of the automation could be adjusted. One control offered processor frequency levels, and the ability to set a minimum and maximum "window", where the automatic controls would not adjust the speed outside of the window. A second control offered a target of either "economy" or "performance". Some versions offered a third control that adjusted the processor based on power rather than speed. LongRun was based primarily on reducing the clock frequency and voltage supplied to the processor, now commonly called DVFS. Lower frequency reduces performance but also allows voltage reduction, and can yield both power savings and improved efficiency. LongRun2 built further on this by incorporating process technology aimed at reducing variations in the manufacturing process and thereby improving yields.
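The usual first-order argument for why DVFS saves power (a textbook relation, not a formula taken from Transmeta's documentation) is that dynamic switching power scales as

```latex
P_{\text{dyn}} \;\approx\; \alpha\, C\, V^{2} f ,
```

where α is the activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency. Because the sustainable frequency falls roughly in proportion to the supply voltage, reducing both together cuts power roughly with the cube of the voltage and energy per operation roughly with its square, which is why lowering frequency and voltage in tandem, as LongRun does, is more effective than lowering frequency alone.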
https://en.wikipedia.org/wiki/Gaussian%20fixed%20point
A Gaussian fixed point is a fixed point of the renormalization group flow which is noninteracting in the sense that it is described by a free field theory. The word Gaussian comes from the fact that the probability distribution is Gaussian at the Gaussian fixed point. This means that Gaussian fixed points are exactly solvable (trivially solvable in fact). Slight deviations from the Gaussian fixed point can be described by perturbation theory. See also UV fixed point IR fixed point Quantum triviality
https://en.wikipedia.org/wiki/Stem-cell%20therapy
Stem-cell therapy is the use of stem cells to treat or prevent a disease or condition. The only established therapy using stem cells is hematopoietic stem cell transplantation. This usually takes the form of a bone-marrow transplantation, but the cells can also be derived from umbilical cord blood. Research is underway to develop various sources for stem cells as well as to apply stem-cell treatments for neurodegenerative diseases and conditions such as diabetes and heart disease. Stem-cell therapy has become controversial following developments such as the ability of scientists to isolate and culture embryonic stem cells, to create stem cells using somatic cell nuclear transfer and their use of techniques to create induced pluripotent stem cells. This controversy is often related to abortion politics and to human cloning. Additionally, efforts to market treatments based on transplant of stored umbilical cord blood have been controversial. Medical uses For over 90 years, hematopoietic stem cell transplantation (HSCT) has been used to treat people with conditions such as leukaemia and lymphoma; this is the only widely practiced form of stem-cell therapy. During chemotherapy, most growing cells are killed by the cytotoxic agents. These agents, however, cannot discriminate between the leukaemia or neoplastic cells, and the hematopoietic stem cells within the bone marrow. This is the side effect of conventional chemotherapy strategies that the stem-cell transplant attempts to reverse; a donor's healthy bone marrow reintroduces functional stem cells to replace the cells lost in the host's body during treatment. The transplanted cells also generate an immune response that helps to kill off the cancer cells; this process can go too far, however, leading to graft vs host disease, the most serious side effect of this treatment. Another stem-cell therapy, called Prochymal, was conditionally approved in Canada in 2012 for the management of acute graft-vs-host disease
https://en.wikipedia.org/wiki/Wave%20function%20renormalization
In quantum field theory, wave function renormalization is a rescaling (or renormalization) of quantum fields to take into account the effects of interactions. For a noninteracting or free field, the field operator creates or annihilates a single particle with probability 1. Once interactions are included, however, this probability is modified in general to Z ≠ 1. This appears when one calculates the propagator beyond leading order; e.g. for a scalar field, (The shift of the mass from m0 to m constitutes the mass renormalization.) One possible wave function renormalization, which happens to be scale independent, is to rescale the fields so that the Lehmann weight (Z in the formula above) of their quanta is 1. For the purposes of studying renormalization group flows, if the coefficient of the kinetic term in the action at the scale Λ is Z, then the field is rescaled by . A scale dependent wave function renormalization for a field means that that field has an anomalous scaling dimension. See also Renormalization Renormalization group
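The displayed propagator did not survive extraction; as a hedged sketch, the standard form it takes for a real scalar field near the physical mass pole is

```latex
\Delta(p) \;\sim\; \frac{i\,Z}{p^{2} - m^{2} + i\epsilon} \;+\; \text{(terms regular at } p^{2}=m^{2}\text{)},
```

so the residue at the pole, the Lehmann weight Z, differs from 1 once interactions are included.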
https://en.wikipedia.org/wiki/Cryogenic%20particle%20detector
Cryogenic particle detectors operate at very low temperature, typically only a few degrees above absolute zero. These sensors interact with an energetic elementary particle (such as a photon) and deliver a signal that can be related to the type of particle and the nature of the interaction. While many types of particle detectors might be operated with improved performance at cryogenic temperatures, this term generally refers to types that take advantage of special effects or properties occurring only at low temperature. Introduction The most commonly cited reason for operating any sensor at low temperature is the reduction in thermal noise, which is proportional to the square root of the absolute temperature. However, at very low temperature, certain material properties become very sensitive to energy deposited by particles in their passage through the sensor, and the gain from these changes may be even more than that from reduction in thermal noise. Two such commonly used properties are heat capacity and electrical resistivity, particularly superconductivity; other designs are based on superconducting tunnel junctions, quasiparticle trapping, rotons in superfluids, magnetic bolometers, and other principles. Originally, astronomy pushed the development of cryogenic detectors for optical and infrared radiation. Later, particle physics and cosmology motivated cryogenic detector development for sensing known and predicted particles such as neutrinos, axions, and weakly interacting massive particles (WIMPs). Types of cryogenic particle detectors Calorimetric particle detection A calorimeter is a device that measures the amount of heat deposited in a sample of material. A calorimeter differs from a bolometer in that a calorimeter measures energy, while a bolometer measures power. Below the Debye temperature of a crystalline dielectric material (such as silicon), the heat capacity decreases as the cube of the absolute temperature. It becomes very small, so
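The calorimetric principle can be made concrete with the standard Debye expression for the low-temperature heat capacity of a dielectric crystal (a textbook result, not a formula quoted from the article):

```latex
C \;\approx\; \frac{12\pi^{4}}{5}\, N k_{B} \left(\frac{T}{\Theta_{D}}\right)^{3},
\qquad
\Delta T \;=\; \frac{E}{C},
```

where N is the number of atoms, Θ_D the Debye temperature, and E the energy deposited by the particle. Because C shrinks as T³, even a tiny energy deposit produces a measurable temperature rise ΔT at millikelvin operating temperatures.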
https://en.wikipedia.org/wiki/Waveguide%20%28radio%20frequency%29
In radio-frequency engineering and communications engineering, waveguide is a hollow metal pipe used to carry radio waves. This type of waveguide is used as a transmission line mostly at microwave frequencies, for such purposes as connecting microwave transmitters and receivers to their antennas, in equipment such as microwave ovens, radar sets, satellite communications, and microwave radio links. The electromagnetic waves in a (metal-pipe) waveguide may be imagined as travelling down the guide in a zig-zag path, being repeatedly reflected between opposite walls of the guide. For the particular case of rectangular waveguide, it is possible to base an exact analysis on this view. Propagation in a dielectric waveguide may be viewed in the same way, with the waves confined to the dielectric by total internal reflection at its surface. Some structures, such as non-radiative dielectric waveguides and the Goubau line, use both metal walls and dielectric surfaces to confine the wave. Principle Depending on the frequency, waveguides can be constructed from either conductive or dielectric materials. Generally, the lower the frequency to be passed the larger the waveguide is. For example, the natural waveguide the earth forms given by the dimensions between the conductive ionosphere and the ground as well as the circumference at the median altitude of the Earth is resonant at 7.83 Hz. This is known as Schumann resonance. On the other hand, waveguides used in extremely high frequency (EHF) communications can be less than a millimeter in width. History During the 1890s theorists did the first analyses of electromagnetic waves in ducts. Around 1893 J. J. Thomson derived the electromagnetic modes inside a cylindrical metal cavity. In 1897 Lord Rayleigh did a definitive analysis of waveguides; he solved the boundary value problem of electromagnetic waves propagating through both conducting tubes and dielectric rods of arbitrary shape. He showed that the waves could tr
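The relation between guide size and the lowest frequency it will pass can be quantified with the standard cutoff formula for the dominant TE₁₀ mode of a rectangular guide of broad-wall width a (a textbook result, not quoted from the article):

```latex
f_{c}\,(\mathrm{TE}_{10}) \;=\; \frac{c}{2a},
```

so, for example, the common X-band guide WR-90 with a ≈ 22.86 mm has a cutoff near 6.56 GHz; below the cutoff frequency the wave is evanescent and the guide does not propagate it.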
https://en.wikipedia.org/wiki/Geir%20Ellingsrud
Geir Ellingsrud (born 29 November 1948) is professor of mathematics at the University of Oslo, where he specialises in algebra and algebraic geometry. He took the cand.real. degree at the University of Oslo in 1973, and the doctorate at Stockholm University in 1982. He was a lecturer at Stockholm University from 1982 to 1984, associate professor at the University of Oslo from 1984 to 1989, professor at the University of Bergen from 1989 to 1993 and at the University of Oslo since 1993. He has been a visiting scholar in Nice, Paris, Bonn and Chicago. He has edited the journals Acta Mathematica and Normat. In 2005 Ellingsrud was elected to be rector of the University of Oslo for the period 2006-2009. His team also consisted of Inga Bostad and Haakon Breien Benestad. He did not seek reelection to a second term, and was succeeded by Ole Petter Ottersen.
https://en.wikipedia.org/wiki/Self-service
Self-service is the practice of serving oneself, usually when making purchases. Aside from Automated Teller Machines, which are not limited to banks, and customer-operated supermarket check-out, labor-saving which has been described as self-sourcing, there is the latter's subset, selfsourcing and a related pair: End-user development and End-user computing. Note has been made how paid labor has been replaced with unpaid labor, and how reduced professionalism and distractions from primary duties have reduced value obtained from employees' time. For decades, laws have been passed both facilitating and preventing self-pumping of gas and other self-service. Overview Self-service is the practice of serving oneself, usually when purchasing items. Common examples include many gas stations, where the customer pumps their own gas rather than have an attendant do it (full service is required by law in New Jersey, urban parts of Oregon, most of Mexico, and Richmond, British Columbia, but is the exception rather than the rule elsewhere). Automatic Teller Machines (ATMs) in the banking world have also revolutionized how people withdraw and deposit funds; most stores in the Western world, where the customer uses a shopping cart in the store, placing the items they want to buy into the cart and then proceeding to the checkout counter/aisles; or at buffet-style restaurants, where the customer serves their own plate of food from a large, central selection. Patentable business method In 1917, the US Patent Office awarded Clarence Saunders a patent for a "self-serving store." Saunders invited his customers to collect the goods they wanted to buy from the store and present them to a cashier, rather than having the store employee consult a list presented by the customer, and collect the goods. Saunders licensed the business method to independent grocery stores; these operated under the name "Piggly Wiggly." Electronic commerce Self-service is over the phone, web, and email to faci
https://en.wikipedia.org/wiki/REPROM
Reprogrammable memory (abbreviated as REPROM or RePROM) is a type of ROM, more precisely, a type of PROM electronic memory. The prefix "Re" refers to the memory being reprogrammable. There are two types of RePROM electronic memories: EPROM E²PROM or EEPROM See also Read-mostly memory (RMM) Non-volatile memory Computer memory
https://en.wikipedia.org/wiki/Giulio%20Ascoli
Giulio Ascoli (20 January 1843, Trieste – 12 July 1896, Milan) was a Jewish-Italian mathematician. He was a student of the Scuola Normale di Pisa, where he graduated in 1868. In 1872 he became Professor of Algebra and Calculus at the Politecnico di Milano. From 1879 he was professor of mathematics at the Reale Istituto Tecnico Superiore, where a commemorative plaque was affixed in his honour in 1901. He was also a corresponding member of the Istituto Lombardo. He made contributions to the theory of functions of a real variable and to Fourier series. For example, Ascoli introduced equicontinuity in 1884, a topic regarded as one of the fundamental concepts in the theory of real functions. In 1889, Italian mathematician Cesare Arzelà generalized Ascoli's Theorem into the Arzelà–Ascoli theorem, a practical sequential compactness criterion of functions. See also Measure (mathematics) Oscillation (mathematics) Riemann Integral
https://en.wikipedia.org/wiki/Burr%20%28edge%29
A burr is a raised edge or small piece of material that remains attached to a workpiece after a modification process. It is usually an unwanted piece of material and is removed with a deburring tool in a process called 'deburring'. Burrs are most commonly created by machining operations, such as grinding, drilling, milling, engraving or turning. It may be present in the form of a fine wire on the edge of a freshly sharpened tool or as a raised portion of a surface; this type of burr is commonly formed when a hammer strikes a surface. Deburring accounts for a significant portion of manufacturing costs. In the printmaking technique of drypoint, burr, which gives a rich fuzzy quality to the engraved line, is highly desirable—the great problem with the drypoint medium is that the burr rapidly diminishes after as few as ten impressions are printed. Types There are three types of burrs that can be formed from machining operations: Poisson burr, rollover burr, and breakout burr. The rollover burr is the most common. Burrs may be classified by the physical manner of formation. Plastic deformation of material includes lateral flow (Poisson burr), bending (rollover burr), and tearing of material from the workpiece (tear burr). Solidification or redeposition of material results in a recast bead. Incomplete cutoff of material causes a cutoff projection. Burrs can be minimized or prevented by considering materials, function, shape, and processing in the design and manufacturing engineering phases of product development. Burrs in drilled holes cause fastener and material problems. Burrs cause more stress to be concentrated at the edges of holes, decreasing resistance to fracture and shortening fatigue life. They interfere with the seating of fasteners, causing damage to fastener or the assembly itself. Cracks caused by stress and strain can result in material failure. Burrs in holes also increase the risk of corrosion, which may be due to variations in the thickness of coati
https://en.wikipedia.org/wiki/N%C3%B8rlund%E2%80%93Rice%20integral
In mathematics, the Nørlund–Rice integral, sometimes called Rice's method, relates the nth forward difference of a function to a line integral on the complex plane. It commonly appears in the theory of finite differences and has also been applied in computer science and graph theory to estimate binary tree lengths. It is named in honour of Niels Erik Nørlund and Stephen O. Rice. Nørlund's contribution was to define the integral; Rice's contribution was to demonstrate its utility by applying saddle-point techniques to its evaluation. Definition The nth forward difference of a function f(x) is given by where is the binomial coefficient. The Nörlund–Rice integral is given by where f is understood to be meromorphic, α is an integer, , and the contour of integration is understood to circle the poles located at the integers α, ..., n, but encircles neither integers 0, ..., nor any of the poles of f. The integral may also be written as where B(a,b) is the Euler beta function. If the function is polynomially bounded on the right hand side of the complex plane, then the contour may be extended to infinity on the right hand side, allowing the transform to be written as where the constant c is to the left of α. Poisson–Mellin–Newton cycle The Poisson–Mellin–Newton cycle, noted by Flajolet et al. in 1985, is the observation that the resemblance of the Nørlund–Rice integral to the Mellin transform is not accidental, but is related by means of the binomial transform and the Newton series. In this cycle, let be a sequence, and let g(t) be the corresponding Poisson generating function, that is, let Taking its Mellin transform one can then regain the original sequence by means of the Nörlund–Rice integral: where Γ is the gamma function. Riesz mean A closely related integral frequently occurs in the discussion of Riesz means. Very roughly, it can be said to be related to the Nörlund–Rice integral in the same way that Perron's formula is related to the Mellin transfo
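The displayed formulas in this entry were lost in extraction. As a hedged reconstruction of the standard statements (notation assumed here), the nth forward difference and the Nørlund–Rice integral read:

```latex
\Delta^{n} f(x) \;=\; \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} f(x+k),
\qquad
\sum_{k=\alpha}^{n} (-1)^{k} \binom{n}{k} f(k)
 \;=\; \frac{(-1)^{n}\, n!}{2\pi i} \oint_{\gamma} \frac{f(z)}{z(z-1)\cdots(z-n)}\, dz ,
```

with the contour γ encircling the integer poles α, …, n and none of the poles of f, as described above.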
https://en.wikipedia.org/wiki/Pursuit%E2%80%93evasion
Pursuit–evasion (variants of which are referred to as cops and robbers and graph searching) is a family of problems in mathematics and computer science in which one group attempts to track down members of another group in an environment. Early work on problems of this type modeled the environment geometrically. In 1976, Torrence Parsons introduced a formulation whereby movement is constrained by a graph. The geometric formulation is sometimes called continuous pursuit–evasion, and the graph formulation discrete pursuit–evasion (also called graph searching). Current research is typically limited to one of these two formulations. Discrete formulation In the discrete formulation of the pursuit–evasion problem, the environment is modeled as a graph. Problem definition There are innumerable possible variants of pursuit–evasion, though they tend to share many elements. A typical, basic example is as follows (cops and robber games): Pursuers and evaders occupy nodes of a graph. The two sides take alternate turns, which consist of each member either staying put or moving along an edge to an adjacent node. If a pursuer occupies the same node as an evader the evader is captured and removed from the graph. The question usually posed is how many pursuers are necessary to ensure the eventual capture of all the evaders. If one pursuer suffices, the graph is called a cop-win graph. In this case, a single evader can always be captured in time linear to the number of n nodes of the graph. Capturing r evaders with k pursuers can take in the order of r n time as well, but the exact bounds for more than one pursuer are still unknown. Often the movement rules are altered by changing the velocity of the evaders. This velocity is the maximum number of edges that an evader can move along in a single turn. In the example above, the evaders have a velocity of one. At the other extreme is the concept of infinite velocity, which allows an evader to move to any node in the graph so l
https://en.wikipedia.org/wiki/Acoustic%20waveguide
An acoustic waveguide is a physical structure for guiding sound waves, i.e., a waveguide used in acoustics. Examples One example is a speaking tube used aboard ships for communication between decks. Other examples include the rear passage in a transmission-line loudspeaker enclosure, the ear canal, and a stethoscope. The term also applies to guided waves in solids. A duct for sound propagation also behaves like a transmission line (e.g. air conditioning duct, car muffler, etc.). The duct contains some medium, such as air, that supports sound propagation. Its length is typically around a quarter of the wavelength which is intended to be guided, but the dimensions of its cross section are smaller than this. Sound is introduced at one end of the tube by forcing the pressure to vary in the direction of propagation, which causes a pressure gradient to travel perpendicular to the cross section at the speed of sound. When the wave reaches the end of the transmission line, its behaviour depends on what is present at the end of the line. There are three generalized scenarios: A low impedance load (e.g. leaving the end open in free air) will cause a reflected wave in which the sign of the pressure variation reverses, but the direction of the pressure wave remains the same. A load that matches the characteristic impedance (defined below) will completely absorb the wave and the energy associated with it. No reflection will occur. A high impedance load (e.g. by plugging the end of the line) will cause a reflected wave in which the direction of the pressure wave is reversed but the sign of the pressure remains the same. Since a transmission line behaves like a four terminal model, one cannot really define or measure the impedance of a transmission line component. One can however measure its input or output impedance. It depends on the cross-sectional area and length of the line, the sound frequency, as well as the characteristic impedance of the sound propagating medium
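The three termination scenarios follow the standard transmission-line reflection formula for the pressure wave (a textbook relation, not quoted from the article):

```latex
R \;=\; \frac{Z_{L} - Z_{0}}{Z_{L} + Z_{0}},
```

where Z₀ is the characteristic impedance of the duct and Z_L the impedance of the load at its end. An open end (Z_L ≪ Z₀) gives R ≈ −1, so the pressure sign reverses; a matched load gives R = 0 and no reflection; and a plugged end (Z_L ≫ Z₀) gives R ≈ +1, so the pressure reflects with the same sign, matching the three cases listed above.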
https://en.wikipedia.org/wiki/Vitosha%20Mountain%20TV%20Tower
Vitosha Mountain TV Tower, better known as Kopitoto (, "The Hoof") after the rock outcrop () it stands on, is a tall TV tower built of reinforced concrete on Vitosha Mountain near Sofia, Bulgaria. The footprint of the tower has the shape of a hexagon with three of the sides extended (i.e. almost triangular). From the tower there is a commanding view of Sofia, and the tower can be seen from everywhere in Sofia, making it a landmark of Sofia's skyline. It is the second tallest television tower in Bulgaria. See also List of towers List of tallest structures in Bulgaria External links Pictures and description in Bulgarian Towers in Bulgaria Buildings and structures in Sofia
https://en.wikipedia.org/wiki/Flooding%20%28computer%20networking%29
Flooding is used in computer network routing algorithms in which every incoming packet is sent through every outgoing link except the one it arrived on. Flooding is used in bridging and in systems such as Usenet and peer-to-peer file sharing and as part of some routing protocols, including OSPF, DVMRP, and those used in ad-hoc wireless networks (WANETs). Types There are generally two types of flooding available, uncontrolled flooding and controlled flooding. In uncontrolled flooding each node unconditionally distributes packets to each of its neighbors. Without conditional logic to prevent indefinite recirculation of the same packet, broadcast storms are a hazard. Controlled flooding has its own two algorithms to make it reliable, SNCF (Sequence Number Controlled Flooding) and RPF (reverse-path forwarding). In SNCF, the node attaches its own address and sequence number to the packet, since every node has a memory of addresses and sequence numbers. If it receives a packet in memory, it drops it immediately while in RPF, the node will only send the packet forward. If it is received from the next node, it sends it back to the sender. Algorithms There are several variants of flooding algorithms. Most work roughly as follows: Each node acts as both a transmitter and a receiver. Each node tries to forward every message to every one of its neighbors except the source node. This results in every message eventually being delivered to all reachable parts of the network. Algorithms may need to be more complex than this, since, in some case, precautions have to be taken to avoid wasted duplicate deliveries and infinite loops, and to allow messages to eventually expire from the system. Selective flooding A variant of flooding called selective flooding partially addresses these issues by only sending packets to routers in the same direction. In selective flooding, the routers don't send every incoming packet on every line but only on those lines which are going approxim
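A minimal sketch of SNCF as described above: each node remembers (origin, sequence number) pairs and silently drops duplicates, which is what keeps a packet from circulating forever in a cyclic topology. The node names, graph, and helper names are illustrative assumptions.

```python
import collections

def sncf_flood(graph, source, payload, seq=0):
    """Sketch of Sequence Number Controlled Flooding (SNCF).

    graph maps node -> list of neighbours.  Each node remembers the
    (origin, sequence number) pairs it has already seen and drops
    duplicates, preventing indefinite recirculation.
    """
    seen = collections.defaultdict(set)
    packet = (source, seq, payload)
    queue = collections.deque([(source, None, packet)])
    deliveries = 0
    while queue:
        node, came_from, pkt = queue.popleft()
        origin, s, _data = pkt
        if (origin, s) in seen[node]:
            continue                      # duplicate: drop it
        seen[node].add((origin, s))
        deliveries += 1
        for neighbour in graph[node]:
            if neighbour != came_from:    # don't echo straight back to the sender
                queue.append((neighbour, node, pkt))
    return deliveries

# Small cyclic topology: without the duplicate check the packet would loop
# around the A-B-C cycle indefinitely (a broadcast storm in miniature).
graph = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"], "D": ["B"]}
print(sncf_flood(graph, "A", "hello"))    # 4 -- every node received it once
```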
https://en.wikipedia.org/wiki/OK%20Kosher%20Certification
OK Kosher Certification is a major kosher certification agency based out of Brooklyn, NY. It is one of the "Big Five," the five largest kosher certifying agencies in the United States. OK also has a large kosher presence in Asia. Early history In 1935, Abraham Goldstein founded Organized Kashrut Laboratories (OK Labs) to meet the American Jewish community's need for Kosher food products. In 1968, Rabbi Bernard Levy purchased OK Labs. He was already involved in kosher certification several years prior to the purchase. At the time, it was certifying a relatively small number of companies, but under his leadership, the organization began to grow, certifying companies internationally. Rabbi Levy instituted several improvements in the methods employed by kosher certifying agencies to verify the nature of products. Until then, many ingredients of products were assumed to be kosher, without visiting the company of each one. His policy was to travel to each company to see how the production was done first-hand. This led him to further investigate other companies, as many ingredients were composed of other sub-ingredients. As the organization grew, and the workload increased, Rabbi Levy's son, Rabbi Don Yoel Levy, joined the OK to help expand the organization. Today After the death of Rabbi Bernard Levy in 1987, his son Rabbi Don Yoel Levy assumed leadership of the organization. Upon Rabbi Don Yoel Levy’s death in 2020, the Executive Rabbinical Council took responsibility of the OK. With more than 10 million consumers seeking kosher products in the United States alone, the kosher food industry has seen rapid growth in the past two decades, with sales reaching $165 billion in 2002. Today the OK Certifies over 140,000 products, produced by over 1500 companies worldwide, including food giants such as Kraft, Snapple, and ConAgra. It employs over 350 Rabbis worldwide. Besides giving Kosher Certification, the OK actively promotes education and observance of kosher laws and
https://en.wikipedia.org/wiki/Ars%20Combinatoria%20%28journal%29
Ars Combinatoria, a Canadian Journal of Combinatorics is an English language research journal in combinatorics, published by the Charles Babbage Research Centre, Winnipeg, Manitoba, Canada. From 1976 to 1988 it published two volumes per year, and subsequently it published as many as six volumes per year. The journal is indexed in MathSciNet and Zentralblatt. As of 2019, SCImago Journal Rank listed it in the bottom quartile of miscellaneous mathematics journals. As of December 15, 2021, the editorial board of the journal resigned, asking that inquiries be directed to the publisher.
https://en.wikipedia.org/wiki/TSQ
6-Methoxy-(8-p-toluenesulfonamido)quinoline (TSQ) is one of the most efficient fluorescent stains for zinc(II). It was introduced by Soviet biochemists Toroptsev and Eshchenko in the early 1970s. The popularity of TSQ as physiological stain rose after seminal works by Christopher Frederickson two decades later. TSQ forms a 2:1 (ligand-metal) complex with zinc and emits blue light upon excitation at 365 nanometers. TSQ has been extensively applied for determination of extracellular or intracellular levels of Zn2+ in biological systems, also to study Zn2+ in mossy fibers of the hippocampus.
https://en.wikipedia.org/wiki/National%20Institutes%20of%20Health%20Director%27s%20Pioneer%20Award
National Institutes of Health Director's Pioneer Award is a research initiative first announced in 2004 designed to support individual scientists' biomedical research. The focus is specifically on "pioneering" research that is highly innovative and has a potential to produce paradigm shifting results. The awards, made annually from the National Institutes of Health common fund, are each worth $500,000 per year, or $2,500,000 for five years. Recipients 2004 Source: NIH Larry Abbott George Q. Daley Homme W. Hellinga Joseph McCune Steven L. McKnight Rob Phillips Stephen R. Quake Chad Mirkin Xiaoliang Sunney Xie 2005 Source: NIH Vicki L. Chandler Hollis T. Cline Leda Cosmides Titia de Lange Karl Deisseroth Pehr A.B. Harbury Erich D. Jarvis Thomas A. Rando Derek J. Smith Giulio Tononi Clare M. Waterman-Storer Nathan Wolfe Junying Yuan 2006 Source: NIH Kwabena A. Boahen Arup K. Chakraborty Lila M. Gierasch Rebecca W. Heald Karla Kirkegaard Thomas J. Kodadek Cheng Chi Lee Evgeny A. Nudler Gary J. Pielak David A. Relman Rosalind A Segal James L. Sherley Younan Xia 2007 Source: NIH Lisa Feldman Barrett Peter Bearman Emery N. Brown Thomas R. Clandinin James J. Collins Margaret Gardel Takao K. Hensch Marshall S. Horwitz Rustem F. Ismagilov Frances E. Jensen Mark J. Schnitzer Gina Turrigiano 2008 Source: NIH James K. Chen, Ph.D., Stanford University Ricardo Dolmetsch, Ph.D., Stanford University James Eberwine, Ph.D., University of Pennsylvania Joshua M. Epstein, Ph.D., Brookings Institution Bruce A. Hay, Ph.D., California Institute of Technology Ann Hochschild, Ph.D., Harvard Medical School Charles M. Lieber, Ph.D., Harvard University Barry London, M.D., Ph.D., University of Pittsburgh Tom Maniatis, Ph.D., Harvard University Teri W. Odom, Ph.D., Northwestern University Hongkun Park, Ph.D., Harvard University Aviv Regev, Ph.D., Massachusetts Institute of Technology/Broad Institute Aravinthan D.T. Samuel, Ph.D., Harvard University Saeed Tavazoie, Ph.D., Princeton Universit
https://en.wikipedia.org/wiki/CADPAC
CADPAC, the Cambridge Analytic Derivatives Package, is a suite of programs for ab initio computational chemistry calculations. It has been developed by R. D. Amos with contributions from I. L. Alberts, J. S. Andrews, S. M. Colwell, N. C. Handy, D. Jayatilaka, P. J. Knowles, R. Kobayashi, K. E. Laidig, G. Laming, A. M. Lee, P. E. Maslen, C. W. Murray, J. E. Rice, E. D. Simandiras, A. J. Stone, M.-D. Su and D. J. Tozer. at Cambridge University since 1981. It is capable of molecular Hartree–Fock calculations, Møller–Plesset calculations, various other correlated calculations and density functional theory calculations. See also Quantum chemistry computer programs External links Computational chemistry software
https://en.wikipedia.org/wiki/Asperity%20%28materials%20science%29
In materials science, asperity, defined as "unevenness of surface, roughness, ruggedness" (from the Latin asper—"rough"), has implications (for example) in physics and seismology. Smooth surfaces, even those polished to a mirror finish, are not truly smooth on a microscopic scale. They are rough, with sharp, rough or rugged projections, termed "asperities". Surface asperities exist across multiple scales, often in a self affine or fractal geometry. The fractal dimension of these structures has been correlated with the contact mechanics exhibited at an interface in terms of friction and contact stiffness. When two macroscopically smooth surfaces come into contact, initially they only touch at a few of these asperity points. These cover only a very small portion of the surface area. Friction and wear originate at these points, and thus understanding their behavior becomes important when studying materials in contact. When the surfaces are subjected to a compressive load, the asperities deform through elastic and plastic modes, increasing the contact area between the two surfaces until the contact area is sufficient to support the load. The relationship between frictional interactions and asperity geometry is complex and poorly understood. It has been reported that an increased roughness may under certain circumstances result in weaker frictional interactions while smoother surfaces may in fact exhibit high levels of friction owing to high levels of true contact. The Archard equation provides a simplified model of asperity deformation when materials in contact are subject to a force. Due to the ubiquitous presence of deformable asperities in self affine hierarchical structures, the true contact area at an interface exhibits a linear relationship with the applied normal load. See also Surface roughness Burnishing (metal)
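Two standard relations referred to above can be written explicitly (textbook forms, presented here as such rather than taken from the article): the true contact area grows linearly with the normal load, and the Archard equation estimates the wear volume:

```latex
A_{\text{true}} \;\approx\; \frac{W}{H},
\qquad
Q \;=\; \frac{K\, W\, L}{H},
```

where W is the normal load, H the hardness of the softer material, L the sliding distance, K a dimensionless wear coefficient, and Q the volume of material removed.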
https://en.wikipedia.org/wiki/Composition%20algebra
In mathematics, a composition algebra over a field is a not necessarily associative algebra over together with a nondegenerate quadratic form that satisfies for all and in . A composition algebra includes an involution called a conjugation: The quadratic form is called the norm of the algebra. A composition algebra (A, ∗, N) is either a division algebra or a split algebra, depending on the existence of a non-zero v in A such that N(v) = 0, called a null vector. When x is not a null vector, the multiplicative inverse of x is When there is a non-zero null vector, N is an isotropic quadratic form, and "the algebra splits". Structure theorem Every unital composition algebra over a field can be obtained by repeated application of the Cayley–Dickson construction starting from (if the characteristic of is different from ) or a 2-dimensional composition subalgebra (if ).  The possible dimensions of a composition algebra are , , , and . 1-dimensional composition algebras only exist when . Composition algebras of dimension 1 and 2 are commutative and associative. Composition algebras of dimension 2 are either quadratic field extensions of or isomorphic to . Composition algebras of dimension 4 are called quaternion algebras.  They are associative but not commutative. Composition algebras of dimension 8 are called octonion algebras.  They are neither associative nor commutative. For consistent terminology, algebras of dimension 1 have been called unarion, and those of dimension 2 binarion. Every composition algebra is an alternative algebra. Using the doubled form ( _ : _ ): A × A → K by then the trace of a is given by (a:1) and the conjugate by a* = (a:1)e – a where e is the basis element for 1. A series of exercises prove that a composition algebra is always an alternative algebra. Instances and usage When the field is taken to be complex numbers and the quadratic form , then four composition algebras over are , the bicomplex numbers, the biquaterni
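The defining identity and the inverse formula referred to above were lost in extraction; in the usual notation (assumed here) they read

```latex
N(xy) \;=\; N(x)\,N(y) \quad\text{for all } x, y \in A,
\qquad
x^{-1} \;=\; \frac{x^{*}}{N(x)} \quad\text{when } N(x) \neq 0 .
```

The first is the composition property that gives the algebra its name; the second is the multiplicative inverse for a non-null element mentioned in the text.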
https://en.wikipedia.org/wiki/Million-dollar%20wound
"Million-dollar wound" (American English) or "Blighty wound" (British English) is military slang for a type of wound received in combat which is serious enough to get the soldier sent away from the fighting, but neither fatal nor permanently crippling. Description In his World War II memoir With the Old Breed, Eugene Sledge wrote that during the Battle of Okinawa, the day after he tried to reassure a fellow United States Marine who believed he would soon die, A similar concept is the Blighty (a slang term for Britain or England) wound, a British reference from World War I. In popular culture In the film Forrest Gump, the titular character receives a gunshot to his backside during his service in the Vietnam War, which leaves him sidelined from combat for months (ultimately serving as the end of his combat service, but for unrelated reasons). Due to Gump's below-average intelligence, he takes the expression "million dollar wound" literally, saying: "the Army must keep that money, 'cause I still ain't seen a nickel of that million dollars".
https://en.wikipedia.org/wiki/French%20mathematical%20seminars
French mathematical seminars have been an important type of institution combining research and exposition, active since the beginning of the twentieth century. From 1909 to 1937, the Séminaire Hadamard gathered many participants (f. i. André Weil) around the presentation of international research papers and work in progress. The Séminaire Julia focussed on yearly themes and impulsed the Bourbaki movement. The Séminaire Nicolas Bourbaki is the most famous, but is atypical in a number of ways: it attempts to cover, if selectively, the whole of pure mathematics, and its talks are now, by convention, reports and surveys on research by someone not directly involved. More standard is a working group organised around a specialist area, with research talks given and written up "from the horse's mouth". Historically speaking, the Séminaire Cartan of the late 1940s and early 1950s, around Henri Cartan, was one of the most influential. Publication in those days was by means of the duplicated exemplaire (limited distribution and not peer-reviewed). The seminar model was tested, almost to destruction, by the SGA series of Alexander Grothendieck. Notable seminars Séminaire Bourbaki, still current, general; Nicolas Bourbaki Séminaire Brelot-Choquet-Deny (from 1957), potential theory; Marcel Brelot, Gustave Choquet, Jacques Deny Séminaire Cartan, homological algebra, sheaf theory, several complex variables; Henri Cartan and his students Séminaire Châtelet-Dubreil, Dubreil, Dubreil-Pisot, from 1951, abstract algebra Séminaire Chevalley, algebraic geometry, late 1950s Séminaire Delange-Pisot, then Delange-Pisot-Poitou, from 1959, number theory Séminaire Ehresmann, differential geometry and category theory; Charles Ehresmann Séminaire Grothendieck, from 1957, became Grothendieck's Séminaire de Géométrie Algébrique Séminaire Janet, differential equations Séminaire Kahane Séminaire Lelong, several complex variables Séminaire Schwartz, functional analysis; Laurent Schwar
https://en.wikipedia.org/wiki/Chorioamnionitis
Chorioamnionitis, also known as intra-amniotic infection (IAI), is inflammation of the fetal membranes (amnion and chorion), usually due to bacterial infection. In 2015, a National Institute of Child Health and Human Development Workshop expert panel recommended use of the term "triple I" to address the heterogeneity of this disorder. The term triple I refers to intrauterine infection or inflammation or both and is defined by strict diagnostic criteria, but this terminology has not been commonly adopted although the criteria are used. Chorioamnionitis results from an infection caused by bacteria ascending from the vagina into the uterus and is associated with premature or prolonged labor. It triggers an inflammatory response to release various inflammatory signaling molecules, leading to increased prostaglandin and metalloproteinase release. These substances promote uterine contractions and cervical ripening, causations of premature birth. The risk of developing chorioamnionitis increases with number of vaginal examinations performed in the final month of pregnancy, including labor. Tobacco and alcohol use also puts mothers at risk for chorioamnionitis development. Chorioamnionitis is caught early by looking at signs and symptoms such as fever, abdominal pain, or abnormal vaginal excretion. Administration of antibiotics if the amniotic sac bursts prematurely can prevent chorioamnionitis occurrence. Signs and symptoms The signs and symptoms of clinical chorioamnionitis include fever, leukocytosis (>15,000 cells/mm³), maternal (>100 bpm) or fetal (>160 bpm) tachycardia, uterine tenderness and preterm rupture of membranes. Causes Causes of chorioamnionitis stem from bacterial infection as well as obstetric and other related factors. Microorganisms Bacterial, viral, and even fungal infections can cause chorioamnionitis. Most commonly from Ureaplasma, Fusobacterium, and Streptococcus bacteria species. Less commonly, Gardnerella, Mycoplasma, and Bacteroides bacter
https://en.wikipedia.org/wiki/Frutarom
Frutarom Industries Ltd. () is an Israeli-based company that specializes in the production and distribution of extracts for flavor and fragrance. In 2015 it had sales of over $872 million. Shares of the company had traded on the London Stock Exchange and the Tel Aviv Stock Exchange until it was acquired by International Flavors & Fragrances in 2018. Overview The Frutarom Group is a flavor and ingredients company based in Haifa, Israel. The company develops, manufactures, and markets an extensive variety of flavors and ingredients catered to customers in a range of industries around the world. Frutarom operates through two major divisions: Flavors Division – develops, produces, and markets flavor compounds and food system solutions. Fine Ingredients Division – develops, produces and markets flavor extracts, functional food ingredients, pharmaceutical/nutraceutical extracts, specialty essential oils, citrus products and aroma chemicals. History Frutarom was established by Yehuda Araten and Maurice Gerzon, two industrialists from the Netherlands in 1933. They chose to build their first plant in an almost desert-like area between Haifa and Acco in Israel, making Frutarom one of Israel's earliest industrial enterprises. In 1952, Frutarom became a subsidiary of the newly established Electrochemical Industries (Frutarom) Ltd (EIF). By 1973 ICC Industries, owned by Dr. John J. Farber, became the controlling party of EIF and Frutarom. Since the 1990s, Frutarom has expanded by establishing itself overseas beginning with the acquisition of a small US company in 1990. Frutarom now has subsidiaries in Israel, the United States, the UK, Switzerland, Russia, Turkey, China, Ukraine, Kazakhstan, Peru and Germany, plus marketing offices in France, Romania, India, and Hong Kong. In 1996 Frutarom listed on the Tel Aviv Stock Exchange. The company was formerly known as Frutarom NewCo (1995) Ltd but changed its name to Frutarom Industries Ltd. in 1996. On 22 February 2005, Frutaro
https://en.wikipedia.org/wiki/Point%20of%20zero%20charge
The point of zero charge (pzc) is generally described as the pH at which the net charge of the total particle surface (i.e. the adsorbent's surface) is equal to zero, a concept introduced in studies of colloidal flocculation to explain how pH affects the phenomenon. A related concept in electrochemistry is the electrode potential at the point of zero charge. Generally, the pzc in electrochemistry is the value of the negative decimal logarithm of the activity of the potential-determining ion in the bulk fluid. The pzc is of fundamental importance in surface science. For example, in the field of environmental science, it determines how easily a substrate is able to adsorb potentially harmful ions. It also has countless applications in the technology of colloids, e.g., flotation of minerals. Therefore, the pzc value has been examined in many applications of adsorption in environmental science. The pzc value is typically obtained by titration, and several titration methods have been developed. Related values associated with soil characteristics exist alongside the pzc value, including zero point of charge (zpc), point of zero net charge (pznc), etc. Term definition of point of zero charge The point of zero charge is the pH for which the net surface charge of the adsorbent is equal to zero. This concept was introduced as interest grew in the effect of the pH of the solution on adsorption. The reason why pH has attracted so much attention is that the adsorption of some substances is strongly dependent on pH. The pzc value is determined by the characteristics of the adsorbent. For example, the surface charge of the adsorbent is determined by the ions that lie on the surface of the particle (adsorbent) structure. At lower pH, hydrogen ions (protons, H+) would be adsorbed more than other cations (adsorbates), so that, in the case of a negatively charged particle, the other cations would be less adsorbed. On the other hand, if the surface is positively c
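As a worked illustration of the titration idea above, the pzc is often estimated in the laboratory by finding the pH at which a measured pH change (or net surface charge) crosses zero. The following is a minimal sketch of that interpolation step under pH-drift-style data; the data points are invented for demonstration and are not from any study referenced above.

```python
# Illustrative sketch: estimate a point of zero charge from pH-drift data
# by locating where the change in pH (final - initial) crosses zero.
# The data points are made up for demonstration purposes.
import numpy as np

initial_pH = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
final_pH   = np.array([3.6, 4.9, 6.1, 6.8, 7.1, 7.4, 8.2])
delta = final_pH - initial_pH  # positive below the pzc, negative above it

# Find the first sign change and linearly interpolate the zero crossing.
idx = np.where(np.diff(np.sign(delta)) != 0)[0][0]
x0, x1, y0, y1 = initial_pH[idx], initial_pH[idx + 1], delta[idx], delta[idx + 1]
pzc = x0 - y0 * (x1 - x0) / (y1 - y0)
print(f"Estimated point of zero charge: pH {pzc:.2f}")
```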
https://en.wikipedia.org/wiki/Virt-manager
virt-manager is a desktop virtual machine monitor primarily developed by Red Hat. Features Virtual Machine Manager allows users to: create, edit, start and stop VMs view and control each VM's console see performance and utilization statistics for each VM view all running VMs and hosts, and their live performance or resource utilization statistics. use KVM, Xen or QEMU virtual machines, running either locally or remotely. use LXC containers Support for FreeBSD's bhyve hypervisor has been included since 2014, though it remains disabled by default. Distributions including Virtual Machine Manager Virtual Machine Manager comes as the package in: Arch Linux CentOS Debian (since lenny) Fedora (since version 6) FreeBSD (via Ports collection) Frugalware Gentoo Mandriva Linux (since release 2007.1) MXLinux NetBSD (via pkgsrc) NixOS OpenBSD (via Ports collection) openSUSE (since release 10.3) Red Hat Enterprise Linux (versions 5 through 7 only) Scientific Linux Trisquel TrueOS Ubuntu (version 8.04 and above) Void Linux See also libvirt, the API used by Virtual Machine Manager to create and manage virtual machines
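As an illustrative sketch of the kind of operation Virtual Machine Manager performs, the libvirt Python bindings (libvirt is the API listed under "See also") can enumerate the virtual machines on a host. The connection URI "qemu:///system" is an assumption about a local QEMU/KVM setup, not something specified above.

```python
# Minimal sketch: list domains the way a tool like virt-manager would,
# using the libvirt API it is built on. Assumes the libvirt Python bindings
# are installed and a local QEMU/KVM hypervisor is reachable at the
# (assumed) URI "qemu:///system".
import libvirt

conn = libvirt.openReadOnly("qemu:///system")  # read-only connection to the hypervisor
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name()}: {state}")
finally:
    conn.close()
```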
https://en.wikipedia.org/wiki/Genomic%20convergence
Genomic convergence is a multifactor approach used in genetic research that combines different kinds of genetic data analysis to identify and prioritize susceptibility genes for a complex disease. Early applications In January 2003, Michael Hauser along with fellow researchers at the Duke Center for Human Genetics (CHG) coined the term “genomic convergence” to describe their endeavor to identify genes affecting the expression of Parkinson disease (PD). Their work successfully combined serial analysis of gene expression (SAGE) with genetic linkage analysis. The authors explain, “While both linkage and expression analyses are powerful on their own, the number of possible genes they present as candidates for PD or any complex disorder remains extremely large”. The convergence of the two methods allowed researchers to decrease the number of possible PD genes to consider for further study. Their success prompted further use of the genomic convergence method at the CHG, and in July 2003 Yi-Ju Li, et al. published a paper revealing that glutathione S-transferase omega-1 (GSTO1) modifies the age-at-onset (AAO) of Alzheimer disease (AD) and PD. In May 2004, Dr. Margaret Pericak-Vance, currently the director of the John P. Hussman Institute for Human Genomics at the University of Miami Miller School of Medicine and then the director of the CHG, articulated the value of the genomic convergence method at a New York Academy of Sciences (NYAS) keynote address entitled "Novel Methods in Genetic Exploration of Neurodegenerative Disease." She stated, "No single method is going to get us where we need to be with these complex traits. It is going to take a combination of methods to dissect the underlying etiology of these disorders". Recent and future applications Genomic convergence has a countless number of creative applications that combine the strengths of different analyses and studies. Maher Noureddine et al., note in their 2005 paper, “One of the growing problems in the s
https://en.wikipedia.org/wiki/Variation%20%28astronomy%29
In astronomy, the variation of the Moon is one of the principal perturbations in the motion of the Moon. Discovery The variation was discovered by Tycho Brahe, who noticed that, starting from a lunar eclipse in December 1590, at the times of syzygy (new or full moon), the apparent velocity of motion of the Moon (along its orbit as seen against the background of stars) was faster than expected. On the other hand, at the times of first and last quarter, its velocity was correspondingly slower than expected. (Those expectations were based on the lunar tables widely used up to Tycho's time. They took some account of the two largest irregularities in the Moon's motion, i.e. those now known as the equation of the center and the evection, see also Lunar theory - History.) Variation The main visible effect (in longitude) of the variation of the Moon is that during the course of every month, at the octants of the Moon's phase that follow the syzygies (i.e. halfway between the new or the full moon and the next-following quarter), the Moon is about two thirds of a degree farther ahead than would be expected on the basis of its mean motion (as modified by the equation of the centre and by the evection). But at the octants that precede the syzygies, it is about two thirds of a degree behind. At the syzygies and quarters themselves, the main effect is on the Moon's velocity rather than its position. In 1687 Newton published, in the 'Principia', his first steps in the gravitational analysis of the motion of three mutually-attracting bodies. This included a proof that the Variation is one of the results of the perturbation of the motion of the Moon caused by the action of the Sun, and that one of the effects is to distort the Moon's orbit in a practically elliptical manner (ignoring at this point the eccentricity of the Moon's orbit), with the centre of the ellipse occupied by the Earth, and the major axis perpendicular to a line drawn between the Earth and Sun. The Variat
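The displacement described above is commonly summarized by the leading term of the variation in ecliptic longitude, which in standard notation is approximately

$$\Delta\lambda \approx +39.5'\,\sin 2D,$$

where D is the mean elongation of the Moon from the Sun. The term vanishes at the syzygies and quadratures and reaches its extremes, about two thirds of a degree as noted above, at the octants.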
https://en.wikipedia.org/wiki/End%20%28category%20theory%29
In category theory, an end of a functor is a universal dinatural transformation from an object e of X to S. More explicitly, this is a pair , where e is an object of X and is an extranatural transformation such that for every extranatural transformation there exists a unique morphism of X with for every object a of C. By abuse of language the object e is often called the end of the functor S (forgetting ) and is written Characterization as limit: If X is complete and C is small, the end can be described as the equalizer in the diagram where the first morphism being equalized is induced by and the second is induced by . Coend The definition of the coend of a functor is the dual of the definition of an end. Thus, a coend of S consists of a pair , where d is an object of X and is an extranatural transformation, such that for every extranatural transformation there exists a unique morphism of X with for every object a of C. The coend d of the functor S is written Characterization as colimit: Dually, if X is cocomplete and C is small, then the coend can be described as the coequalizer in the diagram Examples Natural transformations: Suppose we have functors then . In this case, the category of sets is complete, so we need only form the equalizer and in this case the natural transformations from to . Intuitively, a natural transformation from to is a morphism from to for every in the category with compatibility conditions. Looking at the equalizer diagram defining the end makes the equivalence clear. Geometric realizations: Let be a simplicial set. That is, is a functor . The discrete topology gives a functor , where is the category of topological spaces. Moreover, there is a map sending the object of to the standard -simplex inside . Finally there is a functor that takes the product of two topological spaces. Define to be the composition of this product functor with . The coend of is the geometric realization of . Not
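In standard notation, the equalizer description of the end reads

$$\int_{c\in C} S(c,c)\;\longrightarrow\;\prod_{c\in C} S(c,c)\;\rightrightarrows\;\prod_{f\colon c\to c'} S(c,c'),$$

where one parallel arrow sends a family $(x_c)_c$ to $\big(S(1_c,f)(x_c)\big)_{f\colon c\to c'}$ and the other sends it to $\big(S(f,1_{c'})(x_{c'})\big)_{f\colon c\to c'}$. In the natural-transformations example this specializes to $\mathrm{Nat}(F,G)\cong\int_{c\in C}\mathrm{Hom}(Fc,Gc)$.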
https://en.wikipedia.org/wiki/Dinatural%20transformation
In category theory, a branch of mathematics, a dinatural transformation α between two functors F, G : C^op × C → D, written α : F ⇒ G, is a family that to every object c of C associates an arrow α_c : F(c,c) → G(c,c) of D and satisfies the following coherence property: for every morphism f : c → c' of C the corresponding hexagonal diagram commutes. The composition of two dinatural transformations need not be dinatural. See also Extranatural transformation Natural transformation
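Written out, the coherence (hexagon) condition states that for every $f\colon c\to c'$ in $C$,

$$G(1_c,f)\circ\alpha_c\circ F(f,1_c)\;=\;G(f,1_{c'})\circ\alpha_{c'}\circ F(1_{c'},f)\;\colon\;F(c',c)\longrightarrow G(c,c').$$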
https://en.wikipedia.org/wiki/Evolution%40Home
evolution@home was a volunteer computing project for evolutionary biology, launched in 2001. The aim of evolution@home was to improve understanding of evolutionary processes. This was achieved by simulating individual-based models. The Simulator005 module of evolution@home was designed to better predict the behaviour of Muller's ratchet. The project was operated semi-automatically: participants had to manually download tasks from the webpage and submit results by email. yoyo@home used a BOINC wrapper to completely automate the project by automatically distributing tasks and collecting their results; the BOINC version was therefore a complete volunteer computing project. yoyo@home has declared its involvement in this project finished. See also Artificial life Digital organism Evolutionary computation Folding@home List of volunteer computing projects
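For readers unfamiliar with the model, the following is a minimal individual-based sketch of Muller's ratchet of the general kind such projects simulate; it is not the project's Simulator005 code, and all parameter values are arbitrary.

```python
# Illustrative individual-based sketch of Muller's ratchet: an asexual
# Wright-Fisher population accumulating irreversible deleterious mutations.
# A "click" of the ratchet occurs whenever the least-loaded class is lost.
import numpy as np

rng = np.random.default_rng(0)
N, U, s, generations = 1000, 0.1, 0.01, 2000  # population size, genomic mutation rate, selection coefficient

loads = np.zeros(N, dtype=int)   # deleterious mutations carried by each individual
best = 0                         # smallest load currently present in the population
for t in range(generations):
    fitness = (1.0 - s) ** loads
    parents = rng.choice(N, size=N, p=fitness / fitness.sum())  # selection
    loads = loads[parents] + rng.poisson(U, size=N)             # reproduction plus new mutations
    if loads.min() > best:                                      # the ratchet has clicked
        best = loads.min()
        print(f"generation {t}: ratchet clicked, minimum load now {best}")
```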
https://en.wikipedia.org/wiki/Stipe%20%28mycology%29
In mycology, a stipe () is the stem or stalk-like feature supporting the cap of a mushroom. Like all tissues of the mushroom other than the hymenium, the stipe is composed of sterile hyphal tissue. In many instances, however, the fertile hymenium extends down the stipe some distance. Fungi that have stipes are said to be stipitate. The evolutionary benefit of a stipe is generally considered to be in mediating spore dispersal. An elevated mushroom will more easily release its spores into wind currents or onto passing animals. Nevertheless, many mushrooms do not have stipes, including cup fungi, puffballs, earthstars, some polypores, jelly fungi, ergots, and smuts. It is often the case that features of the stipe are required to make a positive identification of a mushroom. Such distinguishing characters include: the texture of the stipe (fibrous, brittle, chalky, leathery, firm, etc.) whether it has remains of a partial veil (such as an annulus or cortina) or universal veil (volva) whether the stipes of many mushrooms fuse at their base its general size and shape whether the stipe extends underground in a root-like structure (a rhizome) When collecting mushrooms for identification it is critical to maintain all these characters intact by digging the mushroom out of the soil, rather than cutting it off mid-stipe. Drawings
https://en.wikipedia.org/wiki/KFRE-TV
KFRE-TV (channel 59) is a television station licensed to Sanger, California, United States, serving the Fresno area as an affiliate of The CW. It is owned by Sinclair Broadcast Group alongside Visalia-licensed Fox affiliate KMPH-TV (channel 26). Both stations share studios on McKinley Avenue in eastern Fresno, while KFRE-TV's transmitter is located on Bear Mountain (near Meadow Lakes). History Early years The station first signed on the air on July 17, 1985, as KMSG-TV; it originally operated as a religious independent mostly with shows like The PTL Club, The 700 Club, Richard Roberts, Jimmy Swaggart, and others as well as Home Shopping Network programming during the overnight hours. This station signed on just as KAIL (then on channel 53, now on channel 7) was evolving from religious to more of a general entertainment format. By 1987, the station evolved into a Spanish-language format during the afternoon and evening hours, and English-language religious programs for about eight hours a day each morning. The station's Spanish programming was sourced from NetSpan, the second Spanish-language television network to launch in the United States (after the Spanish International Network, now Univision); NetSpan was relaunched as Telemundo in 1987. By 1989, the station gradually dropped its inventory of English-language religious programs, and exclusively affiliated with Telemundo. WB affiliation In 2000, KNSO (channel 51, then an affiliate of The WB) signed a deal to become the Fresno market's new Telemundo affiliate; as a result, Pappas Telecasting terminated a local marketing agreement (LMA) between KNSO and Fox affiliate KMPH (channel 26). On January 1, 2001, the LMA with KMPH was transferred to KMSG, which also resulted in the WB affiliation moving to the station from KNSO (becoming the network's third affiliate in the market; The WB's original Fresno affiliate was Clovis-based KGMC (channel 43), which was with the network from its launch in 1995 until 1997); chan
https://en.wikipedia.org/wiki/List%20of%20computer%20size%20categories
This list of computer size categories attempts to list commonly used categories of computer by the physical size of the device and its chassis or case, in descending order of size. One generation's "supercomputer" is the next generation's "mainframe", and a "PDA" does not have the same set of functions as a "laptop", but the list still has value, as it provides a ranked categorization of devices. It also ranks some more obscure computer sizes. There are different sizes like-mini computers, microcomputer, mainframe computer and super computer. Large computers Supercomputer Minisupercomputer Mainframe computer Midrange computer Superminicomputer Minicomputer Microcomputers Interactive kiosk Arcade cabinet Personal computer (PC) Desktop computer—see computer form factor for some standardized sizes of desktop computers Full-size All-in-one Compact Home theater Home computer Mobile computers Desktop replacement computer or desknote Laptop computer Subnotebook computer, also known as a Kneetop computer; clamshell varieties may also be known as minilaptop or ultraportable laptop computers Tablet personal computer Handheld computers, which include the classes: Ultra-mobile personal computer, or UMPC Personal digital assistant or enterprise digital assistant, which include: HandheldPC or Palmtop computer Pocket personal computer Electronic organizer E-reader Pocket computer Calculator, which includes the class: Graphing calculator Scientific calculator Programmable calculator Accounting / Financial Calculator Handheld game console Portable media player Portable data terminal Handheld Smartphone, a class of mobile phone Feature phone Wearable computer Single-board computer Wireless sensor network components Plug computer Stick PC, a single-board computer in a small elongated casing resembling a stick Microcontroller Smartdust Nanocomputer Others Rackmount computer Blade server Blade PC Small form factor personal
https://en.wikipedia.org/wiki/Diplo%C3%AB
Diploë ( or ) is the spongy cancellous bone separating the inner and outer layers of the cortical bone of the skull. It is a subclass of trabecular bone. In the cranial bones, the layers of compact cortical tissue are familiarly known as the tables of the skull; the outer one is thick and tough; the inner is thin, dense, and brittle, and hence is termed the vitreous table. The intervening cancellous tissue is called the diploë. In certain regions of the skull this becomes absorbed so as to leave spaces filled with liquid between the two tables. Etymology From Ancient Greek διπλόη (diplóē, “literally, a fold”), noun use of feminine of διπλόος (diplóos, “double”)
https://en.wikipedia.org/wiki/Peruvian%20sol
The sol (; plural: soles; currency sign: S/) is the currency of Peru; it is subdivided into 100 céntimos ("cents"). The ISO 4217 currency code is PEN. The sol replaced the Peruvian inti in 1991 and the name is a return to that of Peru's historic currency, as the previous incarnation of sol was in use from 1863 to 1985. Although sol in this usage is derived from the Latin solidus (), the word also means "sun" in Spanish. There is thus a continuity with the old Peruvian inti, which was named after Inti, the Sun God of the Incas. At its introduction in 1991, the currency was officially called nuevo sol ("new sol"), but on November 13, 2015, the Peruvian Congress voted to rename the currency simply sol. History Currencies in use before the current Peruvian sol include: The Spanish colonial real from the 16th to 19th centuries, with 8 reales equal to 1 peso. The Peruvian real from 1822 to 1863. Initially worth peso, reales worth peso were introduced in 1858 in their transition to a decimal currency system. The sol or sol de oro from 1863 to 1985, at 1 sol = 10 reales. The inti from 1985 to 1991, at 1 inti = 1,000 soles de oro. Due to the bad state of economy and hyperinflation in the late 1980s, the government was forced to abandon the inti and introduce the sol as the country's new currency. The new currency was put into use on July 1, 1991, by Law No. 25,295, to replace the inti at a rate of 1 sol to 1,000,000 intis. Coins denominated in the new unit were introduced on October 1, 1991, and the first banknotes on November 13, 1991. Since that time, the sol has retained an inflation rate of 1.5%, the lowest ever in either South America or Latin America as a whole. Since the new currency was put into effect, it has managed to maintain an exchange rate between S/2.2 and S/4.13 per US dollar. Coins The current coins were introduced in 1991 in denominations of 1, 5, 10, 20, and 50 céntimos and S/1. The S/2 and S/5 coins were added in 1994. Although one- and
https://en.wikipedia.org/wiki/Fulminant
Fulminant () is a medical descriptor for any event or process that occurs suddenly and escalates quickly, and is intense and severe to the point of lethality, i.e., it has an explosive character. The word comes from Latin fulmināre, to strike with lightning. There are several diseases described by this adjective: Fulminant liver failure Fulminant (Marburg variant) multiple sclerosis. Fulminant colitis Fulminant pre-eclampsia Fulminant meningitis Purpura fulminans Fulminant hepatic venous thrombosis (Budd-Chiari syndrome) Fulminant jejunoileitis Fulminant myocarditis Beyond these particular uses, the term is used more generally as a descriptor for sudden-onset medical conditions that are immediately threatening to life or limb. Some viral hemorrhagic fevers, such as Ebola, Lassa fever, and Lábrea fever, may kill in as little as two to five days. Diseases that cause rapidly developing lung edema, such as some kinds of pneumonia, may kill in a few hours. It was said of the "black death" (pneumonic bubonic plague) that some of its victims would die in a matter of hours after the initial symptoms appeared. Other pathologic conditions that may be fulminating in character are acute respiratory distress syndrome, asthma, acute anaphylaxis, septic shock, and disseminated intravascular coagulation. The term is generally not used to refer to immediate death by trauma, such as gunshot wound, but can refer to trauma-induced secondary conditions, such as commotio cordis, a sudden cardiac arrest caused by a blunt, non-penetrating trauma to the precordium, which causes ventricular fibrillation of the heart. Cardiac arrest and stroke in certain parts of the brain, such as in the brainstem (which controls cardiovascular and respiratory system functions), and massive hemorrhage of the great arteries (such as in perforation of the walls by trauma or by sudden opening of an aneurysm of the aorta) may be very quick, causing "fulminant death". Sudden infant death syndrome
https://en.wikipedia.org/wiki/Ethiopid%20race
Ethiopid (also spelled Aethiopid) is an outdated racial classification of humans indigenous to Northeast Africa, who were typically classified as part of the Caucasian race – the Hamitic sub-branch, or in rare instances the Negroid race. The racial classification was generally made up of mostly Afroasiatic-speaking populations of the Horn of Africa, but to an extent also includes several Nilo-Saharan-speaking populations of the Nile Valley and African Great Lakes region (including but not limited to Nilotic and Sudanic-speaking populations). According to John Baker (1974), in their stable form, their center of distribution was considered to be Horn of Africa, among that region's Hamito-Semitic-speaking populations. Baker described them as being of medium height, with a dolicocephalic or mesocephalic skull (see cephalic index), an essentially Caucasoid facial form, an orthognathic profile (no prognathism) and a rather prominent, narrow nose, often ringlety hair, and an invariably brown skin, with either a reddish or blackish tinge. The concept of dividing humankind into three races called Caucasoid, Mongoloid, and Negroid (originally named "Ethiopian") was introduced in the 1780s by members of the Göttingen School of History and further developed by Western scholars in the context of racist ideologies during the age of colonialism. With the rise of modern genetics, the concept of distinct human races in a biological sense has become obsolete. In 2019, the American Association of Biological Anthropologists stated: "Race does not provide an accurate representation of human biological variation. It was never accurate in the past, and it remains inaccurate when referencing contemporary human populations." See also Arabid race Mediterranean race Brown (racial classification) Notes
https://en.wikipedia.org/wiki/Table%20of%20Newtonian%20series
In mathematics, a Newtonian series, named after Isaac Newton, is a sum over a sequence written in the form where is the binomial coefficient and is the falling factorial. Newtonian series often appear in relations of the form seen in umbral calculus. List The generalized binomial theorem gives A proof for this identity can be obtained by showing that it satisfies the differential equation The digamma function: The Stirling numbers of the second kind are given by the finite sum This formula is a special case of the kth forward difference of the monomial xn evaluated at x = 0: A related identity forms the basis of the Nörlund–Rice integral: where is the Gamma function and is the Beta function. The trigonometric functions have umbral identities: and The umbral nature of these identities is a bit more clear by writing them in terms of the falling factorial . The first few terms of the sin series are which can be recognized as resembling the Taylor series for sin x, with (s)n standing in the place of xn. In analytic number theory it is of interest to sum where B are the Bernoulli numbers. Employing the generating function its Borel sum can be evaluated as The general relation gives the Newton series where is the Hurwitz zeta function and the Bernoulli polynomial. The series does not converge, the identity holds formally. Another identity is which converges for . This follows from the general form of a Newton series for equidistant nodes (when it exists, i.e. is convergent) See also Binomial transform List of factorial and binomial topics Nörlund–Rice integral Carlson's theorem
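In standard notation, the Newton (forward-difference) series of a function and the generalized binomial theorem cited above read

$$f(x)=\sum_{k=0}^{\infty}\binom{x-a}{k}\,\Delta^{k}f(a),\qquad (1+z)^{s}=\sum_{k=0}^{\infty}\binom{s}{k}z^{k},$$

with $\binom{s}{k}=\dfrac{(s)_k}{k!}$, where $(s)_k=s(s-1)\cdots(s-k+1)$ is the falling factorial.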
https://en.wikipedia.org/wiki/Lead%E2%80%93lag%20compensator
A lead–lag compensator is a component in a control system that improves an undesirable frequency response in a feedback and control system. It is a fundamental building block in classical control theory. Applications Lead–lag compensators influence disciplines as varied as robotics, satellite control, automobile diagnostics, LCDs and laser frequency stabilisation. They are an important building block in analog control systems, and can also be used in digital control. Given the control plant, desired specifications can be achieved using compensators. I, P, PI, PD, and PID, are optimizing controllers which are used to improve system parameters (such as reducing steady state error, reducing resonant peak, improving system response by reducing rise time). All these operations can be done by compensators as well, used in cascade compensation technique. Theory Both lead compensators and lag compensators introduce a pole–zero pair into the open loop transfer function. The transfer function can be written in the Laplace domain as where X is the input to the compensator, Y is the output, s is the complex Laplace transform variable, z is the zero frequency and p is the pole frequency. The pole and zero are both typically negative, or left of the origin in the complex plane. In a lead compensator, , while in a lag compensator . A lead-lag compensator consists of a lead compensator cascaded with a lag compensator. The overall transfer function can be written as Typically , where z1 and p1 are the zero and pole of the lead compensator and z2 and p2 are the zero and pole of the lag compensator. The lead compensator provides phase lead at high frequencies. This shifts the root locus to the left, which enhances the responsiveness and stability of the system. The lag compensator provides phase lag at low frequencies which reduces the steady state error. The precise locations of the poles and zeros depend on both the desired characteristics of the closed
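In the notation used above, a single lead or lag stage and the cascaded lead–lag compensator have transfer functions of the form

$$\frac{Y(s)}{X(s)}=\frac{s-z}{s-p},\qquad \frac{Y(s)}{X(s)}=\frac{(s-z_{1})(s-z_{2})}{(s-p_{1})(s-p_{2})},$$

with $|z|<|p|$ for a lead compensator and $|z|>|p|$ for a lag compensator, $(z_1,p_1)$ being the lead pair and $(z_2,p_2)$ the lag pair.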
https://en.wikipedia.org/wiki/Dual%20inheritance%20theory
Dual inheritance theory (DIT), also known as gene–culture coevolution or biocultural evolution, was developed in the 1960s through early 1980s to explain how human behavior is a product of two different and interacting evolutionary processes: genetic evolution and cultural evolution. Genes and culture continually interact in a feedback loop: changes in genes can lead to changes in culture which can then influence genetic selection, and vice versa. One of the theory's central claims is that culture evolves partly through a Darwinian selection process, which dual inheritance theorists often describe by analogy to genetic evolution. 'Culture', in this context is defined as 'socially learned behavior', and 'social learning' is defined as copying behaviors observed in others or acquiring behaviors through being taught by others. Most of the modelling done in the field relies on the first dynamic (copying) though it can be extended to teaching. Social learning at its simplest involves blind copying of behaviors from a model (someone observed behaving), though it is also understood to have many potential biases, including success bias (copying from those who are perceived to be better off), status bias (copying from those with higher status), homophily (copying from those most like ourselves), conformist bias (disproportionately picking up behaviors that more people are performing), etc. Understanding social learning is a system of pattern replication, and understanding that there are different rates of survival for different socially learned cultural variants, this sets up, by definition, an evolutionary structure: cultural evolution. Because genetic evolution is relatively well understood, most of DIT examines cultural evolution and the interactions between cultural evolution and genetic evolution. Theoretical basis DIT holds that genetic and cultural evolution interacted in the evolution of Homo sapiens. DIT recognizes that the natural selection of genotypes is an
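As a purely illustrative sketch of the kind of transmission model described above (not code from the DIT literature), the following simulates conformist-biased copying of one of two cultural variants; the update rule is a standard two-variant form of conformist transmission, and all parameter values are arbitrary.

```python
# Illustrative sketch of conformist-biased social learning: with conformity
# strength D > 0, learners adopt the majority variant with probability
# greater than its current frequency, so the majority variant is amplified.
import numpy as np

rng = np.random.default_rng(1)
N, D, generations = 1000, 0.3, 50
traits = rng.integers(0, 2, size=N)  # each individual holds variant 0 or 1

for t in range(generations):
    p = traits.mean()                               # frequency of variant 1
    p_adopt = p + D * p * (1 - p) * (2 * p - 1)     # conformist transmission update
    traits = (rng.random(N) < p_adopt).astype(int)  # each learner samples a new variant

print(f"frequency of variant 1 after {generations} generations: {traits.mean():.2f}")
```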
https://en.wikipedia.org/wiki/Smooth%20infinitesimal%20analysis
Smooth infinitesimal analysis is a modern reformulation of the calculus in terms of infinitesimals. Based on the ideas of F. W. Lawvere and employing the methods of category theory, it views all functions as being continuous and incapable of being expressed in terms of discrete entities. As a theory, it is a subset of synthetic differential geometry. The nilsquare or nilpotent infinitesimals are numbers ε where ε² = 0 is true, but ε = 0 need not be true at the same time. Overview This approach departs from the classical logic used in conventional mathematics by denying the law of the excluded middle, e.g., NOT (a ≠ b) does not imply a = b. In particular, in a theory of smooth infinitesimal analysis one can prove for all infinitesimals ε, NOT (ε ≠ 0); yet it is provably false that all infinitesimals are equal to zero. One can see that the law of excluded middle cannot hold from the following basic theorem (again, understood in the context of a theory of smooth infinitesimal analysis): Every function whose domain is R, the real numbers, is continuous and infinitely differentiable. Despite this fact, one could attempt to define a discontinuous function f(x) by specifying that f(x) = 1 for x = 0, and f(x) = 0 for x ≠ 0. If the law of the excluded middle held, then this would be a fully defined, discontinuous function. However, there are plenty of x, namely the infinitesimals, such that neither x = 0 nor x ≠ 0 holds, so the function is not defined on the real numbers. In typical models of smooth infinitesimal analysis, the infinitesimals are not invertible, and therefore the theory does not contain infinite numbers. However, there are also models that include invertible infinitesimals. Other mathematical systems exist which include infinitesimals, including nonstandard analysis and the surreal numbers. Smooth infinitesimal analysis is like nonstandard analysis in that (1) it is meant to serve as a foundation for analysis, and (2) the infinitesimal quantities do no
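The behaviour of the nilsquare infinitesimals is governed by the Kock–Lawvere axiom: writing $D=\{\varepsilon\in R:\varepsilon^{2}=0\}$, every map $g\colon D\to R$ is affine, so for each function $f\colon R\to R$ and each $x$ there is a unique number $f'(x)$ with

$$f(x+\varepsilon)=f(x)+\varepsilon\,f'(x)\qquad\text{for all }\varepsilon\in D,$$

which is how the derivative is defined in this setting.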
https://en.wikipedia.org/wiki/Synthetic%20differential%20geometry
In mathematics, synthetic differential geometry is a formalization of the theory of differential geometry in the language of topos theory. There are several insights that allow for such a reformulation. The first is that most of the analytic data for describing the class of smooth manifolds can be encoded into certain fibre bundles on manifolds: namely bundles of jets (see also jet bundle). The second insight is that the operation of assigning a bundle of jets to a smooth manifold is functorial in nature. The third insight is that over a certain category, these are representable functors. Furthermore, their representatives are related to the algebras of dual numbers, so that smooth infinitesimal analysis may be used. Synthetic differential geometry can serve as a platform for formulating certain otherwise obscure or confusing notions from differential geometry. For example, the meaning of what it means to be natural (or invariant) has a particularly simple expression, even though the formulation in classical differential geometry may be quite difficult. Further reading John Lane Bell, Two Approaches to Modelling the Universe: Synthetic Differential Geometry and Frame-Valued Sets (PDF file) F.W. Lawvere, Outline of synthetic differential geometry (PDF file) Anders Kock, Synthetic Differential Geometry (PDF file), Cambridge University Press, 2nd Edition, 2006. R. Lavendhomme, Basic Concepts of Synthetic Differential Geometry, Springer-Verlag, 1996. Michael Shulman, Synthetic Differential Geometry Ryszard Paweł Kostecki, Differential Geometry in Toposes Differential geometry
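Concretely, the algebra of dual numbers $R[\varepsilon]/(\varepsilon^{2})$ corresponds to the infinitesimal object $D=\{\varepsilon\in R:\varepsilon^{2}=0\}$, and in this setting the tangent bundle of a space $M$ becomes representable as an exponential object:

$$TM\;\cong\;M^{D}.$$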
https://en.wikipedia.org/wiki/Repoxygen
Repoxygen was the tradename for a type of gene therapy to produce erythropoietin (EPO). It was under preclinical development by Oxford Biomedica as a possible treatment for anaemia but was abandoned in 2003. The project became infamous when it was mentioned during the criminal trial of Thomas Springstein, a former track coach for some German athletes, who was found guilty of giving athletes performance enhancing drugs without their knowledge. An email in which Springstein attempted to obtain Repoxygen was read by a prosecutor, which led to a flurry of media coverage. The World Anti-Doping Agency banned "gene doping" in 2003 and as of 2009 was researching detection methods for substances such as repoxygen.
https://en.wikipedia.org/wiki/Selected%20area%20diffraction
Selected area (electron) diffraction (abbreviated as SAD or SAED) is a crystallographic experimental technique typically performed using a transmission electron microscope (TEM). It is a specific case of electron diffraction and one of the most common experimental techniques in materials science and solid-state physics. Especially with appropriate analytical software, SAD patterns (SADP) can be used to determine crystal orientation, measure lattice constants or examine crystal defects. Principle In a transmission electron microscope, a thin crystalline sample is illuminated by a parallel beam of electrons accelerated to energies of hundreds of kiloelectronvolts. At these energies the sample is transparent to the electrons if it is thinned enough (typically less than 100 nm). Due to wave–particle duality, the high-energy electrons behave as matter waves with a wavelength of a few thousandths of a nanometer. The relativistic wavelength is given in terms of Planck's constant h, the electron rest mass m0, the elementary charge e, the speed of light c and the electric potential U accelerating the electrons (also called the acceleration voltage). For instance, an acceleration voltage of 200 kV results in a wavelength of 2.508 pm. Since the spacing between atoms in crystals is about a hundred times larger, the electrons are diffracted by the crystal lattice, which acts as a diffraction grating. Due to the diffraction, some of the electrons are scattered at particular angles (diffracted beams), while others pass through the sample without changing their direction (transmitted beams). In order to determine the diffraction angles, the electron beam normally incident on the atomic lattice can be seen as a planar wave, which is re-transmitted by each atom as a spherical wave. Due to constructive interference, the spherical waves combine into a number of diffracted beams at angles given, approximately, by the Bragg condition where the integer is an
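In this notation, the relativistic de Broglie wavelength and the Bragg condition read

$$\lambda=\frac{h}{\sqrt{2m_{0}eU\left(1+\dfrac{eU}{2m_{0}c^{2}}\right)}},\qquad 2d\sin\theta_{n}=n\lambda,$$

where $d$ is the spacing of the diffracting lattice planes, $\theta_n$ the diffraction (Bragg) angle and $n$ an integer; for $U = 200\ \mathrm{kV}$ the first expression gives $\lambda\approx 2.508\ \mathrm{pm}$.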
https://en.wikipedia.org/wiki/Gongche%20notation
Gongche notation or gongchepu is a traditional musical notation method, once popular in ancient China. It uses Chinese characters to represent musical notes. It was named after two of the Chinese characters that were used to represent musical notes, namely "" gōng and "" chě. Sheet music written in this notation is still used for traditional Chinese musical instruments and Chinese operas. However usage of the notation has declined, replaced by mostly jianpu (numbered musical notation) and sometimes the standard western notation. The notation usually uses a movable "do" system. There are variations of the character set used for musical notes. A commonly accepted set is shown below with its relation to jianpu and solfege. {| class="wikitable" align="center" |- align="center" !Gongche |width="12%"|shàng |width="12%"|chě |width="12%"|gōng |width="12%"|fán |width="12%"|liù |width="12%"|wǔ |width="12%"|yǐ |- align="center" !Numbered musical notation |1||2||3||(4)||5||6||(7) |- align="center" !Movable do solfège syllable |do||re||mi||(between fa and fa♯)||sol||la||(between ti♭ and ti) |- align="center" !Simplified Japanese notation |||||||||||||L |} Usual variations The three notes just below the central octave are usually represented by special characters: {| class="wikitable" align="center" |- align="center" !Gongche |width="22%"|hé |width="22%"|sì |width="22%"|yī |- align="center" valign="top" !Jianpu |5̣ |6̣ |(7̣) |- align="center" !Solfege |sol||la||(between ti♭ and ti) |- align="center" !Simplified Japanese notation ||||| |} Sometimes "" shì is used instead of "" sì. Sometimes "" yī is not used, or its role is exchanged with "" yǐ. To represent other notes in different octaves, traditions differ among themselves. For Kunqu, the final strokes of "", "", "", "", "", "" and "" are extended by a tiny slash downward for the lower octave; additionally, a left radical "" is added to denote one octave higher than the central, or "" for two octaves higher. For Cantone
https://en.wikipedia.org/wiki/Stuck%20fermentation
A stuck fermentation occurs in brewing beer or winemaking when the yeast become dormant before the fermentation has completed. Unlike an "arrested fermentation" where the winemaker intentionally stops fermentation (such as in the production of fortified wines), a stuck fermentation is an unintentional and unwanted occurrence that can lead to the wine being spoiled by bacteria and oxidation. There are several potential causes of a stuck fermentation; the most common are excessively high temperatures killing off the yeast, or a must deficient in the nitrogen food source needed for the yeast to thrive. Once the fermentation is stuck, it is very difficult to restart due to a chemical compound released by dying yeast cells that inhibit the future growth of yeast cells in the batch. Winemakers often take several steps to limit the possibility of a stuck fermentation occurring, such as adding nitrogen to the must in the form of diammonium phosphate or using cultured yeast with a high temperature and alcohol tolerance. These steps will each have their own subtle or dramatic effect on the resulting flavors and quality of the wine. Possible causes There are several potential instigators of a stuck fermentation. One of the most common found in winemaking is a nitrogen deficient must. Nitrogen is a vital nutrient in the growth and development of yeasts and is usually provided from the wine grapes themselves. Grapes grown in vineyards with soils lacking in nitrogen or grape varieties, such as Chardonnay and Riesling, which are naturally prone to have low nitrogen to sugar ratios will be at greater risk for having a stuck fermentation. Another cause rooted in the vineyard is from overripe grapes. Grapes that are overripe will have high levels of sugars that translates into higher alcohol content. Yeast are unable to reproduce in an environment with 16-18% ABV but in an environment with multiple stressors the fermentation could get stuck even before the alcohol level reaches that
https://en.wikipedia.org/wiki/Protothecosis
Protothecosis, otherwise known as Algaemia, is a disease found in dogs, cats, cattle, and humans caused by a type of green alga known as Prototheca that lacks chlorophyll and enters the human or animal bloodstream. It and its close relative Helicosporidium are unusual in that they are actually green algae that have become parasites. The two most common species are Prototheca wickerhamii and Prototheca zopfii. Both are known to cause disease in dogs, while most human cases are caused by P. wickerhami. Prototheca is found worldwide in sewage and soil. Infection is rare despite high exposure, and can be related to a defective immune system. In dogs, females and Collies are most commonly affected. The first human case was identified in 1964 in Sierra Leone. Cause Prototheca has been thought to be a mutant of Chlorella, a type of single-celled green alga. However, while Chlorella contains galactose and galactosamine in the cell wall, Prototheca lacks these. Also, Chlorella obtains its energy through photosynthesis, while Prototheca is saprotrophic, feeding on dead and decaying organic matter. When Prototheca was first isolated from slime flux of trees in 1894, it was thought to be a type of fungus. Its size varies from 2 to 15 micrometres. Treatment Treatment with amphotericin B has been reported. In cattle Cattle can be affected by protothecal enteritis and mastitis. Protothecal mastitis is endemic worldwide, although most cases of infected herds have been reported in Germany, the United States, and Brazil. In dogs Disseminated protothecosis is most commonly seen in dogs. The algae enters the body through the mouth or nose and causes infection in the intestines. From there it can spread to the eye, brain, and kidneys. Symptoms can include diarrhea, weight loss, weakness, inflammation of the eye (uveitis), retinal detachment, ataxia, and seizures. Dogs with acute blindness and diarrhea that develop exudative retinal detachment should be assessed for protothecos
https://en.wikipedia.org/wiki/Parasitic%20capacitance
Parasitic capacitance is an unavoidable and usually unwanted capacitance that exists between the parts of an electronic component or circuit simply because of their proximity to each other. When two electrical conductors at different voltages are close together, the electric field between them causes electric charge to be stored on them; this effect is capacitance. All practical circuit elements such as inductors, diodes, and transistors have internal capacitance, which can cause their behavior to depart from that of ideal circuit elements. Additionally, there is always some capacitance between any two conductors; this can be significant with closely spaced conductors, such as wires or printed circuit board traces. The parasitic capacitance between the turns of an inductor or other wound component is often described as self-capacitance. However, in electromagnetics, the term self-capacitance more correctly refers to a different phenomenon: the capacitance of a conductive object without reference to another object. Parasitic capacitance is a significant problem in high-frequency circuits and is often the factor limiting the operating frequency and bandwidth of electronic components and circuits. Description When two conductors at different potentials are close to one another, they are affected by each other's electric field and store opposite electric charges like a capacitor. Changing the potential v between the conductors requires a current i into or out of the conductors to charge or discharge them. where C is the capacitance between the conductors. For example, an inductor often acts as though it includes a parallel capacitor, because of its closely spaced windings. When a potential difference exists across the coil, wires lying adjacent to each other are at different potentials. They act like the plates of a capacitor, and store charge. Any change in the voltage across the coil requires extra current to charge and discharge these small 'capacitors'.
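The relation referred to above between the charging current and the changing potential is

$$i = C\,\frac{dv}{dt},$$

where $C$ is the capacitance between the two conductors.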
https://en.wikipedia.org/wiki/SILO%20%28bootloader%29
The SPARC Improved bootLOader (SILO) is the bootloader used by the SPARC port of the Linux operating system; it can also be used for Solaris as a replacement for the standard Solaris boot loader. SILO generally looks similar to the basic version of LILO, giving a "boot:" prompt, at which the user can press the Tab key to see the available images to boot. The configuration file format is reasonably similar to LILO's, as well as some of the command-line options. However, SILO differs significantly from LILO because it reads and parses the configuration file at boot time, so it is not necessary to re-run it after every change to the file or to the installed kernel images. SILO is able to access ext2, ext3, ext4, UFS, romfs and ISO 9660 file systems, enabling it to boot arbitrary kernels from them (more similar to GRUB). SILO also has support for transparent decompression of gzipped vmlinux images, making the bzImage format unnecessary on SPARC Linux. SILO is loaded from the SPARC PROM. Licensed under the terms of the GNU General Public License (GPL). See also bootman LILO elilo Yaboot NTLDR BCD
https://en.wikipedia.org/wiki/DMS-59
DMS-59 (Dual Monitor Solution, 59 pins) was generally used for computer video cards. It provides two Digital Visual Interface (DVI) or Video Graphics Array (VGA) outputs in a single connector. A Y-style breakout cable is needed for the transition from the DMS-59 output (digital + analogue) to DVI (digital) or VGA (analogue), and different types of adapter cables exist. The connector is four pins high and 15 pins wide, with a single pin missing from the bottom row, in a D-shaped shell, with thumbscrews. , this adapter cable was listed as obsolete by its primary vendor Molex. The advantage of DMS-59 is its ability to support two high resolution displays, such as two DVI Single Link digital channels or two VGA analog channels, with a single DVI-size connector. The compact size lets a half-height card support two high resolution displays, and a full-height card (with two DMS-59 connectors) up to four high resolution displays. The DMS-59 connector is used by e.g. AMD (AMD FireMV), Nvidia and Matrox for video cards sold in some Lenovo ThinkStation models, Viglen Genies and Omninos, Dell, HP and Compaq computers. DMS-59 connectors also appeared on Sun Computers. Some confusion has been caused by the fact that vendors label cards with DMS-59 as "supports DVI", but the cards have no DVI connectors built-in. Such cards, when equipped with only a VGA connector adapter cable, cannot be connected to a monitor with only a DVI-D input. A DMS-59 to DVI adapter cable needs to be used with such monitors. The DMS-59 connector is derived from the LFH-60 Molex low-force helix connector, which could be found in some earlier graphics cards. These ports are similar to DMS-59, but have all 60 pins present, whereas DMS-59 has one pin (pin 58) blocked. A connector plug with all 60 pins (such as a Molex 88766-7610 DVI-I splitter) does not fit into a properly keyed DMS-59 socket. A Dual-DVI breakout cable can be used in connection with two passive DVI-to-HDMI adapters to feed modern displa
https://en.wikipedia.org/wiki/Zonule%20of%20Zinn
The zonule of Zinn () (Zinn's membrane, ciliary zonule) (after Johann Gottfried Zinn) is a ring of fibrous strands forming a zonule (little band) that connects the ciliary body with the crystalline lens of the eye. These fibers are sometimes collectively referred to as the suspensory ligaments of the lens, as they act like suspensory ligaments. Development The non-pigmented ciliary epithelial cells of the eye synthesize portions of the zonules. Anatomy The zonule of Zinn is split into two layers: a thin layer, which lies near the hyaloid fossa, and a thicker layer, which is a collection of zonular fibers. Together, the fibers are known as the suspensory ligament of the lens. The zonules are about 1–2 μm in diameter. The zonules attach to the lens capsule 2 mm anterior and 1 mm posterior to the equator, and arise of the ciliary epithelium from the pars plana region as well as from the valleys between the ciliary processes in the pars plicata. When colour granules are displaced from the zonules of Zinn (by friction against the lens), the irises slowly fade. In some cases those colour granules clog the channels and lead to glaucoma pigmentosa. The zonules are primarily made of fibrillin, a connective tissue protein. Mutations in the fibrillin gene lead to the condition Marfan syndrome, and consequences include an increased risk of lens dislocation. Clinical appearance The zonules of Zinn are difficult to visualize using a slit lamp, but may be seen with exceptional dilation of the pupil, or if a coloboma of the iris or a subluxation of the lens is present. The number of zonules present in a person appears to decrease with age. The zonules insert around the outer margin of the lens (equator), both anteriorly and posteriorly. Function Securing the lens to the optical axis and transferring forces from the ciliary muscle in accommodation. When colour granules are displaced from the zonules of Zinn, caused by friction of the lens, the iris can slowly fade. These
https://en.wikipedia.org/wiki/Vadem%20Clio
The Vadem Clio is a handheld PC released by Vadem in 1999. Models of it used Windows CE H/PC Pro 3.0 (WinCE Core OS 2.11) as the operating system. Data Evolution Corporation currently owns the rights to the Clio. Overview The Clio is a convertible tablet computer released by Vadem and designed by Sohela. Data Evolution Corporation, which runs Microsoft's salar CE operating system and has a "SwingTop" pivoting arm. The 180-degree screen rotation allowed the unit to be used as a touch-screen tablet or as a more traditional notebook with a keyboard. Clio could run for more than 12 hours on a single charge. Along with the Sony VAIO it was one of the first full-sized portable computers that measured only an inch (2.2 cm) thick. The platform was conceived of and created within Vadem by a skunkworks team that was led by Edmond Ku. Clio was first developed without the knowledge of Microsoft and after it was presented to Bill Gates and the CE team, it led to the definition of the Jupiter-class CE platform. Handwriting software was from Vadem's ParaGraph group (acquired from SGI), the same team that provided handwriting recognition technology used in the Apple Newton. Originally introduced in 1998, the Clio product line won numerous awards and accolades, such as Mobile Computing & Communications’ “Best Handheld Design, Keyboard Form Factor;” PC Week “Best of Comdex” finalist; Home Office Computing’s Silver Award; Mobility Award “Notebook Computing, PC Companion” winner; Industrial Designs Excellence Awards (IDEA)—Silver in Business and Industrial Equipment; and IDC’s “Best Design”. In addition, the Clio has been featured in hundreds of articles and has appeared on the covers of a number of magazines, including Pen Computing and Business Week. Design The swing arm and rotating screen concept was conceived by Edmond Ku, Vadem's engineering director. The physical design was the creation of frogdesign, Inc.'s industrial designers Sonia Schieffer and Josh Morenstein and mec
https://en.wikipedia.org/wiki/The%20Baseball%20Network
The Baseball Network was a short-lived American television broadcasting joint venture between ABC, NBC and Major League Baseball (MLB). Under the arrangement, beginning in the 1994 season, the league produced its own broadcasts in-house which were then brokered to air on ABC and NBC. The Baseball Network was the first television network in the United States to be owned by a professional sports league. The package included coverage of games in prime time on selected nights throughout the regular season (under the branding Baseball Night in America), along with coverage of the postseason and the World Series. Unlike previous broadcasting arrangements with the league, there was no national "game of the week" during the regular season; these would be replaced by multiple weekly regional telecasts on certain nights of the week. Additionally, The Baseball Network had exclusive coverage windows; no other broadcaster could televise MLB games during the same night that The Baseball Network was televising games. The arrangement did not last long; due to the effects of a players' strike on the remainder of the 1994 season, and poor reception from fans and critics over how the coverage was implemented, The Baseball Network was disbanded after the 1995 season. While NBC would maintain rights to certain games, the growing Fox network (having established its own sports division two years earlier in 1994) became the league's new national broadcast partner beginning in 1996. Background After the fallout from CBS's financial problems from their exclusive, four-year-long (lasting from 1990 to 1993), US$1.8 billion television contract with Major League Baseball (a contract that ultimately cost the network approximately $500 million), Major League Baseball decided to go into the business of producing the telecasts themselves and market these to advertisers on its own. In reaction to the failed trial with CBS, Major League Baseball was desperately grasping for every available dollar.
https://en.wikipedia.org/wiki/Conductance%20quantum
The conductance quantum, denoted by the symbol , is the quantized unit of electrical conductance. It is defined by the elementary charge e and Planck constant h as: = It appears when measuring the conductance of a quantum point contact, and, more generally, is a key component of the Landauer formula, which relates the electrical conductance of a quantum conductor to its quantum properties. It is twice the reciprocal of the von Klitzing constant (2/RK). Note that the conductance quantum does not mean that the conductance of any system must be an integer multiple of G0. Instead, it describes the conductance of two quantum channels (one channel for spin up and one channel for spin down) if the probability for transmitting an electron that enters the channel is unity, i.e. if transport through the channel is ballistic. If the transmission probability is less than unity, then the conductance of the channel is less than G0. The total conductance of a system is equal to the sum of the conductances of all the parallel quantum channels that make up the system. Derivation In a 1D wire, connecting two reservoirs of potential and adiabatically: The density of states is where the factor 2 comes from electron spin degeneracy, is the Planck constant, and is the electron velocity. The voltage is: where is the electron charge. The 1D current going across is the current density: This results in a quantized conductance: Occurrence Quantized conductance occurs in wires that are ballistic conductors, when the elastic mean free path is much larger than the length of the wire: . B. J. van Wees et al. first observed the effect in a point contact in 1988. Carbon nanotubes have quantized conductance independent of diameter. The quantum hall effect can be used to precisely measure the conductance quantum value. It also occurs in electrochemistry reactions and in association with the quantum capacitance defines the rate with which electrons are transferred between quant
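Written out, the definition and the one-dimensional derivation sketched above read

$$G_{0}=\frac{2e^{2}}{h}\approx 7.748\times10^{-5}\ \mathrm{S},$$

and, for a ballistic 1D channel connecting reservoirs whose chemical potentials differ by $eV=\mu_{1}-\mu_{2}$, the spin-degenerate density of right-moving states $\dfrac{dn}{dE}=\dfrac{2}{h\,v(E)}$ gives

$$I=e\,v\,\frac{dn}{dE}\,eV=\frac{2e^{2}}{h}\,V,\qquad G=\frac{I}{V}=\frac{2e^{2}}{h}=G_{0}.$$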
https://en.wikipedia.org/wiki/P110%CE%B4
Phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit delta isoform also known as phosphoinositide 3-kinase (PI3K) delta isoform or p110δ is an enzyme that in humans is encoded by the PIK3CD gene. p110δ regulates immune function. In contrast to the other class IA PI3Ks p110α and p110β, p110δ is principally expressed in leukocytes (white blood cells). Genetic and pharmacological inactivation of p110δ has revealed that this enzyme is important for the function of T cells, B cell, mast cells and neutrophils. Hence, p110δ is a promising target for drugs that aim to prevent or treat inflammation, autoimmunity and transplant rejection. Phosphoinositide 3-kinases (PI3Ks) phosphorylate the 3-prime OH position of the inositol ring of inositol lipids. The class I PI3Ks display a broad phosphoinositide lipid substrate specificity and include p110α, p110β and p110γ. p110α and p110β interact with SH2/SH3-domain-containing p85 adaptor proteins and with GTP-bound Ras. Biochemistry Like the other class IA PI3Ks, p110δ is a catalytic subunit, whose activity and subcellular localisation are controlled by an associated p85α, p55α, p50α or p85β regulatory subunit. The p55γ regulatory subunit is not thought to be expressed at significant levels in immune cells. There is no evidence for selective association between p110α, p110β or p110δ for any particular regulatory subunit. The class IA regulatory subunits (collectively referred to here as p85) bind to proteins that have been phosphorylated on tyrosines. Tyrosine kinases often operate near the plasma membrane and hence control the recruitment of p110δ to the plasma membrane where its substrate PtdIns(4,5)P2 is found. The conversion of PtdIns(4,5)P2 to PtdIns(3,4,5)P3 triggers signal transduction cascades controlled by PKB (also known as Akt), Tec family kinases and other proteins that contain PH domains. In immune cells, antigen receptors, cytokine receptors and costimulatory and accessory receptors stimulate tyrosine k