https://en.wikipedia.org/wiki/Desymmetrization
Desymmetrization in stereochemistry is the modification of a molecule that results in the loss of one or more symmetry elements. A common application of this class of reactions involves the introduction of chirality. Formally, such conversions require the loss of an improper axis of rotation (mirror plane, center of inversion, rotation-reflection axis). In other words, desymmetrizations convert prochiral precursors into chiral products. Examples Typical substrates are epoxides, diols, dienes, and carboxylic acid anhydrides. One example is the conversion of cis-3,5-diacetoxycyclopentene to the monoacetate. In this transformation, the plane of symmetry in the precursor is lost, and the product is asymmetric. The desymmetrization itself is not usually considered useful; the enantioselective desymmetrization, however, delivers a useful product. This particular conversion utilizes the enzyme cholinesterase. In another example, a symmetrical cyclic imide is subjected to asymmetric deprotonation, resulting in a chiral product with high enantioselectivity. Transfer hydrogenation converts benzil (PhC(O)C(O)Ph) into one enantiomer of hydrobenzoin: PhC(O)C(O)Ph + 2 H2 → PhCH(OH)CH(OH)Ph The precursor benzil has C2v symmetry, and the product is C2 symmetric. Citric acid is also a symmetric molecule that can be desymmetrized by partial methylation.
https://en.wikipedia.org/wiki/Food%20Products%20Association
The Food Products Association (formerly the National Food Processors Association or NFPA) was the principal U.S. scientific and technical trade association representing the food processing industry until 2007. FPA was headquartered in Washington, D.C., with branches in Dublin, CA, and Seattle, WA. The association emphasized governmental and regulatory affairs, scientific research, technical assistance, education, communications, and crisis management. History FPA started in 1907 as the National Canners Association. It became the National Food Processors Association in 1978 and the Food Products Association in 2005. On January 1, 2007, FPA merged with the Grocery Manufacturers Association and formed the world's largest trade association representing the food, beverage, and consumer products industry, the Grocery Manufacturers Association/Food Products Association, abbreviated GMA/FPA. On January 1, 2008, the association rebranded using the name Grocery Manufacturers Association, and FPA's former Seattle, Washington office became independently incorporated under the name Seafood Products Association. In 2020, the Grocery Manufacturers Association rebranded as Consumer Brands Association (CBA).
https://en.wikipedia.org/wiki/Statisticians%27%20and%20engineers%27%20cross-reference%20of%20statistical%20terms
The following terms are used by electrical engineers in statistical signal processing studies instead of typical statistician's terms. In other engineering fields, particularly mechanical engineering, uncertainty analysis examines systematic and random components of variations in measurements associated with physical experiments. Notes
https://en.wikipedia.org/wiki/Tonelli%E2%80%93Hobson%20test
In mathematics, the Tonelli–Hobson test gives sufficient criteria for a function ƒ on R2 to be an integrable function. It is often used to establish that Fubini's theorem may be applied to ƒ. It is named for Leonida Tonelli and E. W. Hobson. More precisely, the Tonelli–Hobson test states that if ƒ is a real-valued measurable function on R2, and either of the two iterated integrals $\int_{\mathbb{R}}\left(\int_{\mathbb{R}}|f(x,y)|\,dy\right)dx$ or $\int_{\mathbb{R}}\left(\int_{\mathbb{R}}|f(x,y)|\,dx\right)dy$ is finite, then ƒ is Lebesgue-integrable on R2. Integral calculus Theorems in analysis
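As a worked illustration of how the test is typically applied (the specific function below is chosen here for illustration and is not part of the original article): for $f(x,y) = e^{-x^2-y^2}$,
\[
\int_{\mathbb{R}}\left(\int_{\mathbb{R}} |f(x,y)|\,dy\right)dx
= \int_{\mathbb{R}} e^{-x^2}\left(\int_{\mathbb{R}} e^{-y^2}\,dy\right)dx
= \left(\sqrt{\pi}\right)^2 = \pi < \infty ,
\]
so the Tonelli–Hobson test shows that f is Lebesgue-integrable on $\mathbb{R}^2$ and that Fubini's theorem may be applied to it.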
https://en.wikipedia.org/wiki/USC%20Jane%20Goodall%20Research%20Center
The USC Jane Goodall Research Center is a part of the department of Anthropology at the University of Southern California. It is co-directed by professors of anthropology Craig Stanford, Chris Boehm, Nayuta Yamashita, and Roberto Delgado. The center was established in 1991 with the joint appointment of Jane Goodall as Distinguished Emeritus Professor in Anthropology and Occupational Science. The Center offers USC students the chance to study in Gombe. See also USC Center for Visual Anthropology
https://en.wikipedia.org/wiki/Thermomagnetic%20convection
Ferrofluids can be used to transfer heat, since heat and mass transport in such magnetic fluids can be controlled using an external magnetic field. B. A. Finlayson first explained in 1970 (in his paper "Convective instability of ferromagnetic fluids", Journal of Fluid Mechanics, 40:753-767) how an external magnetic field imposed on a ferrofluid with varying magnetic susceptibility, e.g., due to a temperature gradient, results in a nonuniform magnetic body force, which leads to thermomagnetic convection. This form of heat transfer can be useful for cases where conventional convection fails to provide adequate heat transfer, e.g., in miniature microscale devices or under reduced gravity conditions. Ozoe group has studied thermomagnetic convection both experimentally and numerically. They showed how to enhance, suppress and invert the convection modes. They have also carried out scaling analysis for paramagnetic fluids in microgravity conditions. A comprehensive review of thermomagnetic convection (in A. Mukhopadhyay, R. Ganguly, S. Sen, and I. K. Puri, "Scaling analysis to characterize thermomagnetic convection", International Journal of Heat and Mass Transfer 48:3485-3492, (2005)) also shows that this form of convection can be correlated with a dimensionless magnetic Rayleigh number. Subsequently, this group explained that fluid motion occurs due to the presence of a Kelvin body force that has two terms. The first term can be treated as a magnetostatic pressure, while the second is important only if there is a spatial gradient of the fluid susceptibility, e.g., in a non-isothermal system. Colder fluid that has a larger magnetic susceptibility is attracted towards regions with larger field strength during thermomagnetic convection, which displaces warmer fluid of lower susceptibility. They showed that thermomagnetic convection can be correlated with a dimensionless magnetic Rayleigh number. Heat transfer due to this form of convection can be much more effective th
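For reference, a common way to write the Kelvin body force discussed above is the following textbook identity for a linearly magnetizable fluid with magnetization M = χH in a curl-free field H (this standard form is quoted as background and is not taken verbatim from the article):
\[
\mathbf{f}_K = \mu_0 (\mathbf{M}\cdot\nabla)\mathbf{H}
= \mu_0 \chi\, (\mathbf{H}\cdot\nabla)\mathbf{H}
= \nabla\!\left(\frac{\mu_0 \chi H^2}{2}\right) - \frac{\mu_0 H^2}{2}\,\nabla\chi .
\]
The first (gradient) term acts as a magnetostatic pressure and cannot by itself drive flow, while the second term is nonzero only where the susceptibility χ varies in space, for example because of a temperature gradient, which is what drives thermomagnetic convection.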
https://en.wikipedia.org/wiki/Interosseous%20membrane%20of%20leg
The interosseous membrane of the leg (middle tibiofibular ligament) extends between the interosseous crests of the tibia and fibula, helps stabilize the Tib-Fib relationship and separates the muscles on the front from those on the back of the leg. It consists of a thin, aponeurotic joint lamina composed of oblique fibers, which for the most part run downward and lateralward; some few fibers, however, pass in the opposite direction. It is broader above than below. Its upper margin does not quite reach the tibiofibular joint, but presents a free concave border, above which is a large, oval aperture for the passage of the anterior tibial vessels to the front of the leg. In its lower part is an opening for the passage of the anterior peroneal vessels. It is continuous below with the interosseous ligament of the tibiofibular syndesmosis, and presents numerous perforations for the passage of small vessels. It is in relation, in front, with the Tibialis anterior, Extensor digitorum longus, Extensor hallucis proprius, Peronæus tertius, and the anterior tibial vessels and deep peroneal nerve; behind, with the Tibialis posterior and Flexor hallucis longus. Additional images
https://en.wikipedia.org/wiki/MPEG%20LA
MPEG LA is an American company based in Denver, Colorado that licenses patent pools covering essential patents required for use of the MPEG-2, MPEG-4, IEEE 1394, VC-1, ATSC, MVC, MPEG-2 Systems, AVC/H.264 and HEVC standards. Via Licensing Corp acquired MPEG LA in April 2023 and formed a new patent pool administration company called Via Licensing Alliance. History MPEG LA started operations in July 1997 immediately after receiving a Department of Justice Business Review Letter. During formation of the MPEG-2 standard, a working group of companies that participated in the formation of the MPEG-2 standard recognized that the biggest challenge to adoption was efficient access to essential patents owned by many patent owners. That ultimately led to a group of various MPEG-2 patent owners to form MPEG LA, which in turn created the first modern-day patent pool as a solution. The majority of patents underlying MPEG-2 technology were owned by three companies: Sony (311 patents), Thomson (198 patents) and Mitsubishi Electric (119 patents). In June 2012, MPEG LA announced a call for patents essential to the High Efficiency Video Coding (HEVC) standard. In September 2012, MPEG LA launched Librassay, which makes diagnostic patent rights from some of the world's leading research institutions available to everyone through a single license. Organizations which have included patents in Librassay include Johns Hopkins University; Ludwig Institute for Cancer Research; Memorial Sloan Kettering Cancer Center; National Institutes of Health (NIH); Partners HealthCare; The Board of Trustees of the Leland Stanford Junior University; The Trustees of the University of Pennsylvania; The University of California, San Francisco; and Wisconsin Alumni Research Foundation (WARF). On September 29, 2014, the MPEG LA announced their HEVC license which covers the patents from 23 companies. The license is US$0.20 per HEVC product after the first 100,000 units each year with an annual cap. The licen
https://en.wikipedia.org/wiki/Reproductive%20synchrony
Reproductive synchrony is a term used in evolutionary biology and behavioral ecology. Reproductive synchrony—sometimes termed "ovulatory synchrony"—may manifest itself as "breeding seasonality". Where females undergo regular menstruation, "menstrual synchrony" is another possible term. Reproduction is said to be synchronised when fertile matings across a population are temporally clustered, resulting in multiple conceptions (and consequent births) within a restricted time window. In marine and other aquatic contexts, the phenomenon may be referred to as mass spawning. Mass spawning has been observed and recorded in a large number of phyla, including in coral communities within the Great Barrier Reef. In primates, reproductive synchrony usually takes the form of conception and birth seasonality. The regulatory "clock", in this case, is the sun's position in relation to the tilt of the earth. In nocturnal or partly nocturnal primates—for example, owl monkeys—the periodicity of the moon may also come into play. Synchrony in general is for primates an important variable determining the extent of "paternity skew"—defined as the extent to which fertile matings can be monopolised by a fraction of the population of males. The greater the precision of female reproductive synchrony—the greater the number of ovulating females who must be guarded simultaneously—the harder it is for any dominant male to succeed in monopolising a harem all to himself. This is simply because, by attending to any one fertile female, the male unavoidably leaves the others at liberty to mate with his rivals. The outcome is to distribute paternity more widely across the total male population, reducing paternity skew. Reproductive synchrony can never be perfect. On the other hand, theoretical models predict that group-living species will tend to synchronise wherever females can benefit by maximising the number of males offered chances of paternity, minimising reproductive skew. For e
https://en.wikipedia.org/wiki/Math%20Blaster%20Episode%20II%3A%20Secret%20of%20the%20Lost%20City
Math Blaster Episode II: Secret of the Lost City is an educational game in the Blaster Learning System by Davidson & Associates and is the sequel to Math Blaster Episode I: In Search of Spot. In the plot of this math game, the evil Dr. Minus shoots down Blasternaut, Spot, and Galactic Commander as they search for the Lost City in their Galactic Cruiser. They crash, thankfully, next to the Lost City. Dr. Minus' 'Negatrons' try to stop them as they unlock the secret of the Lost City to save Galactic Command. Levels Number Hunt - the players must pull chains, hit pressure pads, or punch buttons to activate doors or elevators to collect operations and numbers to complete the equation. Maze Craze - the players solve the equation above using five estimations while jumping from platform to platform. Position Splash - the players need to get the Negatrons with the right number (the one that completes the equation) into the bottom tubes. They must hit the Negatrons with the wrong numbers with 'Position Pods' to keep them out. Creature Creator - the players create a creature by using tools. Each arrow indicates the number of changes that occur from box-to-box. The players must complete all of the other sections before unlocking this section. There are different gameplay levels: Space Rookie, Space Captain, and Master Blaster. There are also three different math levels as well as different operations: Addition, Subtraction, Division, Multiplication, Percents, Decimals, and Fractions. Reception See also Blaster Learning System Math Blaster Episode I: In Search of Spot Reading Blaster 2000
https://en.wikipedia.org/wiki/Two-dimensional%20gas
A two-dimensional gas is a collection of objects constrained to move in a planar or other two-dimensional space in a gaseous state. The objects can be: classical ideal gas elements such as rigid disks undergoing elastic collisions; elementary particles, or any ensemble of individual objects in physics which obeys laws of motion without binding interactions. The concept of a two-dimensional gas is used either because: the issue being studied actually takes place in two dimensions (as certain surface molecular phenomena); or, the two-dimensional form of the problem is more tractable than the analogous mathematically more complex three-dimensional problem. While physicists have studied simple two body interactions on a plane for centuries, the attention given to the two-dimensional gas (having many bodies in motion) is a 20th-century pursuit. Applications have led to better understanding of superconductivity, gas thermodynamics, certain solid state problems and several questions in quantum mechanics. Classical mechanics Research at Princeton University in the early 1960s posed the question of whether the Maxwell–Boltzmann statistics and other thermodynamic laws could be derived from Newtonian laws applied to multi-body systems rather than through the conventional methods of statistical mechanics. While this question appears intractable from a three-dimensional closed form solution, the problem behaves differently in two-dimensional space. In particular an ideal two-dimensional gas was examined from the standpoint of relaxation time to equilibrium velocity distribution given several arbitrary initial conditions of the ideal gas. Relaxation times were shown to be very fast: on the order of mean free time . In 1996 a computational approach was taken to the classical mechanics non-equilibrium problem of heat flow within a two-dimensional gas. This simulation work showed that for N>1500, good agreement with continuous systems is obtained. Electron gas While the pri
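For context, the equilibrium speed distribution toward which such a two-dimensional gas relaxes has a standard closed form; it is quoted here as background rather than from the article itself:
\[
f(v) = \frac{m v}{k_B T}\, \exp\!\left(-\frac{m v^2}{2 k_B T}\right), \qquad v \ge 0 ,
\]
where m is the particle mass, T the temperature and k_B Boltzmann's constant. Unlike the three-dimensional Maxwell–Boltzmann distribution, the prefactor is linear in v, because two-dimensional velocity space contributes a factor proportional to v dv.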
https://en.wikipedia.org/wiki/Calcium%20tartrate
Calcium tartrate, more precisely calcium L-tartrate, is a byproduct of the wine industry, prepared from wine fermentation dregs. It is the calcium salt of L-tartaric acid, an acid most commonly found in grapes. Its solubility decreases with lower temperature, which results in the formation of whitish (in red wine often reddish) crystalline clusters as it precipitates. As E number E354, it finds use as a food preservative and acidity regulator. Like tartaric acid, calcium tartrate has two asymmetric carbons, hence it has two chiral isomers and a non-chiral isomer (meso-form). Most calcium tartrate of biological origin is the chiral levorotatory (–) isomer.
https://en.wikipedia.org/wiki/Entoloma%20sinuatum
Entoloma sinuatum (commonly known as the livid entoloma, livid agaric, livid pinkgill, leaden entoloma, and lead poisoner) is a poisonous mushroom found across Europe and North America. Some guidebooks refer to it by its older scientific names of Entoloma lividum or Rhodophyllus sinuatus. The largest mushroom of the genus of pink-spored fungi known as Entoloma, it is also the type species. Appearing in late summer and autumn, fruit bodies are found in deciduous woodlands on clay or chalky soils, or nearby parklands, sometimes in the form of fairy rings. Solid in shape, they resemble members of the genus Tricholoma. The ivory to light grey-brown cap is up to across with a margin that is rolled inward. The sinuate gills are pale and often yellowish, becoming pink as the spores develop. The thick whitish stem has no ring. When young, it may be mistaken for the edible St George's mushroom (Calocybe gambosa) or the miller (Clitopilus prunulus). It has been responsible for many cases of mushroom poisoning in Europe. E. sinuatum causes primarily gastrointestinal problems that, though not generally life-threatening, have been described as highly unpleasant. Delirium and depression are uncommon sequelae. It is generally not considered to be lethal, although one source has reported deaths from the consumption of this mushroom. Name and relationships The saga of this species' name begins in 1788 with the publication of part 8 of Jean Baptiste Bulliard's Herbier de la France. In it was plate 382, representing a mushroom which he called Agaricus lividus. In 1872, Lucien Quélet took up a species which he called "Entoloma lividus Bull."; although all subsequent agree that this is a fairly clear reference to Bulliard's name, Quélet gave a description that is generally considered to be that of a different species from Bulliard's. In the meantime, 1801 had seen the description of Agaricus sinuatus by Christian Persoon in his Synopsis Methodica Fungorum. He based that name on ano
https://en.wikipedia.org/wiki/S-Adenosyl-L-homocysteine
{{DISPLAYTITLE:S-Adenosyl-L-homocysteine}} S-Adenosyl-L-homocysteine (SAH) is the biosynthetic precursor to homocysteine. SAH is formed by the demethylation of S-adenosyl-L-methionine. Adenosylhomocysteinase converts SAH into homocysteine and adenosine. Biological role DNA methyltransferases are inhibited by SAH. Two S-adenosyl-L-homocysteine cofactor products can bind the active site of DNA methyltransferase 3B and prevent the DNA duplex from binding to the active site, which inhibits DNA methylation.
https://en.wikipedia.org/wiki/Eva%20Nogales
Eva Nogales (born in Colmenar Viejo, Spain) is a Spanish-American biophysicist at the Lawrence Berkeley National Laboratory and a professor at the University of California, Berkeley, where she served as head of the Division of Biochemistry, Biophysics and Structural Biology of the Department of Molecular and Cell Biology (2015–2020). She is a Howard Hughes Medical Institute investigator. Nogales is a pioneer in using electron microscopy for the structural and functional characterization of macromolecular complexes. She used electron crystallography to obtain the first structure of tubulin and identify the binding site of the important anti-cancer drug taxol. She is a leader in combining cryo-EM, computational image analysis and biochemical assays to gain insights into function and regulation of biological complexes and molecular machines. Her work has uncovered aspects of cellular function that are relevant to the treatment of cancer and other diseases. Early life and education Eva Nogales obtained her BS degree in physics from the Autonomous University of Madrid in 1988. She later earned her PhD from the University of Keele in 1992 while working at the Synchrotron Radiation Source under the supervision of Joan Bordas. Career During her post-doctoral work in the laboratory of Ken Downing at the Lawrence Berkeley National Laboratory, Eva Nogales was the first to determine the atomic structure of tubulin and the location of the taxol-binding site by electron crystallography. She became an assistant professor in the Department of Molecular and Cell Biology at the University of California, Berkeley in 1998. In 2000 she became an investigator in the Howard Hughes Medical Institute. As cryo-EM techniques became more powerful, she became a leader in applying cryo-EM to the study of microtubule structure and function and other large macromolecular assemblies such as eukaryotic transcription and translation initiation complexes, the polycomb complex PRC2, and telomerase
https://en.wikipedia.org/wiki/History%20of%20Lorentz%20transformations
The history of Lorentz transformations comprises the development of linear transformations forming the Lorentz group or Poincaré group preserving the Lorentz interval and the Minkowski inner product . In mathematics, transformations equivalent to what was later known as Lorentz transformations in various dimensions were discussed in the 19th century in relation to the theory of quadratic forms, hyperbolic geometry, Möbius geometry, and sphere geometry, which is connected to the fact that the group of motions in hyperbolic space, the Möbius group or projective special linear group, and the Laguerre group are isomorphic to the Lorentz group. In physics, Lorentz transformations became known at the beginning of the 20th century, when it was discovered that they exhibit the symmetry of Maxwell's equations. Subsequently, they became fundamental to all of physics, because they formed the basis of special relativity in which they exhibit the symmetry of Minkowski spacetime, making the speed of light invariant between different inertial frames. They relate the spacetime coordinates of two arbitrary inertial frames of reference with constant relative speed v. In one frame, the position of an event is given by x,y,z and time t, while in the other frame the same event has coordinates x′,y′,z′ and t′. Mathematical prehistory Using the coefficients of a symmetric matrix A, the associated bilinear form, and a linear transformations in terms of transformation matrix g, the Lorentz transformation is given if the following conditions are satisfied: It forms an indefinite orthogonal group called the Lorentz group O(1,n), while the case det g=+1 forms the restricted Lorentz group SO(1,n). The quadratic form becomes the Lorentz interval in terms of an indefinite quadratic form of Minkowski space (being a special case of pseudo-Euclidean space), and the associated bilinear form becomes the Minkowski inner product. Long before the advent of special relativity it was used in topics su
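As a concrete reference point, the standard textbook form of the transformation relating two such frames, for the usual convention of a boost with speed v along the x-axis (quoted here as background, not in the article's own notation), is
\[
x' = \gamma\,(x - v t), \qquad y' = y, \qquad z' = z, \qquad
t' = \gamma\left(t - \frac{v x}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} ,
\]
and a linear map g qualifies as a Lorentz transformation precisely when it preserves the quadratic form $-c^2t^2 + x^2 + y^2 + z^2$, i.e. when $g^{\mathrm{T}} A\, g = A$ with $A = \operatorname{diag}(-1, 1, 1, 1)$ (up to an overall sign convention).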
https://en.wikipedia.org/wiki/Matrix%20pencil
In linear algebra, if $A_0, A_1, \dots, A_\ell$ are complex matrices for some nonnegative integer $\ell$, and $A_\ell \ne 0$ (the zero matrix), then the matrix pencil of degree $\ell$ is the matrix-valued function defined on the complex numbers $L(\lambda) = \sum_{i=0}^{\ell} \lambda^i A_i$. A particular case is a linear matrix pencil $A - \lambda B$ with $\lambda \in \mathbb{C}$, where $A$ and $B$ are complex (or real) matrices. We denote it briefly with the notation $(A, B)$. A pencil is called regular if there is at least one value of $\lambda$ such that $\det L(\lambda) \ne 0$. We call eigenvalues of a matrix pencil all complex numbers $\lambda$ for which $\det L(\lambda) = 0$ (see eigenvalue for comparison). The set of the eigenvalues is called the spectrum of the pencil and is written $\sigma(A, B)$. Moreover, the pencil is said to have one or more eigenvalues at infinity if $B$ has one or more 0 eigenvalues. Applications Matrix pencils play an important role in numerical linear algebra. The problem of finding the eigenvalues of a pencil is called the generalized eigenvalue problem. The most popular algorithm for this task is the QZ algorithm, which is an implicit version of the QR algorithm to solve the associated eigenvalue problem $B^{-1}A x = \lambda x$ without forming explicitly the matrix $B^{-1}A$ (which could be impossible or ill-conditioned if $B$ is singular or near-singular). Pencil generated by commuting matrices If $AB = BA$, then the pencil generated by $A$ and $B$: consists only of matrices similar to a diagonal matrix, or has no matrices in it similar to a diagonal matrix, or has exactly one matrix in it similar to a diagonal matrix. See also Generalized eigenvalue problem Generalized pencil-of-function method Nonlinear eigenproblem Quadratic eigenvalue problem Generalized Rayleigh quotient Notes
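The generalized eigenvalue problem for a linear pencil can be solved numerically without ever forming B^{-1}A, for example with SciPy; the following is a minimal sketch, and the matrices below are arbitrary illustrative values, not taken from the article.

import numpy as np
from scipy.linalg import eig, qz

# An arbitrary linear pencil (A, B); B is deliberately singular so that the
# pencil also has an eigenvalue at infinity.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])   # singular leading coefficient

# Generalized eigenvalues lambda with det(A - lambda*B) = 0.
# Infinite eigenvalues typically show up as inf (or nan for a 0/0 case).
w, v = eig(A, B)
print("generalized eigenvalues:", w)

# Sanity check on the finite ones: det(A - lambda*B) should be numerically zero.
for lam in w[np.isfinite(w)]:
    print(lam, np.linalg.det(A - lam * B))

# The QZ (generalized Schur) decomposition underlies this computation:
AA, BB, Q, Z = qz(A, B)   # A == Q @ AA @ Z.conj().T and B == Q @ BB @ Z.conj().T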
https://en.wikipedia.org/wiki/Hankinson%27s%20equation
Hankinson's equation (also called Hankinson's formula or Hankinson's criterion) is a mathematical relationship for predicting the off-axis uniaxial compressive strength of wood. The formula can also be used to compute the fiber stress or the stress wave velocity at the elastic limit as a function of grain angle in wood. For a wood that has uniaxial compressive strengths of $\sigma_0$ parallel to the grain and $\sigma_{90}$ perpendicular to the grain, Hankinson's equation predicts that the uniaxial compressive strength of the wood in a direction at an angle $\theta$ to the grain is given by $\sigma_\theta = \frac{\sigma_0\,\sigma_{90}}{\sigma_0\sin^2\theta + \sigma_{90}\cos^2\theta}$. Even though the original relation was based on studies of spruce, Hankinson's equation has been found to be remarkably accurate for many other types of wood. A generalized form of the Hankinson formula has also been used for predicting the uniaxial tensile strength of wood at an angle to the grain. This formula has the form $\sigma_\theta = \frac{\sigma_0\,\sigma_{90}}{\sigma_0\sin^n\theta + \sigma_{90}\cos^n\theta}$, where the exponent $n$ can take values between 1.5 and 2. The stress wave velocity at angle $\theta$ to the grain at the elastic limit can similarly be obtained from the Hankinson formula $V(\theta) = \frac{V_0\,V_{90}}{V_0\sin^2\theta + V_{90}\cos^2\theta}$, where $V_0$ is the velocity parallel to the grain, $V_{90}$ is the velocity perpendicular to the grain and $\theta$ is the grain angle. See also Material failure theory Linear elasticity Hooke's law Orthotropic material Transverse isotropy
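A direct numerical reading of the formula is straightforward; this is a small sketch, and the strength values used below are placeholders for illustration, not data from the article.

import math

def hankinson(sigma_0, sigma_90, theta_deg, n=2.0):
    """Off-axis strength by Hankinson's formula.

    sigma_0   -- strength parallel to the grain
    sigma_90  -- strength perpendicular to the grain
    theta_deg -- grain angle in degrees
    n         -- exponent (classically 2; generalized forms use 1.5 to 2)
    """
    t = math.radians(theta_deg)
    return (sigma_0 * sigma_90) / (
        sigma_0 * math.sin(t) ** n + sigma_90 * math.cos(t) ** n)

# Illustrative values only: 40 MPa parallel, 5 MPa perpendicular to the grain.
for angle in (0, 15, 30, 45, 60, 90):
    print(angle, round(hankinson(40.0, 5.0, angle), 2))
# At 0 degrees the result equals sigma_0, at 90 degrees it equals sigma_90,
# and it decreases monotonically in between.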
https://en.wikipedia.org/wiki/Anomalous%20diffusion
Anomalous diffusion is a diffusion process with a non-linear relationship between the mean squared displacement (MSD), $\langle r^{2}(\tau)\rangle$, and time. This behavior is in stark contrast to Brownian motion, the typical diffusion process described by Einstein and Smoluchowski, where the MSD is linear in time (namely, $\langle r^{2}(\tau)\rangle = 2dD\tau$, with d being the number of dimensions and D the diffusion coefficient). It has been found that equations describing normal diffusion are not capable of characterizing some complex diffusion processes, for instance, diffusion processes in inhomogeneous or heterogeneous media, e.g. porous media. Fractional diffusion equations were introduced in order to characterize anomalous diffusion phenomena. Examples of anomalous diffusion in nature have been observed in ultra-cold atoms, harmonic spring-mass systems, scalar mixing in the interstellar medium, telomeres in the nucleus of cells, ion channels in the plasma membrane, colloidal particles in the cytoplasm, moisture transport in cement-based materials, and worm-like micellar solutions. Classes of anomalous diffusion Unlike typical diffusion, anomalous diffusion is described by a power law, $\langle r^{2}(\tau)\rangle = K_\alpha \tau^{\alpha}$, where $K_\alpha$ is the so-called generalized diffusion coefficient and $\tau$ is the elapsed time. The classes of anomalous diffusions are classified as follows: α < 1: subdiffusion. This can happen due to crowding or walls. For example, a random walker in a crowded room, or in a maze, is able to move as usual for small random steps, but cannot take large random steps, creating subdiffusion. This appears for example in protein diffusion within cells, or diffusion through porous media. Subdiffusion has been proposed as a measure of macromolecular crowding in the cytoplasm. α = 1: Brownian motion. α > 1: superdiffusion. Superdiffusion can be the result of active cellular transport processes or due to jumps with a heavy-tail distribution. α = 2: ballistic motion. The prototypical example is a particle moving at constant velocity: $\langle r^{2}(\tau)\rangle = v^{2}\tau^{2}$. α > 2: hyperballistic. It h
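The exponent α can be estimated from trajectory data by a log-log fit of the MSD; the following is a minimal numpy sketch that simulates ordinary Brownian steps, so the fitted α should come out close to 1 (subdiffusive or superdiffusive data would give values below or above 1).

import numpy as np

rng = np.random.default_rng(0)
n_walkers, n_steps, d = 2000, 500, 2

# Ordinary random walk: i.i.d. Gaussian steps in d dimensions.
steps = rng.normal(size=(n_walkers, n_steps, d))
positions = np.cumsum(steps, axis=1)

# Ensemble mean squared displacement as a function of elapsed time.
msd = np.mean(np.sum(positions**2, axis=2), axis=0)
tau = np.arange(1, n_steps + 1)

# Fit MSD ~ K * tau**alpha in log-log coordinates.
alpha, log_K = np.polyfit(np.log(tau), np.log(msd), 1)
print("fitted alpha:", alpha)   # ~1 for normal diffusion; <1 indicates subdiffusion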
https://en.wikipedia.org/wiki/Poincar%C3%A9%20residue
In mathematics, the Poincaré residue is a generalization, to several complex variables and complex manifold theory, of the residue at a pole of complex function theory. It is just one of a number of such possible extensions. Given a hypersurface $X \subset \mathbb{P}^n$ defined by a degree $d$ polynomial $F$ and a rational $n$-form $\omega$ on $\mathbb{P}^n$ with a pole of order $k > 0$ on $X$, then we can construct a cohomology class $\operatorname{Res}(\omega) \in H^{n-1}(X;\mathbb{C})$. If $n = 1$ we recover the classical residue construction. Historical construction When Poincaré first introduced residues he was studying period integrals of the form for where was a rational differential form with poles along a divisor . He was able to make the reduction of this integral to an integral of the form for where , sending to the boundary of a solid -tube around on the smooth locus of the divisor. If, on an affine chart, where is irreducible of degree and (so there are no poles on the line at infinity), then he gave a formula for computing this residue as two forms which are both cohomologous. Construction Preliminary definition Given the setup in the introduction, let be the space of meromorphic -forms on which have poles of order up to . Notice that the standard differential sends Define as the rational de-Rham cohomology groups. They form a filtration corresponding to the Hodge filtration. Definition of residue Consider an -cycle . We take a tube around (which is locally isomorphic to ) that lies within the complement of . Since this is an -cycle, we can integrate a rational -form and get a number. If we write this as then we get a linear transformation on the homology classes. Homology/cohomology duality implies that this is a cohomology class which we call the residue. Notice if we restrict to the case , this is just the standard residue from complex analysis (although we extend our meromorphic -form to all of the ambient space). This definition can be summarized as the map Algorithm for computing this class There is a simple recursive method for computing the residues which redu
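For orientation, the affine formula alluded to in the historical section has the following standard shape for a form with a first-order pole (this is the usual textbook statement, quoted as background rather than reconstructed verbatim from the article): for $\omega = \dfrac{q(x,y)\,dx\wedge dy}{p(x,y)}$ with $p$ irreducible and a pole of order one along the curve $\{p = 0\}$,
\[
\operatorname{Res}(\omega) \;=\; \left.\frac{q\,dy}{\partial p/\partial x}\right|_{p=0}
\;=\; \left.-\,\frac{q\,dx}{\partial p/\partial y}\right|_{p=0} ,
\]
the two expressions agreeing wherever both denominators are nonzero, since $dp = (\partial p/\partial x)\,dx + (\partial p/\partial y)\,dy$ restricts to zero on the curve. The derivation is the one-line rewriting $\omega = \frac{dp}{p}\wedge \frac{q\,dy}{\partial p/\partial x}$, whose residue is the second factor restricted to $\{p=0\}$.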
https://en.wikipedia.org/wiki/Biosignal
A biosignal is any signal in living beings that can be continually measured and monitored. The term biosignal is often used to refer to bioelectrical signals, but it may refer to both electrical and non-electrical signals. The usual understanding is to refer only to time-varying signals, although spatial parameter variations (e.g. the nucleotide sequence determining the genetic code) are sometimes subsumed as well. Electrical biosignals Electrical biosignals, or bioelectrical time signals, usually refers to the change in electric current produced by the sum of an electrical potential difference across a specialized tissue, organ or cell system like the nervous system. Thus, among the best-known bioelectrical signals are: Electroencephalogram (EEG) Electrocardiogram (ECG) Electromyogram (EMG) Electrooculogram (EOG) Electroretinogram (ERG) Electrogastrogram (EGG) Galvanic skin response (GSR) or electrodermal activity (EDA) EEG, ECG, EOG and EMG are measured with a differential amplifier which registers the difference between two electrodes attached to the skin. However, the galvanic skin response measures electrical resistance and the Magnetoencephalography (MEG) measures the magnetic field induced by electrical currents (electroencephalogram) of the brain. With the development of methods for remote measurement of electric fields using new sensor technology, electric biosignals such as EEG and ECG can be measured without electric contact with the skin. This can be applied, for example, for remote monitoring of brain waves and heart beat of patients who must not be touched, in particular patients with serious burns. Electrical currents and changes in electrical resistances across tissues can also be measured from plants. Biosignals may also refer to any non-electrical signal that is capable of being monitored from biological beings, such as mechanical signals (e.g. the mechanomyogram or MMG), acoustic signals (e.g. phonetic and non-phonetic utterances, bre
https://en.wikipedia.org/wiki/Appia%20%28software%29
Appia is a free and open-source layered communication toolkit implemented in Java, and licensed under the Apache License, version 2.0. It was created at the University of Lisbon, Portugal, by the DIALNP research group that is hosted in the LaSIGE research unit. Components Appia is composed of a core that is used to compose protocols, and a set of protocols that provide group communication, ordering guarantees, atomic broadcast, among other properties. Core The Appia core offers a clean way for the application to express inter-channel constraints. This feature is obtained as an extension to the functionality provided by current systems. Thus, Appia retains a flexible and modular design that allows communication stacks to be composed and reconfigured at run time. Protocols The existing protocols include interfaces to TCP and UDP sockets, virtual synchrony, several implementations of total order, causal order, among others. See also Protocol stack
https://en.wikipedia.org/wiki/IAR%20Systems
IAR Systems is a Swedish computer software company that offers development tools for embedded systems. IAR Systems was founded in 1983, and is listed on Nasdaq Nordic in Stockholm. IAR is an abbreviation of Ingenjörsfirma Anders Rundgren, which means Anders Rundgren Engineering Company. IAR Systems develops C and C++ language compilers, debuggers, and other tools for developing and debugging firmware for 8-, 16-, and 32-bit processors. The firm began in the 8-bit market, but moved into the expanding 32-bit market, more so for 32-bit microcontrollers. IAR Systems is headquartered in Uppsala, Sweden, and has more than 200 employees globally. The company operates subsidiaries in Germany, France, Japan, South Korea, China, United States, and United Kingdom and reaches the rest of the world through distributors. IAR Systems is a subsidiary of IAR Systems Group. Products IAR Embedded Workbench – a development environment that includes a C/C++ compiler, code analysis tools C-STAT and C-RUN, security tools C-Trust and Embedded Trust, and debugging and trace probes Functional Safety Certification option Visual State – a design tool for developing event-driven programming systems based on the event-driven finite-state machine paradigm. IAR Visual State presents the developer with the finite-state machine subset of Unified Modeling Language (UML) for C/C++ code generation. By restricting the design abilities to state machines, it is possible to employ formal model checking to find and flag unwanted properties like state dead-ends and unreachable parts of the design. It is not a full UML editor. IAR KickStart Kit – a series of software and hardware evaluation environments based on various microcontrollers. IAR Embedded Workbench The toolchain IAR Embedded Workbench, which supports more than 30 different processor families, is a complete integrated development environment (IDE) with compiler, analysis tools, debugger, functional safety, and security. The development too
https://en.wikipedia.org/wiki/History%20of%20hard%20disk%20drives
In 1953, IBM recognized the immediate application for what it termed a "Random Access File" having high capacity and rapid random access at a relatively low cost. After considering technologies such as wire matrices, rod arrays, drums, drum arrays, etc., the engineers at IBM's San Jose California laboratory invented the hard disk drive. The disk drive created a new level in the computer data hierarchy, then termed Random Access Storage but today known as secondary storage, less expensive and slower than main memory (then typically drums and later core memory) but faster and more expensive than tape drives. The commercial usage of hard disk drives (HDD) began in 1957, with the shipment of a production IBM 305 RAMAC system including IBM Model 350 disk storage. US Patent 3,503,060 issued March 24, 1970, and arising from the IBM RAMAC program is generally considered to be the fundamental patent for disk drives. Each generation of disk drives replaced larger, more sensitive and more cumbersome devices. The earliest drives were usable only in the protected environment of a data center. Later generations progressively reached factories, offices and homes, eventually becoming ubiquitous. Disk media diameter was initially 24 inches, but over time it has been reduced to today's 3.5-inch and 2.5-inch standard sizes. Drives with the larger 24-inch- and 14-inch-diameter media were typically mounted in standalone boxes (resembling washing machines) or large equipment rack enclosures. Individual drives often required high-current AC power due to the large motors required to spin the large disks. Drives with smaller media generally conformed to de facto standard form factors. The capacity of hard drives has grown exponentially over time. When hard drives became available for personal computers, they offered 5-megabyte capacity. During the mid-1990s the typical hard disk drive for a PC had a capacity in the range of 500 megabyte to 1 gigabyte. hard disk drives up to 22 TB were
https://en.wikipedia.org/wiki/List%20of%20hyperaccumulators
This article covers known hyperaccumulators, accumulators or species tolerant to the following: Aluminium (Al), Silver (Ag), Arsenic (As), Beryllium (Be), Chromium (Cr), Copper (Cu), Manganese (Mn), Mercury (Hg), Molybdenum (Mo), Naphthalene, Lead (Pb), Selenium (Se) and Zinc (Zn). See also: Hyperaccumulators table – 2 : Nickel Hyperaccumulators table – 3 : Cd, Cs, Co, Pu, Ra, Sr, U, radionuclides, hydrocarbons, organic solvents, etc. Hyperaccumulators table – 1 Cs-137 activity was much smaller in leaves of larch and sycamore maple than of spruce: spruce > larch > sycamore maple.
https://en.wikipedia.org/wiki/True%20Monster%20Stories
True Monster Stories, written by Terry Deary, is the first of the non-fiction True Stories Series of books. It was published in 1992 by Hippo Books from Scholastic. Overview The book details strange but apparently "true" encounters with a variety of monsters. The book is divided into eight sections; ranging from wild-men, to bigfoot/sasquatch, through to sea creatures (including Loch Ness Monster), vampires and werewolves. Each section opens with an introduction into that particular set of monsters/creatures. Accounts and brief details then follow of supposed encounters, and each account then ends with a fact file. These fact files present a brief analysis of the events in the accounts, and then present miscellaneous related facts from other similar events. The book is written so as to let the reader decide for themselves whether they believe the events therein to be true or not. Audience As with all the True Stores books, it was aimed at an 11+ market, but found popularity with adults and youngsters alike. See also Paranormal Notes Children's non-fiction books 1992 children's books British children's books Cryptozoology
https://en.wikipedia.org/wiki/Height%20Modernization
Height Modernization is the name of a series of state-by-state programs recently begun by the United States' National Geodetic Survey, a division of the National Oceanic and Atmospheric Administration. The goal of each state program is to place GPS base stations at various locations within each participating state to measure topographic changes in the directions of latitude and longitude caused by subsidence or earthquakes, as well as to measure changes in height (elevation).
https://en.wikipedia.org/wiki/Shack%E2%80%93Hartmann%20wavefront%20sensor
A Shack–Hartmann (or Hartmann–Shack) wavefront sensor (SHWFS) is an optical instrument used for characterizing an imaging system. It is a wavefront sensor commonly used in adaptive optics systems. It consists of an array of lenses (called lenslets) of the same focal length. Each is focused onto a photon sensor (typically a CCD array or CMOS array or quad-cell). If the sensor is placed at the geometric focal plane of the lenslet, and is uniformly illuminated, then, the integrated gradient of the wavefront across the lenslet is proportional to the displacement of the centroid. Consequently, any phase aberration can be approximated by a set of discrete tilts. By sampling the wavefront with an array of lenslets, all of these local tilts can be measured and the whole wavefront reconstructed. Since only tilts are measured the Shack–Hartmann cannot detect discontinuous steps in the wavefront. The design of this sensor improves upon an array of holes in a mask that had been developed in 1904 by Johannes Franz Hartmann as a means of tracing individual rays of light through the optical system of a large telescope, thereby testing the quality of the image. In the late 1960s, Roland Shack and Ben Platt modified the Hartmann screen by replacing the apertures in an opaque screen by an array of lenslets. The terminology as proposed by Shack and Platt was Hartmann screen. The fundamental principle seems to be documented even before Huygens by the Jesuit philosopher, Christopher Scheiner, in Austria. Shack–Hartmann sensors are used in astronomy to measure telescopes and in medicine to characterize eyes for corneal treatment of complex refractive errors. Recently, Pamplona et al. developed and patented an inverse of the Shack–Hartmann system to measure one's eye lens aberrations. While Shack–Hartmann sensors measure the localized slope of the wavefront error using spot displacement in the sensor plane, Pamplona et al. replace the sensor plane with a high resolution visual display (
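The relationship described above (local wavefront slope proportional to the spot displacement divided by the lenslet focal length, with the wavefront then rebuilt from the measured tilts) can be sketched numerically. The following one-dimensional toy reconstruction uses made-up numbers and a simple cumulative integration; it is an illustration of the principle, not any particular instrument's processing pipeline.

import numpy as np

# Toy 1-D Shack-Hartmann geometry (all values illustrative).
focal_length = 5e-3          # lenslet focal length [m]
pitch = 150e-6               # lenslet spacing [m]
x = np.arange(32) * pitch    # lenslet centre positions

# Assume a known test wavefront W(x) (a defocus-like parabola) and derive the
# spot displacements a sensor would see: shift = f * dW/dx.
W_true = 0.5 * (x - x.mean())**2          # wavefront [m]
slopes_true = np.gradient(W_true, x)      # local wavefront slope
spot_shift = focal_length * slopes_true   # centroid displacement on the detector

# Reconstruction: recover slopes from the measured shifts, then integrate them
# (cumulative trapezoid) to get the wavefront up to an unknown piston term.
slopes_meas = spot_shift / focal_length
W_rec = np.concatenate(([0.0], np.cumsum(
    0.5 * (slopes_meas[1:] + slopes_meas[:-1]) * np.diff(x))))

# Compare after removing the piston (constant offset).
err = (W_rec - W_rec.mean()) - (W_true - W_true.mean())
print("max reconstruction error [m]:", np.abs(err).max())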
https://en.wikipedia.org/wiki/Disk%20encryption
Disk encryption is a technology which protects information by converting it into code that cannot be deciphered easily by unauthorized people or processes. Disk encryption uses disk encryption software or hardware to encrypt every bit of data that goes on a disk or disk volume. It is used to prevent unauthorized access to data storage. The expression full disk encryption (FDE) (or whole disk encryption) signifies that everything on the disk is encrypted, but the master boot record (MBR), or similar area of a bootable disk, with code that starts the operating system loading sequence, is not encrypted. Some hardware-based full disk encryption systems can truly encrypt an entire boot disk, including the MBR. Transparent encryption Transparent encryption, also known as real-time encryption and on-the-fly encryption (OTFE), is a method used by some disk encryption software. "Transparent" refers to the fact that data is automatically encrypted or decrypted as it is loaded or saved. With transparent encryption, the files are accessible immediately after the key is provided, and the entire volume is typically mounted as if it were a physical drive, making the files just as accessible as any unencrypted ones. No data stored on an encrypted volume can be read (decrypted) without using the correct password/keyfile(s) or correct encryption keys. The entire file system within the volume is encrypted (including file names, folder names, file contents, and other meta-data). To be transparent to the end-user, transparent encryption usually requires the use of device drivers to enable the encryption process. Although administrator access rights are normally required to install such drivers, encrypted volumes can typically be used by normal users without these rights. In general, every method in which data is seamlessly encrypted on write and decrypted on read, in such a way that the user and/or application software remains unaware of the process, can be called transparent encr
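As a toy illustration of the "encrypt on write, decrypt on read" idea only: real full disk encryption works at the block or sector level with schemes such as XTS, not with the file-level wrapper shown here. The cryptography package and its Fernet recipe are real; the wrapper class and file name are invented for this sketch.

from cryptography.fernet import Fernet

class TransparentFile:
    """Minimal on-the-fly encryption wrapper: callers see plaintext,
    while the backing file on disk only ever holds ciphertext."""

    def __init__(self, path, key):
        self._path = path
        self._fernet = Fernet(key)

    def write(self, data: bytes) -> None:
        with open(self._path, "wb") as fh:
            fh.write(self._fernet.encrypt(data))     # encrypted on write

    def read(self) -> bytes:
        with open(self._path, "rb") as fh:
            return self._fernet.decrypt(fh.read())   # decrypted on read

key = Fernet.generate_key()            # would normally be derived from a password/keyfile
vol = TransparentFile("example.bin", key)
vol.write(b"secret payload")
print(vol.read())                      # b'secret payload'
# The bytes actually stored in example.bin are ciphertext.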
https://en.wikipedia.org/wiki/Thermal%20barrier%20coating
Thermal barrier coatings (TBCs) are advanced materials systems usually applied to metallic surfaces on parts operating at elevated temperatures, such as gas turbine combustors and turbines, and in automotive exhaust heat management. These 100 μm to 2 mm thick coatings of thermally insulating materials serve to insulate components from large and prolonged heat loads and can sustain an appreciable temperature difference between the load-bearing alloys and the coating surface. In doing so, these coatings can allow for higher operating temperatures while limiting the thermal exposure of structural components, extending part life by reducing oxidation and thermal fatigue. In conjunction with active film cooling, TBCs permit working fluid temperatures higher than the melting point of the metal airfoil in some turbine applications. Due to increasing demand for more efficient engines running at higher temperatures with better durability/lifetime and thinner coatings to reduce parasitic mass for rotating/moving components, there is significant motivation to develop new and advanced TBCs. The material requirements of TBCs are similar to those of heat shields, although in the latter application emissivity tends to be of greater importance. Structure An effective TBC needs to meet certain requirements to perform well in aggressive thermo-mechanical environments. To deal with thermal expansion stresses during heating and cooling, adequate porosity is needed, as well as appropriate matching of thermal expansion coefficients with the metal surface that the TBC is coating. Phase stability is required to prevent significant volume changes (which occur during phase changes), which would cause the coating to crack or spall. In air-breathing engines, oxidation resistance is necessary, as well as decent mechanical properties for rotating/moving parts or parts in contact. Therefore, general requirements for an effective TBC can be summarized as needing: 1) a high melting point. 2) no p
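To make the insulation claim concrete, a one-dimensional steady-conduction estimate can be used; the numbers here are assumed for illustration and are not values from the article (heat flux q of about 1 MW/m^2, coating thickness t of 300 μm, thermal conductivity k of about 1 W/(m·K), in the range often cited for yttria-stabilized zirconia):
\[
\Delta T \;=\; \frac{q\,t}{k} \;\approx\; \frac{10^{6}\,\mathrm{W\,m^{-2}} \times 3\times 10^{-4}\,\mathrm{m}}{1\,\mathrm{W\,m^{-1}\,K^{-1}}} \;=\; 300\ \mathrm{K},
\]
which is consistent with the appreciable temperature difference across the coating that the article describes.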
https://en.wikipedia.org/wiki/Business%20process%20network
Business process networks (BPN), also referred to as business service networks or business process hubs, enable the efficient execution of multi-enterprise operational processes, including supply chain planning and execution. A BPN extends and implements an organization's Service-orientation in Enterprise Applications. To execute such processes, BPNs combine integration services with application services, often to support a particular industry or process, such as order management, logistics management, or automated shipping and receiving. Purpose Most organizations derive their primary value (e.g., revenue) and attain their goals outside of the 'four walls' of the enterprise—by selling to consumers (B2C) or to other businesses (Business-to-business). Thus, businesses seek to efficiently manage processes that span multiple organizations. One such process is supply chain management; BPNs are gaining in popularity partly because of the changing nature of supply chains, which have become global. Trends such as global sourcing and offshoring to Asia, India and other low-cost production regions of the world continue to add complexity to effective trading partner management and supply chain visibility. Complications The transition to global sourcing has been challenging for some companies. Few companies have the requisite strategies, infrastructure and extended process control to effectively make the transition to global sourcing. The majority of supply managers continue to use a mix of e-mail, phone and fax to collaborate with offshore suppliers—none of which are standardized nor easily integrated to enable informed business decisions and actions. Further, distant trading partners introduce new standards, new systems, multiple time zones, new processes and different levels of technological maturity into the supply chain. BPNs help reduce this complexity by providing a common framework for information exchange, visibility and collaboration. BPNs are also increasi
https://en.wikipedia.org/wiki/Prentice%20Hall%20International%20Series%20in%20Computer%20Science
Prentice Hall International Series in Computer Science was a series of books on computer science published by Prentice Hall. The series' founding editor was Tony Hoare. Richard Bird subsequently took over editing the series. Many of the books in the series have been in the area of formal methods in particular. Selected books The following books were published in the series: R. S. Bird, Introduction to Functional Programming using Haskell, 2nd edition, 1998. . R. S. Bird and O. de Moor, Algebra of Programming, 1996. . (100th volume in the series.) O.-J. Dahl, Verifiable Programming, 1992. . D. M. Gabbay, Elementary Logics: A Procedural Perspective, 1998. . I. J. Hayes (ed.), Specification Cases Studies, 2nd edition, 1993. . M. G. Hinchey and J. P. Bowen (eds.), Applications of Formal Methods, 1996. . C. A. R. Hoare, Communicating Sequential Processes, 1985. hardback or paperback. C. A. R. Hoare and M. J. C. Gordon, Mechanized Reasoning and Hardware Design, 1998. . C. A. R. Hoare and He Jifeng, Unifying Theories of Programming, 1998. . INMOS Limited, Occam 2 Reference Manual, 1988. . Cliff Jones, Systematic Software Development Using VDM, 1986. hardback or paperback. M. Joseph (ed.), Real-Time Systems: Specification, Verification and Analysis, 1996. . Bertrand Meyer, Object-Oriented Software Construction (first edition only). Robin Milner, Communication and Concurrency, 1989. (for the paperback). C. C. Morgan, Programming from Specifications, 2nd edition, 1994. . P. N. Nissanke, Realtime Systems, 1997. . B. Potter, J. Sinclair and D. Till, An Introduction to Formal Specification and Z, 2nd edition, 1996. . A. W. Roscoe (ed.), A Classical Mind: Essays in Honour of C. A. R. Hoare, 1994. . A. W. Roscoe, The Theory and Practice of Concurrency, 1997. . J. M. Spivey, The Z Notation: A Reference Manual, 2nd edition, 1992. . J. C. P. Woodcock and J. W. Davies, Using Z: Specification, Refinement and Proof, 1996. .
https://en.wikipedia.org/wiki/Kurtosis%20risk
In statistics and decision theory, kurtosis risk is the risk that results when a statistical model assumes the normal distribution, but is applied to observations that have a tendency to occasionally be much farther (in terms of number of standard deviations) from the average than is expected for a normal distribution. Overview Kurtosis risk applies to any kurtosis-related quantitative model that assumes the normal distribution for certain of its independent variables when the latter may in fact have kurtosis much greater than does the normal distribution. Kurtosis risk is commonly referred to as "fat tail" risk. The "fat tail" metaphor explicitly describes the situation of having more observations at either extreme than the tails of the normal distribution would suggest; therefore, the tails are "fatter". Ignoring kurtosis risk will cause any model to understate the risk of variables with high kurtosis. For instance, Long-Term Capital Management, a hedge fund cofounded by Myron Scholes, ignored kurtosis risk to its detriment. After four successful years, this hedge fund had to be bailed out by major investment banks in the late 1990s because it understated the kurtosis of many financial securities underlying the fund's own trading positions. Research by Mandelbrot Benoit Mandelbrot, a French mathematician, extensively researched this issue. He felt that the extensive reliance on the normal distribution for much of the body of modern finance and investment theory is a serious flaw of any related models including the Black–Scholes option model developed by Myron Scholes and Fischer Black, and the capital asset pricing model developed by William F. Sharpe. Mandelbrot explained his views and alternative finance theory in his book: The (Mis)Behavior of Markets: A Fractal View of Risk, Ruin, and Reward published on September 18, 1997. See also Kurtosis Skewness risk Stochastic volatility Holy grail distribution Taleb distribution The Black Swan: The Impact of the Hig
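The effect described above can be made concrete with a quick simulation: a heavy-tailed sample has far more extreme observations than a normal model with the same variance would predict. This is an illustrative sketch; the Student-t choice below is simply a convenient fat-tailed distribution, not a claim about any particular market.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1_000_000

normal = rng.standard_normal(n)
fat = stats.t.rvs(df=3, size=n, random_state=rng)   # heavy-tailed Student-t sample
fat /= fat.std()                                    # rescale to unit variance for comparison

for name, x in (("normal", normal), ("student-t(3)", fat)):
    excess_kurt = stats.kurtosis(x)                 # Fisher definition: normal -> ~0
    tail_freq = np.mean(np.abs(x) > 4)              # observations beyond 4 standard deviations
    print(f"{name:13s} excess kurtosis {excess_kurt:8.2f}   P(|x|>4sd) {tail_freq:.2e}")

# A model that assumes normality puts P(|x|>4sd) at about 6.3e-05 and therefore
# badly understates how often the fat-tailed series produces such moves.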
https://en.wikipedia.org/wiki/Instructograph
The Instructograph was a paper tape-based machine used for the study of Morse code. The paper tape mechanism consisted of two reels which passed a paper tape across a reading device that actuated a set of contacts which changed state dependent on the presence or absence of hole punches in the tape. The contacts could operate an audio oscillator for the study of International Morse Code (used by radio), or a sounder for the study of American Morse Code (used by railroads), or a light bulb (Aldis Lamp - used by Navy ship to ship or by Heliograph). The Instructograph was in production from about 1920 through 1983. The first US patent, No. 1,725,145, was granted to Otto Bernard Kirkpatrick, of Chicago, IL, on August 20, 1929. Most of them would be wound by hand or be plugged into a wall outlet. Most plugin outlet based instructographs would have a set of knobs that can control the speed and volume. The latest version of the Instructograph was the model 500 which included a built in solid state oscillator. This model was available to be purchased as new through at least 1986. External links Instructograph advertisement, ca. 1950 Model 500 Photograph History of telecommunications Morse code
https://en.wikipedia.org/wiki/Genetics%20of%20aggression
The field of psychology has been greatly influenced by the study of genetics. Decades of research have demonstrated that both genetic and environmental factors play a role in a variety of behaviors in humans and animals (e.g. Grigorenko & Sternberg, 2003). The genetic basis of aggression, however, remains poorly understood. Aggression is a multi-dimensional concept, but it can be generally defined as behavior that inflicts pain or harm on another. The genetic-developmental theory states that individual differences in a continuous phenotype result from the action of a large number of genes, each exerting an effect that works with environmental factors to produce the trait. This type of trait is influenced by multiple factors making it more complex and difficult to study than a simple Mendelian trait (one gene for one phenotype). History Past thoughts on genetic factors influencing aggression, specifically in regard to sex chromosomes, tended to seek answers from chromosomal abnormalities. Four decades ago, the XYY genotype was (erroneously) believed by many to be correlated with aggression. In 1965 and 1966, researchers at the MRC Clinical & Population Cytogenetics Research Unit led by Dr. Court Brown at Western General Hospital in Edinburgh reported finding a much higher than expected nine XYY men (2.9%) averaging almost 6 ft. tall in a survey of 314 patients at the State Hospital for Scotland; seven of the nine XYY patients were mentally retarded. In their initial reports published before examining the XYY patients, the researchers suggested they might have been hospitalized because of aggressive behavior. When the XYY patients were examined, the researchers found their assumptions of aggressive behavior were incorrect. Unfortunately, many science and medicine textbooks quickly and uncritically incorporated the initial, incorrect assumptions about XYY and aggression—including psychology textbooks on aggression. The XYY genotype first gained wide notoriety in 196
https://en.wikipedia.org/wiki/Engineering%20design%20process
The engineering design process, also known as the engineering method, is a common series of steps that engineers use in creating functional products and processes. The process is highly iterative – parts of the process often need to be repeated many times before another can be entered – though the part(s) that get iterated and the number of such cycles in any given project may vary. It is a decision making process (often iterative) in which the basic sciences, mathematics, and engineering sciences are applied to convert resources optimally to meet a stated objective. Among the fundamental elements of the design process are the establishment of objectives and criteria, synthesis, analysis, construction, testing and evaluation. Common stages of the engineering design process It's important to understand that there are various framings/articulations of the engineering design process. Different terminology employed may have varying degrees of overlap, affecting what steps are explicitly stated or deemed "high level" versus subordinate in any given model. This, of course, applies as much to any particular example steps/sequences given here. One example framing of the engineering design process delineates the following stages: research, conceptualization, feasibility assessment, establishing design requirements, preliminary design, detailed design, production planning and tool design, and production. Others, noting that "different authors (in both research literature and in textbooks) define different phases of the design process with varying activities occurring within them," have suggested more simplified/generalized models – such as problem definition, conceptual design, preliminary design, detailed design, and design communication. Another summary of the process, from European engineering design literature, includes clarification of the task, conceptual design, embodiment design, detail design. (NOTE: In these examples, other key aspects – such as concept evaluati
https://en.wikipedia.org/wiki/History%20of%20molecular%20theory
In chemistry, the history of molecular theory traces the origins of the concept or idea of the existence of strong chemical bonds between two or more atoms. A modern conceptualization of molecules began to develop in the 19th century along with experimental evidence for pure chemical elements and how individual atoms of different chemical elements such as hydrogen and oxygen can combine to form chemically stable molecules such as water molecules. Ancient world The modern concept of molecules can be traced back to pre-scientific and Greek philosophers such as Leucippus and Democritus, who argued that all the universe is composed of atoms and voids. Circa 450 BC, Empedocles imagined fundamental elements (fire, earth, air, and water) and "forces" of attraction and repulsion allowing the elements to interact. Prior to this, Heraclitus had claimed that fire or change was fundamental to our existence, created through the combination of opposite properties. In the Timaeus, Plato, following Pythagoras, considered mathematical entities such as number, point, line and triangle as the fundamental building blocks or elements of this ephemeral world, and considered the four elements of fire, air, water and earth as states of substances through which the true mathematical principles or elements would pass. A fifth element, the incorruptible quintessence aether, was considered to be the fundamental building block of the heavenly bodies. The viewpoint of Leucippus and Empedocles, along with the aether, was accepted by Aristotle and passed to medieval and renaissance Europe. Greek atomism The earliest views on the shapes and connectivity of atoms were those proposed by Leucippus, Democritus, and Epicurus, who reasoned that the solidness of the material corresponded to the shape of the atoms involved. Thus, iron atoms are solid and strong with hooks that lock them into a solid; water atoms are smooth and slippery; salt atoms, because of their taste, are sharp
https://en.wikipedia.org/wiki/Ignition%20timing
In a spark ignition internal combustion engine, ignition timing is the timing, relative to the current piston position and crankshaft angle, of the release of a spark in the combustion chamber near the end of the compression stroke. The need for advancing (or retarding) the timing of the spark is because fuel does not completely burn the instant the spark fires. The combustion gases take a period of time to expand and the angular or rotational speed of the engine can lengthen or shorten the time frame in which the burning and expansion should occur. In a vast majority of cases, the angle will be described as a certain angle advanced before top dead center (BTDC). Advancing the spark BTDC means that the spark is energized prior to the point where the combustion chamber reaches its minimum size, since the purpose of the power stroke in the engine is to force the combustion chamber to expand. Sparks occurring after top dead center (ATDC) are usually counter-productive (producing wasted spark, back-fire, engine knock, etc.) unless there is need for a supplemental or continuing spark prior to the exhaust stroke. Setting the correct ignition timing is crucial in the performance of an engine. Sparks occurring too soon or too late in the engine cycle are often responsible for excessive vibrations and even engine damage. The ignition timing affects many variables including engine longevity, fuel economy, and engine power. Many variables also affect what the "best" timing is. Modern engines that are controlled in real time by an engine control unit use a computer to control the timing throughout the engine's RPM and load range. Older engines that use mechanical distributors rely on inertia (by using rotating weights and springs) and manifold vacuum in order to set the ignition timing throughout the engine's RPM and load range. Early cars required the driver to adjust timing via controls according to driving conditions, but this is now automated. There are many factors tha
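As a rough illustration of why the spark must be advanced further at higher engine speeds (a sketch with hypothetical numbers, not figures from the article): if the mixture needs an approximately fixed burn time, the crank angle swept during that time grows in proportion to RPM.

```python
def crank_degrees_swept(rpm, burn_time_s):
    # The crankshaft turns 360 degrees per revolution and rpm/60 revolutions
    # per second, so it sweeps 6 * rpm degrees per second.
    return 6.0 * rpm * burn_time_s

# Hypothetical 2 ms effective burn time:
for rpm in (1000, 3000, 6000):
    print(rpm, "RPM ->", crank_degrees_swept(rpm, 0.002), "degrees of rotation")
# 1000 RPM -> 12.0, 3000 RPM -> 36.0, 6000 RPM -> 72.0
```

Real timing maps also depend on load, mixture, and knock limits, so this linear picture is only a first approximation.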
https://en.wikipedia.org/wiki/Mediation%20%28statistics%29
In statistics, a mediation model seeks to identify and explain the mechanism or process that underlies an observed relationship between an independent variable and a dependent variable via the inclusion of a third hypothetical variable, known as a mediator variable (also a mediating variable, intermediary variable, or intervening variable). Rather than a direct causal relationship between the independent variable and the dependent variable, a mediation model proposes that the independent variable influences the mediator variable, which in turn influences the dependent variable. Thus, the mediator variable serves to clarify the nature of the relationship between the independent and dependent variables. Mediation analyses are employed to understand a known relationship by exploring the underlying mechanism or process by which one variable influences another variable through a mediator variable. In particular, mediation analysis can contribute to better understanding the relationship between an independent variable and a dependent variable when these variables do not have an obvious direct connection. Baron and Kenny's (1986) steps for mediation analysis Baron and Kenny (1986) laid out several requirements that must be met to form a true mediation relationship. They are outlined below using a real-world example. See the diagram above for a visual representation of the overall mediating relationship to be explained. Note: Hayes (2009) critiqued Baron and Kenny's mediation steps approach, and as of 2019, David A. Kenny on his website stated that mediation can exist in the absence of a 'significant' total effect, and therefore step 1 below may not be needed. This situation is sometimes referred to as "inconsistent mediation". Later publications by Hayes also questioned the concepts of full or partial mediation and advocated for these terms, along with the classical mediation steps approach outlined below, to be abandoned. Step 1 Regress the dependent variable on the i
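A minimal sketch of the regression steps in Python with statsmodels, assuming a pandas DataFrame with hypothetical columns x (independent variable), m (mediator), and y (dependent variable); it illustrates only the sequence of regressions, not the significance testing or bootstrapping that a full analysis would add.

```python
import statsmodels.api as sm

def baron_kenny_paths(df):
    # Step 1: total effect c, from regressing y on x
    step1 = sm.OLS(df["y"], sm.add_constant(df[["x"]])).fit()
    # Step 2: path a, from regressing the mediator m on x
    step2 = sm.OLS(df["m"], sm.add_constant(df[["x"]])).fit()
    # Steps 3-4: regressing y on both x and m gives the mediator path b
    # and the direct effect c'
    step34 = sm.OLS(df["y"], sm.add_constant(df[["x", "m"]])).fit()
    return {
        "c (total effect)": step1.params["x"],
        "a (x -> m)": step2.params["x"],
        "b (m -> y, controlling x)": step34.params["m"],
        "c' (direct effect)": step34.params["x"],
    }
```

Evidence for mediation, in this framework, is a drop from c to c' together with non-trivial a and b paths.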
https://en.wikipedia.org/wiki/Internodal%20segment
An internodal segment (or internode) is the portion of a nerve fiber between two Nodes of Ranvier. The neurolemma or primitive sheath is not interrupted at the nodes, but passes over them as a continuous membrane.
https://en.wikipedia.org/wiki/Nuclear%20criticality%20safety
Nuclear criticality safety is a field of nuclear engineering dedicated to the prevention of nuclear and radiation accidents resulting from an inadvertent, self-sustaining nuclear chain reaction. Nuclear criticality safety is concerned with mitigating the consequences of a nuclear criticality accident. A nuclear criticality accident occurs from operations that involve fissile material and results in a sudden and potentially lethal release of radiation. Nuclear criticality safety practitioners attempt to prevent nuclear criticality accidents by analyzing normal and credible abnormal conditions in fissile material operations and designing safe arrangements for the processing of fissile materials. A common practice is to apply a double contingency analysis to the operation in which two or more independent, concurrent and unlikely changes in process conditions must occur before a nuclear criticality accident can occur. For example, the first change in conditions may be complete or partial flooding and the second change a re-arrangement of the fissile material. Controls (requirements) on process parameters (e.g., fissile material mass, equipment) result from this analysis. These controls, either passive (physical), active (mechanical), or administrative (human), are implemented by inherently safe or fault-tolerant plant designs, or, if such designs are not practicable, by administrative controls such as operating procedures, job instructions and other means to minimize the potential for significant process changes that could lead to a nuclear criticality accident. Principles As a simplistic analysis, a system will be exactly critical if the rate of neutron production from fission is exactly balanced by the rate at which neutrons are either absorbed or lost from the system due to leakage. Safely subcritical systems can be designed by ensuring that the potential combined rate of absorption and leakage always exceeds the potential rate of neutron production. The para
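The balance described above is conventionally summarized by the effective multiplication factor; the following is the standard textbook form (assumed here, not quoted from the text):

$$
k_{\mathrm{eff}} \;=\; \frac{\text{rate of neutron production by fission}}{\text{rate of absorption} \;+\; \text{rate of leakage}},
\qquad
\begin{cases}
k_{\mathrm{eff}} < 1 & \text{subcritical}\\
k_{\mathrm{eff}} = 1 & \text{critical}\\
k_{\mathrm{eff}} > 1 & \text{supercritical}
\end{cases}
$$

Criticality safety analyses aim to keep $k_{\mathrm{eff}}$ below 1 with margin under both normal and credible abnormal conditions.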
https://en.wikipedia.org/wiki/Fin%20%28extended%20surface%29
In the study of heat transfer, fins are surfaces that extend from an object to increase the rate of heat transfer to or from the environment by increasing convection. The amount of conduction, convection, or radiation of an object determines the amount of heat it transfers. Increasing the temperature gradient between the object and the environment, increasing the convection heat transfer coefficient, or increasing the surface area of the object increases the heat transfer. Sometimes it is not feasible or economical to change the first two options. Thus, adding a fin to an object increases the surface area and can sometimes be an economical solution to heat transfer problems. One-piece finned heat sinks are produced by extrusion, casting, skiving, or milling. General case To create a tractable equation for the heat transfer of a fin, many assumptions need to be made: Steady state Constant material properties (independent of temperature) No internal heat generation One-dimensional conduction Uniform cross-sectional area Uniform convection across the surface area With these assumptions, conservation of energy can be used to create an energy balance for a differential cross section of the fin: $q_x = q_{x+dx} + dq_{\text{conv}}$. Fourier’s law states that $q_x = -k A_c \frac{dT}{dx}$, where $A_c$ is the cross-sectional area of the differential element. Furthermore, the convective heat flux can be determined via the definition of the heat transfer coefficient h, $q'' = h\,(T - T_\infty)$, where $T_\infty$ is the temperature of the surroundings. The differential convective heat flux can then be determined from the perimeter of the fin cross-section P, $dq_{\text{conv}} = h\,P\,(T - T_\infty)\,dx$. The equation of energy conservation can now be expressed in terms of temperature, $-k A_c \left.\frac{dT}{dx}\right|_{x} = -k A_c \left.\frac{dT}{dx}\right|_{x+dx} + h\,P\,(T - T_\infty)\,dx$. Rearranging this equation and using the definition of the derivative yields the following differential equation for temperature, $\frac{d}{dx}\!\left(A_c \frac{dT}{dx}\right) - \frac{hP}{k}(T - T_\infty) = 0$; the derivative on the left can be expanded to the most general form of the fin equation, $A_c \frac{d^2T}{dx^2} + \frac{dA_c}{dx}\frac{dT}{dx} - \frac{hP}{k}(T - T_\infty) = 0$. The cross-sectional area, perimeter, and temperature can all be functions of x. Uniform cross-sectional area If the
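For the uniform cross-section case begun above, with constant $A_c$ and $P$, the general equation reduces to a constant-coefficient form; the adiabatic-tip solution below is the usual textbook result, included as a worked continuation rather than a quotation of the article:

$$
\theta(x) = T(x) - T_\infty,\qquad m^2 = \frac{hP}{kA_c},\qquad \frac{d^2\theta}{dx^2} - m^2\theta = 0,
$$
$$
\frac{\theta(x)}{\theta_b} = \frac{\cosh\!\big(m(L-x)\big)}{\cosh(mL)},\qquad q_{\text{fin}} = \sqrt{hPkA_c}\;\theta_b\tanh(mL),
$$

where $\theta_b$ is the excess temperature at the fin base and $L$ is the fin length.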
https://en.wikipedia.org/wiki/Food%20microbiology
Food microbiology is the study of the microorganisms that inhabit, create, or contaminate food. This includes the study of microorganisms causing food spoilage; pathogens that may cause disease (especially if food is improperly cooked or stored); microbes used to produce fermented foods such as cheese, yogurt, bread, beer, and wine; and microbes with other useful roles, such as producing probiotics. Subgroups of bacteria that affect food In the study of bacteria in food, important groups have been subdivided based on certain characteristics. These groupings are not of taxonomic significance: Lactic acid bacteria are bacteria that use carbohydrates to produce lactic acid. The main genera are Lactococcus, Leuconostoc, Pediococcus, Lactobacillus and Streptococcus thermophilus. Acetic acid bacteria like Acetobacter aceti produce acetic acid. Bacteria such as Propionibacterium freudenreichii that produce propionic acid are used to ferment dairy products. Some Clostridium spp., such as Clostridium butyricum, produce butyric acid. Proteolytic bacteria hydrolyze proteins by producing extracellular proteinases. This group includes bacteria species from the Micrococcus, Staphylococcus, Bacillus, Clostridium, Pseudomonas, Alteromonas, Flavobacterium and Alcaligenes genera, and, to a more limited extent, from Enterobacteriaceae and Brevibacterium. Lipolytic bacteria hydrolyze triglycerides by production of extracellular lipases. This group includes bacteria species from the Micrococcus, Staphylococcus, Pseudomonas, Alteromonas and Flavobacterium genera. Saccharolytic bacteria hydrolyze complex carbohydrates. This group includes bacteria species from the Bacillus, Clostridium, Aeromonas, Pseudomonas and Enterobacter genera. Thermophilic bacteria are able to thrive in high temperatures above 50 °C, including the genera Bacillus, Clostridium, Pediococcus, Streptococcus, and Lactobacillus. Thermoduric bacteria, including spores, can survive pasteurization. Bacteria that grow in cold temperature
https://en.wikipedia.org/wiki/Suillus%20granulatus
Suillus granulatus is a pored mushroom of the genus Suillus in the family Suillaceae. It is similar to the related S. luteus, but can be distinguished by its ringless stalk. Like S. luteus, it is an edible mushroom that often grows in a symbiosis (mycorrhiza) with pine. It has been commonly known as the weeping bolete, or the granulated bolete. North American collections previously identified as this species have now been confirmed to be the rediscovered Suillus weaverae. Taxonomy Suillus granulatus was first described by Carl Linnaeus in 1753 as a species of Boletus. It was given its current name by French naturalist Henri François Anne de Roussel when he transferred it to Suillus in 1796. Suillus is an ancient term for fungi, and is derived from the word "swine". Granulatus means "grainy" and refers to the glandular dots on the upper part of the stem. However, in some specimens the glandular dots may be inconspicuous and not darken with age; thus the name S. lactifluus, "oozing milk", was formerly applied to this form, as it is not notably characterized by glandular dots. Description The orange-brown to brown-yellow cap is viscid (sticky) when wet, and shiny when dry, and is usually 4 to 12 cm in diameter. The stem is pale yellow, of uniform thickness, with tiny brownish granules at the apex, and about 4–8 cm tall and 1–2 cm wide. It is without a ring. The tubes and pores are small, pale yellow, and exude pale milky droplets when young. The flesh is also pale yellow. Suillus granulatus is often confused with Suillus luteus, which is another common and widely distributed species occurring in the same habitat. S. luteus has a conspicuous partial veil and ring, and lacks the milky droplets on the pores. Also similar is Suillus brevipes, which has a short stipe in relation to the cap, and which does not ooze droplets from the pore surface. Suillus pungens is also similar. Bioleaching Bioleaching is the industrial process of using living organisms to extract metals from
https://en.wikipedia.org/wiki/Dawn%20simulation
Dawn simulation is a technique that involves timing lights, often called wake up lights, sunrise alarm clock or natural light alarm clocks, in the bedroom to come on gradually, over a period of 30 minutes to 2 hours, before awakening to simulate dawn. History The concept of dawn simulation was first patented in 1890 as "mechanical sunrise". Modern electronic units were patented in 1973. Variations and improvements seem to get patented every few years. Clinical trials were conducted by David Avery, MD, in the 1980s at Columbia University following a long line of basic laboratory research that showed animals' circadian rhythms to be exquisitely sensitive to the dim, gradually rising dawn signal at the end of the night. Clinical use There are two types of dawn that have been used effectively in a clinical setting: a naturalistic dawn mimicking a springtime sunrise (but used in mid-winter when it is still dark outside), and a sigmoidal-shaped dawn (30 minutes to 2 hours). When used successfully, patients are able to sleep through the dawn and wake up easily at the simulated sunrise, after which the day's treatment is over. The theory behind dawn simulation is based on the fact that early morning light signals are much more effective at advancing the biological clock than are light signals given at other times of day (see Phase response curve). Comparison with bright light therapy Dawn simulation generally uses light sources that range in illuminance from 100 to 300 lux, while bright light boxes are usually in the 10,000-lux range. Approximately 19% of patients discontinue post-awakening bright light therapy due to inconvenience. Because the entire treatment is complete before awakening, dawn simulation may be a more convenient alternative to post-awakening bright light therapy. In terms of efficacy, some studies have shown dawn simulation to be more effective than standard bright light therapy while others have shown no difference or shown that bright light therapy i
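A toy sketch of the sigmoidal ramp idea described above; the target illuminance, ramp length, and steepness are hypothetical values chosen for illustration, not parameters taken from the article or from any clinical protocol.

```python
import math

def dawn_illuminance(t_min, ramp_min=90.0, max_lux=250.0, steepness=8.0):
    # Logistic (sigmoidal) ramp from near 0 lux up to max_lux over ramp_min
    # minutes, centred on the midpoint of the ramp.
    x = steepness * (t_min / ramp_min - 0.5)
    return max_lux / (1.0 + math.exp(-x))

for t in (0, 30, 45, 60, 90):
    print(f"t = {t:2d} min -> {dawn_illuminance(t):6.1f} lux")
```

A dawn simulator would drive a dimmable lamp along a curve like this, timed so that the ramp finishes at the desired wake-up time.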
https://en.wikipedia.org/wiki/Lighting%20ratio
Lighting ratio in photography refers to the comparison of key light (the main source of light from which shadows fall) to the total fill light (the light that fills in the shadow areas). The higher the lighting ratio, the higher the contrast of the image; the lower the ratio, the lower the contrast. The lighting ratio is the ratio of the light levels on the brightest-lit to the least-lit parts of the subject; the brightest-lit areas are lit by both key (K) and fill (F). The American Society of Cinematographers (ASC) defines lighting ratio as (key+fill):fill, or (key+Σfill):Σfill, where Σfill is the sum of all fill lights. Light can be measured in footcandles. A key light of 200 footcandles and fill light of 100 footcandles have a 3:1 ratio (a ratio of three to one) — (200 + 100):100. A key light of 800 footcandles and a fill light of 200 footcandles has a ratio of 5:1 according to the lighting ratio formula — (800 + 200):200 = 1000 / 200 = 5 : 1. The ratio can be determined in relation to F stops since each increase in f-stop is equal to double the amount of light: 2 to the power of the difference in f stops is equal to the first factor in the ratio. For example, a difference in two f-stops between key and fill is 2 squared, or 4:1 ratio. A difference in 3 stops is 2 cubed, or an 8:1 ratio. No difference is equal to 2 to the power of 0, for a 1:1 ratio. See also High-key lighting Low-key lighting Silhouette
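A small sketch of the two calculations that appear above: the ASC (key+fill):fill formula with light levels in footcandles, and the f-stop rule, where the first factor of the ratio is 2 raised to the stop difference. The function names are mine, not standard terminology.

```python
def asc_ratio(key_fc, fill_fc):
    # ASC definition: (key + fill) : fill, with levels in footcandles
    return (key_fc + fill_fc) / fill_fc

def factor_from_stops(stop_difference):
    # Each f-stop doubles the light, so the first factor is 2 ** stops
    return 2 ** stop_difference

print(asc_ratio(200, 100))    # 3.0 -> 3:1
print(asc_ratio(800, 200))    # 5.0 -> 5:1
print(factor_from_stops(2))   # 4 -> 4:1
print(factor_from_stops(3))   # 8 -> 8:1
print(factor_from_stops(0))   # 1 -> 1:1
```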
https://en.wikipedia.org/wiki/Ventral%20ramus%20of%20spinal%20nerve
The ventral ramus (plural: rami; Latin for 'branch') is the anterior division of a spinal nerve. The ventral rami supply the antero-lateral parts of the trunk and the limbs. They are generally larger than the dorsal rami. Shortly after a spinal nerve exits the intervertebral foramen, it branches into the dorsal ramus, the ventral ramus, and the ramus communicans. Each of these three structures carries both sensory and motor information. Each spinal nerve carries both sensory and motor information, via efferent and afferent nerve fibers—ultimately from the motor cortex in the frontal lobe and to the somatosensory cortex in the parietal lobe—but also through the phenomenon of reflex. Spinal nerves are referred to as "mixed nerves". In the thoracic region they remain distinct from each other and each innervates a narrow strip of muscle and skin along the sides, chest, ribs, and abdominal wall. These rami are called the intercostal nerves. In regions other than the thoracic, ventral rami converge with each other to form networks of nerves called nerve plexuses. Within each plexus, fibers from the various ventral rami branch and become redistributed so that each nerve exiting the plexus has fibers from several different spinal nerves. One advantage of having plexuses is that damage to a single spinal nerve will not completely paralyze a limb. There are four main plexuses formed by the ventral rami: the cervical plexus contains ventral rami from spinal nerves C1–C4. Branches of the cervical plexus, which include the phrenic nerve, innervate muscles of the neck, the diaphragm, and the skin of the neck and upper chest. The brachial plexus contains ventral rami from spinal nerves C5–T1. This plexus innervates the pectoral girdle and upper limb. The lumbar plexus contains ventral rami from spinal nerves L1–L4. The sacral plexus contains ventral rami from spinal nerves L4–S4. The lumbar and sacral plexuses innervate the pelvic girdle and lower limbs. Ventral rami, including the sinuvert
https://en.wikipedia.org/wiki/Amphiphysin
Amphiphysin is a protein that in humans is encoded by the AMPH gene. Function This gene encodes a protein associated with the cytoplasmic surface of synaptic vesicles. A subset of patients with stiff person syndrome who were also affected by breast cancer are positive for autoantibodies against this protein. Alternate splicing of this gene results in two transcript variants encoding different isoforms. Additional splice variants have been described, but their full length sequences have not been determined. Amphiphysin is a brain-enriched protein with an N-terminal lipid interaction, dimerisation and membrane bending BAR domain, a middle clathrin and adaptor binding domain and a C-terminal SH3 domain. In the brain, its primary function is thought to be the recruitment of dynamin to sites of clathrin-mediated endocytosis. There are 2 mammalian amphiphysins with similar overall structure. A ubiquitous splice form of amphiphysin-2 (BIN1) that does not contain clathrin or adaptor interactions is highly expressed in muscle tissue and is involved in the formation and stabilization of the T-tubule network. In other tissues amphiphysin is likely involved in other membrane bending and curvature stabilization events. Interactions Amphiphysin has been shown to interact with DNM1, Phospholipase D1, CDK5R1, PLD2, CABIN1 and SH3GL2. See also AP180 Epsin
https://en.wikipedia.org/wiki/Sequence%20%28biology%29
A sequence in biology is the one-dimensional ordering of monomers, covalently linked within a biopolymer; it is also referred to as the primary structure of a biological macromolecule. While it can refer to many different molecules, the term sequence is most often used to refer to a DNA sequence. See also Protein sequence DNA sequence Genotype Self-incompatibility in plants List of geneticists Human Genome Project Dot plot (bioinformatics) Multiplex Ligation-dependent Probe Amplification Sequence analysis Molecular biology
https://en.wikipedia.org/wiki/Difference%20density%20map
In X-ray crystallography, a difference density map shows the spatial distribution of the difference between the measured electron density of the crystal and the electron density explained by the current model. The coefficients of such a map are derived from the gradient of the likelihood function of the observed structure factors on the basis of the current model. Display Conventionally, difference maps are displayed as isosurfaces, with positive density—electron density where there is nothing in the model, usually corresponding to some constituent of the crystal that has not been modelled, for example a ligand or a crystallisation adjuvant—in green, and negative density—parts of the model not backed up by electron density, indicating either that an atom has been disordered by radiation damage or that it is modelled in the wrong place—in red. Calculation Difference density maps are usually calculated using Fourier coefficients which are the differences between the observed structure factor amplitudes from the X-ray diffraction experiment and the calculated structure factor amplitudes from the current model, using the phase from the model for both terms (since no phases are available for the observed data). The two sets of structure factors must be on the same scale. It is now normal to also include weighting terms which take into account the estimated errors in the current model, giving coefficients of the form $m\,|F_{\mathrm{obs}}| - D\,|F_{\mathrm{calc}}|$, where m is a figure of merit which is an estimate of the cosine of the error in the phase, and D is a scale factor.
https://en.wikipedia.org/wiki/Write%20combining
Write combining (WC) is a computer bus technique in which data is combined and temporarily stored in a buffer, the write combine buffer (WCB), and released together later in burst mode, instead of being written immediately as single bits or small chunks. Technique Write combining cannot be used for general memory access (data or code regions) due to its weak ordering. Write-combining does not guarantee that the combination of writes and reads is done in the expected order. For example, a write/read/write combination to a specific address could lead to the write-combining order read/write/write, which can cause the first read to return a wrong value (since that read may rely on the preceding write). In order to avoid the read/write ordering problem described above, the write buffer can be treated as a fully associative cache and added into the memory hierarchy of the device in which it is implemented. This added complexity slows down the memory hierarchy, so the technique is often used only for memory that does not need strong (always correct) ordering, such as the frame buffers of video cards. See also Framebuffer (FB), and when linear: LFB Memory type range registers (MTRR) – the older x86 cache control mechanism Page attribute table (PAT) – x86 page table extension that allows fine-grained cache control, including write combining Page table Uncacheable speculative write combining (USWC) Video Graphics Array (VGA), and Banked (BVGA) Frame Buffer
https://en.wikipedia.org/wiki/Sonido%2013
Sonido 13 is a theory of microtonal music created by the Mexican composer Julián Carrillo around 1900 and described by Nicolas Slonimsky as "the field of sounds smaller than the twelve semitones of the tempered scale." Carrillo developed this theory in 1895 while he was experimenting with his violin. Though he became internationally recognized for his system of notation, it was never widely applied. His first composition in demonstration of his theories was Preludio a Colón (1922). The Western musical convention up to this day divides an octave into twelve different pitches that can be arranged or tempered in different intervals. Carrillo termed his new system Sonido 13, which is Spanish for "Thirteenth Sound" or Sound 13, because it enabled musicians to go beyond the twelve notes that comprise an octave in conventional Western music. Julián Carrillo wrote: "The thirteenth sound will be the beginning of the end and the point of departure of a new musical generation which will transform everything." History Early life Carrillo attended the National Conservatory of Music in Mexico City, where he studied violin, composition, physics, acoustics, and mathematics. The laws that define music intervals instantly amazed Carrillo, which led him to conduct experiments on his violin. He began analyzing the way the pitch of a string changed depending on the finger position, concluding that there had to be a way to split the string into an infinite number of parts. One day, Carrillo was able to divide the fourth string of his violin with a razor into 16 parts in the interval between the notes G and A, thus creating 16 unique sounds. This event was the beginning of Sonido 13 that led Carrillo to study more about physics and the nature of intervals. Professional life Carrillo was, "closely associated with the Díaz regime," and preferred neo-classicism to nationalism.
https://en.wikipedia.org/wiki/Burst%20mode%20%28computing%29
Burst mode is a generic electronics term referring to any situation in which a device is transmitting data repeatedly without going through all the steps required to transmit each piece of data in a separate transaction. Advantages The main advantage of burst mode over single mode is that the burst mode typically increases the throughput of data transfer. Any bus transaction is typically handled by an arbiter, which decides when it should change the granted master and slaves. In case of burst mode, it is usually more efficient if you allow a master to complete a known length transfer sequence. The total delay in a data transaction can be typically written as a sum of initial access latency plus sequential access latency. Here the sequential latency is same in both single mode and burst mode, but the total initial latency is decreased in burst mode, since the initial delay (usually depends on FSM for the protocol) is caused only once in burst mode. Hence the total latency of the burst transfer is reduced, and hence the data transfer throughput is increased. It can also be used by slaves that can optimise their responses if they know in advance how many data transfers there will be. The typical example here is a DRAM which has a high initial access latency, but sequential accesses after that can be performed with fewer wait states. Beats in burst transfer A beat in a burst transfer is the number of write (or read) transfers from master to slave, that takes place continuously in a transaction. In a burst transfer, the address for write or read transfer is just an incremental value of previous address. Hence in a 4-beat incremental burst transfer (write or read), if the starting address is 'A', then the consecutive addresses will be 'A+m', 'A+2*m', 'A+3*m'. Similarly, in a 8-beat incremental burst transfer (write or read), the addresses will be 'A', 'A+n', 'A+2*n', 'A+3*n', 'A+4*n', 'A+5*n', 'A+6*n', 'A+7*n'. Example Q:- A certain SoC master uses a burst mo
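A small sketch of the incremental burst addressing described above; the helper function and the numeric example are illustrative and not tied to any particular bus specification.

```python
def incremental_burst_addresses(start, beats, step):
    # Each beat's address is the previous address plus a fixed increment,
    # so beat i targets start + i * step.
    return [start + i * step for i in range(beats)]

# A 4-beat incrementing burst starting at a hypothetical address 0x1000
# with a 4-byte (word) increment:
print([hex(a) for a in incremental_burst_addresses(0x1000, beats=4, step=4)])
# ['0x1000', '0x1004', '0x1008', '0x100c']
```

Because only the first address and the burst length need to be communicated, the arbitration and address-phase overhead is paid once per burst rather than once per beat, which is the throughput advantage discussed above.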
https://en.wikipedia.org/wiki/Myc
Myc is a family of regulator genes and proto-oncogenes that code for transcription factors. The Myc family consists of three related human genes: c-myc (MYC), l-myc (MYCL), and n-myc (MYCN). c-myc (also sometimes referred to as MYC) was the first gene to be discovered in this family, due to homology with the viral gene v-myc. In cancer, c-myc is often constitutively (persistently) expressed. This leads to the increased expression of many genes, some of which are involved in cell proliferation, contributing to the formation of cancer. A common human translocation involving c-myc is critical to the development of most cases of Burkitt lymphoma. Constitutive upregulation of Myc genes has also been observed in carcinoma of the cervix, colon, breast, lung and stomach. Myc is thus viewed as a promising target for anti-cancer drugs. Unfortunately, Myc possesses several features that have rendered it difficult to drug to date, such that any anti-cancer drugs aimed at inhibiting Myc may continue to require perturbing the protein indirectly, such as by targeting the mRNA for the protein rather than via a small molecule that targets the protein itself. c-Myc also plays an important role in stem cell biology and was one of the original Yamanaka factors used to reprogram somatic cells into induced pluripotent stem cells. In the human genome, c-myc is located on chromosome 8 and is believed to regulate expression of 15% of all genes through binding on enhancer box sequences (E-boxes). In addition to its role as a classical transcription factor, N-myc may recruit histone acetyltransferases (HATs). This allows it to regulate global chromatin structure via histone acetylation. Discovery The Myc family was first established after discovery of homology between an oncogene carried by the avian myelocytomatosis virus (v-myc) and a human gene over-expressed in various cancers, cellular Myc (c-Myc). Later, discovery of further homologous genes in humans led to the addition of
https://en.wikipedia.org/wiki/Avidity
In biochemistry, avidity refers to the accumulated strength of multiple affinities of individual non-covalent binding interactions, such as between a protein receptor and its ligand, and is commonly referred to as functional affinity. Avidity differs from affinity, which describes the strength of a single interaction. However, because individual binding events increase the likelihood of occurrence of other interactions (i.e., increase the local concentration of each binding partner in proximity to the binding site), avidity should not be thought of as the mere sum of its constituent affinities but as the combined effect of all affinities participating in the biomolecular interaction. A particular important aspect relates to the phenomenon of 'avidity entropy'. Biomolecules often form heterogenous complexes or homogeneous oligomers and multimers or polymers. If clustered proteins form an organized matrix, such as the clathrin-coat, the interaction is described as a matricity. Antibody-antigen interaction Avidity is commonly applied to antibody interactions in which multiple antigen-binding sites simultaneously interact with the target antigenic epitopes, often in multimerized structures. Individually, each binding interaction may be readily broken; however, when many binding interactions are present at the same time, transient unbinding of a single site does not allow the molecule to diffuse away, and binding of that weak interaction is likely to be restored. Each antibody has at least two antigen-binding sites, therefore antibodies are bivalent to multivalent. Avidity (functional affinity) is the accumulated strength of multiple affinities. For example, IgM is said to have low affinity but high avidity because it has 10 weak binding sites for antigen as opposed to the 2 stronger binding sites of IgG, IgE and IgD with higher single binding affinities. Affinity Binding affinity is a measure of dynamic equilibrium of the ratio of on-rate (kon) and off-rate (koff) u
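For the affinity definition begun above, the standard relations between the kinetic rate constants and the equilibrium constants are as follows (textbook forms, assumed here rather than quoted from the text):

$$
K_A = \frac{k_{\mathrm{on}}}{k_{\mathrm{off}}},\qquad K_D = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}} = \frac{1}{K_A},
$$

so a slower off-rate (or faster on-rate) gives a smaller $K_D$ and hence a tighter single-site interaction; avidity then describes how strongly many such sites act together.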
https://en.wikipedia.org/wiki/Angle-resolved%20photoemission%20spectroscopy
Angle-resolved photoemission spectroscopy (ARPES) is an experimental technique used in condensed matter physics to probe the allowed energies and momenta of the electrons in a material, usually a crystalline solid. It is based on the photoelectric effect, in which an incoming photon of sufficient energy ejects an electron from the surface of a material. By directly measuring the kinetic energy and emission angle distributions of the emitted photoelectrons, the technique can map the electronic band structure and Fermi surfaces. ARPES is best suited for the study of one- or two-dimensional materials. It has been used by physicists to investigate high-temperature superconductors, graphene, topological materials, quantum well states, and materials exhibiting charge density waves. ARPES systems consist of a monochromatic light source to deliver a narrow beam of photons, a sample holder connected to a manipulator used to position the sample of a material, and an electron spectrometer. The equipment is contained within an ultra-high vacuum (UHV) environment, which protects the sample and prevents scattering of the emitted electrons. After being dispersed along two perpendicular directions with respect to kinetic energy and emission angle, the electrons are directed to a detector and counted to provide ARPES spectra—slices of the band structure along one momentum direction. Some ARPES instruments can extract a portion of the electrons alongside the detector to measure the polarization of their spin. Principle Electrons in crystalline solids can only populate states of certain energies and momenta, others being forbidden by quantum mechanics. They form a continuum of states known as the band structure of the solid. The band structure determines if a material is an insulator, a semiconductor, or a metal, how it conducts electricity and in which directions it conducts best, or how it behaves in a magnetic field. Angle-resolved photoemission spectroscopy determines the band
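The mapping from measured quantities to the band structure rests on two standard relations, energy conservation in the photoemission process and conservation of the momentum component parallel to the surface; these are the usual textbook forms, assumed here:

$$
E_{\mathrm{kin}} = h\nu - \phi - |E_B|,\qquad \hbar k_{\parallel} = \sqrt{2 m_e E_{\mathrm{kin}}}\,\sin\vartheta,
$$

where $h\nu$ is the photon energy, $\phi$ the work function, $E_B$ the binding energy, $m_e$ the electron mass, and $\vartheta$ the emission angle measured from the surface normal.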
https://en.wikipedia.org/wiki/Centrosymmetric%20matrix
In mathematics, especially in linear algebra and matrix theory, a centrosymmetric matrix is a matrix which is symmetric about its center. More precisely, an n×n matrix $A = [A_{i,j}]$ is centrosymmetric when its entries satisfy $A_{i,j} = A_{n-i+1,\,n-j+1}$ for i, j ∊ {1, ..., n}. If J denotes the n×n exchange matrix with 1 on the antidiagonal and 0 elsewhere (that is, $J_{i,\,n+1-i} = 1$ and $J_{i,j} = 0$ if $j \neq n+1-i$), then a matrix A is centrosymmetric if and only if AJ = JA. Examples All 2×2 centrosymmetric matrices have the form $\begin{pmatrix} a & b \\ b & a \end{pmatrix}$. All 3×3 centrosymmetric matrices have the form $\begin{pmatrix} a & b & c \\ d & e & d \\ c & b & a \end{pmatrix}$. Symmetric Toeplitz matrices are centrosymmetric. Algebraic structure and properties If A and B are centrosymmetric matrices over a field F, then so are A + B and cA for any c in F. Moreover, the matrix product AB is centrosymmetric, since JAB = AJB = ABJ. Since the identity matrix is also centrosymmetric, it follows that the set of n×n centrosymmetric matrices over F is a subalgebra of the associative algebra of all n×n matrices. If A is a centrosymmetric matrix with an m-dimensional eigenbasis, then its m eigenvectors can each be chosen so that they satisfy either x = Jx or x = −Jx, where J is the exchange matrix. If A is a centrosymmetric matrix with distinct eigenvalues, then the matrices that commute with A must be centrosymmetric. The maximum number of unique elements in an m × m centrosymmetric matrix is $\lceil m^2/2 \rceil$. Related structures An n×n matrix A is said to be skew-centrosymmetric if its entries satisfy $A_{i,j} = -A_{n-i+1,\,n-j+1}$ for i, j ∊ {1, ..., n}. Equivalently, A is skew-centrosymmetric if AJ = −JA, where J is the exchange matrix defined above. The centrosymmetric relation AJ = JA lends itself to a natural generalization, where J is replaced with an involutory matrix K (i.e., $K^2 = I$) or, more generally, a matrix K satisfying $K^m = I$ for an integer m > 1. The inverse problem for the commutation relation of identifying all involutory K that commute with a fixed matrix A has also been studied. Symmetri
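A quick numerical check of the defining relation AJ = JA, as a sketch assuming NumPy:

```python
import numpy as np

def is_centrosymmetric(A, tol=1e-12):
    # A is centrosymmetric if and only if it commutes with the exchange matrix J.
    n = A.shape[0]
    J = np.fliplr(np.eye(n))              # 1s on the antidiagonal, 0 elsewhere
    return np.allclose(A @ J, J @ A, atol=tol)

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 4.0],
              [3.0, 2.0, 1.0]])            # matches the 3x3 form above
print(is_centrosymmetric(A))               # True
print(is_centrosymmetric(np.arange(9.0).reshape(3, 3)))  # False
```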
https://en.wikipedia.org/wiki/Virtual%20circuit%20multiplexing
Virtual circuit multiplexing or VC-MUX is one of the two mechanisms (the other being LLC encapsulation) for identifying the protocol carried in ATM Adaptation Layer 5 (AAL5) frames specified by Multiprotocol Encapsulation over ATM. With virtual circuit multiplexing, the communicating hosts agree to send only packets belonging to a single high-level protocol on any given ATM virtual circuit, and multiple virtual circuits may need to be set up. It has the advantage of not requiring additional protocol-identifying information in a packet, which minimizes the overhead. For example, if the hosts agree to transfer IP, a sender can pass each datagram directly to AAL5 for transfer; nothing needs to be sent besides the datagram and the AAL5 trailer. This reduction in overhead tends to further improve efficiency (e.g., an IPv4 datagram containing a TCP ACK-only packet with neither IP nor TCP options exactly fits into a single ATM cell). The chief disadvantage of such a scheme lies in the duplication of virtual circuits: a host must create a separate virtual circuit for each high-level protocol if more than one protocol is used. Because most carriers charge for each virtual circuit, using multiple circuits adds unnecessary cost, so customers try to avoid it. It is commonly used in conjunction with PPPoA, which is used in various xDSL implementations. Internet Standards
https://en.wikipedia.org/wiki/PS210%20experiment
The PS210 experiment was the first experiment that led to the observation of antihydrogen atoms produced at the Low Energy Antiproton Ring (LEAR) at CERN in 1995. The antihydrogen atoms were produced in flight and moved at nearly the speed of light. They made unique electrical signals in detectors that destroyed them almost immediately after they formed by matter–antimatter annihilation. Eleven signals were observed, of which two were attributed to other processes. In 1997 similar observations were announced at Fermilab from the E862 experiment. The first measurement demonstrated the existence of antihydrogen, the second (with improved setup and intensity monitoring) measured the production rate. Both experiments, one at each of the only two facilities with suitable antiprotons, were stimulated by calculations which suggested the possibility of making very fast antihydrogen within existing circular accelerators.
https://en.wikipedia.org/wiki/Universal%20testing%20machine
A universal testing machine (UTM), also known as a universal tester, materials testing machine or materials test frame, is used to test the tensile strength and compressive strength of materials. An earlier name for a tensile testing machine is a tensometer. The "universal" part of the name reflects that it can perform many standard tensile and compression tests on materials, components, and structures (in other words, that it is versatile). Components Several variations are in use. Common components include: Load frame - Usually consisting of two strong supports for the machine. Some small machines have a single support. Load cell - A force transducer or other means of measuring the load is required. Periodic calibration is usually required by governing regulations or quality system. Cross head - A movable cross head (crosshead) is controlled to move up or down. Usually this is at a constant speed: sometimes called a constant rate of extension (CRE) machine. Some machines can program the crosshead speed or conduct cyclical testing, testing at constant force, testing at constant deformation, etc. Electromechanical, servo-hydraulic, linear drive, and resonance drive are used. Means of measuring extension or deformation - Many tests require a measure of the response of the test specimen to the movement of the cross head. Extensometers are sometimes used. Output device - A means of providing the test result is needed. Some older machines have dial or digital displays and chart recorders. Many newer machines have a computer interface for analysis and printing. Conditioning - Many tests require controlled conditioning (temperature, humidity, pressure, etc.). The machine can be in a controlled room or a special environmental chamber can be placed around the test specimen for the test. Test fixtures, specimen holding jaws, and related sample making equipment are called for in many test methods. Use The set-up and usage are detailed in a test method,
https://en.wikipedia.org/wiki/Micropower
Micropower describes the use of very small electric generators and prime movers or devices to convert heat or motion to electricity, for use close to the generator. The generator is typically integrated with microelectronic devices and produces "several watts of power or less." These devices offer the promise of a power source for portable electronic devices which is lighter weight and has a longer operating time than batteries. Microturbine technology The components of any turbine engine — the gas compressor, the combustion chamber, and the turbine rotor — are fabricated from etched silicon, much like integrated circuits. The technology holds the promise of ten times the operating time of a battery of the same weight as the micropower unit, and similar efficiency to large utility gas turbines. Researchers at Massachusetts Institute of Technology have thus far succeeded in fabricating the parts for such a micro turbine out of six etched and stacked silicon wafers, and are working toward combining them into a functioning engine about the size of a U.S. quarter coin. Researchers at Georgia Tech have built a micro generator 10 mm wide, which spins a magnet above an array of coils fabricated on a silicon chip. The device spins at 100,000 revolutions per minute, and produces 1.1 watts of electrical power, sufficient to operate a cell phone. Their goal is to produce 20 to 50 watts, sufficient to power a laptop computer. Scientists at Lehigh University are developing a hydrogen generator on a silicon chip that can convert methanol, diesel, or gasoline into fuel for a microengine or a miniature fuel cell. Professor Sanjeev Mukerjee of Northeastern University's chemistry department is developing fuel cells for the military that will burn hydrogen to power portable electronic equipment, such as night vision goggles, computers, and communication equipment. In his system, a cartridge of methanol would be used to produce hydrogen to run a small fuel cell for up to 5,000 ho
https://en.wikipedia.org/wiki/Steven%20Woods
Steven Gregory Woods (born June 16, 1965) is a Canadian entrepreneur. He is best known for co-founding Quack.com, the first popular Voice portal platform, in 1998. Woods became the head of engineering for Google Canada where he was until 2021, when he joined Canadian Venture capital firm iNovia Capital as partner and CTO, following in the footsteps of Patrick Pichette, Google's CFO who also joined iNovia after leaving Google. Career Born in Melfort, Saskatchewan, Woods holds a Ph.D and M.Math from the David R. Cheriton School of Computer Science at the University of Waterloo in Canada and a B.Sc. from the University of Saskatchewan. He was the first Ph.D student of Professor Qiang Yang. Woods' Ph.D was published in 1997 as a book co-written with Alex Quilici and Qiang Yang entitled "Constraint-Based Design Recovery for Software Reengineering: Theory and Experiments" He then worked for Carnegie Mellon's Software Engineering Institute on product line development and practical software architectural reconstruction and analysis. In 1998 Woods co-founded Pittsburgh-based voice-portal infrastructure company Quackware with Jeromy Carriere and Alex Quilici. Quackware became Quack.com in 1999 and moved headquarters to Silicon Valley. It was funded September 1, 1999, and acquired August 31, 2000 by America Online. Quack.com claimed to be the first Voice Portal, and held patents around interactive voice applications, including "System and method for voice access to internet-based information". Woods was vice president of voice services for America Online and Netscape after the acquisition of Quack.com in September, 2000. He founded NeoEdge in 2002 under the name Kinitos along with former America Online, Netscape, and Quack.com colleagues including Jeromy Carriere. He served as an officer and on the board of directors from 2001 through 2010 in Palo Alto, California. Woods was the engineering director for Google in Canada from 2008 to 2021, where he joined Canadian Ventu
https://en.wikipedia.org/wiki/Mathland
MathLand was one of several elementary mathematics curricula that were designed around the 1989 NCTM standards. It was developed and published by Creative Publications and was initially adopted by the U.S. state of California and schools run by the US Department of Defense by the mid 1990s. Unlike curricula such as Investigations in Numbers, Data, and Space, by 2007 Mathland was no longer offered by the publisher, and has since been dropped by many early adopters. Its demise may have been, at least in part, a result of intense scrutiny by critics (see below). Adoption Mathland was among the math curricula rated as "promising" by an Education Department panel, although subsequently 200 mathematicians and scientists, including four Nobel Prize recipients and two winners of the Fields Medal, published a letter in the Washington Post deploring the findings of that panel. MathLand was adopted in many California school districts as its material most closely fit the legal mandate of the 1992 California Framework. That framework has since been discredited and abandoned as misguided and replaced by a newer standard based on traditional mathematics. It bears noting that the process by which the framework was replaced itself came under serious scrutiny. Concept Mathland focuses on "attention to conceptual understanding, communication, reasoning and problem solving." Children meet in small groups and invent their own ways to add, subtract, multiply and divide, which spares young learners from "teacher-imposed rules." In the spirit of not chaining instruction to fixed content, MathLand does away with textbooks. A textbook as well as other practice books were available to reinforce concepts taught in the lesson. Standard Arithmetic MathLand does not teach standard arithmetic algorithms, including carrying and borrowing. Such methods familiar to adults are absent from the curriculum, and so would need to be supplemented if desired. The standard method for multi-digit multipli
https://en.wikipedia.org/wiki/Chocolate%20ice%20cream
Chocolate ice cream is ice cream with natural or artificial chocolate flavoring. One of the oldest flavors of ice creams, it is also one of the world's most popular. While most often sold alone, it is also a base for many other flavors. History The earliest frozen chocolate recipes were published in Naples, Italy, in 1693 in Antonio Latini's The Modern Steward. Chocolate was one of the first ice cream flavors, created before vanilla, as common drinks such as hot chocolate, coffee, and tea were the first food items to be turned into frozen desserts. Hot chocolate had become a popular drink in seventeenth-century Europe, alongside coffee and tea, and all three beverages were used to make frozen and unfrozen desserts. Latini produced two recipes for ices based on the drink, both of which contained only chocolate and sugar. In 1775, Italian doctor Filippo Baldini wrote a treatise entitled , in which he recommended chocolate ice cream as a remedy for various medical conditions, including gout and scurvy. Chocolate ice cream became popular in the United States in the late nineteenth century. The first advertisement for ice cream in America started in New York on May 12, 1777, when Philip Lenzi announced that ice cream was officially available "almost every day". Until 1800, ice cream was a rare and exotic dessert enjoyed mostly by the elite. Around 1800 insulated ice houses were invented and manufacturing ice cream soon became an industry in America. Production Chocolate ice cream is generally made by blending cocoa powder, and the eggs, cream, vanilla, and sugar used to make vanilla ice cream. Sometimes chocolate liquor is used in addition to cocoa powder or used exclusively to create the chocolate flavor. Cocoa powder gives chocolate ice cream its brown color, and it is uncommon to add other colorings. The Codex Alimentarius, which provides an international set of standards for food, states that the flavor in chocolate ice cream must come from nonfat cocoa solids
https://en.wikipedia.org/wiki/SunPCi
SunPCi is a series of single-board computers with a connector that effectively allows a PC motherboard to be fitted in Sun Microsystems SPARC-based workstations based on the PCI architecture, adding the capability for the workstation to act as an 'IBM PC compatible' computer. The SunPCi cards included an x86 processor, RAM, expansion ports, and an onboard graphics controller, allowing a complete Wintel operating environment on a Solaris system. The SunPCi software running on Solaris emulates the disk drives that contain the PC filesystem. The PC software running on the embedded hardware is displayed in an X window on the host desktop; there is also a connector on the edge of the board that can optionally be used to connect a PC monitor. History The product arose from a recurring problem: people working on a Unix workstation, which was typically not Intel-based, would be sent a file from a Microsoft Windows-based PC and be unable to handle it. Sun framed this as a problem of interoperability. By the year 2000, solutions to the problem such as software emulators were available, but their performance at the time was quite problematic. With Sun workstations adopting the PCI hardware bus standard, cards like the SunPCi became possible. These cards were the successor to the earlier SunPC cards that had been available for Sun SBus or VME systems. Prior to this a software-only application binary interface and DOS emulator called Wabi was used. SunPC was offered as a replacement software emulator that could be used to run more advanced applications, with higher performance, by adding an x86 hardware accelerator. In 1992 the SunPC Accelerator SX (16 MHz 486SX) or SunPC Accelerator DX (25 MHz 486DX) were available for SBus workstations, though the SunPC program emulates the PC memory with or without the accelerator present. An accelerator card is needed for software that requires 80386 or 80486 hardware, such as Windows 3.11 running in enhanced mode or Windows 95; without this hardware SunPC runs in softw
https://en.wikipedia.org/wiki/Voice%20portal
Voice portals are the voice equivalent of web portals, giving access to information through spoken commands and voice responses. Ideally a voice portal could be an access point for any type of information, services, or transactions found on the Internet. Common uses include movie time listings and stock trading. In telecommunications circles, voice portals may be referred to as interactive voice response (IVR) systems, but this term also includes DTMF services. With the emergence of conversational assistants such as Apple's Siri, Amazon Alexa, Google Assistant, Microsoft Cortana, and Samsung's Bixby, Voice Portals can now be accessed through mobile devices and Far Field voice smart speakers such as the Amazon Echo and Google Home. Advantages Voice portals have no dependency on the access device; even low end mobile handsets can access the service. Voice portals talk to users in their local language and there is reduced customer learning required for using voice services compared to Internet/SMS based services. A complex search query that otherwise would take multiple widgets (drop down, check box, text box filling), can easily and effortlessly be formulated by anyone who can speak without needing to be familiar with any visual interfaces. For instance, one can say, "Find me an eyeliner, not too thick, dark brown, from Estee Lauder MAC, that's below thirty dollars" or "What is the closest liquor store from here and what time do they close?" Limitations Voice is the most natural communication medium, but the information that can be provided is limited compared to visual media. For example, most Internet users try a search term, scan results, then adjust the search term to eliminate irrelevant results. They may take two or three quick iterations to get a list that they are confident will contain what they are looking for. The equivalent approach is not practical when results are spoken, as it would take far too long. In this case, a multimodal interaction would
https://en.wikipedia.org/wiki/Helenalin
Helenalin, or (-)-4-Hydroxy-4a,8-dimethyl-3,3a,4a,7a,8,9,9a-octahydroazuleno[6,5-b]furan-2,5-dione, is a toxic sesquiterpene lactone which can be found in several plants such as Arnica montana and Arnica chamissonis Helenalin is responsible for the toxicity of the Arnica spp. Although toxic, helenalin possesses some in vitro anti-inflammatory and anti-neoplastic effects. Helenalin can inhibit certain enzymes, such as 5-lipoxygenase and leukotriene C4 synthase. For this reason the compound or its derivatives may have potential medical applications. Structure and reactivity Helenalin belongs to the group of sesquiterpene lactones which are characterised by a lactone ring. Beside this ring, the structure of helenalin has two reactive groups (α-methylene-γ-butyrolactone and a cyclopentenone group) that can undergo a Michael addition. The double bond in the carbonyl group can undergo a Michael addition with a thiol group, also called a sulfhydryl group. Therefore, helenalin can interact with proteins by forming covalent bonds to the thiol groups of cysteine-containing proteins/peptides, such as glutathione. This effect can disrupt the molecule's biological function. Addition reactions can occur because thiol groups are strong nucleophiles; a thiol has a lone pair of electrons. Chemical derivatives There are several derivatives of helenaline known within the same sesquiterpene lactone group; pseudoguaianolides. Most of these derivatives occur naturally, such as the compound dihydrohelenalin, but there are also some semi-synthetic derivatives known, such as 2β-(S-glutathionyl)-2,3-dihydrohelenalin. In general, most derivatives are more toxic than helenalin itself. Among these, derivatives with the shortest ester groups are most likely to contain a higher toxicity. Other derivatives include 11α,13-dihydrohelenalin acetate, 2,3-dehydrohelenalin and 6-O-isobutyrylhelenalin. The molecular conformation differs between helenalin and its derivatives, which affects the lipo
https://en.wikipedia.org/wiki/Hot-carrier%20injection
Hot carrier injection (HCI) is a phenomenon in solid-state electronic devices where an electron or a “hole” gains sufficient kinetic energy to overcome a potential barrier necessary to break an interface state. The term "hot" refers to the effective temperature used to model carrier density, not to the overall temperature of the device. Since the charge carriers can become trapped in the gate dielectric of a MOS transistor, the switching characteristics of the transistor can be permanently changed. Hot-carrier injection is one of the mechanisms that adversely affects the reliability of semiconductors of solid-state devices. Physics The term “hot carrier injection” usually refers to the effect in MOSFETs, where a carrier is injected from the conducting channel in the silicon substrate to the gate dielectric, which usually is made of silicon dioxide (SiO2). To become “hot” and enter the conduction band of SiO2, an electron must gain a kinetic energy of ~3.2 eV. For holes, the valence band offset in this case dictates they must have a kinetic energy of 4.6 eV. The term "hot electron" comes from the effective temperature term used when modelling carrier density (i.e., with a Fermi-Dirac function) and does not refer to the bulk temperature of the semiconductor (which can be physically cold, although the warmer it is, the higher the population of hot electrons it will contain all else being equal). The term “hot electron” was originally introduced to describe non-equilibrium electrons (or holes) in semiconductors. More broadly, the term describes electron distributions describable by the Fermi function, but with an elevated effective temperature. This greater energy affects the mobility of charge carriers and as a consequence affects how they travel through a semiconductor device. Hot electrons can tunnel out of the semiconductor material, instead of recombining with a hole or being conducted through the material to a collector. Consequent effects include increa
https://en.wikipedia.org/wiki/Oswald%20Veblen%20Prize%20in%20Geometry
The Oswald Veblen Prize in Geometry is an award granted by the American Mathematical Society for notable research in geometry or topology. It was funded in 1961 in memory of Oswald Veblen and first issued in 1964. The Veblen Prize is now worth US$5000, and is awarded every three years. The first seven prize winners were awarded for works in topology. James Harris Simons and William Thurston were the first ones to receive it for works in geometry (for some distinctions, see geometry and topology). As of 2020, there have been thirty-four prize recipients. List of recipients 1964 Christos Papakyriakopoulos 1964 Raoul Bott 1966 Stephen Smale 1966 Morton Brown and Barry Mazur 1971 Robion Kirby 1971 Dennis Sullivan 1976 William Thurston 1976 James Harris Simons 1981 Mikhail Gromov for: Manifolds of negative curvature. Journal of Differential Geometry 13 (1978), no. 2, 223–230. Almost flat manifolds. Journal of Differential Geometry 13 (1978), no. 2, 231–241. Curvature, diameter and Betti numbers. Comment. Math. Helv. 56 (1981), no. 2, 179–195. Groups of polynomial growth and expanding maps. Inst. Hautes Études Sci. Publ. Math. 53 (1981), 53–73. Volume and bounded cohomology. Inst. Hautes Études Sci. Publ. Math. 56 (1982), 5–99 1981 Shing-Tung Yau for: On the regularity of the solution of the n-dimensional Minkowski problem. Comm. Pure Appl. Math. 29 (1976), no. 5, 495–516. (with Shiu-Yuen Cheng) On the regularity of the Monge-Ampère equation . Comm. Pure Appl. Math. 30 (1977), no. 1, 41–68. (with Shiu-Yuen Cheng) Calabi's conjecture and some new results in algebraic geometry. Proc. Natl. Acad. Sci. U.S.A. 74 (1977), no. 5, 1798–1799. On the Ricci curvature of a compact Kähler manifold and the complex Monge-Ampère equation. I. Comm. Pure Appl. Math. 31 (1978), no. 3, 339–411. On the proof of the positive mass conjecture in general relativity. Comm. Math. Phys. 65 (1979), no. 1, 45–76. (with Richard Schoen) Topology of three-dimensional manifolds and the emb
https://en.wikipedia.org/wiki/Hyman%20Bass
Hyman Bass (; born October 5, 1932) is an American mathematician, known for work in algebra and in mathematics education. From 1959 to 1998 he was Professor in the Mathematics Department at Columbia University. He is currently the Samuel Eilenberg Distinguished University Professor of Mathematics and Professor of Mathematics Education at the University of Michigan. Life Born to a Jewish family in Houston, Texas, he earned his B.A. in 1955 from Princeton University and his Ph.D. in 1959 from the University of Chicago. His thesis, titled Global dimensions of rings, was written under the supervision of Irving Kaplansky. He has held visiting appointments at the Institute for Advanced Study in Princeton, New Jersey, Institut des Hautes Études Scientifiques and École Normale Supérieure (Paris), Tata Institute of Fundamental Research (Bombay), University of Cambridge, University of California, Berkeley, University of Rome, IMPA (Rio), National Autonomous University of Mexico, Mittag-Leffler Institute (Stockholm), and the University of Utah. He was president of the American Mathematical Society. Bass formerly chaired the Mathematical Sciences Education Board (1992–2000) at the National Academy of Sciences, and the Committee on Education of the American Mathematical Society. He was the President of ICMI from 1999 to 2006. Since 1996 he has been collaborating with Deborah Ball and her research group at the University of Michigan on the mathematical knowledge and resources entailed in the teaching of mathematics at the elementary level. He has worked to build bridges between diverse professional communities and stakeholders involved in mathematics education. Work His research interests have been in algebraic K-theory, commutative algebra and algebraic geometry, algebraic groups, geometric methods in group theory, and ζ functions on finite simple graphs. Awards and recognitions Bass was elected as a member of the National Academy of Sciences in 1982. In 1983, he was elec
https://en.wikipedia.org/wiki/Kelvin%27s%20circulation%20theorem
In fluid mechanics, Kelvin's circulation theorem (named after William Thomson, 1st Baron Kelvin, who published it in 1869) states: In a barotropic, ideal fluid with conservative body forces, the circulation around a closed curve (which encloses the same fluid elements) moving with the fluid remains constant with time. Stated mathematically: DΓ/Dt = 0, where Γ = ∮_C u · ds is the circulation around a material contour C. Stated more simply, this theorem says that if one observes a closed contour at one instant, and follows the contour over time (by following the motion of all of its fluid elements), the circulations over the two locations of this contour are equal. This theorem does not hold in cases with viscous stresses, nonconservative body forces (for example the Coriolis force) or non-barotropic pressure-density relations. Mathematical proof The circulation Γ around a closed material contour C is defined by: Γ = ∮_C u · ds, where u is the velocity vector, and ds is an element along the closed contour. The governing equation for an inviscid fluid with a conservative body force is Du/Dt = −(1/ρ)∇p − ∇Φ, where D/Dt is the convective derivative, ρ is the fluid density, p is the pressure and Φ is the potential for the body force. These are the Euler equations with a body force. The condition of barotropicity implies that the density is a function only of the pressure, i.e. ρ = ρ(p). Taking the convective derivative of the circulation gives DΓ/Dt = ∮_C (Du/Dt) · ds + ∮_C u · D(ds)/Dt. For the first term, we substitute from the governing equation, and then apply Stokes' theorem, thus: ∮_C (Du/Dt) · ds = −∫_A ∇ × ((1/ρ)∇p + ∇Φ) · n dA = −∫_A (∇(1/ρ) × ∇p) · n dA = 0. The final equality arises since ∇(1/ρ) × ∇p = 0 owing to barotropicity. We have also made use of the fact that the curl of any gradient is necessarily 0, or ∇ × ∇f = 0 for any function f. For the second term, we note that the evolution of the material line element is given by D(ds)/Dt = (ds · ∇)u. Hence ∮_C u · D(ds)/Dt = ∮_C u · (ds · ∇)u = (1/2) ∮_C ∇(|u|²) · ds = 0. The last equality is obtained by applying the gradient theorem around a closed contour. Since both terms are zero, we obtain the result DΓ/Dt = 0. Poincaré–Bjerknes circulation theorem A similar principle which conserves a quantity can be obtained for the rotating frame also, known as th
https://en.wikipedia.org/wiki/Inverse%20image%20functor
In mathematics, specifically in algebraic topology and algebraic geometry, an inverse image functor is a contravariant construction of sheaves; here “contravariant” in the sense that, given a map f : X → Y, the inverse image functor is a functor from the category of sheaves on Y to the category of sheaves on X. The direct image functor is the primary operation on sheaves, with the simplest definition. The inverse image exhibits some relatively subtle features. Definition Suppose we are given a sheaf G on Y and that we want to transport G to X using a continuous map f : X → Y. We will call the result the inverse image or pullback sheaf f⁻¹G. If we try to imitate the direct image by setting f⁻¹G(U) = G(f(U)) for each open set U of X, we immediately run into a problem: f(U) is not necessarily open. The best we could do is to approximate it by open sets, and even then we will get a presheaf and not a sheaf. Consequently, we define f⁻¹G to be the sheaf associated to the presheaf U ↦ colim_{V ⊇ f(U)} G(V). (Here U is an open subset of X and the colimit runs over all open subsets V of Y containing f(U).) For example, if f is just the inclusion of a point y of Y, then f⁻¹(G) is just the stalk of G at this point. The restriction maps, as well as the functoriality of the inverse image, follow from the universal property of direct limits. When dealing with morphisms f : X → Y of locally ringed spaces, for example schemes in algebraic geometry, one often works with sheaves of O_Y-modules, where O_Y is the structure sheaf of Y. Then the functor f⁻¹ is inappropriate, because in general it does not even give sheaves of O_X-modules. In order to remedy this, one defines in this situation for a sheaf of O_Y-modules G its inverse image by f*G = f⁻¹G ⊗_{f⁻¹O_Y} O_X. Properties While f⁻¹ is more complicated to define than f_*, the stalks are easier to compute: given a point x ∈ X, one has (f⁻¹G)_x = G_{f(x)}. f⁻¹ is an exact functor, as can be seen by the above calculation of the stalks. f* is (in general) only right exact. If f* is exact, f is called flat. f⁻¹ is the left adjoint of the direct image functor f_*. This implies that there are natural unit and
https://en.wikipedia.org/wiki/Boolean%20circuit
In computational complexity theory and circuit complexity, a Boolean circuit is a mathematical model for combinational digital logic circuits. A formal language can be decided by a family of Boolean circuits, one circuit for each possible input length. Boolean circuits are defined in terms of the logic gates they contain. For example, a circuit might contain binary AND and OR gates and unary NOT gates, or be entirely described by binary NAND gates. Each gate corresponds to some Boolean function that takes a fixed number of bits as input and outputs a single bit. Boolean circuits provide a model for many digital components used in computer engineering, including multiplexers, adders, and arithmetic logic units, but they exclude sequential logic. They are an abstraction that omits many aspects relevant to designing real digital logic circuits, such as metastability, fanout, glitches, power consumption, and propagation delay variability. Formal definition In giving a formal definition of Boolean circuits, Vollmer starts by defining a basis as a set B of Boolean functions, corresponding to the gates allowable in the circuit model. A Boolean circuit over a basis B, with n inputs and m outputs, is then defined as a finite directed acyclic graph. Each vertex corresponds to either a basis function or one of the inputs, and there is a set of exactly m nodes which are labeled as the outputs. The edges must also have some ordering, to distinguish between different arguments to the same Boolean function. As a special case, a propositional formula or Boolean expression is a Boolean circuit with a single output node in which every other node has a fan-out of 1. Thus, a Boolean circuit can be regarded as a generalization that allows shared subformulas and multiple outputs. A common basis for Boolean circuits is the set {AND, OR, NOT}, which is functionally complete, i.e. from which all other Boolean functions can be constructed. Computational complexity Background A partic
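To make the DAG definition concrete, here is a minimal Python sketch (the dictionary-based circuit encoding and the gate names are illustrative choices, not part of any standard): it evaluates a circuit over the basis {AND, OR, NOT} by memoized traversal, which amounts to visiting the gates in topological order, and uses a 1-bit XOR built from that basis as the test circuit.

```python
# Minimal sketch of Boolean-circuit evaluation over the basis {AND, OR, NOT}.
# The circuit is a directed acyclic graph; the dict encoding is illustrative only.
from typing import Dict, List, Tuple

# Each wire maps to (operation, list of input wires). "IN" marks a circuit input.
Circuit = Dict[str, Tuple[str, List[str]]]

def evaluate(circuit: Circuit, outputs: List[str], assignment: Dict[str, bool]) -> List[bool]:
    """Evaluate the named output wires for a given input assignment."""
    cache: Dict[str, bool] = {}          # memoization handles shared subcircuits

    def value(wire: str) -> bool:
        if wire in cache:
            return cache[wire]
        op, args = circuit[wire]
        if op == "IN":
            result = assignment[wire]
        elif op == "NOT":
            result = not value(args[0])
        elif op == "AND":
            result = all(value(a) for a in args)
        elif op == "OR":
            result = any(value(a) for a in args)
        else:
            raise ValueError(f"unknown gate {op}")
        cache[wire] = result
        return result

    return [value(w) for w in outputs]

# Example: x XOR y = (x OR y) AND NOT (x AND y), expressed in the {AND, OR, NOT} basis.
xor_circuit: Circuit = {
    "x":    ("IN",  []),
    "y":    ("IN",  []),
    "or":   ("OR",  ["x", "y"]),
    "and":  ("AND", ["x", "y"]),
    "nand": ("NOT", ["and"]),
    "out":  ("AND", ["or", "nand"]),
}

for x in (False, True):
    for y in (False, True):
        print(x, y, evaluate(xor_circuit, ["out"], {"x": x, "y": y})[0])
```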
https://en.wikipedia.org/wiki/Flat%20rate
A flat fee, also referred to as a flat rate or a linear rate, refers to a pricing structure that charges a single fixed fee for a service, regardless of usage. Less commonly, the term may refer to a rate that does not vary with usage or time of use. Advantages A business can develop a dependable stance in a market, as consumers have a well-rounded price before the service is undertaken. For instance, a technician may charge $150 for his labor. Potential costs can be covered. The service may result in inevitable expenses like the parts needed to fix the issue or the items required to complete the order. No restricted structure is needed, as the pricing system can be adjusted to suit the business using it. Management can thus work out the pricing that best matches the company's objectives, efforts, costs, etc. Disadvantages The fixed pricing restricts the company's capability to meet the needs of individual consumers, and people search for cheaper alternatives. Price competition intensifies, as other companies in the same industry compete for the lowest price. Inflation can cause unprecedented losses, and companies must raise the charge to keep up with costs. Examples Postage There are flat rates in the postal service for the delivery of items. Postage companies use different forms of post, such as boxes or envelopes, to avoid having to weigh items. The cost is known up front, which lets consumers identify the charge in advance and removes the hassle of estimating the cost of items. The United States Postal Service offers flat-rate pricing for packages, selling different postage options that vary in size and shape. That provides consumers with an array of options upfront, creating a sense of ease. Shipping in higher volumes saves money, but there are issues if both the flat-rate and regular delivery systems are used simultaneously. Advertising Flat rates also appear in advertising. Purchasing advertisements on websites such as Facebook, Twitter a
https://en.wikipedia.org/wiki/Child%20Exploitation%20Tracking%20System
Child Exploitation Tracking System (CETS) is a Microsoft software based solution that assists in managing and linking worldwide cases related to child protection. CETS was developed in collaboration with law enforcement in Canada. Administered by the loose partnership of Microsoft and law enforcement agencies, CETS offers tools to gather and share evidence and information so they can identify, prevent and punish those who commit crimes against children. About the CETS partnership In 2003, Detective Sergeant Paul Gillespie, Officer in Charge of the Child Exploitation Section of the Toronto Police Service's Sex Crimes Unit, made a request directly to Bill Gates, CEO and Chief Architect at Microsoft at the time, for assistance with these types of crimes. Agencies experienced in tracking and apprehending those who perpetrate such crimes were involved in the design, implementation, and policy. The solution needed to assist law enforcement agencies from the initial point of detection, through the investigative phase, to arrest, prosecution, and conviction of the criminal. In addition, it was imperative that the solution adhered to existing rights and civil liberties of the citizens of the various countries. This included remaining independent of Internet traffic and any individual user’s computer. Finally, such a solution needed to be global in nature and enable collaboration among nations and agencies. In order to increase the effectiveness of investigators worldwide, such a system would allow law enforcement entities to: Collect evidence of online child exploitation gathered by multiple law enforcement agencies. Organize and store the information safely and securely. Search the database of information. Securely share the information with other agencies, across jurisdictions. Analyze the information and provide pertinent matches. Adhere to global software industry standards. Law enforcement partnerships worldwide A number of law enforcement agencies use or are
https://en.wikipedia.org/wiki/Victorian%20Ornithological%20Research%20Group
The Victorian Ornithological Research Group (VORG) is a small project-focused ornithological group of amateurs and professionals based in Victoria, Australia. It was formed in 1962. It publishes a bulletin, VORG Notes. The objectives of the group are to: promote and encourage the study of all aspects of bird life by any means, including: field studies participation in the Australian Bird and Bat Banding Scheme cooperation with the National Parks and Wildlife Division of the Department of Conservation and Natural Resources or its equivalent for the time being encourage and assist in the publication of the results of such work provide a meeting place for discussion groups, the reading of papers and social activities make provision for the retention of field records, diaries and other relevant documents cooperate with persons or organisations having interests similar to those of VORG coordinate the activities of groups engaged in work consistent with the above objects Current projects include Movements of particular bird species in the suburban Melbourne, Mornington and Bellarine Peninsula Regions Albert Park Survey Penguin Study Short-tailed Shearwater Banding Flame Robin Banding Little Raven & Forest Raven Study Lorikeets of the Melbourne Region Study External links VORG Ornithological organisations in Australia 1962 establishments in Australia
https://en.wikipedia.org/wiki/Finite-difference%20frequency-domain%20method
The finite-difference frequency-domain (FDFD) method is a numerical solution method for problems usually in electromagnetism and sometimes in acoustics, based on finite-difference approximations of the derivative operators in the differential equation being solved. While "FDFD" is a generic term describing all frequency-domain finite-difference methods, the title seems to mostly describe the method as applied to scattering problems. The method shares many similarities to the finite-difference time-domain (FDTD) method, so much so that the literature on FDTD can be directly applied. The method works by transforming Maxwell's equations (or other partial differential equation) for sources and fields at a constant frequency into matrix form Ax = b. The matrix A is derived from the wave equation operator, the column vector x contains the field components, and the column vector b describes the source. The method is capable of incorporating anisotropic materials, but off-diagonal components of the tensor require special treatment. Strictly speaking, there are at least two categories of "frequency-domain" problems in electromagnetism. One is to find the response to a current density J with a constant frequency ω, i.e. of the form J(x) e^{jωt}, or a similar time-harmonic source. This frequency-domain response problem leads to a system of linear equations Ax = b as described above. An early description of a frequency-domain response FDFD method to solve scattering problems was published by Christ and Hartnagel (1987). Another is to find the normal modes of a structure (e.g. a waveguide) in the absence of sources: in this case the frequency ω is itself a variable, and one obtains an eigenproblem (usually, the eigenvalue λ is ω²). An early description of an FDFD method to solve electromagnetic eigenproblems was published by Albani and Bernardi (1974). Implementing the method Use a Yee grid because it offers the following benefits: (1) it implicitly satisfies the zero divergence conditio
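As a toy illustration of the "Ax = b" formulation (a sketch only, assuming a 1-D scalar Helmholtz equation with zero boundary values rather than the full Yee-grid Maxwell system), one can assemble the sparse finite-difference matrix and solve for the field produced by a point-like time-harmonic source:

```python
# 1-D frequency-domain sketch: discretize d^2E/dx^2 + k^2 E = b with zero field
# at both ends, assemble the sparse matrix A, and solve A x = b directly.
# The domain size, grid resolution and loss factor below are arbitrary choices.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

wavelength = 1.0
L, N = 9.7, 970                      # domain length and number of interior points
dx = L / (N + 1)
k2 = (2 * np.pi / wavelength) ** 2 * (1 + 0.01j)   # small loss keeps A well conditioned

main = (-2.0 / dx**2 + k2) * np.ones(N, dtype=complex)
off = (1.0 / dx**2) * np.ones(N - 1, dtype=complex)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

b = np.zeros(N, dtype=complex)
b[N // 2] = 1.0 / dx                 # point-like source term on the right-hand side

E = spla.spsolve(A, b)               # the field is the column vector x in A x = b
print("peak |E| =", abs(E).max())
```

The same pattern carries over to 2-D and 3-D Maxwell problems: only the construction of A (from the curl-curl operator on a Yee grid, with material tensors and boundary treatments folded in) becomes more involved, while the final step remains a sparse linear solve.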
https://en.wikipedia.org/wiki/Abel%27s%20test
In mathematics, Abel's test (also known as Abel's criterion) is a method of testing for the convergence of an infinite series. The test is named after mathematician Niels Henrik Abel. There are two slightly different versions of Abel's test – one is used with series of real numbers, and the other is used with power series in complex analysis. Abel's uniform convergence test is a criterion for the uniform convergence of a series of functions dependent on parameters. Abel's test in real analysis Suppose the following statements are true: Σ a_n is a convergent series, {b_n} is a monotone sequence, and {b_n} is bounded. Then Σ a_n b_n is also convergent. It is important to understand that this test is mainly pertinent and useful in the context of non-absolutely convergent series Σ a_n. For absolutely convergent series, this theorem, albeit true, is almost self-evident. This theorem can be proved directly using summation by parts. Abel's test in complex analysis A closely related convergence test, also known as Abel's test, can often be used to establish the convergence of a power series on the boundary of its circle of convergence. Specifically, Abel's test states that if a sequence {a_n} of positive real numbers is decreasing monotonically (or at least that for all n greater than some natural number m, we have a_{n+1} ≤ a_n) with lim_{n→∞} a_n = 0, then the power series f(z) = Σ a_n z^n converges everywhere on the closed unit circle, except when z = 1. Abel's test cannot be applied when z = 1, so convergence at that single point must be investigated separately. Notice that Abel's test implies in particular that the radius of convergence is at least 1. It can also be applied to a power series with radius of convergence R ≠ 1 by a simple change of variables ζ = z/R. Notice that Abel's test is a generalization of the Leibniz Criterion by taking z = −1. Proof of Abel's test: Suppose that z is a point on the unit circle, z ≠ 1. For each n ≥ 1, we define f_n(z) = Σ_{k=0}^{n} a_k z^k. By multiplying this function by (1 − z), we obtain (1 − z) f_n(z) = a_0 + Σ_{k=1}^{n} (a_k − a_{k−1}) z^k − a_n z^{n+1}. The first summand is constant,
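A standard illustration of the complex-analysis version (not part of the excerpt above) is the logarithm series:

```latex
% Classic application of Abel's test on the boundary of the circle of convergence.
% With a_n = 1/n the sequence is positive, decreasing and tends to 0, so the series
% converges at every boundary point except z = 1, where it is the harmonic series.
\[
  a_n = \frac{1}{n}, \qquad a_{n+1} \le a_n, \qquad \lim_{n\to\infty} a_n = 0
  \;\Longrightarrow\;
  \sum_{n=1}^{\infty} \frac{z^n}{n} \ \text{converges for } |z| = 1,\; z \neq 1 .
\]
```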
https://en.wikipedia.org/wiki/Lacuna%20%28histology%29
In histology, a lacuna is a small space containing an osteocyte in bone or a chondrocyte in cartilage. Bone The lacunae are situated between the lamellae, and consist of a number of oblong spaces. In an ordinary microscopic section, viewed by transmitted light, they appear as fusiform opaque spots. Each lacuna is occupied during life by a branched cell, termed an osteocyte, bone-cell or bone-corpuscle. Lacunae are connected to one another by small canals called canaliculi. A lacuna never contains more than one osteocyte. Sinuses are an example of lacuna. Cartilage The cartilage cells or chondrocytes are contained in cavities in the matrix, called cartilage lacunae; around these, the matrix is arranged in concentric lines as if it had been formed in successive portions around the cartilage cells. This constitutes the so-called capsule of the space. Each lacuna is generally occupied by a single cell, but during the division of the cells, it may contain two, four, or eight cells. Lacunae are found between narrow sheets of calcified matrix that are known as lamellae. See also Lacunar stroke
https://en.wikipedia.org/wiki/Myelin%20incisure
Myelin incisures (also known as Schmidt-Lanterman clefts, Schmidt-Lanterman incisures, clefts of Schmidt-Lanterman, segments of Lanterman, medullary segments) are small pockets of cytoplasm left behind during the Schwann cell myelination process. They are histological evidence of the small amount of cytoplasm that remains in the inner layer of the myelin sheath created by Schwann cells wrapping tightly around an axon (nerve fiber). Development In the peripheral nervous system (PNS) axons can be either myelinated or unmyelinated. Myelination refers to the insulation of an axon with concentric surrounding layers of lipid membrane (myelin) produced by Schwann cells. These layers are generally uniform and continuous, but due to imperfect nature of the process by which Schwann cells wrap the nerve axon, this wrapping process can sometimes leave behind small pockets of residual cytoplasm displaced to the periphery during the formation of the myelin sheath. These pockets, or "incisures", can subdivide the myelinated axon into irregular portions. These staggered clefts also provide communication channels between layers by connecting the outer collar of cytoplasm of the Schwann cell to the deepest layer of myelin sheath. Primary incisures appear ab initio in myelination and always extend across the whole radial thickness of the myelin sheath but initially around only part of its circumference. Secondary incisures appear later, in regions of a compact myelin sheath, initially traversing only part of its radial thickness but commonly occupying its whole circumference.
https://en.wikipedia.org/wiki/Coordinated%20Video%20Timings
Coordinated Video Timings (CVT; VESA-2013-3 v1.2) is a standard by VESA which defines the timings of the component video signal. Initially intended for use by computer monitors and video cards, the standard made its way into consumer televisions. The parameters defined by the standard include horizontal blanking and vertical blanking intervals, horizontal frequency and vertical frequency (collectively, pixel clock rate or video signal bandwidth), and horizontal/vertical sync polarity. The standard was adopted in 2002 and superseded the Generalized Timing Formula. Reduced blanking CVT timings include the necessary pauses in picture data (known as "blanking intervals") to allow CRT displays to reposition their electron beam at the end of each horizontal scan line, as well as the vertical repositioning necessary at the end of each frame. CVT also specifies a mode ("CVT-R") which significantly reduces these blanking intervals (to a period insufficient for CRT displays to work correctly) in the interests of saving video signal bandwidth when modern displays such as LCD monitors are being used, since such displays typically do not require these pauses in the picture data. In revision 1.2, released in 2013, a new "Reduced Blanking Timing Version 2" mode was added which further reduces the horizontal blanking interval from 160 to 80 pixels, increases pixel clock precision from ±0.25 MHz to ±0.001 MHz, and adds the option for a 1000/1001 modifier for ATSC/NTSC video-optimized timing modes (e.g. 59.94 Hz instead of 60.00 Hz or 23.976 Hz instead of 24.000 Hz). CEA-861-H introduced RBv3. RBv3 defines ways to specify different VBLANK and HBLANK duration formulae. CEA-861-I introduced "Optimized Video Timings" (OVT), a standard timing calculation that covers resolution/refresh rate combinations not supported by CVT. Bandwidth See also Extended display identification data
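For a rough sense of the bandwidth impact of reduced blanking, the generic pixel-clock arithmetic below can be used; only the 160- and 80-pixel horizontal-blanking figures come from the text above, while the vertical-blanking line count is a made-up placeholder rather than the value the CVT formulas would actually produce:

```python
# Back-of-the-envelope pixel-clock arithmetic (generic formula, not the CVT spreadsheet):
# pixel_clock = (hactive + hblank) * (vactive + vblank) * refresh_rate.
def pixel_clock_hz(hactive, vactive, hblank, vblank, refresh_hz):
    return (hactive + hblank) * (vactive + vblank) * refresh_hz

VBLANK_LINES = 30   # assumed value for illustration only, not computed per the standard
for name, hblank in (("RBv1-style, 160 px HBLANK", 160), ("RBv2-style, 80 px HBLANK", 80)):
    clk = pixel_clock_hz(1920, 1080, hblank, VBLANK_LINES, 60)
    print(f"{name}: {clk / 1e6:.2f} MHz")
```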
https://en.wikipedia.org/wiki/Tasklist
In computing, tasklist is a command available in Microsoft Windows and in the AROS shell. It is equivalent to the ps command in Unix and Unix-like operating systems and can also be compared with the Windows task manager (taskmgr). Windows NT 4.0, the Windows 98 Resource Kit, the Windows 2000 Support Tools, and ReactOS include the similar tlist command. Additionally, Microsoft provides the similar PsList command as part of Windows Sysinternals. Usage Microsoft Windows On Microsoft Windows, tasklist shows all of the different local computer processes currently running. tasklist may also be used to show the processes of a remote system by using the command tasklist /S "SYSTEM", where SYSTEM is the name or IP address of the remote computer. Optionally, they can be listed sorted by either the image name, the PID or the amount of computer usage, but by default they are listed in chronological order. See also Task manager nmon — a system monitor tool for the AIX and Linux operating systems. pgrep pstree top
https://en.wikipedia.org/wiki/Vanillylmandelic%20acid
Vanillylmandelic acid (VMA) is a chemical intermediate in the synthesis of artificial vanilla flavorings and is an end-stage metabolite of the catecholamines (epinephrine, and norepinephrine). It is produced via intermediary metabolites. Chemical synthesis VMA synthesis is the first step of a two-step process practiced by Rhodia since the 1970s to synthesize artificial vanilla. Specifically the reaction entails the condensation of guaiacol and glyoxylic acid in an ice cold, aqueous solution with sodium hydroxide. Biological elimination VMA is found in the urine, along with other catecholamine metabolites, including homovanillic acid (HVA), metanephrine, and normetanephrine. In timed urine tests the quantity excreted (usually per 24 hours) is assessed along with creatinine clearance, and the quantity of cortisols, catecholamines, and metanephrines excreted is also measured. Clinical significance Urinary VMA is elevated in patients with tumors that secrete catecholamines. These urinalysis tests are used to diagnose an adrenal gland tumor called pheochromocytoma, a tumor of catecholamine-secreting chromaffin cells. These tests may also be used to diagnose neuroblastomas, and to monitor treatment of these conditions. Norepinephrine is metabolised into normetanephrine and VMA. Norepinephrine is one of the hormones produced by the adrenal glands, which are found on top of the kidneys. These hormones are released into the blood during times of physical or emotional stress, which are factors that may skew the results of the test.
https://en.wikipedia.org/wiki/Agroinfiltration
Agroinfiltration is a method used in plant biology and especially lately in plant biotechnology to induce transient expression of genes in a plant, or isolated leaves from a plant, or even in cultures of plant cells, in order to produce a desired protein. In the method, a suspension of Agrobacterium tumefaciens is introduced into a plant leaf by direct injection or by vacuum infiltration, or brought into association with plant cells immobilised on a porous support (plant cell packs), whereafter the bacteria transfer the desired gene into the plant cells via transfer of T-DNA. The main benefit of agroinfiltration when compared to the more traditional plant transformation is speed and convenience, although yields of the recombinant protein are generally also higher and more consistent. The first step is to introduce a gene of interest to a strain of Agrobacterium tumefaciens. Subsequently, the strain is grown in a liquid culture and the resulting bacteria are washed and suspended into a suitable buffer solution. For injection, this solution is then placed in a syringe (without a needle). The tip of the syringe is pressed against the underside of a leaf while simultaneously applying gentle counterpressure to the other side of the leaf. The Agrobacterium suspension is then injected into the airspaces inside the leaf through stomata, or sometimes through a tiny incision made to the underside of the leaf. Vacuum infiltration is another way to introduce Agrobacterium deep into plant tissue. In this procedure, leaf disks, leaves, or whole plants are submerged in a beaker containing the solution, and the beaker is placed in a vacuum chamber. The vacuum is then applied, forcing air out of the intercellular spaces within the leaves via the stomata. When the vacuum is released, the pressure difference forces the "Agrobacterium" suspension into the leaves through the stomata into the mesophyll tissue. This can result in nearly all of the cells in any given leaf being in c
https://en.wikipedia.org/wiki/Materialism%20and%20Empirio-criticism
Materialism and Empirio-criticism (Russian: Материализм и эмпириокритицизм, Materializm i empiriokrititsizm) is a philosophical work by Vladimir Lenin, published in 1909. It was an obligatory subject of study in all institutions of higher education in the Soviet Union, as a seminal work of dialectical materialism, a part of the curriculum called "Marxist–Leninist Philosophy". Lenin argued that human minds are capable of forming representations of the world that portray the world as it is. Thus, Lenin argues, our beliefs about the world can be objectively true; a belief is true when it accurately reflects the facts. According to Lenin, absolute truth is possible, but our theories are often only relatively true. Scientific theories can therefore constitute knowledge of the world. Lenin formulates the fundamental philosophical contradiction between idealism and materialism as follows: "Materialism is the recognition of 'objects in themselves' or objects outside the mind; the ideas and sensations are copies or images of these objects. The opposite doctrine (idealism) says: the objects do not exist 'outside the mind'; they are 'connections of sensations'." Background The book, whose full title is Materialism and Empirio-criticism. Critical Comments on a Reactionary Philosophy, was written by Lenin from February through October 1908 while he was exiled in Geneva and London and was published in Moscow in May 1909 by Zveno Publishers. The original manuscript and preparatory materials have been lost. Most of the book was written when Lenin was in Geneva, apart from the one month spent in London, where he visited the library of the British Museum to access modern philosophical and natural science material. The index lists in excess of 200 sources for the book. In December 1908, Lenin moved from Geneva to Paris, where he worked until April 1909 on correcting the proofs. Some passages were edited to avoid tsarist censorship. It was published in Imperial Russia with great di
https://en.wikipedia.org/wiki/Characteristic%20mode%20analysis
Characteristic modes (CM) form a set of functions which, under specific boundary conditions, diagonalizes the operator relating the field and the induced sources. Under certain conditions, the set of the CM is unique and complete (at least theoretically) and thereby capable of describing the behavior of a studied object in full. This article deals with characteristic mode decomposition in electromagnetics, a domain in which the CM theory was originally proposed. Background CM decomposition was originally introduced as a set of modes diagonalizing a scattering matrix. The theory was subsequently generalized by Harrington and Mautz for antennas. Harrington, Mautz and their students also successively developed several other extensions of the theory. Even though some precursors were published back in the late 1940s, the full potential of CM remained unrecognized for an additional 40 years. The capabilities of CM were revisited in 2007 and, since then, interest in CM has dramatically increased. The subsequent boom of CM theory is reflected by the number of prominent publications and applications. Definition For simplicity, only the original form of the CM – formulated for perfectly electrically conducting (PEC) bodies in free space – will be treated in this article. The electromagnetic quantities will solely be represented as Fourier images in the frequency domain. The Lorenz gauge is used. The scattering of an electromagnetic wave on a PEC body is represented via a boundary condition on the PEC body, namely n̂ × (E_i + E_s) = 0, with n̂ representing the unit normal to the PEC surface, E_i representing the incident electric field intensity, and E_s representing the scattered electric field intensity, defined as E_s = −jωA − ∇φ, with j being the imaginary unit, ω being the angular frequency, A being the vector potential, A(r) = μ_0 ∫_S G(r, r′) J(r′) dS′, μ_0 being the vacuum permeability, φ being the scalar potential, φ(r) = −(1/(jωε_0)) ∫_S G(r, r′) ∇′ · J(r′) dS′, ε_0 being the vacuum permittivity, G being the scalar Green's function, G(r, r′) = e^{−jk|r − r′|} / (4π|r − r′|), and k being the wavenumber. The integro-differential operator is the one to be diagonalized via characteristi
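The excerpt is cut off before the diagonalization itself is stated; in the characteristic-mode literature the decomposition is usually written as the generalized eigenvalue problem below (notation assumed here, with the impedance operator split into real and imaginary parts as Z = R + jX):

```latex
% Generalized eigenvalue problem commonly used to define characteristic modes:
% the characteristic surface currents J_n and real eigenvalues lambda_n satisfy
\[
  X(\mathbf{J}_n) = \lambda_n\, R(\mathbf{J}_n),
\]
% so that modes with lambda_n = 0 are resonant, lambda_n > 0 predominantly store
% magnetic energy, and lambda_n < 0 predominantly store electric energy.
```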
https://en.wikipedia.org/wiki/Norm%20%28artificial%20intelligence%29
Norms can be considered from different perspectives in artificial intelligence to create computers and computer software that are capable of intelligent behaviour. In artificial intelligence and law, legal norms are considered in computational tools to automatically reason upon them. In multi-agent systems (MAS), a branch of artificial intelligence (AI), a norm is a guide for the common conduct of agents, thereby easing their decision-making, coordination and organization. Since most problems concerning regulation of the interaction of autonomous agents are linked to issues traditionally addressed by legal studies, and since law is the most pervasive and developed normative system, efforts to account for norms in artificial intelligence and law and in normative multi-agent systems often overlap. Artificial intelligence and law With the arrival of computer applications into the legal domain, and especially artificial intelligence applied to it, logic has been used as the major tool to formalize legal reasoning and has been developed in many directions, ranging from deontic logics to formal systems of argumentation. The knowledge base of legal reasoning systems usually includes legal norms (such as governmental regulations and contracts), and as a consequence, legal rules are the focus of knowledge representation and reasoning approaches to automatize and solve complex legal tasks. Legal norms are typically represented into a logic-based formalism, such a deontic logic. Artificial intelligence and law applications using an explicit representation of norms range from checking the compliance of business processes and the automatic execution of smart contracts to legal expert systems advising people on legal matters. Multi-agent systems Norms in multi-agent systems may appear with different degrees of explicitness ranging from fully unambiguous written prescriptions to implicit unwritten norms or tacit emerging patterns. Computer scientists’ studies mirror this
https://en.wikipedia.org/wiki/Ziggurat%20algorithm
The ziggurat algorithm is an algorithm for pseudo-random number sampling. Belonging to the class of rejection sampling algorithms, it relies on an underlying source of uniformly-distributed random numbers, typically from a pseudo-random number generator, as well as precomputed tables. The algorithm is used to generate values from a monotonically decreasing probability distribution. It can also be applied to symmetric unimodal distributions, such as the normal distribution, by choosing a value from one half of the distribution and then randomly choosing which half the value is considered to have been drawn from. It was developed by George Marsaglia and others in the 1960s. A typical value produced by the algorithm only requires the generation of one random floating-point value and one random table index, followed by one table lookup, one multiply operation and one comparison. Sometimes (2.5% of the time, in the case of a normal or exponential distribution when using typical table sizes) more computations are required. Nevertheless, the algorithm is computationally much faster than the two most commonly used methods of generating normally distributed random numbers, the Marsaglia polar method and the Box–Muller transform, which require at least one logarithm and one square root calculation for each pair of generated values. However, since the ziggurat algorithm is more complex to implement it is best used when large quantities of random numbers are required. The term ziggurat algorithm dates from Marsaglia's paper with Wai Wan Tsang in 2000; it is so named because it is conceptually based on covering the probability distribution with rectangular segments stacked in decreasing order of size, resulting in a figure that resembles a ziggurat. Theory of operation The ziggurat algorithm is a rejection sampling algorithm; it randomly generates a point in a distribution slightly larger than the desired distribution, then tests whether the generated point is inside the
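The sketch below illustrates the idea for the exponential distribution f(x) = exp(−x) (an illustrative implementation, not a tuned production sampler; the 128-layer table size and the bisection tolerances are arbitrary choices). It builds the equal-area layer table once, then draws samples with the layer-index/table-lookup fast path, falling back to an explicit rejection test or the analytic tail only in the rare edge cases.

```python
# Illustrative ziggurat-style sampler for f(x) = exp(-x), x >= 0 (sketch only).
import math
import random

N_LAYERS = 128

def _top_gap(r, n=N_LAYERS):
    """How far the stacked equal-area layers built from r fall short of f(0) = 1."""
    v = (r + 1.0) * math.exp(-r)        # base strip area: r*f(r) plus tail area exp(-r)
    y, x = math.exp(-r), r
    for _ in range(n - 1):
        y += v / x
        if y >= 1.0:
            return y - 1.0              # overshot the top: r is too small
        x = -math.log(y)
    return y - 1.0                      # want this to be ~0

# Find r by bisection so that the topmost layer closes exactly at f(0) = 1.
lo, hi = 0.5, 20.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if _top_gap(mid) > 0.0:
        lo = mid                        # the gap decreases as r grows
    else:
        hi = mid
R = 0.5 * (lo + hi)
V = (R + 1.0) * math.exp(-R)

# Tables: X[i] is the right edge of layer i's rectangle, Y[i] = f(X[i]).
X, Y = [0.0] * N_LAYERS, [0.0] * N_LAYERS
X[0], Y[0] = R, math.exp(-R)
for i in range(1, N_LAYERS):
    Y[i] = Y[i - 1] + V / X[i - 1]
    X[i] = -math.log(min(Y[i], 1.0))
Y[-1], X[-1] = 1.0, 0.0                 # force the top layer to close at f(0)

def sample_exponential():
    while True:
        i = random.randrange(N_LAYERS)  # random table index
        u = random.random()             # random floating-point value
        if i == 0:
            # Base layer: a rectangle of area V that also accounts for the tail.
            x = u * V / Y[0]
            if x < R:
                return x
            # The tail of exp(-x) beyond R is itself exponential: return R + Exp(1).
            return R - math.log(1.0 - random.random())
        x = u * X[i - 1]
        if x < X[i]:
            return x                    # strictly inside the part under the curve
        # "Wedge" region near the curve: fall back to an explicit rejection test.
        y = Y[i - 1] + random.random() * (Y[i] - Y[i - 1])
        if y < math.exp(-x):
            return x

if __name__ == "__main__":
    xs = [sample_exponential() for _ in range(100_000)]
    print("sample mean (should be close to 1):", sum(xs) / len(xs))
```

The fast path (one table index, one uniform draw, one multiply, one comparison) handles the vast majority of draws; only the wedge and tail branches require an exponential evaluation or a logarithm, mirroring the cost profile described above.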
https://en.wikipedia.org/wiki/PVR-resistant%20advertising
PVR (DVR)-resistant advertising is a form of advertising which is designed specifically to remain viewable despite a user skipping through the commercials when using a device such as a TiVo or other digital video recorder. For instance, a black bar with a product's tagline and logo or the title of a promoted television program or film and its release date may appear on the top of the screen and remain visible much longer being fast-forwarded than a usual commercial. This was used first by cable network FX's British network when advertising Brotherhood.
https://en.wikipedia.org/wiki/Metacomputing
Metacomputing is all computing and computing-oriented activity which involves computing knowledge (science and technology) utilized for the research, development and application of different types of computing. It may also deal with numerous types of computing applications, such as: industry, business, management and human-related management. New emerging fields of metacomputing focus on the methodological and technological aspects of the development of large computer networks/grids, such as the Internet, intranet and other territorially distributed computer networks for special purposes. Uses In computer science Metacomputing, as a computing of computing, includes: the organization of large computer networks, choice of the design criteria (for example: peer-to-peer or centralized solution) and metacomputing software (middleware, metaprogramming) development where, in the specific domains, the concept metacomputing is used as a description of software meta-layers which are networked platforms for the development of user-oriented calculations, for example for computational physics and bio-informatics. Here, serious scientific problems of systems/networks complexity emerge, not only related to domain-dependent complexities but focused on systemic meta-complexity of computer network infrastructures. Metacomputing is also a useful descriptor for self-referential programming systems. Often these systems are functional as fifth-generation computer languages which require the use of an underlying metaprocessor software operating system in order to be operative. Typically metacomputing occurs in an interpreted or real-time compiling system since the changing nature of information in processing results may result in an unpredictable compute state throughout the existence of the metacomputer (the information state operated upon by the metacomputing platform). In socio-cognitive engineering From the human and social perspectives, metacomputing is especially focused on:
https://en.wikipedia.org/wiki/Damp%20proofing
Damp proofing in construction is a type of moisture control applied to building walls and floors to prevent moisture from passing into the interior spaces. Dampness problems are among the most frequent problems encountered in residences. Damp proofing is defined by the American Society for Testing and Materials (ASTM) as a material that resists the passage of water with no hydrostatic pressure. Waterproofing is defined by the ASTM as a treatment that resists the passage of water under pressure. Generally, damp proofing keeps moisture out of a building, whereas vapor barriers keep interior moisture from getting into walls. Moisture resistance is not necessarily absolute; it is usually defined by a specific test method, limits, and engineering tolerances. Methods Damp proofing is accomplished in several ways, including: A damp-proof course (DPC) is a barrier through the structure designed to prevent moisture rising by capillary action, a phenomenon known as rising damp. Rising damp is the effect of water rising from the ground into a property. The damp proof course may be horizontal or vertical. A DPC layer is usually laid below all masonry walls, regardless of whether the wall is a load-bearing wall or a partition wall. A damp-proof membrane (DPM) is a membrane material applied to prevent moisture transmission. A common example is polyethylene sheeting laid under a concrete slab to prevent the concrete from gaining moisture through capillary action. A DPM may be used for the DPC. Integral damp proofing in concrete involves adding materials to the concrete mix to make the concrete itself impermeable. Surface suppressant coating with thin waterproof materials, such as epoxy resin, provides resistance to non-pressurized moisture such as rain water; a sprayed-on cement coating such as shotcrete can resist water under pressure. Cavity wall construction, such as rainscreen construction, is where the interior walls are separated from the exterior walls by a ca
https://en.wikipedia.org/wiki/Feodor%20Deahna
Heinrich Wilhelm Feodor Deahna (8 July 1815 – 8 January 1844) was a German mathematician. He is known for providing proof of what is now known as Frobenius theorem in differential topology, which he published in Crelle's Journal in 1840. Deahna was born near Bayreuth on July 8, 1815, and was a student at the University of Göttingen in 1834. In 1843 he became an assistant mathematics teacher at the Fulda Gymnasium, but he died soon afterwards in Fulda, on January 8, 1844. Selected works Deahna, F. "Über die Bedingungen der Integrabilitat ....", J. Reine Angew. Math. 20 (1840) 340-350.
https://en.wikipedia.org/wiki/J.%20Howard%20Redfield
John Howard Redfield (June 8, 1879 – April 17, 1944) was an American mathematician, best known for discovery of what is now called Pólya enumeration theorem (PET) in 1927, ten years ahead of similar but independent discovery made by George Pólya. Redfield was a great-grandson of William Charles Redfield, one of the founders and the first president of AAAS. Solution to MacMahon's conjecture Redfield's ability is evident in letters exchanged among Redfield, Percy MacMahon, and Sir Thomas Muir, following the publication of Redfield's paper [1] in 1927. Apparently Redfield sent a copy of his paper to MacMahon. In reply (letter of November 19, 1927), MacMahon expresses the view that Redfield has made a valuable contribution to the subject and goes on to mention a conjecture which he himself made in his recently delivered Rouse-Ball memorial lecture. He also says that it is probable that Redfield's work would lead to a proof of it. Such was the case: in a draft reply dated December 26, 1927, Redfield writes: "I am now able to demonstrate your conjectured expression...". MacMahon, who had failed to prove it himself and then put the matter before men at both Cambridge and Oxford "without effect", delightedly wrote to Redfield (letter of January 9, 1928): "when you first wrote to me I formed the opinion that with your powerful handling of the theory of substitutions it would be childs play to you and I was right. I congratulate you and feel sure that your methods will carry you far." MacMahon urged Redfield to publish his new results and also informed Muir about them. In a letter to Redfield dated December 31, 1931, Muir also encourages him to publish his verification "without waiting for MacMahon's executors" and suggests the Journal of the London Mathematical Society as an appropriate medium. As far as is known, Redfield did not follow up this suggestion, but the proof of MacMahon's conjecture was included in an unpublished manuscript which appears to be a sequel to
https://en.wikipedia.org/wiki/Uranium%20tailings
Uranium tailings or uranium tails are a radioactive waste byproduct (tailings) of conventional uranium mining and uranium enrichment. They contain the radioactive decay products from the uranium decay chains, mainly the U-238 chain, and heavy metals. Long-term storage or disposal of tailings may pose a danger for public health and safety. Production Uranium mill tailings are primarily the sandy process waste material from a conventional uranium mill. Milling is the first step in making fuel for nuclear reactors from natural uranium ore. The uranium extract is transformed into yellowcake. The raw uranium ore is brought to the surface and crushed into a fine sand. The valuable uranium-bearing minerals are then removed via heap leaching with the use of acids or bases, and the remaining radioactive sludge, called "uranium tailings", is stored in huge impoundments. A short ton (907 kg) of ore yields one to five pounds (0.45 to 2.3 kg) of uranium depending on the uranium content of the mineral. Uranium tailings can retain up to 85% of the ore's original radioactivity. Composition The tailings contain mainly decay products from the decay chain involving Uranium-238. Uranium tailings contain over a dozen radioactive nuclides, which are the primary hazard posed by the tailings. The most important of these are thorium-230, radium-226, radon-222 (radon gas) and the daughter isotopes of radon decay, including polonium-210. All of those are naturally occurring radioactive materials or "NORM". Health risks Tailings contain heavy metals and radioactive radium. Radium then decays over thousands of years and radioactive radon gas is produced. Tailings are kept in piles for long-term storage or disposal and need to be maintained and monitored for leaks over the long term. If uranium tailings are stored aboveground and allowed to dry out, the radioactive sand can be carried great distances by the wind, entering the food chain and bodies of water. The danger posed by such sa
https://en.wikipedia.org/wiki/Neighborhood%20operation
In computer vision and image processing a neighborhood operation is a commonly used class of computations on image data which implies that it is processed according to the following pseudo code: Visit each point p in the image data and do { N = a neighborhood or region of the image data around the point p result(p) = f(N) } This general procedure can be applied to image data of arbitrary dimensionality. Also, the image data on which the operation is applied does not have to be defined in terms of intensity or color; it can be any type of information which is organized as a function of spatial (and possibly temporal) variables. The result of applying a neighborhood operation on an image is again something which can be interpreted as an image; it has the same dimension as the original data. The value at each image point, however, does not have to be directly related to intensity or color. Instead it is an element in the range of the function f, which can be of arbitrary type. Normally the neighborhood N is of fixed size and is a square (or a cube, depending on the dimensionality of the image data) centered on the point p. Also the function f is fixed, but may in some cases have parameters which can vary with p, see below. In the simplest case, the neighborhood may be only a single point. This type of operation is often referred to as a point-wise operation. Examples The most common examples of a neighborhood operation use a fixed function f which in addition is linear, that is, the computation consists of a linear shift invariant operation. In this case, the neighborhood operation corresponds to the convolution operation. A typical example is convolution with a low-pass filter, where the result can be interpreted in terms of local averages of the image data around each image point. Other examples are computation of local derivatives of the image data. It is also rather common to use a fixed but non-linear function f. This includes median f
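A direct transcription of the pseudo code into Python (the reflective border handling and the 3×3 window size are arbitrary choices) shows a linear choice of f, a local mean acting as a box low-pass filter, and a non-linear one, a local median:

```python
# Generic neighborhood operation: for every pixel p, gather the size x size
# window around it and apply a function f to that neighborhood.
import numpy as np

def neighborhood_op(image, size, f):
    """Apply f to the size x size neighborhood of every pixel of a 2-D grayscale image."""
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")   # border handling is an arbitrary choice
    out = np.empty(image.shape, dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + size, c:c + size]   # the neighborhood N around p
            out[r, c] = f(window)                     # result(p) = f(N)
    return out

rng = np.random.default_rng(0)
img = rng.random((64, 64))

local_mean = neighborhood_op(img, 3, np.mean)      # linear: box (low-pass) filter
local_median = neighborhood_op(img, 3, np.median)  # non-linear: median filter
print(local_mean.shape, local_median.shape)
```

The explicit double loop is written for clarity; in practice linear cases are usually expressed as convolutions and handled by optimized library routines.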
https://en.wikipedia.org/wiki/Gas%20thermometer
A gas thermometer is a thermometer that measures temperature by the variation in volume or pressure of a gas. Volume Thermometer This thermometer functions by Charles's Law. Charles's Law states that when the temperature of a gas increases, so does the volume. Using Charles's Law, the temperature can be measured by knowing the volume of the gas at a certain temperature and using the formula V = kT (equivalently V_1/T_1 = V_2/T_2), then translating the measured volume to the graduated scale of the device that is holding the gas. This works on the same principle as mercury thermometers. Here V is the volume, T is the thermodynamic temperature, and k is the constant for the system. k is not a fixed constant across all systems and therefore needs to be found experimentally for a given system through testing with known temperature values. Pressure Thermometer and Absolute Zero The constant volume gas thermometer plays a crucial role in understanding how absolute zero could be discovered long before the advent of cryogenics. Consider a graph of pressure versus temperature made not far from standard conditions (well above absolute zero) for three different samples of any ideal gas (a, b, c). To the extent that the gas is ideal, the pressure depends linearly on temperature, and the extrapolation to zero pressure occurs at absolute zero. Note that data could have been collected with three different amounts of the same gas, which would have rendered this experiment easy to do in the eighteenth century. History See also Thermodynamic instruments Boyle's law Combined gas law Gay-Lussac's law Avogadro's law Ideal gas law
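The calibration-and-read-off arithmetic amounts to a single proportion; a small sketch with made-up numbers:

```python
# Charles's-law arithmetic for a constant-pressure gas thermometer:
# V = k*T implies T2 = T1 * (V2 / V1) once k has been fixed by one calibration point.
def temperature_from_volume(v_measured, v_ref, t_ref_kelvin):
    return t_ref_kelvin * (v_measured / v_ref)

# Calibrate at 273.15 K with an assumed reference volume of 1.000 L, then read off
# the temperature implied by a measured volume of 1.100 L (about 300.5 K).
print(temperature_from_volume(1.100, 1.000, 273.15))
```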
https://en.wikipedia.org/wiki/Tristyly
Tristyly is a rare floral polymorphism that consists of three floral morphs that differ in regard to the length of the stamens and style within the flower. This type of floral mechanism is thought to encourage outcross pollen transfer and is usually associated with heteromorphic self-incompatibility to reduce inbreeding. It is an example of heterostyly and reciprocal herkogamy, like distyly, which is the more common form of heterostyly. Darwin first described tristylous species in 1877 in terms of the incompatibility of these three morphs. Description The three floral morphs of tristylous plants are based on the positioning of the male and female reproductive structures, as either long-, mid-, or short-styled morphs. Often this is shortened to L, M and S morphs. There are two different lengths of stamens in each flower morph that oppose the length of the style. For example, in the short-styled morph, the two sets of stamen are arranged in the mid and long position in order to prevent autogamy. In trimorphic incompatibility system, full seed set is accomplished only with pollination of stigmas by pollen from anthers of the same height. This incompatibility system produces pollen and styles with three different incompatibility phenotypes because of the three style and stamen lengths. Tristylous species have been found in several angiosperm families including the Oxalidaceae, Pontederiaceae, Amaryllidaceae, Connaraceae, Linaceae and Lythraceae, though several others have been proposed. There is not a consistent consensus on the specific criteria defining tristyly. In a 1993 review of tristylous evolutionary biology, Barrett proposes three common features for tristylous plants, 1) three floral morphs with differing style and stamen height, 2) a trimorphic incompatibility system, and 3) additional polymorphisms of the stigmas and pollen. Heteromorphic Incompatibility System This incompatibility system is a specific mechanism employed by heterostylous species, where
https://en.wikipedia.org/wiki/Structure%20tensor
In mathematics, the structure tensor, also referred to as the second-moment matrix, is a matrix derived from the gradient of a function. It describes the distribution of the gradient in a specified neighborhood around a point and makes the information invariant with respect to the observing coordinates. The structure tensor is often used in image processing and computer vision. The 2D structure tensor Continuous version For a function I of two variables p = (x, y), the structure tensor is the 2×2 matrix S_w(p) = [ ∫ w(r) (I_x(p − r))² dr , ∫ w(r) I_x(p − r) I_y(p − r) dr ; ∫ w(r) I_x(p − r) I_y(p − r) dr , ∫ w(r) (I_y(p − r))² dr ], where I_x and I_y are the partial derivatives of I with respect to x and y; the integrals range over the plane R²; and w is some fixed "window function" (such as a Gaussian blur), a distribution on two variables. Note that the matrix S_w is itself a function of p = (x, y). The formula above can be written also as S_w(p) = ∫ w(r) S_0(p − r) dr, where S_0 is the matrix-valued function defined by S_0(p) = [ (I_x(p))² , I_x(p) I_y(p) ; I_x(p) I_y(p) , (I_y(p))² ]. If the gradient ∇I = (I_x, I_y)^T of I is viewed as a 2×1 (single-column) matrix, where ( )^T denotes the transpose operation, turning a row vector to a column vector, the matrix S_0 can be written as the matrix product, tensor or outer product (∇I)(∇I)^T. Note however that the structure tensor S_w(p) cannot be factored in this way in general except if w is a Dirac delta function. Discrete version In image processing and other similar applications, the function I is usually given as a discrete array of samples I[p], where p is a pair of integer indices. The 2D structure tensor at a given pixel p is usually taken to be the discrete sum S_w[p] = Σ_r w[r] [ (I_x[p − r])² , I_x[p − r] I_y[p − r] ; I_x[p − r] I_y[p − r] , (I_y[p − r])² ]. Here the summation index r ranges over a finite set of index pairs (the "window", typically {−m, ..., +m} × {−m, ..., +m} for some m), and w[r] is a fixed "window weight" that depends on r, such that the sum of all weights is 1. The values I_x[p] and I_y[p] are the partial derivatives sampled at pixel p, which, for instance, may be estimated from I by finite difference formulas. The formula of the structure tensor can be written also as S_w[p] = Σ_r w[r] S_0[p − r], where S_0 is the matrix-valued array such that S_0[p] = (∇I[p])(∇I[p])^T. Interpretation The importance of the 2D structure tensor stems from the fact that its eigenvalues λ_1, λ_2 (which can be ordered so that λ_1 ≥ λ_2 ≥ 0)
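A compact way to compute the discrete structure tensor and its eigenvalue fields (a sketch using NumPy and SciPy; the Gaussian window and the gradient-based derivative estimates are common choices, not mandated by the definition):

```python
# Discrete 2-D structure tensor: form the products of the image derivatives and
# smooth them with a window function w (a Gaussian here).
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(image, sigma=1.5):
    """Return the fields S11 = <Ix^2>, S12 = <Ix Iy>, S22 = <Iy^2>."""
    image = image.astype(float)
    Iy, Ix = np.gradient(image)              # finite-difference partial derivatives
    S11 = gaussian_filter(Ix * Ix, sigma)    # window-weighted local sums
    S12 = gaussian_filter(Ix * Iy, sigma)
    S22 = gaussian_filter(Iy * Iy, sigma)
    return S11, S12, S22

rng = np.random.default_rng(1)
img = rng.random((128, 128))
S11, S12, S22 = structure_tensor(img)

# Eigenvalues of the 2x2 tensor at every pixel (closed form for a symmetric 2x2 matrix).
tr = S11 + S22
det = S11 * S22 - S12 ** 2
disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
lam1, lam2 = tr / 2 + disc, tr / 2 - disc    # lam1 >= lam2 >= 0
print(lam1.mean(), lam2.mean())
```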
https://en.wikipedia.org/wiki/DMSMS
Diminishing manufacturing sources and material shortages (DMSMS) or diminishing manufacturing sources (DMS) is defined as: "The loss or impending loss of manufacturers of items or suppliers of items or raw materials." DMSMS and obsolescence are terms that are often used interchangeably. However, obsolescence refers to a lack of availability due to statutory or process changes and new designs, whereas DMSMS is a lack of sources or materials. Impact Although it is not strictly limited to electronic systems, much of the effort regarding DMSMS deals with electronic components that have a relatively short lifetime. Causes Primary components DMSMS is a multifaceted problem because there are at least three main components that need to be considered. First, a primary concern is the ongoing improvement in technology. As new products are designed, the technology that was used in their predecessors becomes outdated, making it more difficult to repair the equipment. Second, the mechanical parts may be harder to acquire because fewer are produced as the demand for these parts decreases. Third, the materials required to manufacture a piece of equipment may no longer be readily available. Product life cycle It is widely accepted that all electronic devices are subject to the product life cycle. As products evolve into updated versions, they require parts and technology distinct from their predecessors. However, the earlier versions of the product often still need to be maintained throughout their life cycle. As the new product becomes predominant, there are fewer parts available to fix the earlier versions and the technology becomes outdated. According to EIA-724 there are 6 distinct phases of a product's life cycle: Introduction, Growth, Maturity, Saturation, Decline, and Phase-Out. To the uninitiated these terms often seem abstract and odd. These terms are often used in databases covering parts life cycle so it is important to have an understanding of what they mean.
https://en.wikipedia.org/wiki/Nina%20Bari
Nina Karlovna Bari (19 November 1901 – 15 July 1961) was a Soviet mathematician known for her work on trigonometric series. She is also well known for two textbooks, Higher Algebra and The Theory of Series. Early life and education Nina Bari was born in Russia on 19 November 1901, the daughter of Olga and Karl Adolfovich Bari, a physician. In 1918, she became one of the first women to be accepted to the Department of Physics and Mathematics at the prestigious Moscow State University. She graduated in 1921—just three years after entering the university. After graduation, Bari began her teaching career. She lectured at the Moscow Forestry Institute, the Moscow Polytechnic Institute, and the Sverdlov Communist Institute. Bari applied for and received the only paid research fellowship awarded by the newly created Research Institute of Mathematics and Mechanics. As a student, Bari was drawn to an elite group nicknamed the Luzitania—an informal academic and social organization. She studied trigonometric series and functions under the tutelage of Nikolai Luzin, becoming one of his star students. She presented the main result of her research to the Moscow Mathematical Society in 1922—the first woman to address the society. In 1926, Bari completed her doctoral work on the topic of trigonometric expansions, winning the Glavnauk Prize for her thesis work. In 1927, Bari took advantage of an opportunity to study in Paris at the Sorbonne and the College de France. She then attended the Polish Mathematical Congress in Lwów, Poland; a Rockefeller grant enabled her to return to Paris to continue her studies. Bari's decision to travel may have been influenced by the disintegration of the Luzitanians. Luzin's irascible, demanding personality had alienated many of the mathematicians who had gathered around him. By 1930, all traces of the Luzitania movement had vanished, and Luzin left Mos