https://en.wikipedia.org/wiki/Mouthfeel
Mouthfeel refers to the physical sensations in the mouth caused by food or drink, making it distinct from taste. It is a fundamental sensory attribute which, along with taste and smell, determines the overall flavor of a food item. Mouthfeel is also sometimes referred to as texture. It is used in many areas related to the testing and evaluating of foodstuffs, such as wine-tasting and food rheology. It is evaluated from initial perception on the palate, to first bite, through chewing to swallowing and aftertaste. In wine-tasting, for example, mouthfeel is usually used with a modifier (big, sweet, tannic, chewy, etc.) to the general sensation of the wine in the mouth. Research indicates texture and mouthfeel can also influence satiety with the effect of viscosity most significant. Mouthfeel is often related to a product's water activity—hard or crisp products having lower water activities and soft products having intermediate to high water activities. Qualities perceived Chewiness: The sensation of sustained, elastic resistance from food while it is chewed. Cohesiveness: Degree to which the sample deforms before rupturing when biting with molars. Crunchiness: The audible grinding of a food when it is chewed. Density: Compactness of cross section of the sample after biting completely through with the molars. Dryness: Degree to which the sample feels dry in the mouth. Exquisiteness: Perceived quality of the item in question. Fracturability: Force with which the sample crumbles, cracks or shatters. Fracturability encompasses crumbliness, crispiness, crunchiness and brittleness. Graininess: Degree to which a sample contains small grainy particles. Gumminess: Energy required to disintegrate a semi-solid food to a state ready for swallowing. Hardness: Force required to deform the product to a given distance, i.e., force to compress between molars, bite through with incisors, compress between tongue and palate. Heaviness: Weight of product perceived when fir
https://en.wikipedia.org/wiki/Amplitude%20modulation%20signalling%20system
The amplitude modulation signalling system (AMSS or the AM signalling system) is a digital system for adding low bit rate information to an analogue amplitude modulated broadcast signal in the same manner as the Radio Data System (RDS) for frequency modulated (FM) broadcast signals. This system was standardized in March 2006 by ETSI (TS 102 386) as an extension to the Digital Radio Mondiale (DRM) system. Broadcasting AMSS data are broadcast from the following transmitters: LW RTL France: 234 kHz SW BBC World Service: 15.575 MHz Formerly it was also used by: MW Truckradio 531 kHz BBC World Service: 648 kHz Deutschlandradio Kultur: 990 kHz External links ETSI TS 102 386 V1.2.1 (2006-03) directly from ETSI Publications Download Area (account or free registration required)
https://en.wikipedia.org/wiki/Hudson%20Soft%20HuC6270
HuC6270 is a video display controller (VDC) developed by Hudson Soft and manufactured for Hudson Soft by Seiko Epson. The VDC was used in the PC Engine game console produced by NEC Corporation, and the upgraded PC Engine SuperGrafx. Technical specification The HuC6270 generates a display signal composed of a background layer (with x and y scrolling) and sprites. It uses external VRAM via a 16-bit address bus. It can display up to 64 sprites on screen, with a maximum of 16 sprites per horizontal scan line. Uses The HuC6270 was used in the PC Engine and PC Engine SuperGrafx consoles. Additionally, the VDC was used in two arcade games. The arcade version of Bloody Wolf ran on a custom version of the PC Engine. The arcade hardware is missing the second 16-bit graphic chip, the HuC6260 video color encoder, which is present in the PC Engine. This means the VDC directly accesses palette RAM and builds out the display signals/timing. A rare Capcom quiz-type arcade game also ran on a modified version of the SuperGrafx hardware, which used two VDCs.
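The per-scanline sprite limit can be illustrated with a small sketch (purely hypothetical code, not based on the actual chip registers or timing): of the 64 sprites the VDC tracks, only the first 16 whose bounding box crosses a given scan line can be shown on that line.

MAX_SPRITES = 64            # sprites the VDC can display on screen
MAX_PER_SCANLINE = 16       # sprites it can display on one horizontal scan line

def visible_on_scanline(sprites, line):
    """Return the sprites drawn on a scan line, respecting the 16-per-line limit (illustrative model)."""
    hits = [s for s in sprites[:MAX_SPRITES] if s["y"] <= line < s["y"] + s["height"]]
    return hits[:MAX_PER_SCANLINE]   # any further sprites on this line are simply dropped

# 20 sprites stacked on the same rows: only 16 survive on an overlapping line.
sprites = [{"y": 40, "height": 16} for _ in range(20)]
print(len(visible_on_scanline(sprites, 48)))   # 16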
https://en.wikipedia.org/wiki/MuSK%20protein
MuSK (for Muscle-Specific Kinase) is a receptor tyrosine kinase required for the formation and maintenance of the neuromuscular junction. It is activated by a nerve-derived proteoglycan called agrin, which is similarly also required for neuromuscular junction formation. MuSK signaling Upon activation by its ligand agrin, MuSK signals via the proteins called casein kinase 2 (CK2), Dok-7 and rapsyn, to induce "clustering" of acetylcholine receptors (AChR). Both CK2 and Dok-7 are required for MuSK-induced formation of the neuromuscular junction, since mice lacking Dok-7 failed to form AChR clusters or neuromuscular synapses, and since downregulation of CK2 also impedes recruitment of AChR to the primary MuSK scaffold. In addition to the proteins mentioned, other proteins are then gathered, to form the endplate to the neuromuscular junction. The nerve terminates onto the endplate, forming the neuromuscular junction - a structure required to transmit nerve impulses to the muscle, and thus initiating muscle contraction. Role in disease Antibodies directed against this protein (Anti-MuSK autoantibodies) are found in some people with myasthenia gravis not demonstrating antibodies to the acetylcholine receptor. The disease still causes loss of acetylcholine receptor activity, but the symptoms affected people experience may differ from those of people with other causes of myasthenia gravis.
https://en.wikipedia.org/wiki/Paris%E2%80%93Harrington%20theorem
In mathematical logic, the Paris–Harrington theorem states that a certain combinatorial principle in Ramsey theory, namely the strengthened finite Ramsey theorem, which is expressible in Peano arithmetic, is not provable in this system. The combinatorial principle is however provable in slightly stronger systems. This result has been described by some (such as the editor of the Handbook of Mathematical Logic in the references below) as the first "natural" example of a true statement about the integers that could be stated in the language of arithmetic, but not proved in Peano arithmetic; it was already known that such statements existed by Gödel's first incompleteness theorem. Strengthened finite Ramsey theorem The strengthened finite Ramsey theorem is a statement about colorings and natural numbers and states that: For any positive integers n, k, m, such that m ≥ n, one can find N with the following property: if we color each of the n-element subsets of S = {1, 2, 3,..., N} with one of k colors, then we can find a subset Y of S with at least m elements, such that all n-element subsets of Y have the same color, and the number of elements of Y is at least the smallest element of Y. Without the condition that the number of elements of Y is at least the smallest element of Y, this is a corollary of the finite Ramsey theorem, with N given by the corresponding finite Ramsey number. Moreover, the strengthened finite Ramsey theorem can be deduced from the infinite Ramsey theorem in almost exactly the same way that the finite Ramsey theorem can be deduced from it, using a compactness argument (see the article on Ramsey's theorem for details). This proof can be carried out in second-order arithmetic. The Paris–Harrington theorem states that the strengthened finite Ramsey theorem is not provable in Peano arithmetic. Paris–Harrington theorem Roughly speaking, Jeff Paris and Leo Harrington (1977) showed that the strengthened finite Ramsey theorem is unprovable in Peano arithmetic by showing that in Peano
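The prose statement above can be written compactly in symbols; the following formalization is a sketch added for readability (not quoted from the article), where [X]^n denotes the set of n-element subsets of X:

\forall n, k, m\ (m \ge n)\;\; \exists N \;\; \forall c : [\{1,\dots,N\}]^{n} \to \{1,\dots,k\} \;\; \exists Y \subseteq \{1,\dots,N\} :\quad |Y| \ge m,\quad c \text{ is constant on } [Y]^{n},\quad |Y| \ge \min Y.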
https://en.wikipedia.org/wiki/Simple%20%28abstract%20algebra%29
In mathematics, the term simple is used to describe an algebraic structure which in some sense cannot be divided by a smaller structure of the same type. Put another way, an algebraic structure is simple if the kernel of every homomorphism is either the whole structure or a single element. Some examples are: A group is called a simple group if it does not contain a nontrivial proper normal subgroup. A ring is called a simple ring if it does not contain a nontrivial two sided ideal. A module is called a simple module if it does not contain a nontrivial submodule. An algebra is called a simple algebra if it does not contain a nontrivial two sided ideal. The general pattern is that the structure admits no non-trivial congruence relations. The term is used differently in semigroup theory. A semigroup is said to be simple if it has no nontrivial ideals, or equivalently, if Green's relation J is the universal relation. Not every congruence on a semigroup is associated with an ideal, so a simple semigroup may have nontrivial congruences. A semigroup with no nontrivial congruences is called congruence simple. See also semisimple simple universal algebra Abstract algebra
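As a toy illustration of the group case, the sketch below (not from the article) enumerates the subgroups of the cyclic group Z/nZ; since that group is abelian, every subgroup is normal, so Z/nZ is simple exactly when n is prime.

def subgroups_of_Zn(n):
    """Return the subgroups of Z/nZ, each as a frozenset; they are generated by the divisors of n."""
    return {frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0}

def is_simple_Zn(n):
    """Z/nZ is simple iff its only subgroups are {0} and the whole group (and n > 1)."""
    trivial = {frozenset({0}), frozenset(range(n))}
    return n > 1 and subgroups_of_Zn(n) <= trivial

print([n for n in range(2, 20) if is_simple_Zn(n)])  # the primes: 2, 3, 5, 7, 11, 13, 17, 19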
https://en.wikipedia.org/wiki/Duplicate%20code
In computer programming, duplicate code is a sequence of source code that occurs more than once, either within a program or across different programs owned or maintained by the same entity. Duplicate code is generally considered undesirable for a number of reasons. A minimum requirement is usually applied to the quantity of code that must appear in a sequence for it to be considered duplicate rather than coincidentally similar. Sequences of duplicate code are sometimes known as code clones or just clones; the automated process of finding duplications in source code is called clone detection. Two code sequences may be duplicates of each other without being character-for-character identical, for example by being character-for-character identical only when white space characters and comments are ignored, or by being token-for-token identical, or token-for-token identical with occasional variation. Even code sequences that are only functionally identical may be considered duplicate code. Emergence Some of the ways in which duplicate code may be created are: copy and paste programming, which in academic settings may be done as part of plagiarism; and scrounging, in which a section of code is copied "because it works". In most cases this operation involves slight modifications in the cloned code, such as renaming variables or inserting/deleting code. The language nearly always allows one to call one copy of the code from different places, so that it can serve multiple purposes, but instead the programmer creates another copy, perhaps because they do not understand the language properly, do not have the time to do it properly, or do not care about the increased active software rot. It may also happen that functionality is required that is very similar to that in another part of a program, and a developer independently writes code that is very similar to what exists elsewhere. Studies suggest that such independently rewritten code is typically not syntactically similar.
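As a rough illustration of token-based clone detection (a sketch, not any particular tool), the following compares normalized token sequences with a sliding window, ignoring whitespace, comments, and identifier names; the window size plays the role of the minimum-length requirement mentioned above.

import io
import tokenize

def normalized_tokens(source: str):
    """Tokenize Python source, dropping layout tokens and masking names and literals."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type in (tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
                        tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER):
            continue
        if tok.type == tokenize.NAME:
            out.append("NAME")        # treat all identifiers as equal (catches renamed-variable clones)
        elif tok.type in (tokenize.NUMBER, tokenize.STRING):
            out.append("LITERAL")
        else:
            out.append(tok.string)
    return out

def clone_pairs(src_a: str, src_b: str, min_tokens: int = 8):
    """Yield (i, j) token offsets where the two sources share a window of min_tokens normalized tokens."""
    a, b = normalized_tokens(src_a), normalized_tokens(src_b)
    windows = {tuple(b[j:j + min_tokens]): j for j in range(len(b) - min_tokens + 1)}
    for i in range(len(a) - min_tokens + 1):
        j = windows.get(tuple(a[i:i + min_tokens]))
        if j is not None:
            yield i, j

f1 = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
f2 = "def add_up(values):\n    acc = 0\n    for v in values:\n        acc += v\n    return acc\n"
print(list(clone_pairs(f1, f2)))  # non-empty: the two functions are token-for-token clones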
https://en.wikipedia.org/wiki/Hereditary%20set
In set theory, a hereditary set (or pure set) is a set whose elements are all hereditary sets. That is, all elements of the set are themselves sets, as are all elements of the elements, and so on. Examples For example, it is vacuously true that the empty set is a hereditary set, and thus the set containing only the empty set is a hereditary set. Similarly, a set that contains two elements: the empty set and the set that contains only the empty set, is a hereditary set. In formulations of set theory In formulations of set theory that are intended to be interpreted in the von Neumann universe or to express the content of Zermelo–Fraenkel set theory, all sets are hereditary, because the only sort of object that is even a candidate to be an element of a set is another set. Thus the notion of hereditary set is interesting only in a context in which there may be urelements. Assumptions The inductive definition of hereditary sets presupposes that set membership is well-founded (i.e., the axiom of regularity), otherwise the recurrence may not have a unique solution. However, it can be restated non-inductively as follows: a set is hereditary if and only if its transitive closure contains only sets. In this way the concept of hereditary sets can also be extended to non-well-founded set theories in which sets can be members of themselves. For example, a set that contains only itself is a hereditary set. See also Hereditarily countable set Hereditarily finite set Well-founded set
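The non-inductive characterization above ("a set is hereditary if and only if its transitive closure contains only sets") can be mimicked with Python frozensets, where any element that is not a frozenset plays the role of an urelement; this is only an informal sketch of the idea.

def is_hereditary(s):
    """True if s is a frozenset whose transitive closure contains only frozensets (no 'urelements')."""
    if not isinstance(s, frozenset):
        return False
    seen, stack = set(), [s]
    while stack:
        current = stack.pop()
        for element in current:
            if not isinstance(element, frozenset):
                return False          # an urelement appears in the transitive closure
            if id(element) not in seen:
                seen.add(id(element))
                stack.append(element)
    return True

empty = frozenset()
v1 = frozenset({empty})               # the set containing only the empty set
v2 = frozenset({empty, v1})           # the two-element example from the text
print(is_hereditary(v2))              # True
print(is_hereditary(frozenset({1})))  # False: 1 is not a set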
https://en.wikipedia.org/wiki/Oenopides
Oenopides of Chios (born c. 490 BCE) was an ancient Greek geometer, astronomer and mathematician, who lived around 450 BCE. Biography Only limited information is known about the early life of Oenopides, except that his birthplace was the island of Chios, around 490 BCE. It is believed that Oenopides spent time in Athens, but there is only circumstantial evidence to support this. Plato mentions him in Erastae: A Dialogue On Philosophy, which places him in Athens. The English translator of the same book reveals (in footnote 3) one other aspect of Oenopides' life: his travels in Egypt, during which he was instructed in the arts of astronomy and geometry by Egyptian priests. Astronomy The main accomplishment of Oenopides as an astronomer was his determination of the angle between the plane of the celestial equator and the zodiac (the yearly path of the Sun in the sky). He found this angle to be 24°. In effect this amounted to measuring the inclination of the Earth's axis. Oenopides's result remained the standard value for two centuries, until Eratosthenes measured it with greater precision. Oenopides also determined the value of the Great Year, that is, the shortest interval of time that is equal to both an integer number of years and an integer number of months. As the relative positions of the Sun and Moon repeat themselves after each Great Year, this offers a means to predict solar and lunar eclipses. In actual practice this is only approximately true, because the ratio of the length of the year and that of the month does not exactly match any simple mathematical fraction, and because in addition the lunar orbit varies continuously. Oenopides put the Great Year at 59 years, corresponding to 730 months. This was a good approximation, but not a perfect one, since 59 (sidereal) years are equal to 21550.1 days, while 730 (synodical) months equal 21557.3 days. The difference therefore amounts to seven days. In addition there are the interfering variatio
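The day counts quoted above are easy to check with modern mean values (a back-of-the-envelope sketch; the year and month lengths used here are modern figures, not Oenopides' own):

SIDEREAL_YEAR_DAYS = 365.2564   # modern mean sidereal year (assumed value)
SYNODIC_MONTH_DAYS = 29.5306    # modern mean synodic month (assumed value)

years_in_days = 59 * SIDEREAL_YEAR_DAYS     # about 21550.1 days
months_in_days = 730 * SYNODIC_MONTH_DAYS   # about 21557.3 days

print(round(years_in_days, 1), round(months_in_days, 1), round(months_in_days - years_in_days, 1))
# 21550.1 21557.3 7.2  :  the roughly seven-day mismatch mentioned in the text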
https://en.wikipedia.org/wiki/Goodyear%20MPP
The Goodyear Massively Parallel Processor (MPP) was a massively parallel processing supercomputer built by Goodyear Aerospace for the NASA Goddard Space Flight Center. It was designed to deliver enormous computational power at lower cost than other existing supercomputer architectures, by using thousands of simple processing elements, rather than one or a few highly complex CPUs. Development of the MPP began circa 1979; it was delivered in May 1983, and was in general use from 1985 until 1991. It was based on Goodyear's earlier STARAN array processor, a 4x256 1-bit processing element (PE) computer. The MPP was a 128x128 2-dimensional array of 1-bit wide PEs. In actuality, 132x128 PEs were configured, with a 4x128 block added for fault tolerance to substitute for up to 4 rows (or columns) of processors in the presence of problems. The PEs operated in a single instruction, multiple data (SIMD) fashion: each PE performed the same operation simultaneously, on different data elements, under the control of a microprogrammed control unit. After the MPP was retired in 1991, it was donated to the Smithsonian Institution, and is now in the collection of the National Air and Space Museum's Steven F. Udvar-Hazy Center. It was succeeded at Goddard by the MasPar MP-1 and Cray T3D massively parallel computers. Applications The MPP was initially developed for high-speed analysis of satellite images. In early tests, it was able to extract and separate different land-use areas on Landsat imagery in 18 seconds, as compared with 7 hours on a DEC VAX-11/780. Once the system was put into production use, NASA's Office of Space Science and Applications solicited proposals from scientists across the country to test and implement a wide range of computational algorithms on the MPP. 40 projects were accepted, to form the "MPP Working Group"; results of most of them were presented at the First Symposium on the Frontiers of Massively Parallel Computation, in 1986. Some examples of app
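The SIMD organization can be illustrated with a small NumPy sketch (purely illustrative; it models the 128x128 grid of 1-bit PEs, not the MPP's actual instruction set): one "instruction" is applied to every element of the array at once.

import numpy as np

# A 128x128 grid of 1-bit processing elements, modeled as an array of 0/1 values.
grid = np.random.randint(0, 2, size=(128, 128), dtype=np.uint8)
neighbour = np.roll(grid, 1, axis=1)   # each PE reads its left neighbour's bit

# One SIMD step: every PE executes the same 1-bit operation on its own data.
result = grid ^ neighbour              # e.g. a bitwise XOR carried out in lockstep

print(result.shape)                    # (128, 128): one operation, 16384 PEs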
https://en.wikipedia.org/wiki/Binomial%20transform
In combinatorics, the binomial transform is a sequence transformation (i.e., a transform of a sequence) that computes its forward differences. It is closely related to the Euler transform, which is the result of applying the binomial transform to the sequence associated with its ordinary generating function. Definition The binomial transform, T, of a sequence, {a_n}, is the sequence {s_n} defined by s_n = Σ_{k=0}^{n} (−1)^k C(n,k) a_k, where C(n,k) denotes the binomial coefficient. Formally, one may write s = Ta for the transformation, where T is an infinite-dimensional operator with matrix elements T_{nk} = (−1)^k C(n,k). The transform is an involution, that is, TT = 1 or, using index notation, Σ_k T_{nk} T_{km} = δ_{nm}, where δ_{nm} is the Kronecker delta. The original series can be regained by a_n = Σ_{k=0}^{n} (−1)^k C(n,k) s_k. The binomial transform of a sequence is just the nth forward differences of the sequence, with odd differences carrying a negative sign, namely: s_n = (−1)^n (Δ^n a)_0, where Δ is the forward difference operator. Some authors define the binomial transform with an extra sign, so that it is not self-inverse: t_n = Σ_{k=0}^{n} (−1)^{n−k} C(n,k) a_k, whose inverse is a_n = Σ_{k=0}^{n} C(n,k) t_k. In this case the former transform is called the inverse binomial transform, and the latter is just the binomial transform. This is standard usage for example in the On-Line Encyclopedia of Integer Sequences. Example Both versions of the binomial transform appear in difference tables. Consider the following difference table:
0    1    10    63    324    1485
1    9    53    261   1161
8    44   208   900
36   164  692
128  528
400
Each line is the difference of the previous line. (The n-th number in the m-th line is a_{m,n} = 3^{n−2} (2^{m+1} n^2 + 2^m (1+6m) n + 2^{m−1} · 9m^2), and the difference equation a_{m+1,n} = a_{m,n+1} − a_{m,n} holds.) The top line read from left to right is {a_n} = 0, 1, 10, 63, 324, 1485, ... The diagonal with the same starting point 0 is {t_n} = 0, 1, 8, 36, 128, 400, ... {t_n} is the noninvolutive binomial transform of {a_n}. The top line read from right to left is {b_n} = 1485, 324, 63, 10, 1, 0, ... The cross-diagonal with the same starting point 1485 is {s_n} = 1485, 1161, 900, 692, 528, 400, ... {s_n} is the involutive binomial transform of {b_n}. Ordinary generating function The transform connects the generating functions asso
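A short numerical sketch (assuming the sign conventions written out above) computes both versions of the transform on the example sequence and checks the involution property:

from math import comb

def binomial_transform(a):
    """Involutive version: s_n = sum_k (-1)^k C(n,k) a_k."""
    return [sum((-1) ** k * comb(n, k) * a[k] for k in range(n + 1)) for n in range(len(a))]

def forward_differences_diagonal(a):
    """Noninvolutive version: t_n = (Delta^n a)_0, read off the difference table's left diagonal."""
    t, row = [], list(a)
    while row:
        t.append(row[0])
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return t

a = [0, 1, 10, 63, 324, 1485]
b = list(reversed(a))

print(forward_differences_diagonal(a))                  # [0, 1, 8, 36, 128, 400]  = {t_n}
print(binomial_transform(b))                            # [1485, 1161, 900, 692, 528, 400] = {s_n}
print(binomial_transform(binomial_transform(a)) == a)   # True: the transform is an involution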
https://en.wikipedia.org/wiki/Describing%20function
In control systems theory, the describing function (DF) method, developed by Nikolay Mitrofanovich Krylov and Nikolay Bogoliubov in the 1930s, and extended by Ralph Kochenburger, is an approximate procedure for analyzing certain nonlinear control problems. It is based on quasi-linearization, which is the approximation of the non-linear system under investigation by a linear time-invariant (LTI) transfer function that depends on the amplitude of the input waveform. By definition, a transfer function of a true LTI system cannot depend on the amplitude of the input function because an LTI system is linear. Thus, this dependence on amplitude generates a family of linear systems that are combined in an attempt to capture salient features of the non-linear system behavior. The describing function is one of the few widely applicable methods for designing nonlinear systems, and is very widely used as a standard mathematical tool for analyzing limit cycles in closed-loop controllers, such as industrial process controls, servomechanisms, and electronic oscillators. The method Consider feedback around a discontinuous (but piecewise continuous) nonlinearity (e.g., an amplifier with saturation, or an element with deadband effects) cascaded with a slow stable linear system. The continuous region in which the feedback is presented to the nonlinearity depends on the amplitude of the output of the linear system. As the linear system's output amplitude decays, the nonlinearity may move into a different continuous region. This switching from one continuous region to another can generate periodic oscillations. The describing function method attempts to predict characteristics of those oscillations (e.g., their fundamental frequency) by assuming that the slow system acts like a low-pass or bandpass filter that concentrates all energy around a single frequency. Even if the output waveform has several modes, the method can still provide intuition about properties like frequency and p
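As an illustrative sketch (not taken from the article), the describing function of a static nonlinearity can be estimated numerically by driving it with a sinusoid of amplitude A and taking the ratio of the fundamental Fourier component of the output to A; the saturation element at plus/minus 1 below is chosen only as an example.

import numpy as np

def describing_function(nonlinearity, amplitude, samples=4096):
    """Estimate N(A): fundamental-harmonic gain of a static nonlinearity driven by A*sin(wt)."""
    theta = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    x = amplitude * np.sin(theta)
    y = nonlinearity(x)
    # Fundamental Fourier coefficients of the output (in-phase and quadrature parts).
    b1 = (2.0 / samples) * np.sum(y * np.sin(theta))
    a1 = (2.0 / samples) * np.sum(y * np.cos(theta))
    return (b1 + 1j * a1) / amplitude   # complex gain; essentially real for an odd memoryless element

saturation = lambda x: np.clip(x, -1.0, 1.0)

for A in (0.5, 1.0, 2.0, 5.0):
    print(A, describing_function(saturation, A))
# For A <= 1 the gain is ~1 (the element behaves linearly); for larger A the gain falls off,
# which is the amplitude dependence that the quasi-linearized "transfer function" captures.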
https://en.wikipedia.org/wiki/Interrupt%20descriptor%20table
The interrupt descriptor table (IDT) is a data structure used by the x86 architecture to implement an interrupt vector table. The IDT is used by the processor to determine the correct response to interrupts and exceptions. The details in the description below apply specifically to the x86 architecture. Other architectures have similar data structures, but may behave differently. Use of the IDT is triggered by three types of events: hardware interrupts, software interrupts, and processor exceptions, which together are referred to as interrupts. The IDT consists of 256 interrupt vectors–the first 32 (0–31 or 0x00–0x1F) of which are used for processor exceptions. Real mode In real mode, the interrupt table is called IVT (interrupt vector table). Up to the 80286, the IVT always resided at the same location in memory, ranging from 0x0000 to 0x03ff, and consisted of 256 far pointers. Hardware interrupts may be mapped to any of the vectors by way of a programmable interrupt controller. On the 80286 and later, the size and locations of the IVT can be changed in the same way as it is done with the IDT (Interrupt descriptor table) in protected mode (i.e., via the LIDT (Load Interrupt Descriptor Table Register) instruction) though it does not change the format of it. BIOS interrupts The BIOS provides simple real-mode access to a subset of hardware facilities by registering interrupt handlers. They are invoked as software interrupts with the INT assembly instruction and the parameters are passed via registers. These interrupts are used for various tasks like detecting the system memory layout, configuring VGA output and modes, and accessing the disk early in the boot process. Protected and long mode The IDT is an array of descriptors stored consecutively in memory and indexed by the vector number. It is not necessary to use all of the possible entries: it is sufficient to populate the table up to the highest interrupt vector used, and set the IDT length portion of the
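In real mode the table layout is simple enough to decode by hand: vector n occupies the four bytes at linear address n*4, stored as a 16-bit offset followed by a 16-bit segment. The snippet below is a sketch of that decoding over a made-up byte buffer, not a real memory dump:

import struct

def ivt_entry(memory: bytes, vector: int):
    """Return (segment, offset) of a real-mode IVT entry; the IVT spans 0x0000-0x03FF as 256 far pointers."""
    base = vector * 4
    offset, segment = struct.unpack_from("<HH", memory, base)   # little-endian: offset word, then segment word
    return segment, offset

# Hypothetical 1 KiB IVT image with vector 0x10 (BIOS video services) pointing at F000:F065.
ivt = bytearray(1024)
struct.pack_into("<HH", ivt, 0x10 * 4, 0xF065, 0xF000)

seg, off = ivt_entry(bytes(ivt), 0x10)
print(f"{seg:04X}:{off:04X}")   # F000:F065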
https://en.wikipedia.org/wiki/V%28D%29J%20recombination
V(D)J recombination is the mechanism of somatic recombination that occurs only in developing lymphocytes during the early stages of T and B cell maturation. It results in the highly diverse repertoire of antibodies/immunoglobulins and T cell receptors (TCRs) found in B cells and T cells, respectively. The process is a defining feature of the adaptive immune system. V(D)J recombination in mammals occurs in the primary lymphoid organs (bone marrow for B cells and thymus for T cells) and in a nearly random fashion rearranges variable (V), joining (J), and in some cases, diversity (D) gene segments. The process ultimately results in novel amino acid sequences in the antigen-binding regions of immunoglobulins and TCRs that allow for the recognition of antigens from nearly all pathogens including bacteria, viruses, parasites, and worms as well as "altered self cells" as seen in cancer. The recognition can also be allergic in nature (e.g. to pollen or other allergens) or may match host tissues and lead to autoimmunity. In 1987, Susumu Tonegawa was awarded the Nobel Prize in Physiology or Medicine "for his discovery of the genetic principle for generation of antibody diversity". Background Human antibody molecules (including B cell receptors) are composed of heavy and light chains, each of which contains both constant (C) and variable (V) regions, genetically encoded on three loci: The immunoglobulin heavy locus (IGH@) on chromosome 14, containing the gene segments for the immunoglobulin heavy chain. The immunoglobulin kappa (κ) locus (IGK@) on chromosome 2, containing the gene segments for one type (κ) of immunoglobulin light chain. The immunoglobulin lambda (λ) locus (IGL@) on chromosome 22, containing the gene segments for another type (λ) of immunoglobulin light chain. Each heavy chain or light chain gene contains multiple copies of three different types of gene segments for the variable regions of the antibody proteins. For example, the human immunoglobulin heavy
https://en.wikipedia.org/wiki/ZX8301
The ZX8301 is an Uncommitted Logic Array (ULA) integrated circuit designed for the Sinclair QL microcomputer. Also known as the "Master Chip", it provides the video display generator, the division of a 15 MHz crystal to provide the 7.5 MHz system clock, the ZX8302 register address decoder, DRAM refresh and the bus controller. The ZX8301 is IC22 on the QL motherboard. The Sinclair Research business model had always been to work toward a maximum performance to price ratio (as was evidenced by the keyboard mechanisms in the QL and earlier Sinclair models). Unfortunately, this focus on price and performance often resulted in cost cutting in the design and build of Sinclair's machines. One such cost-driven decision (failing to use a hardware buffer integrated circuit (IC) between the IC pins and the external RGB monitor connection) caused the ZX8301 to quickly develop a reputation for being fragile and easy to damage, particularly if the monitor plug was inserted or removed while the QL was powered up. Such action resulted in damage to the video circuitry and almost always required replacement of the ZX8301. When the ZX8301 was subsequently used in the International Computers Limited (ICL) One Per Desk, that design included hardware buffering, and the chip proved to be much more reliable in this configuration. See also Sinclair QL One Per Desk List of Sinclair QL clones
https://en.wikipedia.org/wiki/Tension%20%28physics%29
In physics, tension is described as the pulling force transmitted axially by means of a string, a rope, chain, or similar object, or by each end of a rod, truss member, or similar three-dimensional object; tension might also be described as the action-reaction pair of forces acting at each end of said elements. Tension can be viewed as the opposite of compression. At the atomic level, when atoms or molecules are pulled apart from each other and gain potential energy with a restoring force still existing, the restoring force might create what is also called tension. Each end of a string or rod under such tension could pull on the object it is attached to, in order to restore the string/rod to its relaxed length. Tension (as a transmitted force, as an action-reaction pair of forces, or as a restoring force) is measured in newtons in the International System of Units (or pounds-force in Imperial units). The ends of a string or other object transmitting tension will exert forces on the objects to which the string or rod is connected, in the direction of the string at the point of attachment. These forces due to tension are also called "passive forces". There are two basic possibilities for systems of objects held by strings: either acceleration is zero and the system is therefore in equilibrium, or there is acceleration, and therefore a net force is present in the system. Tension in one dimension Tension in a string is a non-negative scalar quantity. Zero tension is slack. A string or rope is often idealized as one dimension, having length but being massless with zero cross section. If there are no bends in the string, as occur with vibrations or pulleys, then tension is a constant along the string, equal to the magnitude of the forces applied by the ends of the string. By Newton's third law, these are the same forces exerted on the ends of the string by the objects to which the ends are attached. If the string curves around one or more pulleys, it will still have const
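A minimal worked example of the two cases just described (equilibrium versus nonzero acceleration), using a single mass hanging from a string; the numbers are made up for illustration and Newton's second law is the only input:

g = 9.81          # gravitational acceleration, m/s^2
m = 2.0           # mass hanging from the string, kg (illustrative value)

# Case 1: zero acceleration, so the system is in equilibrium and T balances the weight.
tension_static = m * g                    # 19.62 N

# Case 2: the support accelerates upward at a = 3 m/s^2, so the net force m*a equals T - m*g.
a = 3.0
tension_accelerating = m * (g + a)        # 25.62 N

print(tension_static, tension_accelerating)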
https://en.wikipedia.org/wiki/Reactions%20on%20surfaces
Reactions on surfaces are reactions in which at least one of the steps of the reaction mechanism is the adsorption of one or more reactants. The mechanisms for these reactions and the rate equations are of extreme importance for heterogeneous catalysis. Via scanning tunneling microscopy, it is possible to observe reactions at the solid–gas interface in real space, if the time scale of the reaction is in the correct range. Reactions at the solid–gas interface are in some cases related to catalysis. Simple decomposition If a reaction occurs through these steps: A + S ⇌ AS → Products, where A is the reactant and S is an adsorption site on the surface, and the respective rate constants for the adsorption, desorption and reaction are k1, k−1 and k2, then the global reaction rate is: r = k2 CAS = k2 θ CS, where:
r is the rate, mol·m−2·s−1
CA is the concentration of adsorbate, mol·m−3
CAS is the surface concentration of occupied sites, mol·m−2
CS is the concentration of all sites (occupied or not), mol·m−2
θ is the surface coverage (i.e. θ = CAS/CS), defined as the fraction of sites which are occupied, which is dimensionless
t is time, s
k2 is the rate constant for the surface reaction, s−1
k1 is the rate constant for surface adsorption, m3·mol−1·s−1
k−1 is the rate constant for surface desorption, s−1
CS is highly related to the total surface area of the adsorbent: the greater the surface area, the more sites and the faster the reaction. This is the reason why heterogeneous catalysts are usually chosen to have great surface areas (in the order of a hundred m2/gram). If we apply the steady state approximation to AS, then: dθ/dt = 0 = k1 CA CS (1 − θ) − k−1 CS θ − k2 CS θ, so θ = k1 CA / (k1 CA + k−1 + k2), and r = k1 k2 CA CS / (k1 CA + k−1 + k2). The result is equivalent to the Michaelis–Menten kinetics of reactions catalyzed at a site on an enzyme. The rate equation is complex, and the reaction order is not clear. In experimental work, usually two extreme cases are looked for in order to prove the mechanism. In them, the rate-determining step can be: Limiting step: adsorption/desorption The order with respect to A is 1. Examples o
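A quick numerical sketch of the rate law reconstructed above (with purely illustrative constants) shows the two limiting behaviours experimentalists look for: first order in A at low concentration, and saturation at r = k2·CS (zero order in A) at high concentration.

k1, k_minus1, k2 = 2.0, 1.0, 0.5    # illustrative rate constants (not from the article)
C_S = 1.0e-5                        # total site concentration, mol/m^2 (illustrative)

def rate(C_A):
    """r = k1*k2*C_A*C_S / (k1*C_A + k_-1 + k2), from the steady-state treatment above."""
    return k1 * k2 * C_A * C_S / (k1 * C_A + k_minus1 + k2)

for C_A in (1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0):
    print(f"C_A = {C_A:8.3f}  r = {rate(C_A):.3e}")
# At low C_A the rate grows proportionally to C_A (first order);
# at high C_A it approaches k2*C_S = 5e-6 (zero order in A).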
https://en.wikipedia.org/wiki/Physical%20coefficient
A physical coefficient is an important number that characterizes some physical property of a technical or scientific object. Stoichiometric coefficient of a chemical compound To find the stoichiometric coefficients in a chemical equation, you must balance the elements involved in it. For example, consider the formation of water, H2O. Hydrogen (H) and oxygen (O) both occur as diatomic molecules, H2 and O2. To form water, each of the two O atoms in an O2 molecule combines with one H2 molecule to give one H2O molecule. Since one O2 molecule thus consumes two H2 molecules and produces two water molecules, we put the coefficient 2 in front of both H2 and H2O. The total reaction is thus 2 H2 + O2 → 2 H2O. Examples of physical coefficients Coefficient of thermal expansion (thermodynamics) (units of K−1) - Relates the change in temperature to the change in a material's dimensions. Partition coefficient (KD) (chemistry) - The ratio of concentrations of a compound in two phases of a mixture of two immiscible solvents at equilibrium. Hall coefficient (electrical physics) - Relates a magnetic field applied to an element to the voltage created, the amount of current and the element thickness. It is a characteristic of the material from which the conductor is made. Lift coefficient (CL or CZ) (aerodynamics) (dimensionless) - Relates the lift generated by an airfoil with the dynamic pressure of the fluid flow around the airfoil, and the planform area of the airfoil. Ballistic coefficient (BC) (aerodynamics) (units of kg/m2) - A measure of a body's ability to overcome air resistance in flight. BC is a function of mass, diameter, and drag coefficient. Transmission coefficient (quantum mechanics) (dimensionless) - Represents the probability flux of a transmitted wave relative to that of an incident wave. It is often used to describe the probability of a particle
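A tiny sanity check of the balanced equation above, counting atoms on each side (a throwaway sketch):

from collections import Counter

def count_atoms(species_with_coeffs):
    """Sum atom counts over (coefficient, formula-as-dict) pairs."""
    total = Counter()
    for coeff, formula in species_with_coeffs:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

left = [(2, {"H": 2}), (1, {"O": 2})]            # 2 H2 + O2
right = [(2, {"H": 2, "O": 1})]                  # 2 H2O

print(count_atoms(left) == count_atoms(right))   # True: 4 H and 2 O on each side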
https://en.wikipedia.org/wiki/Prefetcher
The Prefetcher is a component of Microsoft Windows which was introduced in Windows XP. It is a component of the Memory Manager that can speed up the Windows boot process and shorten the amount of time it takes to start up programs. It accomplishes this by caching files that are needed by an application to RAM as the application is launched, thus consolidating disk reads and reducing disk seeks. This feature was covered by US patent 6,633,968. Since Windows Vista, the Prefetcher has been extended by SuperFetch and ReadyBoost. SuperFetch attempts to accelerate application launch times by monitoring and adapting to application usage patterns over periods of time, and caching the majority of the files and data needed by them into memory in advance so that they can be accessed very quickly when needed. ReadyBoost (when enabled) uses external memory like a USB flash drive to extend the system cache beyond the amount of RAM installed in the computer. ReadyBoost also has a component called ReadyBoot that replaces the Prefetcher for the boot process if the system has 700 MB or more of RAM. Overview When a Windows system boots, components of many files need to be read into memory and processed. Often different parts of the same file (e.g. Registry hives) are loaded at different times. As a result, a significant amount of time is spent 'jumping' from file to file and back again multiple times, even though a single access would be more efficient. The prefetcher works by watching what data is accessed during the boot process (including data read from the NTFS Master File Table), and recording a trace file of this activity. The boot fetcher will continue to watch for such activity until 30 seconds after the user's shell has started, or until 60 seconds after all services have finished initializing, or until 120 seconds after the system has booted, whichever elapses first. Future boots can then use the information recorded in this trace file to load code and data in a more eff
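The idea of recording an access trace and replaying it on the next launch can be sketched in a few lines (a toy model only; it has nothing to do with the actual on-disk Prefetch trace format, which is not documented here):

import json
from pathlib import Path

TRACE = Path("app.trace.json")   # hypothetical trace file for one application

def launch(app_files, read_file):
    """Replay last launch's trace (prefetch), then record which files this launch actually touched."""
    if TRACE.exists():
        for name in json.loads(TRACE.read_text()):
            read_file(name)                     # warm the cache with files seen last time
    touched = []
    for name in app_files:                      # the "real" launch: note every file it opens
        read_file(name)
        touched.append(name)
    TRACE.write_text(json.dumps(touched))       # trace used to prefetch on the next launch

launch(["app.exe", "config.ini", "strings.dll"], read_file=lambda name: None)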
https://en.wikipedia.org/wiki/Life%20history%20theory
Life history theory is an analytical framework designed to study the diversity of life history strategies used by different organisms throughout the world, as well as the causes and results of the variation in their life cycles. It is a theory of biological evolution that seeks to explain aspects of organisms' anatomy and behavior by reference to the way that their life histories—including their reproductive development and behaviors, post-reproductive behaviors, and lifespan (length of time alive)—have been shaped by natural selection. A life history strategy is the "age- and stage-specific patterns" and timing of events that make up an organism's life, such as birth, weaning, maturation, death, etc. These events, notably juvenile development, age of sexual maturity, first reproduction, number of offspring and level of parental investment, senescence and death, depend on the physical and ecological environment of the organism. The theory was developed in the 1950s and is used to answer questions about topics such as organism size, age of maturation, number of offspring, life span, and many others. In order to study these topics, life history strategies must be identified, and then models are constructed to study their effects. Finally, predictions about the importance and role of the strategies are made, and these predictions are used to understand how evolution affects the ordering and length of life history events in an organism's life, particularly the lifespan and period of reproduction. Life history theory draws on an evolutionary foundation, and studies the effects of natural selection on organisms, both throughout their lifetime and across generations. It also uses measures of evolutionary fitness to determine if organisms are able to maximize or optimize this fitness, by allocating resources to a range of different demands throughout the organism's life. It serves as a method to investigate further the "many layers of complexity of organisms and their worl
https://en.wikipedia.org/wiki/Sarah%20Bryant%20%28Virtua%20Fighter%29
Sarah Bryant is a character in the Virtua Fighter series of fighting games by Sega. She is a college student from San Francisco, California, who debuted in the original Virtua Fighter, brainwashed to try and kill her brother, and later tries to surpass him while seeking to take down the organization responsible. She has appeared in every game in the series including spinoff titles, and made several guest appearances in other games, notably in Tecmo Koei's Dead or Alive 5 as a playable character. In addition, she has been featured in various print media, as well as the Virtua Fighter anime. Originally voiced by Lynn Harris, she was designed by Seiichi Ishii alongside director Yu Suzuki after a brainstorming session, and inspired by Sarah Connor from the Terminator franchise. Her primary outfit, designed to serve as both combat-ready attire and a distraction for opponents, has remained consistent throughout the series with minimal changes. Sarah has been cited as one of the first Western female characters in Japanese fighting games, receiving much praise for her looks and character, and noted for her influence on the designs of later similar characters in other fighting game franchises. However, discussion and criticism have also arisen around the sexualization of her character done by both Sega themselves and gaming publications utilizing her image. Conception and design Created during a brainstorming session by development team Sega AM2, her initial name was Anego, signifying her role as Jacky Bryant's sibling, who at this point was called Aniki. Her designer, Seiichi Ishii, stated she was inspired by the Terminator character Sarah Connor, though series creator Yu Suzuki refused to comment himself when asked if she was based on anyone. He did, however, label her his favorite character due to her ease of use and the fact that she "fights aggressively: she does not stand still waiting for the opponent's move, but moves ahead. It fully reflects my personality." When asked what feeli
https://en.wikipedia.org/wiki/SSLeay
SSLeay is an open-source SSL implementation. It was developed by Eric Andrew Young and Tim J. Hudson as an SSL 3.0 implementation using RC2 and RC4 encryption. The recommended pronunciation is to say each letter: s-s-l-e-a-y. It was first developed by Eric A. Young ("eay"). SSLeay also included an implementation of DES from earlier work by Eric Young, which was believed to be the first open-source implementation of DES. Development of SSLeay unofficially ended, and volunteers forked the project under the OpenSSL banner, around December 1998, when Tim and Eric both commenced working for RSA Security in Australia. SSLeay SSLeay was developed by Eric A. Young, starting in 1995. Windows support was added by Tim J. Hudson. Patches to open source applications to support SSL using SSLeay were produced by Tim Hudson. Development by Young and Hudson ceased in 1998. The SSLeay library and codebase are licensed under the SSLeay License, a form of free software license. The SSLeay License is a BSD-style open-source license, almost identical to a four-clause BSD license. SSLeay supports X.509v3 certificates and PKCS#10 certificate requests. It supports SSL2 and SSL3, as well as TLSv1. The first secure FTP implementation was created under BSD using SSLeay by Tim Hudson. The first open source Certifying Authority implementation was created with CGI scripts using SSLeay by Clifford Heath. Forks OpenSSL is a fork and successor project to SSLeay and has a similar interface to it. After Young and Hudson joined RSA Corporation, volunteers forked SSLeay and continued development as OpenSSL. BSAFE SSL-C is a fork of SSLeay developed by Eric A. Young and Tim J. Hudson for RSA Corporation. It was released as part of the BSAFE product family.
https://en.wikipedia.org/wiki/Polynomial%20transformation
In mathematics, a polynomial transformation consists of computing the polynomial whose roots are a given function of the roots of a polynomial. Polynomial transformations such as Tschirnhaus transformations are often used to simplify the solution of algebraic equations. Simple examples Translating the roots Let P(x) = a_n x^n + a_{n−1} x^{n−1} + ... + a_0 be a polynomial, and z_1, ..., z_n be its complex roots (not necessarily distinct). For any constant c, the polynomial whose roots are z_1 + c, ..., z_n + c is Q(x) = P(x − c). If the coefficients of P are integers and the constant c = p/q is a rational number, the coefficients of Q may be not integers, but the polynomial q^n Q(x) has integer coefficients and has the same roots as Q. A special case is when c = a_{n−1}/(n a_n): the resulting polynomial Q does not have any term in x^{n−1}. Reciprocals of the roots Let P(x) = a_n x^n + ... + a_0 be a polynomial. The polynomial whose roots are the reciprocals of the roots of P is its reciprocal polynomial Q(x) = x^n P(1/x) = a_0 x^n + a_1 x^{n−1} + ... + a_n. Scaling the roots Let P(x) be a polynomial of degree n, and k be a non-zero constant. A polynomial whose roots are the product by k of the roots of P is Q(x) = k^n P(x/k). The factor k^n appears here because, if k and the coefficients of P are integers or belong to some integral domain, the same is true for the coefficients of Q. In the special case where k = a_n, the leading coefficient of P, all coefficients of Q are multiples of k, and Q/k is a monic polynomial, whose coefficients belong to any integral domain containing k and the coefficients of P. This polynomial transformation is often used to reduce questions on algebraic numbers to questions on algebraic integers. Combining this with a translation of the roots by a_{n−1}/(n a_n) makes it possible to reduce any question on the roots of a polynomial, such as root-finding, to a similar question on a simpler polynomial, which is monic and does not have a term of degree n − 1. For examples of this, see Cubic function § Reduction to a depressed cubic or Quartic function § Converting to a depressed quartic. Transformation by a rational function All preceding examples are polynomial transformations by a rational function, also called Tschirnhaus transformations. Let be
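A small numerical check of the translation and scaling rules above (a sketch using NumPy's polynomial routines; the cubic P is an arbitrary example, not from the article):

import numpy as np

# Example polynomial P(x) = x^3 - 6x^2 + 11x - 6, with roots 1, 2, 3 (chosen for illustration).
P = np.poly1d([1.0, -6.0, 11.0, -6.0])
c, k, n = 5.0, 2.0, 3

# Translation: Q(x) = P(x - c) has roots z_i + c.
Q_translated = P(np.poly1d([1.0, -c]))          # polynomial composition P(x - c)
print(np.sort(Q_translated.roots.real))         # approximately [6. 7. 8.]

# Scaling: Q(x) = k^n * P(x / k) has roots k * z_i.
Q_scaled = np.poly1d([coef * k ** i for i, coef in enumerate(P.coeffs)])
print(np.sort(Q_scaled.roots.real))             # approximately [2. 4. 6.]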
https://en.wikipedia.org/wiki/Bosonic%20field
In quantum field theory, a bosonic field is a quantum field whose quanta are bosons; that is, they obey Bose–Einstein statistics. Bosonic fields obey canonical commutation relations, as distinct from the canonical anticommutation relations obeyed by fermionic fields. Examples include scalar fields, describing spin-0 particles such as the Higgs boson, and gauge fields, describing spin-1 particles such as the photon. Basic properties Free (non-interacting) bosonic fields obey canonical commutation relations. Those relations also hold for interacting bosonic fields in the interaction picture, where the fields evolve in time as if free and the effects of the interaction are encoded in the evolution of the states. It is these commutation relations that imply Bose–Einstein statistics for the field quanta. Examples Examples of bosonic fields include scalar fields, gauge fields, and symmetric 2-tensor fields, which are characterized by their covariance under Lorentz transformations and have spins 0, 1 and 2, respectively. Physical examples, in the same order, are the Higgs field, the photon field, and the graviton field. Of the last two, only the photon field can be quantized using the conventional methods of canonical or path integral quantization. This has led to the theory of quantum electrodynamics, one of the most successful theories in physics. Quantization of gravity, on the other hand, is a long-standing problem that has led to development of theories such as string theory and loop quantum gravity. Spin and statistics The spin–statistics theorem implies that quantization of local, relativistic field theories in 3+1 dimensions may lead either to bosonic or fermionic quantum fields, i.e., fields obeying commutation or anti-commutation relations, according to whether they have integer or half-integer spin, respectively. Thus bosonic fields are one of the two theoretically possible types of quantum field, namely those corresponding to particles with integer spin.
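For a free real scalar field, for example, the canonical commutation relations referred to above take the familiar equal-time form (standard textbook expressions, quoted here for orientation rather than taken from the article):

[\hat\phi(t,\mathbf{x}),\, \hat\pi(t,\mathbf{y})] = i\hbar\,\delta^{(3)}(\mathbf{x}-\mathbf{y}), \qquad [\hat\phi(t,\mathbf{x}),\, \hat\phi(t,\mathbf{y})] = [\hat\pi(t,\mathbf{x}),\, \hat\pi(t,\mathbf{y})] = 0,

or, in terms of mode (creation and annihilation) operators,

[\hat a_{\mathbf{k}},\, \hat a^{\dagger}_{\mathbf{k}'}] = \delta^{(3)}(\mathbf{k}-\mathbf{k}'), \qquad [\hat a_{\mathbf{k}},\, \hat a_{\mathbf{k}'}] = 0.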
https://en.wikipedia.org/wiki/Angel%20dusting
Angel dusting is the misleading marketing practice of including a minuscule amount of an active ingredient in a cosmetic, cosmeceutical, dietary supplement, food product, or nutraceutical, insufficient to give any measurable benefit. The advertising materials may claim that the ingredient is helpful and that the ingredient is contained in the product, both of which are true. However, no claim is made that the product contains enough of the active ingredient to have an effect – this is just assumed by the purchaser. See also Homeopathy False advertising List of topics characterized as pseudoscience
https://en.wikipedia.org/wiki/Threshold%20expression
The mitochondrial threshold effect is a phenomenon in which the amount of mutated mtDNA surpasses a certain threshold, causing the electron transport chain and ATP synthesis of a mitochondrion to fail. There is no single fixed number that must be surpassed; rather, the effect is associated with an increasing proportion of mutated mtDNA. When 60-80% of the mtDNA present is mutated, that is generally said to be the threshold level, although the exact level also depends on the individual, the specific organ in question and the specific mutation. There are three specific types of mitochondrial threshold effects: phenotypic threshold effect, biochemical threshold effect and translational threshold effect. Threshold expression is a phenomenon in which phenotypic expression of a mitochondrial disease within an organ system occurs when the severity of the mutation, the relative number of mutant mtDNA, and the reliance of the organ system on oxidative phosphorylation combine in such a way that ATP production of the tissue falls below the level required by the tissue. The phenotype may be expressed even if the percentage of mutant mtDNA is below 50%, if the mutation is severe enough. Phenotypic threshold effect The phenotypic threshold effect occurs when there is a certain amount of wild-type mtDNA present in the mitochondrion which is able to balance out the mutated mtDNA. As a result, the phenotype is normal. However, if the amount of wild-type mtDNA decreases and the amount of mutant mtDNA increases, resulting in an imbalance between the two, the threshold level is crossed, which causes complications. This occurs because the wild-type mtDNA present are able to keep the electron transport chain and ATP synthesis functioning despite there being only a small number of them present. They are able to counterbalance the mutated mtDNA; however, when their number drops below the threshold level, the mutant mtDNA take over. See also Heteroplasmy
https://en.wikipedia.org/wiki/Nectar
Nectar is a sugar-rich liquid produced by plants in glands called nectaries or nectarines, either within the flowers with which it attracts pollinating animals, or by extrafloral nectaries, which provide a nutrient source to animal mutualists, which in turn provide herbivore protection. Common nectar-consuming pollinators include mosquitoes, hoverflies, wasps, bees, butterflies and moths, hummingbirds, honeyeaters and bats. Nectar plays a crucial role in the foraging economics and evolution of nectar-eating species; for example, nectar foraging behavior is largely responsible for the divergent evolution of the African honey bee, A. m. scutellata and the western honey bee. Nectar is an economically important substance as it is the sugar source for honey. It is also useful in agriculture and horticulture because the adult stages of some predatory insects feed on nectar. For example, a number of parasitoid wasps (e.g. the social wasp species Apoica flavissima) rely on nectar as a primary food source. In turn, these wasps then hunt agricultural pest insects as food for their young. Etymology Nectar is derived from Greek νεκταρ, the fabled drink of eternal life. Some derive the word from νε- or νη- "not" plus κτα- or κτεν- "kill", meaning "unkillable", thus "immortal". The common use of the word "nectar" to refer to the "sweet liquid in flowers", is first recorded in AD 1600. Floral nectaries A nectary or nectarine is floral tissue found in different locations in the flower and is one of several secretory floral structures, including elaiophores and osmophores, producing nectar, oil and scent respectively. The function of these structures is to attract potential pollinators, which may include insects, including bees and moths, and vertebrates such as hummingbirds and bats. Nectaries can occur on any floral part, but they may also represent a modified part or a novel structure. The different types of floral nectaries include: receptacle (receptacular: extrastaminal
https://en.wikipedia.org/wiki/PGPDisk
PGP Virtual Disk is a disk encryption system that allows one to create a virtual encrypted disk within a file. Older versions for Windows NT were freeware (for example, bundled with PGP v6.0.2i; and with some of the CKT builds of PGP). These are still available for download, but no longer maintained. Today, PGP Virtual Disk is available as part of the PGP Desktop product family, running on Windows 2000/XP/Vista, and Mac OS X. See also Disk encryption software Comparison of disk encryption software United States v. Boucher – federal criminal case involving PGPDisk-protected data
https://en.wikipedia.org/wiki/Endemic%20Bird%20Areas%20of%20the%20World
Endemic Bird Areas of the World: Priorities for Biodiversity Conservation represents an effort to document in detail the endemic biodiversity conservation importance of the world's Endemic Bird Areas. The authors are Alison J. Stattersfield, Michael J. Crosby, Adrian J. Long, and David C. Wege, with a foreword by Queen Noor of Jordan. Endemic Bird Areas of the World: Priorities for Biodiversity Conservation contains 846 pages, and is a 1998 publication by Birdlife International, No. 7 in their Birdlife Conservation Series. Six Introductory Sections The book has six introductory sections: "Biodiversity and Priority setting" "Identifying Endemic Bird Areas" "Global Analyses" "The Prioritization of Endemic Bird Areas" "The Conservation Relevance of Endemic Bird Areas" "Endemic Bird Areas as Targets for Conservation Action" Six Regional Introductions These are then followed by six Regional Introductions, in which Endemic Bird Areas are grouped into six major regions: North and Central America South America Africa, Europe, and the Middle East Continental Asia South-east Asian Islands, New Guinea and Australia Pacific Islands Endemic Bird Areas The bulk of the book consists of accounts of each of the 218 Endemic Bird Areas. Each account contains the following information: summary statistics about the EBA A "General Characteristics" section A section giving an overview of the restricted-range endemic bird species found in the EBA A Threats and Conservation section describing the threats posed to the EBA's biodiversity interest, and any significant measures which are in place to counter these An annotated list of the restricted-range endemics found in the EBA Secondary Bird Areas The book concludes with a short section giving brief details of 138 secondary areas, again grouped into the six regions. Details Endemic Bird Areas of the World: Priorities for Biodiversity Conservation follows on from work presented in the 1992 publication Putting biodiver
https://en.wikipedia.org/wiki/Replica%20trick
In the statistical physics of spin glasses and other systems with quenched disorder, the replica trick is a mathematical technique based on the application of the formula: ln Z = lim_{n→0} (Z^n − 1)/n, or: ln Z = lim_{n→0} ∂(Z^n)/∂n, where Z is most commonly the partition function, or a similar thermodynamic function. It is typically used to simplify the calculation of the disorder average of ln Z, the expected value of ln Z, reducing the problem to calculating the disorder average of Z^n, where n is assumed to be an integer. This is physically equivalent to averaging over n copies or replicas of the system, hence the name. The crux of the replica trick is that while the disorder averaging is done assuming n to be an integer, to recover the disorder-averaged logarithm one must send n continuously to zero. This apparent contradiction at the heart of the replica trick has never been formally resolved; however, in all cases where the replica method can be compared with other exact solutions, the methods lead to the same results. (A natural sufficient rigorous proof that the replica trick works would be to check that the assumptions of Carlson's theorem hold, especially that the quantity being continued in n is of exponential type less than π.) It is occasionally necessary to require the additional property of replica symmetry breaking (RSB) in order to obtain physical results, which is associated with the breakdown of ergodicity. General formulation The trick is generally used for computations involving analytic functions (functions that can be expanded in power series). Expand such a function f(Z) using its power series, f(Z) = Σ_k c_k Z^k, into powers of Z, or in other words replicas of Z, and perform the same computation which is to be done on f(Z), using the powers of Z. A particular case which is of great use in physics is in averaging the thermodynamic free energy, F = −k_B T ln Z, over values of the quenched disorder with a certain probability distribution, typically Gaussian. The partition function is then a function of the disorder variables. Notice that if we were calculating just the disorder average of Z^n (or more generally, of any power of Z) and not of its logarithm, which is what we actually want to average, the resulting integral (assuming a Gaus
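The basic identity behind the trick, ln Z = lim_{n→0} (Z^n − 1)/n, is easy to check numerically (a trivial sketch, using a scalar Z rather than an actual disordered partition function):

import math

Z = 3.7   # stand-in for a partition function value (illustrative number)

for n in (1.0, 0.1, 0.01, 0.001, 1e-6):
    estimate = (Z ** n - 1.0) / n
    print(f"n = {n:<8} (Z^n - 1)/n = {estimate:.6f}")

print("ln Z      =", f"{math.log(Z):.6f}")   # the limit n -> 0 recovers the logarithm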
https://en.wikipedia.org/wiki/WNYZ-LD
WNYZ-LD is a low-power television station in New York City, owned by K Media. It broadcasts on VHF channel 6, commonly known as an "FM6 operation" because the audio portion of the signal lies at 87.75 MHz, receivable by analog FM radios tuned to that frequency. Throughout its existence, the station has operated closer to a radio station than a television station. WNYZ-LD broadcasts video, usually silent films, which are repeated throughout the day to fulfill the Federal Communications Commission (FCC) requirement that video be broadcast on the licensed frequency. The station airs this programming without commercials, while viewers hear the audio of WWRU out of Jersey City, New Jersey. History As W33BS The station originated in 1987. It first signed on in 1998 as W33BS in Darien, Connecticut; later as UHF channel 33. As WNYZ-LP The station was moved to VHF channel 6 in 2003 and the call sign was changed to WNYZ-LP. At that time the station was re-licensed to New York City. The station's original owner, Reverend Dr. Carrie L. Thomas, sold the station to the now defunct Island Broadcasting Company after its transition to channel 6. The new owner dropped its religious format, and began operating WNYZ as an FM radio station. Because the New York City FM radio dial is significantly crowded, the market had not added a station to the FM band since 1985. This rather unconventional work-around effectively extended the available FM band in the city. The audio programming broadcast over WNYZ was originally Russian pop music. The station was branded as "Radio Everything". Brief digital operation In November 2008, Island Broadcasting installed an Axcera DT325B digital VHF transmitter with the Axciter/Bandwidth Enhancement Technology (BET) option, which permitted WNYZ-LP to simultaneously transmit a single 480i SD digital stream using virtual channel 1.1, along with the analog audio carrier on 87.75 MHz. This allowed the station to serve both its radio and television
https://en.wikipedia.org/wiki/Albert%20Einstein%20Medal
The Albert Einstein Medal is an award presented by the Albert Einstein Society in Bern. First given in 1979, the award is presented to people for "scientific findings, works, or publications related to Albert Einstein" each year. Recipients Source: Einstein Society 2020: Event Horizon Telescope (EHT) scientific collaboration 2019: Clifford Martin Will 2018: Juan Martín Maldacena 2017: LIGO Scientific Collaboration and the Virgo Collaboration 2016: Alexei Yuryevich Smirnov 2015: Stanley Deser and Charles Misner 2014: Tom W. B. Kibble 2013: Roy Kerr 2012: Alain Aspect 2011: Adam Riess, Saul Perlmutter 2010: Hermann Nicolai 2009: Kip Stephen Thorne 2008: Beno Eckmann 2007: Reinhard Genzel 2006: Gabriele Veneziano 2005: Murray Gell-Mann 2004: Michel Mayor 2003: George F. Smoot 2001: Johannes Geiss, Hubert Reeves 2000: Gustav Tammann 1999: Friedrich Hirzebruch 1998: Claude Nicollier 1996: Thibault Damour 1995: Chen Ning Yang 1994: Irwin Shapiro 1993: Max Flückiger, Adolf Meichle 1992: Peter Bergmann 1991: Joseph Hooton Taylor, Jr. 1990: Roger Penrose 1989: Markus Fierz 1988: John Archibald Wheeler 1987: Jeanne Hersch 1986: Rudolf Mössbauer 1985: Edward Witten 1984: Victor Weisskopf 1983: Hermann Bondi 1982: Friedrich Traugott Wahlen 1979: Stephen Hawking See also Albert Einstein Award, Lewis and Rosa Strauss Memorial Fund Albert Einstein World Award of Science, World Cultural Council Einstein Prize, American Physical Society List of physics awards UNESCO Albert Einstein medal, United Nations Educational, Scientific and Cultural Organization
https://en.wikipedia.org/wiki/SAGE%20KE
The Science of Aging Knowledge Environment (SAGE KE) was an online scientific resource provided by the American Association for the Advancement of Science (AAAS). History and organization The American Association for the Advancement of Science established a collaboration with Stanford University Libraries and The Center for Resource Economics/Island Press (Island Press) in 1996 to find means to utilize internet-based technologies to enhance access to scientific information and improve the effectiveness of information transfer. The collaborative coined the term Knowledge Environment (KE) to describe the collection of electronic networking tools they were seeking to develop. SAGE KE is the third in a series of Knowledge Environments developed by Science and AAAS, after the Signal Transduction Knowledge Environment (STKE) and AIDScience. Funding for SAGE KE comes from The Ellison Medical Foundation, founded and supported by Oracle Corporation CEO Larry Ellison. SAGE KE published its final issue on 28 June 2006 due to lack of funding. The interactive content was discontinued during the summer of 2006, leaving the SAGE KE site as an archive by August 2006. Activities The focus of SAGE KE was to provide timely access to information about advances on basic mechanisms of aging and age-related diseases through the internet, to provide searchable databases of information on aging and to provide an active environment in which biogerontologists could share and debate their understandings. Ouroboros Ouroboros is a WordPress community weblog devoted to research in the biology of aging. It was established in July 2006 in reaction to the termination of the SAGE KE. The primary mission of the site is to provide timely commentary and review of recently published articles in the scholarly literature, either directly or indirectly related to aging. Articles on the site discuss a range of scientific topics, including Alzheimer's disease, bioinformatics, calorie restriction, regul
https://en.wikipedia.org/wiki/Surfer%27s%20ear
Surfer's ear is the common name for an exostosis or abnormal bone growth within the ear canal. Surfer's ear is not the same as swimmer's ear, although infection can result as a side effect. Irritation from cold wind and water exposure causes the bone surrounding the ear canal to develop lumps of new bony growth which constrict the ear canal. Where the ear canal is actually blocked by this condition, water and wax can become trapped and give rise to infection. The condition is so named due to its prevalence among cold water surfers. Warm water surfers are also at risk for exostosis due to the evaporative cooling caused by wind and the presence of water in the ear canal. Most avid surfers have at least some mild bone growths (exostoses), causing little to no problems. The condition is progressive, making it important to take preventive measures early, preferably whenever surfing. The condition is not limited to surfing and can occur in any activity with cold, wet, windy conditions such as windsurfing, kayaking, sailing, jet skiing, kitesurfing, and diving. Signs and symptoms In general, one ear will be somewhat worse than the other due to the prevailing wind direction of the area surfed or the side that most often strikes the wave first. Decreased hearing or hearing loss, temporary or ongoing Increased prevalence of ear infections, causing ear pain Difficulty evacuating debris or water from the ear causing a plugging sensation Cause The majority of patients present in their mid-30s to late 40s. This is likely due to a combination of the slow growth of the bone and the decreased participation in activities associated with surfer's ear past the 30s. However, surfer's ear is possible at any age and is directly proportional to the amount of time spent in cold, wet, windy weather without adequate protection. The normal ear canal is approximately 7 mm in diameter and has a volume of approximately 0.8 ml (approximately one-sixth of a teaspoon). As the condition progr
https://en.wikipedia.org/wiki/Electronic%20waste
Electronic waste or e-waste describes discarded electrical or electronic devices. It is also commonly known as waste electrical and electronic equipment (WEEE) or end-of-life (EOL) electronics. Used electronics which are destined for refurbishment, reuse, resale, salvage, recycling through material recovery, or disposal are also considered e-waste. Informal processing of e-waste in developing countries can lead to adverse human health effects and environmental pollution. The growing consumption of electronic goods due to the Digital Revolution and innovations in science and technology, such as bitcoin, has led to a global e-waste problem and hazard. The rapid exponential increase of e-waste is due to frequent new model releases and unnecessary purchases of electrical and electronic equipment (EEE), short innovation cycles and low recycling rates, and a drop in the average life span of computers. Electronic scrap components, such as CPUs, contain potentially harmful materials such as lead, cadmium, beryllium, or brominated flame retardants. Recycling and disposal of e-waste may involve significant risk to the health of workers and their communities. Definition E-waste or electronic waste is created when an electronic product is discarded after the end of its useful life. The rapid expansion of technology and the consumption-driven society result in the creation of a very large amount of e-waste. In the US, the United States Environmental Protection Agency (EPA) classifies e-waste into ten categories: large household appliances, including cooling and freezing appliances; small household appliances; IT equipment, including monitors; consumer electronics, including televisions; lamps and luminaires; toys; tools; medical devices; monitoring and control instruments; and automatic dispensers. These include used electronics which are destined for reuse, resale, salvage, recycling, or disposal as well as re-usables (working and repairable electronics) and secondary ra
https://en.wikipedia.org/wiki/Sungazing
Sungazing is the unsafe practice of looking directly at the Sun. It is sometimes done as part of a spiritual or religious practice, most often near dawn or dusk. The human eye is very sensitive, and exposure to direct sunlight can lead to solar retinopathy, pterygium, cataracts, and often blindness. Studies have shown that even when viewing a solar eclipse the eye can still be exposed to harmful levels of ultraviolet radiation. Movements Referred to as sunning by William Horatio Bates as one of a series of exercises included in his Bates method, it became a popular form of alternative therapy in the early 20th century. His methods were widely debated at the time but ultimately discredited for lack of scientific rigor. The British Medical Journal reported in 1967 that "Bates (1920) advocated prolonged sun-gazing as the treatment of myopia, with disastrous results". See also Inedia (breatharianism) Joseph Plateau Scientific skepticism
https://en.wikipedia.org/wiki/Jefimenko%27s%20equations
In electromagnetism, Jefimenko's equations (named after Oleg D. Jefimenko) give the electric field and magnetic field due to a distribution of electric charges and electric current in space, that takes into account the propagation delay (retarded time) of the fields due to the finite speed of light and relativistic effects. Therefore they can be used for moving charges and currents. They are the particular solutions to Maxwell's equations for any arbitrary distribution of charges and currents. Equations Electric and magnetic fields Jefimenko's equations give the electric field E and magnetic field B produced by an arbitrary charge or current distribution, of charge density ρ and current density J: where r′ is a point in the charge distribution, r is a point in space, and is the retarded time. There are similar expressions for D and H. These equations are the time-dependent generalization of Coulomb's law and the Biot–Savart law to electrodynamics, which were originally true only for electrostatic and magnetostatic fields, and steady currents. Origin from retarded potentials Jefimenko's equations can be found from the retarded potentials φ and A: which are the solutions to Maxwell's equations in the potential formulation, then substituting in the definitions of the electromagnetic potentials themselves: and using the relation replaces the potentials φ and A by the fields E and B. Heaviside–Feynman formula The Heaviside–Feynman formula, also known as the Jefimenko–Feynman formula, can be seen as the point-like electric charge version of Jefimenko's equations. Actually, it can be (non trivially) deduced from them using Dirac functions, or using the Liénard-Wiechert potentials. It is mostly known from The Feynman Lectures on Physics, where it was used to introduce and describe the origin of electromagnetic radiation. The formula provides a natural generalization of the Coulomb's law for cases where the source charge is moving: Here, and are the electri
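For reference, the two field expressions referred to above are standardly written as follows (this is the common SI-units form found in electrodynamics texts, not text recovered from the excerpt), with $\mathbf{r}'$ ranging over the source distribution and $t_r = t - |\mathbf{r}-\mathbf{r}'|/c$ the retarded time:

$$\mathbf{E}(\mathbf{r},t)=\frac{1}{4\pi\varepsilon_0}\int\left[\frac{\rho(\mathbf{r}',t_r)}{|\mathbf{r}-\mathbf{r}'|^3}(\mathbf{r}-\mathbf{r}')+\frac{1}{c\,|\mathbf{r}-\mathbf{r}'|^2}\frac{\partial\rho(\mathbf{r}',t_r)}{\partial t}(\mathbf{r}-\mathbf{r}')-\frac{1}{c^2|\mathbf{r}-\mathbf{r}'|}\frac{\partial\mathbf{J}(\mathbf{r}',t_r)}{\partial t}\right]\mathrm{d}^3\mathbf{r}'$$

$$\mathbf{B}(\mathbf{r},t)=\frac{\mu_0}{4\pi}\int\left[\frac{\mathbf{J}(\mathbf{r}',t_r)}{|\mathbf{r}-\mathbf{r}'|^3}\times(\mathbf{r}-\mathbf{r}')+\frac{1}{c\,|\mathbf{r}-\mathbf{r}'|^2}\frac{\partial\mathbf{J}(\mathbf{r}',t_r)}{\partial t}\times(\mathbf{r}-\mathbf{r}')\right]\mathrm{d}^3\mathbf{r}'$$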
https://en.wikipedia.org/wiki/Non-pesticide%20management
Non-pesticidal Management (NPM) describes various pest-control techniques which do not rely on pesticides. It is used in organic production of foodstuff, as well as in other situations in which the introduction of toxins is undesirable. Instead of the use of synthetic toxins, pest control is achieved by biological means. Some examples of Non-Pesticidal Management techniques include: Introduction of natural predators. Use of naturally occurring insecticides, such as Neem tree products, Margosa, Tulsi / Basil Leaf, Citrus Oil, Eucalyptus Oil, Onion, Garlic spray, Essential Oils. These are also referred to as organic pesticides. Use of trap crops which attract the insects away from the fields. The trap crops are regularly checked and pests are manually removed. Pest larvae which were killed by viruses can be crushed and sprayed over fields, thus killing the remaining larvae. Field sanitation. Timely sowing. Nutrient management. Maintaining a proper plant population. Soil solarisation. Deep summer ploughing. Over the years, insects have withstood natural calamities and survived successfully. They are able to develop resistance to the chemical pesticides and insecticides used by farmers. To be successful, farmers should be knowledgeable and able to identify various crop pests and their natural enemies (farmer-friendly insects). Farmers should recognize different stages of insects and their behavior. The efforts to minimize pests should aim at restoring the natural balance of insects in the crop ecosystem, not at eliminating the pest. Principles of NPM Encouraging natural processes in the environment The crop ecosystem should be made diverse by growing intercrops, trap crops and border crops in place of monocropping. Once insecticide sprays are stopped, the natural enemies of crop pests gradually establish themselves and exercise control over the pests, which can be enhanced with botanical extracts like NSKE, chilli garlic extract, cattle dung urine decoction etc. Management skill Selecting a crop based on soil, wate
https://en.wikipedia.org/wiki/Dick%20de%20Jongh
Dick Herman Jacobus de Jongh (born 19 October 1939, Enschede) is a Dutch logician and mathematician and a retired professor at the University of Amsterdam. He received his PhD degree in 1968 from the University of Wisconsin–Madison under supervision of Stephen Kleene with a dissertation titled Investigations on the Intuitionistic Propositional Calculus. De Jongh is mostly known for his work on proof theory, provability logic and intuitionistic logic. De Jongh is a member of the group collectively publishing under the pseudonym L. T. F. Gamut. In 2004, on the occasion of his retirement, the Institute for Logic, Language and Computation at the University of Amsterdam published a festschrift in his honor.
https://en.wikipedia.org/wiki/Frederick%20Rowbottom
Frederick Rowbottom (16 January 1938 – 12 October 2009) was a British logician and mathematician. The large cardinal notion of Rowbottom cardinals is named after him. Biography After graduating from Cambridge University, Rowbottom studied under Howard Jerome Keisler at the University of Wisconsin–Madison, earning his Ph.D. degree in 1964 with a thesis entitled Large Cardinals and Small Constructible Sets. With a recommendation from Georg Kreisel, he took a position at the University of Bristol in 1965, where he spent the rest of his professional career. He published the paper "Some strong axioms of infinity incompatible with the axiom of constructibility" in the Annals of Mathematical Logic 3 (1971). This paper, together with his thesis, "showed that Ramsey cardinals were weaker than measurable cardinals, and that their existence implied the constructible real continuum was countable; he further proved that this followed also from weaker partition and two cardinal properties." The large cardinal notion of Rowbottom cardinals is named after him, as is the notion of a Rowbottom ultrafilter. Keith Devlin studied set theory under Rowbottom. In 1992 Rowbottom and a student, Jonathan Chapman, wrote a textbook on topos theory, Relative Category Theory and Geometric Morphisms: A Logical Approach, published in Oxford Logic Guides, No. 16. Rowbottom retired in 1993 at the age of 55. Rowbottom died of heart failure in Hadfield, England, on 12 October 2009, aged 71.
https://en.wikipedia.org/wiki/Brendan%20McKay%20%28mathematician%29
Brendan Damien McKay (born 26 October 1951 in Melbourne, Australia) is an Australian computer scientist and mathematician. He is currently an Emeritus Professor in the Research School of Computer Science at the Australian National University (ANU). He has published extensively in combinatorics. McKay received a Ph.D. in mathematics from the University of Melbourne in 1980, and was appointed Assistant Professor of Computer Science at Vanderbilt University, Nashville in the same year (1980–1983). His thesis, Topics in Computational Graph Theory, was written under the direction of Derek Holton. He was awarded the Australian Mathematical Society Medal in 1990. He was elected a Fellow of the Australian Academy of Science in 1997, and appointed Professor of Computer Science at the ANU in 2000. Mathematics McKay is the author of at least 127 refereed articles. One of McKay's main contributions has been a practical algorithm for the graph isomorphism problem and its software implementation NAUTY (No AUTomorphisms, Yes?). Further achievements include proving with Stanisław Radziszowski that the Ramsey number R(4,5) = 25; proving with Radziszowski that no 4-(12, 6, 6) combinatorial designs exist, determining with Gunnar Brinkmann, the number of posets on 16 points, and determining with Ian Wanless the number of Latin squares of size 11. Together with Brinkmann, he also developed the Plantri programme for generating planar triangulations and planar cubic graphs. The McKay–Miller–Širáň graphs, a class of highly-symmetric graphs with diameter two and many vertices relative to their degree, are named in part for McKay, who first wrote about them with Mirka Miller and Jozef Širáň in 1998. Biblical cyphers Outside of his specialty, McKay is best known for his collaborative work with a group of Israeli mathematicians such as Dror Bar-Natan and Gil Kalai, together with Maya Bar-Hillel, who rebutted a Bible code theory which maintained that the Hebrew text of the Bible enciphered
https://en.wikipedia.org/wiki/N%C6%B0%E1%BB%9Bc%20ch%E1%BA%A5m
Nước chấm (Chữ Nôm: 渃㴨) is a common name for a variety of Vietnamese "dipping sauces" that are served quite frequently as condiments. It is commonly a sweet, sour, salty, savoury and/or spicy sauce. Mixed fish sauce is the most well-known dipping sauce made from fish sauce. Its simplest recipe is some lime juice, or occasionally vinegar, one part fish sauce (nước mắm), one part sugar and two parts water. Vegetarians create a vegetarian dipping sauce or a soy-sauce version by substituting Maggi seasoning sauce for the fish sauce. To this, people will usually add minced uncooked garlic, chopped or minced bird's eye chilis, and in some instances, shredded pickled carrot or white radish and green papaya. Otherwise, when having seafood, such as eels, people also serve some slices of lemongrass. It is often prepared hot on a stove to dissolve the sugar more quickly, then cooled. The flavor can be varied depending on the individual's preference, but it is generally described as pungent and distinct, sweet yet sour, and sometimes spicy. Varieties by region People in the north of Vietnam tend to use the sauce as made by the above recipes, but add broth made from pork loin and penaeid shrimp. In the central section of the country, people like using a less dilute form that has the same proportions of fish sauce, lime, and sugar as the recipe above, but less water, and with fresh chili. Southern Vietnamese people often use palm sugar and coconut water as the sweetener. Uses It is typically served with: a cracked-rice dish with meat, poultry, eggs, seafood or vegetables, whose toppings are often fried, grilled, braised, steamed/boiled, or stir-fried; spring rolls; fresh rolls, which are sometimes called shrimp salad rolls or "rice paper" rolls (alternately, these are served with a peanut sauce containing hoisin sauce and sometimes chili, or one made from a Vietnamese fermented bean paste/soy sauce); and "rice rolls", where wide sheets of rice noodles are rolled up, and toppe
https://en.wikipedia.org/wiki/Nuqta
The nuqta (sometimes also spelled nukta), also known as bindu, is a diacritic mark that was introduced in Devanagari and some other Indic scripts to represent sounds not present in the original scripts. It takes the form of a dot placed below a character. This idea is inspired by the Arabic script; for example, there are some letters in Urdu that share the same basic shape but differ in the placement of dot(s) or nuqta(s) in the Perso-Arabic script: the letter ع ayn, with the addition of a nuqta on top, becomes the letter غ g͟hayn. Use in Devanagari Perso-Arabic consonants The term nuqta is itself an example of the use of the nuqta. Another example is the honorific that combines a Perso-Arabic element (āġā) with a Turko-Mongolic one (k͟hān). The nuqta, and the phonological distinction it represents, is sometimes ignored in practice. In the text Dialect Accent Features for Establishing Speaker Identity, Manisha Kulshreshtha and Ramkumar Mathur write, "A few sounds, borrowed from the other languages like Persian and Arabic, are written with a dot (bindu or nuktā). Many people who speak Hindi as a second language, especially those who come from rural backgrounds and do not speak conventional Hindi (also called Khariboli), or speak in one of its dialects, pronounce these sounds as their nearest equivalents." For example, these rural speakers will assimilate the sound ɣ (Devanagari: ग़; Urdu: غ) as ɡ (Devanagari: ग; Urdu: گ). With a renewed Hindi–Urdu language contact, many Urdu writers now publish their works in Devanagari editions. Since the Perso-Arabic orthography is preserved in Nastaʿlīq script Urdu orthography, these writers use the nuqta in Devanagari when transcribing these consonants. Sometimes, व़ is used to explicitly represent the /w/ consonant and to differentiate it from the /v/ consonant व. Dravidian consonants Devanagari also includes coverage for the Dravidian consonants /ɻ/; /r/ and /n/. (Respectively, thes
https://en.wikipedia.org/wiki/Probabilistic%20metric%20space
In mathematics, probabilistic metric spaces are a generalization of metric spaces where the distance no longer takes values in the non-negative real numbers, but in distribution functions. Let D+ be the set of all probability distribution functions F such that F(0) = 0 (F is a nondecreasing, left continuous mapping from R into [0, 1] such that max(F) = 1). Then given a non-empty set S and a function F: S × S → D+ where we denote F(p, q) by Fp,q for every (p, q) ∈ S × S, the ordered pair (S, F) is said to be a probabilistic metric space if: For all u and v in S, $u = v$ if and only if $F_{u,v}(x) = 1$ for all x > 0. For all u and v in S, $F_{u,v} = F_{v,u}$. For all u, v and w in S, if $F_{u,v}(x) = 1$ and $F_{v,w}(y) = 1$ then $F_{u,w}(x + y) = 1$, for all $x, y \ge 0$. Probability metric of random variables A probability metric D between two random variables X and Y may be defined, for example, as $D(X, Y) = \iint |x - y| \, F(x, y) \, dx \, dy$, where F(x, y) denotes the joint probability density function of the random variables X and Y. If X and Y are independent from each other then the equation above transforms into $D(X, Y) = \iint |x - y| \, f(x) g(y) \, dx \, dy$, where f(x) and g(y) are probability density functions of X and Y respectively. One may easily show that such probability metrics do not satisfy the first metric axiom, or satisfy it if and only if both of the arguments X and Y are certain events described by Dirac delta density probability distribution functions. In this case the probability metric simply transforms into the metric between the expected values $\mu_X$, $\mu_Y$ of the variables X and Y. For all other random variables X, Y the probability metric does not satisfy the identity of indiscernibles condition required to be satisfied by the metric of the metric space. Example For example, if both probability distribution functions of random variables X and Y are normal distributions (N) having the same standard deviation $\sigma$, integrating yields a closed-form expression in the difference of the means and $\sigma$, involving the complementary error function $\operatorname{erfc}$. Probability metric of random vectors The probability metric of random variables may be extended into a metric D(X, Y) of random vectors X, Y by sub
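As a quick numerical illustration of the failure of the identity of indiscernibles discussed above, the following sketch (not from the article; the distribution, sample size and seed are arbitrary choices) estimates $D(X, Y) = \iint |x-y|\,f(x)g(y)\,dx\,dy$ for two independent variables that share the same N(0, 1) distribution; the value comes out near $2/\sqrt{\pi} \approx 1.13$ rather than 0.

```python
# Monte Carlo estimate of D(X, Y) = E|X - Y| for independent X, Y ~ N(0, 1).
# Even though the two distributions are identical, the "distance" is not zero.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 1_000_000)
y = rng.normal(0.0, 1.0, 1_000_000)
print(np.mean(np.abs(x - y)))   # about 1.128; the exact value is 2/sqrt(pi)
```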
https://en.wikipedia.org/wiki/Straight%20sinus
The straight sinus, also known as the tentorial sinus, is an area within the skull beneath the brain. It receives blood from the inferior sagittal sinus and the great cerebral vein, and drains into the confluence of sinuses. Structure The straight sinus is situated within the dura mater, where the falx cerebri meets the midline of tentorium cerebelli. It forms from the confluence of the inferior sagittal sinus and the great cerebral vein. It may also drain blood from the superior cerebellar veins and veins from the falx cerebri. In cross-section, it is triangular, contains a few transverse bands across its interior, and increases in size as it proceeds backward. It is usually around 5 cm long. Variation The straight sinus is usually an unpaired structure. However, there may be two straight sinuses, which may be one on top of the other or parallel. Function The straight sinus allows blood to drain from the inferior center of the head outwards posteriorly. It receives blood from the inferior sagittal sinus, great cerebral vein, posterior cerebral veins, superior cerebellar veins and veins from the falx cerebri. Additional images See also Dural venous sinuses
https://en.wikipedia.org/wiki/Superior%20sagittal%20sinus
The superior sagittal sinus (also known as the superior longitudinal sinus), within the human head, is an unpaired area along the attached margin of the falx cerebri. It allows blood to drain from the lateral aspects of anterior cerebral hemispheres to the confluence of sinuses. Cerebrospinal fluid drains through arachnoid granulations into the superior sagittal sinus and is returned to venous circulation. Structure Commencing at the foramen cecum, through which it receives emissary veins from the nasal cavity, it runs from anterior to posterior, grooving the inner surface of the frontal bone, the adjacent margins of the two parietal bones, and the superior division of the cruciate eminence of the occipital bone. Near the internal occipital protuberance, it drains into the confluence of sinuses and deviates to either side (usually the right). At this point it is continued as the corresponding transverse sinus. The superior sagittal sinus is usually divided into three parts: anterior (foramen cecum to bregma), middle (bregma to lambda), posterior (lambda to confluence). It is triangular in section, narrow in front, and gradually increases in size as it passes backward. Its inner surface presents the openings of the superior cerebral veins, which run, for the most part, obliquely forward, and open chiefly at the back part of the sinus, their orifices being concealed by fibrous folds; numerous fibrous bands (chordae Willisii) extend transversely across the inferior angle of the sinus; and, lastly, small openings communicate with irregularly shaped venous spaces (venous lacunae) in the dura mater near the sinus. There are usually three lacunae on either side of the sinus: a small frontal, a large parietal, and an occipital, intermediate in size between the other two. Most of the cerebral veins from the outer surface of the hemisphere open into these lacunæ, and numerous arachnoid granulations (Pacchionian bodies) project into them from below. The superior sagittal sin
https://en.wikipedia.org/wiki/Inferior%20sagittal%20sinus
The inferior sagittal sinus (also known as inferior longitudinal sinus), within the human head, is an area beneath the brain which allows blood to drain outwards posteriorly from the center of the head. It drains (from the center of the brain) to the straight sinus (at the back of the head), which connects to the transverse sinuses. Its Latin name is sinus sagittalis inferior. The inferior sagittal sinus courses along the inferior border of the falx cerebri, superior to the corpus callosum. It receives blood from the deep and medial aspects of the cerebral hemispheres and drains into the straight sinus. Additional images See also Dural venous sinuses Occipital sinus Superficial veins of the brain
https://en.wikipedia.org/wiki/Inferior%20petrosal%20sinus
The inferior petrosal sinuses are two small sinuses situated on the inferior border of the petrous part of the temporal bone, one on each side. Each inferior petrosal sinus drains the cavernous sinus into the internal jugular vein. Structure The inferior petrosal sinus is situated in the inferior petrosal sulcus, formed by the junction of the petrous part of the temporal bone with the basilar part of the occipital bone. It begins below and behind the cavernous sinus and, passing through the anterior part of the jugular foramen, ends in the superior bulb of the internal jugular vein. Function The inferior petrosal sinus receives the internal auditory veins and also veins from the medulla oblongata, pons, and under surface of the cerebellum. Additional images See also Dural venous sinuses Inferior petrosal sinus sampling
https://en.wikipedia.org/wiki/Tutte%20polynomial
The Tutte polynomial, also called the dichromate or the Tutte–Whitney polynomial, is a graph polynomial. It is a polynomial in two variables which plays an important role in graph theory. It is defined for every undirected graph $G$ and contains information about how the graph is connected. It is denoted by $T_G$. The importance of this polynomial stems from the information it contains about $G$. Though originally studied in algebraic graph theory as a generalization of counting problems related to graph coloring and nowhere-zero flow, it contains several famous other specializations from other sciences such as the Jones polynomial from knot theory and the partition functions of the Potts model from statistical physics. It is also the source of several central computational problems in theoretical computer science. The Tutte polynomial has several equivalent definitions. It is essentially equivalent to Whitney's rank polynomial, Tutte's own dichromatic polynomial and Fortuin–Kasteleyn's random cluster model under simple transformations. It is essentially a generating function for the number of edge sets of a given size and connected components, with immediate generalizations to matroids. It is also the most general graph invariant that can be defined by a deletion–contraction recurrence. Several textbooks about graph theory and matroid theory devote entire chapters to it. Definitions Definition. For an undirected graph $G = (V, E)$ one may define the Tutte polynomial as $T_G(x, y) = \sum_{A \subseteq E} (x-1)^{k(A)-k(E)} (y-1)^{k(A)+|A|-|V|}$, where $k(A)$ denotes the number of connected components of the graph $(V, A)$. In this definition it is clear that $T_G$ is well-defined and a polynomial in $x$ and $y$. The same definition can be given using slightly different notation by letting $r(A) = |V| - k(A)$ denote the rank of the graph $(V, A)$. Then the Whitney rank generating function is defined as $R_G(u, v) = \sum_{A \subseteq E} u^{r(E)-r(A)} v^{|A|-r(A)}$. The two functions are equivalent under a simple change of variables: $T_G(x, y) = R_G(x-1, y-1)$. Tutte's dichromatic polynomial is the result of another simple transformation. Tutte's original definition of $T_G$ is equivalent but less easil
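The deletion–contraction recurrence mentioned above can be turned into a few lines of code for small graphs. The sketch below is illustrative only (it is not from the article, and the helper functions and edge-list representation are ad hoc); it computes $T_G(x, y)$ symbolically and reproduces the known value $x^2 + x + y$ for the triangle.

```python
# Tutte polynomial of a small multigraph by deletion-contraction, using sympy.
import sympy as sp

x, y = sp.symbols("x y")

def tutte(edges):
    """edges: list of (u, v) pairs; loops and parallel edges are allowed."""
    if not edges:
        return sp.Integer(1)
    u, v = edges[0]
    rest = edges[1:]
    if u == v:                          # loop: multiply by y
        return y * tutte(rest)
    if connected(u, v, rest):           # ordinary edge: delete + contract
        return sp.expand(tutte(rest) + tutte(contract(rest, u, v)))
    return x * tutte(contract(rest, u, v))   # bridge: multiply by x

def contract(edges, u, v):
    # identify vertex v with vertex u in the remaining edges
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def connected(u, v, edges):
    # simple search to test whether u and v remain connected without the edge
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {u}, [u]
    while stack:
        for m in adj.get(stack.pop(), ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return v in seen

print(tutte([(0, 1), (1, 2), (2, 0)]))   # triangle K3: x**2 + x + y
```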
https://en.wikipedia.org/wiki/Zbus
The Z matrix, or bus impedance matrix, is an important computational tool in power system analysis. Though it is not frequently used in power flow studies, unlike the Ybus matrix, it is an important tool in other power system studies such as short circuit (fault) analysis. The Zbus matrix can be computed by matrix inversion of the Ybus matrix. Since the Ybus matrix is usually sparse, the explicit Zbus matrix would be dense and very memory intensive to handle directly. Context Electric power transmission needs optimization, and only computer simulation allows the complex handling required; the Zbus matrix is one of the main tools for such simulation. Formulation The Z matrix can be formed by either inverting the Ybus matrix or by using the Zbus building algorithm. The latter method is harder to implement but more practical and faster (in terms of computer run time and number of floating-point operations) for a relatively large system. Because the Zbus is the inverse of the Ybus, it is symmetrical like the Ybus. The diagonal elements of the Zbus are referred to as driving-point impedances of the buses and the off-diagonal elements are called transfer impedances. One reason the Ybus is so much more popular in calculation is that the matrix becomes sparse for large systems; that is, many elements go to zero as the admittance between two far away buses is very small. In the Zbus, however, the impedance between two far away buses becomes very large, so there are no zero elements, making computation much harder. The operations to modify an existing Zbus are straightforward, and outlined in Table 1. To create a Zbus matrix from scratch, we start by writing the equation for one branch, then add additional branches according to Table 1 until each bus is expressed in the matrix.
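A minimal numerical sketch of the inversion route described above (not from the article; the three-bus network and its per-unit impedances are invented for illustration): build Ybus from the branch admittances, then invert it to obtain the dense Zbus.

```python
import numpy as np

# branches as (from_bus, to_bus, series impedance in per unit) - made-up values
branches = [(0, 1, 0.10j), (1, 2, 0.20j), (0, 2, 0.25j)]
n = 3

Ybus = np.zeros((n, n), dtype=complex)
for i, j, z in branches:
    y = 1.0 / z                 # branch admittance
    Ybus[i, i] += y
    Ybus[j, j] += y
    Ybus[i, j] -= y
    Ybus[j, i] -= y

# With series branches only, the rows of Ybus sum to zero and the matrix is
# singular, so add a shunt admittance at bus 0 (e.g. a source impedance).
Ybus[0, 0] += 1.0 / 1.0j

Zbus = np.linalg.inv(Ybus)      # dense, even though Ybus itself is sparse
print(np.round(Zbus, 4))
```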
https://en.wikipedia.org/wiki/Smart%20battery
A smart battery or a smart battery pack is a rechargeable battery pack with a built-in battery management system (BMS), usually designed for use in a portable computer such as a laptop. In addition to the usual positive and negative terminals, a smart battery has two or more terminals to connect to the BMS; typically the negative terminal is also used as BMS "ground". BMS interface examples are: SMBus, PMBus, EIA-232, EIA-485, and Local Interconnect Network. Internally, a smart battery can measure voltage and current, and deduce charge level and SoH (State of Health) parameters, indicating the state of the cells. Externally, a smart battery can communicate with a smart battery charger and a "smart energy user" via the bus interface. A smart battery can demand that the charging stop, request charging, or demand that the smart energy user stop using power from this battery. There are standard specifications for smart batteries: Smart Battery System, MIPI BIF and many ad-hoc specifications. Charging A smart battery charger is mainly a switch mode power supply (also known as high frequency charger) that has the ability to communicate with a smart battery pack's battery management system (BMS) in order to control and monitor the charging process. This communication may be by a standard bus such as CAN bus in automobiles or System Management Bus (SMBus) in computers. The charge process is controlled by the BMS and not by the charger, thus increasing security in the system. Not all chargers have this type of communication, which is commonly used for lithium batteries. Besides the usual plus (positive) and minus (negative) terminals, a smart battery charger also has multiple terminals to connect to the smart battery pack's BMS. The Smart Battery System standard is commonly used to define this connection, which includes the data bus and the communications protocol between the charger and battery. There are other ad-hoc specifications also used. Hardware Smart battery c
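As a rough illustration of the bus communication described above, the sketch below reads two values from a pack over SMBus. It is a hypothetical example, not from the article: it assumes a Linux host whose adapter appears as I2C bus 1, the third-party smbus2 Python package, a pack answering at the usual smart-battery address 0x0B, and the standard Smart Battery Data Specification registers (0x09 = Voltage in mV, 0x0D = RelativeStateOfCharge in %).

```python
from smbus2 import SMBus

BATTERY_ADDR = 0x0B    # conventional smart battery slave address (assumption)
REG_VOLTAGE = 0x09     # SBS Voltage register, millivolts
REG_SOC = 0x0D         # SBS RelativeStateOfCharge register, percent

with SMBus(1) as bus:
    voltage_mv = bus.read_word_data(BATTERY_ADDR, REG_VOLTAGE)
    soc = bus.read_word_data(BATTERY_ADDR, REG_SOC)
    print(f"pack voltage: {voltage_mv} mV, relative state of charge: {soc} %")
```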
https://en.wikipedia.org/wiki/Bottom%20type
In type theory, a theory within mathematical logic, the bottom type of a type system is the type that is a subtype of all other types. Where such a type exists, it is often represented with the up tack (⊥) symbol. When the bottom type is empty, a function whose return type is bottom cannot return any value, not even the lone value of a unit type. In such a language, the bottom type may therefore be known as the zero, void or never type. In the Curry–Howard correspondence, an empty type corresponds to falsity. Computer science applications In subtyping systems, the bottom type is a subtype of all types. It is dual to the top type, which spans all possible values in a system. If a type system is sound, the bottom type is uninhabited and a term of bottom type represents a logical contradiction. In such systems, typically no distinction is drawn between the bottom type and the empty type, and the terms may be used interchangeably. If the bottom type is inhabited, its term(s) typically correspond to error conditions such as undefined behavior, infinite recursion, or unrecoverable errors. In Bounded Quantification with Bottom, Pierce says that "Bot" has many uses: In a language with exceptions, a natural type for the raise construct is raise ∈ exception -> Bot, and similarly for other control structures. Intuitively, Bot here is the type of computations that do not return an answer. Bot is useful in typing the "leaf nodes" of polymorphic data structures. For example, List(Bot) is a good type for nil. Bot is a natural type for the "null pointer" value (a pointer which does not point to any object) of languages like Java: in Java, the null type is the universal subtype of reference types. null is the only value of the null type; and it can be cast to any reference type. However, the null type is not a bottom type as described above, since it is not a subtype of int and other primitive types. A type system including both Top and Bot seems to be a natural target for t
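A small sketch of how an uninhabited return type is used in practice (illustrative, not from the article): Python's typing.NoReturn plays the role of Bot for functions that never return normally, so a branch that calls such a function is known never to fall through.

```python
from typing import NoReturn

def fail(message: str) -> NoReturn:
    # never returns a value; it always raises
    raise ValueError(message)

def reciprocal(x: float) -> float:
    if x == 0:
        fail("division by zero")   # type checkers treat what follows this call as unreachable
    return 1.0 / x
```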
https://en.wikipedia.org/wiki/Top%20type
In mathematical logic and computer science, some type theories and type systems include a top type that is commonly denoted with top or the symbol ⊤. The top type is sometimes also called the universal type or universal supertype, as all other types in the type system of interest are subtypes of it, and in most cases it contains every possible object of the type system. It contrasts with the bottom type, or universal subtype, of which every other type is a supertype and which often contains no members at all. Support in programming languages Several typed programming languages provide explicit support for the top type. In statically-typed languages, there are two different, often confused, concepts when discussing the top type. A universal base class or other item at the top of a run time class hierarchy (often relevant in object-oriented programming) or type hierarchy; it is often possible to create objects with this (run time) type, or it could be found when one examines the type hierarchy programmatically, in languages that support it. A (compile time) static type in the code whose variables can be assigned any value (or a subset thereof, like any object pointer value), similar to dynamic typing. The first concept often implies the second, i.e., if a universal base class exists, then a variable that can point to an object of this class can also point to an object of any class. However, several languages have types in the second regard above (e.g., void * in C++, id in Objective-C, interface {} in Go), static types which variables can accept any object value, but which do not reflect real run time types that an object can have in the type system, so are not top types in the first regard. In dynamically-typed languages, the second concept does not exist (any value can be assigned to any variable anyway), so only the first (class hierarchy) is discussed. This article tries to stay with the first concept when discussing top types, but also mentio
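A brief sketch of the two senses discussed above, using Python (illustrative, not from the article): object sits at the top of the runtime class hierarchy, while typing.Any is the static "accepts anything" annotation.

```python
from typing import Any

values: list[object] = [42, "text", 3.14, None]   # every value is an object
for v in values:
    print(type(v).__mro__[-1])                     # always <class 'object'>

anything: Any = open    # a variable annotated Any can hold any value
anything = 3.0
```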
https://en.wikipedia.org/wiki/Difference%20hierarchy
In set theory, a branch of mathematics, the difference hierarchy over a pointclass is a hierarchy of larger pointclasses generated by taking differences of sets. If Γ is a pointclass, then the set of differences in Γ is $\{A \setminus B : A, B \in \Gamma\}$. In usual notation, this set is denoted by 2-Γ. The next level of the hierarchy is denoted by 3-Γ and consists of differences of three sets, $A \setminus (B \setminus C)$ with $A, B, C \in \Gamma$. This definition can be extended recursively into the transfinite to α-Γ for some ordinal α. In the Borel hierarchy, Felix Hausdorff and Kazimierz Kuratowski proved that the countable levels of the difference hierarchy over $\Pi^0_\gamma$ give $\Delta^0_{\gamma+1}$.
https://en.wikipedia.org/wiki/Abstract%20type
In programming languages, an abstract type (also known as an existential type) is a type in a nominative type system that cannot be instantiated directly; by contrast, a concrete type can be instantiated directly. Instantiation of an abstract type can occur only indirectly, via a concrete subtype. An abstract type may provide no implementation, or an incomplete implementation. In some languages, abstract types with no implementation (rather than an incomplete implementation) are known as protocols, interfaces, signatures, or class types. In class-based object-oriented programming, abstract types are implemented as abstract classes (also known as abstract base classes), and concrete types as concrete classes. In generic programming, the analogous notion is a concept, which similarly specifies syntax and semantics, but does not require a subtype relationship: two unrelated types may satisfy the same concept. Often, abstract types will have one or more implementations provided separately, for example, in the form of concrete subtypes that can be instantiated. In object-oriented programming, an abstract class may include abstract methods or abstract properties that are shared by its subclasses. Other names for language features that are (or may be) used to implement abstract types include traits, mixins, flavors, roles, or type classes. Creation Abstract classes can be created, signified, or simulated in several ways: By use of an explicit keyword such as abstract in the class definition, as in Java, D or C#. By including, in the class definition, one or more abstract methods (called pure virtual functions in C++), which the class is declared to accept as part of its protocol, but for which no implementation is provided. By inheriting from an abstract type, and not overriding all missing features necessary to complete the class definition. In other words, a child type that does not implement all abstract methods from its parent becomes abstract itself. In many dynamically typed lan
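A minimal sketch of the first two creation mechanisms in Python (illustrative, not from the article), using the standard abc module: the abstract class cannot be instantiated, while its concrete subclass can.

```python
from abc import ABC, abstractmethod
import math

class Shape(ABC):
    @abstractmethod
    def area(self) -> float:
        """Concrete subclasses must provide an implementation."""

class Circle(Shape):
    def __init__(self, radius: float) -> None:
        self.radius = radius

    def area(self) -> float:
        return math.pi * self.radius ** 2

# Shape() would raise TypeError: can't instantiate abstract class Shape
print(Circle(2.0).area())
```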
https://en.wikipedia.org/wiki/Bear-resistant%20food%20storage%20container
Bear-resistant food storage containers, also called bear canisters or bear cans, are usually hard-sided containers used by backpackers to protect their food from theft by bears. Bear canisters are seeing increased popularity in areas where bears have become habituated to human presence, and are required in some places, such as Yosemite National Park in the United States. Construction A bear canister typically weighs 2-4 lb (1-2 kg) and has a storage capacity of 400-900 in³ (6-15 liters). The actual capacity in number of days of hiking food stored varies with the appetite of the hiker, the selection of food, and the skill with which it is packed, but a 700 in³ canister likely holds up to a week's worth of food for the average hiker. Hard-sided bear cans employ such materials as polycarbonate, ABS plastic, carbon fiber, and aluminum in their construction. An effective canister must resist both the tremendous strength and high intelligence of an attacking animal. Most containers are too large for a bear to simply pick up and carry away. The lid of a canister is usually recessed in order to prevent it being pried off. Some manufacturers, such as Garcia, require a tool such as a coin to open the canister, whereas other manufacturers' products, such as the BearVault, use locking nubs that allow the user to twist the lid off without tools. At least one model of soft-sided "bear bag" is made from Spectra (UHMWPE) fabric. While a soft-sided container may prevent a bear from eating its contents, the food inside is likely to be reduced to purée in the attempt and leak through the Spectra fabric, thus feeding the bear. A newer model comes with an aluminum stiffener that protects the contents more effectively than the bag alone. Regulations and Testing Several national parks and national forests require backcountry visitors to carry approved food storage containers. Backpackers who ignore this policy may face fines, impoundment of property, or eviction from the wilder
https://en.wikipedia.org/wiki/Trait%20%28computer%20programming%29
In computer programming, a trait is a concept used in programming languages which represents a set of methods that can be used to extend the functionality of a class. Rationale In object-oriented programming, behavior is sometimes shared between classes which are not related to each other. For example, many unrelated classes may have methods to serialize objects to JSON. Historically, there have been several approaches to solve this without duplicating the code in every class needing the behavior. Other approaches include multiple inheritance and mixins, but these have drawbacks: the behavior of the code may unexpectedly change if the order in which the mixins are applied is altered, or if new methods are added to the parent classes or mixins. Traits solve these problems by allowing classes to use the trait and get the desired behavior. If a class uses more than one trait, the order in which the traits are used does not matter. The methods provided by the traits have direct access to the data of the class. Characteristics Traits combine aspects of protocols (interfaces) and mixins. Like an interface, a trait defines one or more method signatures, of which implementing classes must provide implementations. Like a mixin, a trait provides additional behavior for the implementing class. In case of a naming collision between methods provided by different traits, the programmer must explicitly disambiguate which one of those methods will be used in the class; thus manually solving the diamond problem of multiple inheritance. This is different from other composition methods in object-oriented programming, where conflicting names are automatically resolved by scoping rules. Operations which can be performed with traits include: symmetric sum: an operation that merges two disjoint traits to create a new trait override (or asymmetric sum): an operation that forms a new trait by adding methods to an existing trait, possibly overriding some of its methods alias: an oper
https://en.wikipedia.org/wiki/Trochleitis
Trochleitis is inflammation of the superior oblique tendon trochlea apparatus characterized by localized swelling, tenderness, and severe pain. This condition is an uncommon but treatable cause of periorbital pain. The trochlea is a ring-like apparatus of cartilage through which passes the tendon of the superior oblique muscle. It is located in the superior nasal orbit and functions as a pulley for the superior oblique muscle. Inflammation of the trochlear region leads to a painful syndrome with swelling and exquisite point tenderness in the upper medial rim of the orbit. A vicious cycle may ensue such that inflammation causes swelling and fraying of the tendon which then increases the friction of passing through the trochlea which in turn adds to the inflammation. Trochleitis has also been associated with triggering or worsening of migraine attacks in patients with pre-existing migraines (Yanguela, 2002). Symptoms Patients with trochleitis typically experience a dull fluctuating aching over the trochlear region developing over a few days. Some may also feel occasional sharp pains punctuating the ache. In patients with migraines, trochleitis may occur simultaneously with headache. Presentation is usually unilateral with palpable swelling over the affected area supranasal to the eye. The trochlear region is extremely tender to touch. Pain is exacerbated by eye movements looking down and inwards, and especially in supraduction (looking up) and looking outwards, which stretches the superior oblique muscle tendon. Notably, there is no restriction of extraocular movements, no diplopia, and often no apparent ocular signs such as proptosis. However, occasionally mild ptosis is found. The absence of generalized signs of orbital involvement is helpful in eliminating other more common causes of periorbital pain. Cause The cause of trochleitis is often unknown (idiopathic trochleitis), but it has been known to occur in patients with rheumatological diseases such as sys
https://en.wikipedia.org/wiki/Ekiga
Ekiga (formerly called GnomeMeeting) is a VoIP and video conferencing application for GNOME and Microsoft Windows. It is distributed as free software under the terms of the GNU GPL-2.0-or-later. It was the default VoIP client in Ubuntu until October 2009, when it was replaced by Empathy. Ekiga supports both the SIP and H.323 (based on OPAL) protocols and is fully interoperable with any other SIP compliant application and with Microsoft NetMeeting. It supports many high-quality audio and video codecs. Ekiga was initially written by Damien Sandras in order to graduate from the University of Louvain (UCLouvain). It is currently developed by a community-based team led by Sandras. The logo was designed based on his concept by Andreas Kwiatkowski. Ekiga.net was also a free and private SIP registrar, which enabled its members to originate and terminate (receive) calls from and to each other directly over the Internet. The service was discontinued at the end of 2018. Features Features of Ekiga include: Integration Ekiga is integrated with a number of different software packages and protocols such as LDAP directories registration and browsing along with support for Novell Evolution so that contacts are shared between both programs and zeroconf (Apple Bonjour) support. It auto-detects devices including USB, ALSA and legacy OSS soundcards, Video4linux and FireWire camera. User interface Ekiga supports a Contact list based interface along with Presence support with custom messages. It allows for the monitoring of contacts and viewing call history along with an addressbook, dialpad, and chat window. SIP URLs and H.323/callto support is built-in along with full-screen videoconferencing (accelerated using a graphics card). Technical features Call forwarding on busy, no answer, always (SIP and H.323) Call transfer (SIP and H.323) Call hold (SIP and H.323) DTMF support (SIP and H.323) Basic instant messaging (SIP) Text chat (SIP and H.323) Register with several regi
https://en.wikipedia.org/wiki/Tidal%20range
Tidal range is the difference in height between high tide and low tide. Tides are the rise and fall of sea levels caused by gravitational forces exerted by the Moon and Sun, by Earth's rotation and by centrifugal force caused by Earth's progression around the Earth-Moon barycenter. Tidal range depends on time and location. Larger tidal ranges occur during spring tides (spring range), when the gravitational forces of both the Moon and Sun are aligned (at syzygy), reinforcing each other in the same direction (new moon) or in opposite directions (full moon). The largest annual tidal range can be expected around the time of the equinox if it coincides with a spring tide. Spring tides occur around the new and full moons. By contrast, during neap tides, when the Moon and Sun's gravitational force vectors act in quadrature (making a right angle to the Earth's orbit), the difference between high and low tides (neap range) is smallest. Neap tides occur at the first and third quarters of the lunar phases. Tidal data for coastal areas is published by national hydrographic offices. The data is based on astronomical phenomena and is predictable. Sustained storm-force winds blowing from one direction combined with low barometric pressure can increase the tidal range, particularly in narrow bays. Such weather-related effects on the tide can cause ranges in excess of predicted values and can cause localized flooding. These weather-related effects are not calculable in advance. Mean tidal range is calculated as the difference between mean high water (i.e., the average high tide level) and mean low water (the average low tide level). Geography The typical tidal range in the open ocean is relatively small; closer to the coast, this range is much greater. Coastal tidal ranges vary globally and can differ anywhere from near zero to well over ten metres. The exact range depends on the volume of water adjacent to the coast, and the g
https://en.wikipedia.org/wiki/Schur-convex%20function
In mathematics, a Schur-convex function, also known as S-convex, isotonic function and order-preserving function, is a function $f: \mathbb{R}^d \to \mathbb{R}$ such that for all $x, y \in \mathbb{R}^d$ with $x$ majorized by $y$, one has that $f(x) \le f(y)$. Named after Issai Schur, Schur-convex functions are used in the study of majorization. Every function that is convex and symmetric is also Schur-convex. The opposite implication is not true, but all Schur-convex functions are symmetric (under permutations of the arguments). Schur-concave function A function f is 'Schur-concave' if its negative, −f, is Schur-convex. Schur-Ostrowski criterion If f is symmetric and all first partial derivatives exist, then f is Schur-convex if and only if $(x_1 - x_2)\left(\frac{\partial f}{\partial x_1} - \frac{\partial f}{\partial x_2}\right) \ge 0$ holds for all $x$. Examples $f(x) = \min_i x_i$ is Schur-concave while $f(x) = \max_i x_i$ is Schur-convex. This can be seen directly from the definition. The Shannon entropy function is Schur-concave. The Rényi entropy function is also Schur-concave. $f(x) = \sum_i x_i^2$ is Schur-convex. The function $f(x) = \prod_i x_i$ is Schur-concave, when we assume all $x_i > 0$. In the same way, all the elementary symmetric functions are Schur-concave, when $x_i > 0$. A natural interpretation of majorization is that if $x \succ y$ then $x$ is more spread out than $y$. So it is natural to ask if statistical measures of variability are Schur-convex. The variance and standard deviation are Schur-convex functions, while the median absolute deviation is not. If $g$ is a convex function defined on a real interval, then $\sum_{i=1}^d g(x_i)$ is Schur-convex. A probability example: If $X_1, \dots, X_n$ are exchangeable random variables, then the function $\mathbb{E}\left[\prod_{i=1}^n X_i^{a_i}\right]$ is Schur-convex as a function of $(a_1, \dots, a_n)$, assuming that the expectations exist. The Gini coefficient is strictly Schur-convex.
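The Schur-Ostrowski criterion above is easy to check symbolically for a concrete function. The sketch below (illustrative, not from the article) verifies it for $f(x) = x_1^2 + x_2^2 + x_3^2$, which is symmetric and convex and hence Schur-convex:

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3", real=True)
f = x1**2 + x2**2 + x3**2

# Schur-Ostrowski condition for the pair (x1, x2)
condition = (x1 - x2) * (sp.diff(f, x1) - sp.diff(f, x2))
print(sp.factor(condition))   # 2*(x1 - x2)**2, which is >= 0 for all real inputs
```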
https://en.wikipedia.org/wiki/Robert%20A.%20Swanson
Robert "Bob" Swanson (1947–1999) was an American venture capitalist who cofounded the biotechnology giant Genentech in 1976 with Herbert Boyer. Genentech is a pioneer in the field, and it remains one of the leading biotechnology companies in the world. He served as CEO of Genentech from 1976 to 1990, and as chairman from 1990 to 1996. Bob Swanson graduated from the Massachusetts Institute of Technology, where he was a member of the Sigma Chi fraternity. He completed a B.S. degree in Chemistry as well as a master's degree in Management from the MIT Sloan School of Management. Both degrees were conferred in 1970. He is regarded as an instrumental figure in launching the biotechnology revolution. The authors of the book, 1,000 Years, 1,000 People: Ranking the Men and Women Who Shaped the Millennium ranked Mr. Swanson number 612. Mr. Swanson was inducted into the Junior Achievement U. S. Business Hall of Fame in 2006. He received the 2000 Biotechnology Heritage Award posthumously with Herbert Boyer. On December 6, 1999, he succumbed to glioblastoma, a type of brain cancer, at the age of 52. Early life and education Robert S. Swanson was born in Brooklyn, New York, in 1947 to Arthur J. Swanson and Arline Baker Swanson. Arthur Swanson was an airplane electrical maintenance crew leader, and worked in shifts. According to Swanson, he was taught from an early age that his generation would do better than the last generation of his family. It was because of this that his family wanted him to be the first to obtain a college degree. His family was particularly interested in the Massachusetts Institute of Technology (MIT). Much to his family's pride, Swanson was accepted into MIT in 1965. Even though he was majoring in chemistry, he realized later during his undergraduate education that he preferred working with people, rather than in research. What follows is an excerpt from a 1996 interview that describes how he came to this realization: "At the end of my junior year, I
https://en.wikipedia.org/wiki/Minimum%20intelligent%20signal%20test
The minimum intelligent signal test, or MIST, is a variation of the Turing test proposed by Chris McKinstry in which only boolean (yes/no or true/false) answers may be given to questions. The purpose of such a test is to provide a quantitative statistical measure of humanness, which may subsequently be used to optimize the performance of artificial intelligence systems intended to imitate human responses. McKinstry gathered approximately 80,000 propositions that could be answered yes or no, e.g.: Is Earth a planet? Was Abraham Lincoln once President of the United States? Is the sun bigger than my foot? Do people sometimes lie? He called these propositions Mindpixels. These questions test both specific knowledge of aspects of culture, and basic facts about the meaning of various words and concepts. It could therefore be compared with the SAT, intelligence testing and other controversial measures of mental ability. McKinstry's aim was not to distinguish between shades of intelligence but to identify whether a computer program could be considered intelligent at all. According to McKinstry, a program able to do much better than chance on a large number of MIST questions would be judged to have some level of intelligence and understanding. For example, on a 20-question test, if a program were guessing the answers at random, it could be expected to score 10 correct on average. But the probability of a program scoring 20 out of 20 correct by guesswork is only one in 2^20, i.e. one in 1,048,576; so if a program were able to sustain this level of performance over several independent trials, with no prior access to the propositions, it should be considered intelligent. Discussion McKinstry criticized existing approaches to artificial intelligence such as chatterbots, saying that his questions could "kill" AI programs by quickly exposing their weaknesses. He contrasted his approach, a series of direct questions assessing an AI's capabilities, to the Turing test and
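The chance-scoring argument above generalizes directly; a short calculation (illustrative, not from the article, and the 18-out-of-20 case is an added example) gives the probability that a guessing program reaches a given score on n yes/no items.

```python
from math import comb

def p_at_least(k: int, n: int) -> float:
    """Probability of at least k correct answers out of n by random guessing."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(p_at_least(20, 20))   # 1/2**20, about 9.5e-07 (one in 1,048,576)
print(p_at_least(18, 20))   # still only about 2.0e-04
```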
https://en.wikipedia.org/wiki/Nyctography
Nyctography is a form of substitution cipher writing created by Lewis Carroll (Charles Lutwidge Dodgson) in 1891. Nyctography is written with a nyctograph (also invented by Carroll) and uses a system of dots and strokes all based on a dot placed in the upper left corner. Using the Nyctograph, one could quickly jot down ideas or notes without the aid of light. Carroll invented the Nyctograph and Nyctography because he was often awakened during the night with thoughts that needed to be written down immediately, and didn't want to go through the lengthy process of lighting a lamp just to have to extinguish it shortly thereafter. Nyctograph The device consisted of a gridded card with sixteen square holes, each a quarter inch wide, and a system of symbols representing an alphabet of Carroll's design, which could then be transcribed the following day. He first named it "typhlograph", from the Greek typhlós ("blind"), but at the suggestion of one of his brother-students, this was subsequently changed into "Nyctograph". Initially, Carroll used an oblong of card with an oblong cut out of the centre to guide his writing in the dark. This did not appear to be satisfactory as the results were illegible. The new and final version of the nyctograph is recorded in his journal of September 24, 1891, and is the subject of a letter to The Lady magazine of October 29, 1891. From the description it appears that Carroll's nyctograph was a single row of 16 boxes cut from a piece of card. Carroll would enter one of his symbols in each box, then move the card down to the next line (which, in the darkness, probably, he would have to estimate) and then repeat the process. Nyctographic alphabet Each character had a large dot or circle in the upper-left corner. Besides the 26 letters of the alphabet, there were five additional characters for 'and', 'the', the corners of the letter 'f' to indicate that the following characters were digits ('figures'), the corners of the letter 'l' to indicate that they wer
https://en.wikipedia.org/wiki/List%20of%20AMD%20graphics%20processing%20units
The following is a list that contains general information about GPUs and video cards by AMD, including those by ATI Technologies before 2006, based on official specifications in table-form. Field explanations The headers in the table listed below describe the following: Model – The marketing name for the GPU assigned by AMD/ATI. Note that ATI trademarks have been replaced by AMD trademarks starting with the Radeon HD 6000 series for desktop and AMD FirePro series for professional graphics. Codename – The internal engineering codename for the GPU. Launch – Date of release for the GPU. Architecture – The microarchitecture used by the GPU. Fab – Fabrication process. Average feature size of components of the GPU. Transistors – Number of transistors on the die. Die size – Physical surface area of the die. Core config – The layout of the graphics pipeline, in terms of functional units. Core clock – The reference base and boost (if available) core clock frequency. Fillrate Pixel - The rate at which pixels can be rendered by the raster operators to a display. Measured in pixels/s. Texture - The rate at which textures can be mapped by the texture mapping units onto a polygon mesh. Measured in texels/s. Performance Shader operations - How many operations the pixel shaders (or unified shaders in Direct3D 10 and newer GPUs) can perform. Measured in operations/s. Vertex operations - The amount of geometry operations that can be processed on the vertex shaders in one second (only applies to Direct3D 9.0c and older GPUs). Measured in vertices/s. Memory Bus type – Type of memory bus utilized. Bus width – Maximum bit width of the memory bus utilized. Size – Size of the graphics memory. Clock – The reference memory clock frequency. Bandwidth – Maximum theoretical memory bandwidth based on bus type and width. TDP (Thermal design power) – Maximum amount of heat generated by the GPU chip, measured in Watt. TBP (Typical board power) – Typical power drawn by the t
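The fillrate and bandwidth columns are related to the other fields by the usual theoretical-peak conventions. The sketch below uses those generic formulas (an assumption on my part, not taken from this list, and ignoring boost clocks and compression) with invented sample numbers for a hypothetical card.

def pixel_fillrate_gpixel_s(rops, core_clock_mhz):
    return rops * core_clock_mhz / 1000              # raster operators x core clock

def texture_fillrate_gtexel_s(tmus, core_clock_mhz):
    return tmus * core_clock_mhz / 1000              # texture mapping units x core clock

def bandwidth_gb_s(effective_mem_clock_mhz, bus_width_bits):
    return effective_mem_clock_mhz * bus_width_bits / 8 / 1000   # bits per transfer -> bytes

# hypothetical card: 32 ROPs, 128 TMUs, 1000 MHz core, 7 GT/s memory on a 256-bit bus
print(pixel_fillrate_gpixel_s(32, 1000),        # 32.0 GPixel/s
      texture_fillrate_gtexel_s(128, 1000),     # 128.0 GTexel/s
      bandwidth_gb_s(7000, 256))                # 224.0 GB/s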
https://en.wikipedia.org/wiki/Hempel%27s%20dilemma
Hempel's dilemma is a question first asked (at least on record) by the philosopher Carl Hempel. It has relevance to naturalism and physicalism in philosophy, and to philosophy of mind. The dilemma questions how the language of physics can be used to accurately describe existence, given that it relies on imperfect human linguistics, or as Hempel stated: "The thesis of physicalism would seem to require a language in which a true theory of all physical phenomena can be formulated. But it is quite unclear what is to be understood here by a physical phenomenon, especially in the context of a doctrine that has taken a decidedly linguistic turn." Overview Physicalism, in at least one rough sense, is the claim that the entire world may be described and explained using the laws of nature, in other words, that all phenomena are natural phenomena. This leaves open the question of what is 'natural' (in physicalism 'natural' means procedural, causally coherent or all effects have particular causes regardless of human knowledge [like physics] and interpretation and it also means 'ontological reality' and not just a hypothesis or a calculational technique), but one common understanding of the claim is that everything in the world is ultimately explicable in the terms of physics. This is known as reductive physicalism. However, this type of physicalism in its turn leaves open the question of what we are to consider as the proper terms of physics. There seem to be two options here, and these options form the horns of Hempel's dilemma, because neither seems satisfactory. On the one hand, we may define the physical as whatever is currently explained by our best physical theories, e.g., quantum mechanics, general relativity. Though many would find this definition unsatisfactory, some would accept that we have at least a general understanding of the physical based on these theories, and can use them to assess what is physical and what is not. And therein lies the rub, as a worked-
https://en.wikipedia.org/wiki/Bosenova
A bosenova or bose supernova is a very small, supernova-like explosion, which can be induced in a Bose–Einstein condensate (BEC) by changing the external magnetic field, so that the "self-scattering" interaction transitions from repulsive to attractive due to the Feshbach resonance, causing the BEC to "collapse and bounce" or "rebound." Although the total energy of the explosion is very small, the "collapse and bounce" scenario qualitatively resembles a condensed matter version of a core-collapse supernova, hence the term bosenova. The nomenclature is not a play of words on the Brazilian music style bossa nova, but a play of words with bose-einstein and supernova. Experiment In the particular experiment when a bosenova was first detected, transitioning the self-interaction from repulsive to attractive caused the BEC to implode and shrink to a size smaller than the optical detector's minimum resolution limit, and then suddenly "explode." In this explosion, about half of the atoms in the condensate superficially seemed to have "disappeared" from the experiment altogether, i.e., they were not detected in either the cold particle remnants nor in the expanding gas cloud produced. Under current BEC theory, which only very crudely accounts for the interactions between the particles composing the BEC, the bosenova phenomenon remains unexplained, because the energy available to the individual atoms of the condensate near absolute zero appears to be insufficient to cause the observed implosion. However, subsequent mean-field theories have been proposed to explain bosenovas as a collective phenomenon. The bosenova behaviour of a BEC may provide insights into the behavior of a neutron star, as well as into the possible properties of still-hypothetical boson stars and into the quantum theory of "collective phenomena" in general.
https://en.wikipedia.org/wiki/Biliverdin%20reductase
Biliverdin reductase (BVR) is an enzyme () found in all tissues under normal conditions, but especially in reticulo-macrophages of the liver and spleen. BVR facilitates the conversion of biliverdin to bilirubin via the reduction of a double-bond between the second and third pyrrole ring into a single-bond. There are two isozymes, in humans, each encoded by its own gene, biliverdin reductase A (BLVRA) and biliverdin reductase B (BLVRB). Mechanism of catalysis BVR acts on biliverdin by reducing its double-bond between the pyrrole rings into a single-bond. It accomplishes this using NADPH + H+ as an electron donor, forming bilirubin and NADP+ as products. BVR catalyzes this reaction through an overlapping binding site including Lys18, Lys22, Lys179, Arg183, and Arg185 as key residues. This binding site attaches to biliverdin, and causes its dissociation from heme oxygenase (HO) (which catalyzes reaction of ferric heme --> biliverdin), causing the subsequent reduction to bilirubin. Structure BVR is composed of two closely packed domains, between 247-415 amino acids long and containing a Rossmann fold. BVR has also been determined to be a zinc-binding protein with each enzyme protein having one strong-binding zinc atom. The C-terminal half of BVR contains the catalytic domain, which adopts a structure containing a six-stranded beta-sheet that is flanked on one face by several alpha-helices. This domain contains the catalytic active site, which reduces the gamma-methene bridge of the open tetrapyrrole, biliverdin IX alpha, to bilirubin with the concomitant oxidation of a NADH or NADPH cofactor. Function BVR works with the biliverdin/bilirubin redox cycle. It converts biliverdin to bilirubin (a strong antioxidant), which is then converted back into biliverdin through the actions of reactive oxygen species (ROS). This cycle allows for the neutralization of ROS, and the reuse of biliverdin products. Biliverdin also is replenished in the cycle with its format
https://en.wikipedia.org/wiki/Thorium%20fuel%20cycle
The thorium fuel cycle is a nuclear fuel cycle that uses an isotope of thorium, thorium-232, as the fertile material. In the reactor, thorium-232 is transmuted into the fissile artificial uranium isotope uranium-233, which is the nuclear fuel. Unlike natural uranium, natural thorium contains only trace amounts of fissile material, which are insufficient to initiate a nuclear chain reaction. Additional fissile material or another neutron source is necessary to initiate the fuel cycle. In a thorium-fuelled reactor, thorium-232 absorbs neutrons to produce uranium-233. This parallels the process in uranium breeder reactors whereby fertile uranium-238 absorbs neutrons to form fissile plutonium-239. Depending on the design of the reactor and fuel cycle, the generated uranium-233 either fissions in situ or is chemically separated from the used nuclear fuel and formed into new nuclear fuel. The thorium fuel cycle has several potential advantages over a uranium fuel cycle, including thorium's greater abundance, superior physical and nuclear properties, reduced plutonium and actinide production, and better resistance to nuclear weapons proliferation when used in a traditional light water reactor though not in a molten salt reactor. History Concerns about the limits of worldwide uranium resources motivated initial interest in the thorium fuel cycle. It was envisioned that as uranium reserves were depleted, thorium would supplement uranium as a fertile material. However, for most countries uranium was relatively abundant and research in thorium fuel cycles waned. A notable exception was India's three-stage nuclear power programme. In the twenty-first century thorium's claimed potential for improving proliferation resistance and waste characteristics led to renewed interest in the thorium fuel cycle. While thorium is more abundant in the continental crust than uranium and easily extracted from monazite as a side product of rare earth element mining, it is much less abundant in seawater than uranium. At Oak Ridge National Laboratory in the 1960s, the Mol
https://en.wikipedia.org/wiki/Coacervate
Coacervate ( or ) is an aqueous phase rich in macromolecules such as synthetic polymers, proteins or nucleic acids. It forms through liquid-liquid phase separation (LLPS), leading to a dense phase in thermodynamic equilibrium with a dilute phase. The dispersed droplets of dense phase are also called coacervates, micro-coacervates or coacervate droplets. These structures draw a lot of interest because they form spontaneously from aqueous mixtures and provide stable compartmentalization without the need of a membrane. The term coacervate was coined in 1929 by Dutch chemist Hendrik G. Bungenberg de Jong and Hugo R. Kruyt while studying lyophilic colloidal dispersions. The name is a reference to the clustering of colloidal particles, like bees in a swarm. The concept was later borrowed by Russian biologist Alexander I. Oparin to describe the proteinoid microspheres proposed to be primitive cells (protocells) on early Earth. Coacervate-like protocells are at the core of the Oparin-Haldane hypothesis. A reawakening of coacervate research was seen in the 2000s, starting with the recognition in 2004 by scientists at the University of California, Santa Barbara (UCSB) that some marine invertebrates (such as the sandcastle worm) exploit complex coacervation to produce water-resistant biological adhesives. A few years later in 2009 the role of liquid-liquid phase separation was further recognized to be involved in the formation of certain membraneless organelles by the biophysicists Clifford Brangwynne and Tony Hyman. Liquid organelles share features with coacervate droplets and fueled the study of coacervates for biomimicry. Thermodynamics Coacervates are a type of lyophilic colloid; that is, the dense phase retains some of the original solvent – generally water – and does not collapse into solid aggregates, rather keeping a liquid property. Coacervates can be characterized as complex or simple based on the driving force for the LLPS: associative or segregative. Associative
https://en.wikipedia.org/wiki/FPD-Link
Flat Panel Display Link, more commonly referred to as FPD-Link, is the original high-speed digital video interface created in 1996 by National Semiconductor (now within Texas Instruments). It is a free and open standard for connecting the output from a graphics processing unit in a laptop, tablet computer, flat panel display, or LCD television to the display panel's timing controller. Most laptops, tablet computers, flat-panel monitors, and TVs used the interface internally through 2010, when industry leaders AMD, Dell, Intel, Lenovo, LG, and Samsung together announced that they would be phasing out this interface by 2013 in favor of embedded DisplayPort (eDP). FPD-Link and LVDS FPD-Link was the first large-scale application of the low-voltage differential signaling (LVDS) standard. National Semiconductor immediately provided interoperability specifications for the FPD-Link technology in order to promote it as a free and open standard, and thus other IC suppliers were able to copy it. FlatLink by TI was the first interoperable version of FPD-Link. By the end of the twentieth century, the major notebook computer manufacturers created the Standard Panels Working Group (SPWG) and made FPD-Link / FlatLink the standard for transferring graphics and video through the notebook's hinge. Automotive and more applications In automotive applications, FPD-Link is commonly used for navigation systems, in-car entertainment, and backup cameras, as well as other advanced driver-assistance systems. The automotive environment is known to be one of the harshest for electronic equipment due to inherent extreme temperatures and electrical transients. In order to satisfy these stringent reliability requirements, the FPD-Link II and III chipsets meet or exceed the AEC-Q100 automotive reliability standard for integrated circuits, and the ISO 10605 standard for automotive ESD applications. Another display interface based on FPD-Link is OpenLDI. It enables longer cable lengths becau
https://en.wikipedia.org/wiki/OpenLDI
OpenLDI (Open LVDS Display Interface) is a high-bandwidth digital-video interface standard for connecting graphics/video processors to flat panel LCD monitors. Even though the promoter’s group originally designed it for the desktop computer to monitor application, the majority of applications today are industrial display connections. For example, displays in medical imaging, machine vision, and construction equipment use the OpenLDI chipsets. OpenLDI is based on the FPD-Link specification, which was the de facto standard for transferring graphics and video data through notebook computer hinges since the late 1990s. Both OpenLDI and FPD-Link use low-voltage differential signaling (LVDS) as the physical layer signaling, and the three terms have mistakenly been used synonymously. (FPD-Link and OpenLDI are largely compatible, beyond the physical-layer; specifying the same serial data-streams). The OpenLDI standard was promoted by National Semiconductor, Texas Instruments, Silicon Graphics (SGI) and others. OpenLDI wasn't used in many of the intended applications after losing the computer-to-monitor interconnect application to a competing standard, Digital Visual Interface (DVI). The SGI 1600SW was the only monitor produced in significant quantities with an OpenLDI connection, though it had minor differences from the final published standards. The 1600SW used a 36-pin MDR36 male connector with a pinout that differs from that of the 36-pin centronics-style connector in the OpenLDI standard. Sony produced some VAIO displays and laptops using the standard. (According to the SGI 1600SW entry, a few other displays were made by various manufacturers using the OpenLDI standard.) See also VGA
https://en.wikipedia.org/wiki/Copeland%E2%80%93Erd%C5%91s%20constant
The Copeland–Erdős constant is the concatenation of "0." with the base 10 representations of the prime numbers in order. Its value, using the modern definition of prime, is approximately 0.235711131719232931374143… . The constant is irrational; this can be proven with Dirichlet's theorem on arithmetic progressions or Bertrand's postulate (Hardy and Wright, p. 113) or Ramaré's theorem that every even integer is a sum of at most six primes. It also follows directly from its normality (see below). By a similar argument, any constant created by concatenating "0." with all primes in an arithmetic progression dn + a, where a is coprime to d and to 10, will be irrational; for example, primes of the form 4n + 1 or 8n + 1. By Dirichlet's theorem, the arithmetic progression dn · 10^m + a contains primes for all m, and those primes are also in cd + a, so the concatenated primes contain arbitrarily long sequences of the digit zero. In base 10, the constant is a normal number, a fact proven by Arthur Herbert Copeland and Paul Erdős in 1946 (hence the name of the constant). The constant is given by $\sum_{n=1}^{\infty} p_n 10^{-\left(n + \sum_{k=1}^{n} \lfloor \log_{10} p_k \rfloor\right)}$, where pn is the nth prime number. Its continued fraction is [0; 4, 4, 8, 16, 18, 5, 1, …]. Related constants Copeland and Erdős's proof that their constant is normal relies only on the fact that the sequence of primes is strictly increasing and pn = n^(1 + o(1)), where pn is the nth prime number. More generally, if a1, a2, … is any strictly increasing sequence of natural numbers such that an = n^(1 + o(1)) and b is any natural number greater than or equal to 2, then the constant obtained by concatenating "0." with the base-b representations of the an's is normal in base b. For example, the sequence satisfies these conditions, so the constant 0.003712192634435363748597110122136… is normal in base 10, and 0.003101525354661104…7 is normal in base 7. In any given base b the number which can be written in base b as 0.0110101000101000101…b where the nth digit is 1 if and only if n is prime, is irrational. See also Smarandache–Wellin numbers:
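Because the constant is defined by straightforward concatenation, its leading digits can be generated directly. The sketch below is only an illustration, using a simple sieve of Eratosthenes; it reproduces the decimal expansion quoted above.

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

copeland_erdos = "0." + "".join(str(p) for p in primes_up_to(50))
print(copeland_erdos)   # 0.23571113171923293137414347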
https://en.wikipedia.org/wiki/Heat%20kernel
In the mathematical study of heat conduction and diffusion, a heat kernel is the fundamental solution to the heat equation on a specified domain with appropriate boundary conditions. It is also one of the main tools in the study of the spectrum of the Laplace operator, and is thus of some auxiliary importance throughout mathematical physics. The heat kernel represents the evolution of temperature in a region whose boundary is held fixed at a particular temperature (typically zero), such that an initial unit of heat energy is placed at a point at time t = 0. The most well-known heat kernel is the heat kernel of d-dimensional Euclidean space Rd, which has the form of a time-varying Gaussian function, $K(t,x,y) = (4\pi t)^{-d/2} e^{-|x-y|^2/4t}$. This solves the heat equation $\partial_t K(t,x,y) = \Delta_x K(t,x,y)$ for all t > 0 and x,y ∈ Rd, where Δ is the Laplace operator, with the initial condition $\lim_{t \to 0} K(t,x,y) = \delta_x(y) = \delta(x-y)$, where δ is a Dirac delta distribution and the limit is taken in the sense of distributions. To wit, for every smooth function ϕ of compact support, $\lim_{t \to 0} \int_{\mathbf{R}^d} K(t,x,y)\,\phi(y)\,dy = \phi(x)$. On a more general domain Ω in Rd, such an explicit formula is not generally possible. The next simplest cases of a disc or square involve, respectively, Bessel functions and Jacobi theta functions. Nevertheless, the heat kernel (for, say, the Dirichlet problem) still exists and is smooth for t > 0 on arbitrary domains and indeed on any Riemannian manifold with boundary, provided the boundary is sufficiently regular. More precisely, in these more general domains, the heat kernel for the Dirichlet problem is the solution of the initial boundary value problem $\partial_t K(t,x,y) = \Delta K(t,x,y)$ for all t > 0, $K(t,x,y) = 0$ for x or y on ∂Ω, and $\lim_{t \to 0} K(t,x,y) = \delta_x(y)$. It is not difficult to derive a formal expression for the heat kernel on an arbitrary domain. Consider the Dirichlet problem in a connected domain (or manifold with boundary) U. Let λn be the eigenvalues for the Dirichlet problem of the Laplacian $\Delta\phi + \lambda\phi = 0$ in U, $\phi = 0$ on ∂U. Let ϕn denote the associated eigenfunctions, normalized to be orthonormal in L2(U). The inverse Dirichlet Laplacian Δ−1 is a compact and selfadjoint operator, and so the spectral theorem implies th
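As a numerical illustration of the delta-function initial condition, the sketch below assumes the Gaussian form given above in one dimension, integrates the kernel against a smooth test function on a grid, and shows the result approaching ϕ(x) as t → 0; the test function and grid are arbitrary choices.

import numpy as np

def heat_kernel_1d(t, x, y):
    return (4 * np.pi * t) ** -0.5 * np.exp(-(x - y) ** 2 / (4 * t))

phi = lambda y: np.exp(-y ** 2) * np.cos(y)     # smooth, rapidly decaying test function
y = np.linspace(-10.0, 10.0, 20001)
dy = y[1] - y[0]
x = 0.3
for t in (1.0, 0.1, 0.001):
    smoothed = np.sum(heat_kernel_1d(t, x, y) * phi(y)) * dy   # ∫ K(t,x,y) φ(y) dy
    print(t, smoothed)
print("phi(x) =", phi(x))                        # the t -> 0 limit recovers phi(x)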
https://en.wikipedia.org/wiki/Myofilament
Myofilaments are the three protein filaments of myofibrils in muscle cells. The main proteins involved are myosin, actin, and titin. Myosin and actin are the contractile proteins and titin is an elastic protein. The myofilaments act together in muscle contraction, and in order of size are a thick one of mostly myosin, a thin one of mostly actin, and a very thin one of mostly titin. Types of muscle tissue are striated skeletal muscle and cardiac muscle, obliquely striated muscle (found in some invertebrates), and non-striated smooth muscle. Various arrangements of myofilaments create different muscles. Striated muscle has transverse bands of filaments. In obliquely striated muscle, the filaments are staggered. Smooth muscle has irregular arrangements of filaments. Structure There are three different types of myofilaments: thick, thin, and elastic filaments. Thick filaments consist primarily of a type of myosin, a motor protein – myosin II. Each thick filament is approximately 15 nm in diameter, and each is made of several hundred molecules of myosin. A myosin molecule is shaped like a golf club, with a tail formed of two intertwined chains and a double globular head projecting from it at an angle. Half of the myosin heads angle to the left and half of them angle to the right, creating an area in the middle of the filament known as the M-region or bare zone. Thin filaments, are 7 nm in diameter, and consist primarily of the protein actin, specifically filamentous F-actin. Each F-actin strand is composed of a string of subunits called globular G-actin. Each G-actin has an active site that can bind to the head of a myosin molecule. Each thin filament also has approximately 40 to 60 molecules of tropomyosin, the protein that blocks the active sites of the thin filaments when the muscle is relaxed. Each tropomyosin molecule has a smaller calcium-binding protein called troponin bound to it. All thin filaments are attached to the Z-line. Elastic filaments, 1 nm in
https://en.wikipedia.org/wiki/Ntoskrnl.exe
ntoskrnl.exe (short for Windows NT operating system kernel executable), also known as the kernel image, contains the kernel and executive layers of the Microsoft Windows NT kernel, and is responsible for hardware abstraction, process handling, and memory management. In addition to the kernel and executive mentioned earlier, it contains the cache manager, security reference monitor, memory manager, scheduler (Dispatcher), and blue screen of death (the prose and portions of the code). Overview x86 versions of ntoskrnl.exe depend on bootvid.dll, hal.dll and kdcom.dll (x64 variants of ntoskrnl.exe have these DLLs embedded into the kernel to increase performance). However, it is not a native application. In other words, it is not linked against ntdll.dll. Instead, ntoskrnl.exe contains a standard "start" entry point that calls the architecture-independent kernel initialization function. Because it requires a static copy of the C Runtime objects, the executable is usually about 10 MB in size. In Windows XP and earlier, the Windows installation source ships four kernel image files to support uniprocessor systems, symmetric multiprocessor (SMP) systems, CPUs with PAE, and CPUs without PAE. Windows setup decides whether the system is uniprocessor or multiprocessor, then installs both the PAE and non-PAE variants of the kernel image for that kind. On a multiprocessor system, Setup installs ntkrnlmp.exe and ntkrpamp.exe but renames them to ntoskrnl.exe and ntkrnlpa.exe respectively. Starting with Windows Vista, Microsoft began unifying the kernel images as multi-core CPUs came to market and PAE became mandatory. Routines in ntoskrnl use prefixes on their names to indicate in which component of ntoskrnl they are defined. Since not all functions are exported by the kernel, function prefixes ending in i or p (such as Mi, Obp, Iop) are internal and not supposed to be accessed by the user. These functions contain the core code and implement important checks
https://en.wikipedia.org/wiki/Darwinism%20%28book%29
Darwinism: An Exposition of the Theory of Natural Selection with Some of Its Applications is an 1889 book on evolution by Alfred Russel Wallace, the co-discoverer of evolution by natural selection together with Charles Darwin. This was a book Wallace wrote as a defensive response to the scientific critics of natural selection. Of all Wallace's books, it is cited by scholarly publications the most. Synopsis In Darwinism fifteen chapters, Alfred Russel Wallace sets out his understanding of the theory of evolution by natural selection. He begins by defining "species", discussing creationism, opinion before Charles Darwin, and Darwin's theory. He then describes the Malthusian struggle for existence, given the ability of organisms to reproduce in a world of finite resources. He explains the importance of variability within species, giving examples. He describes variation in domesticated animals and cultivated plants, and the process of artificial selection by breeders. Wallace then explains the process of natural selection acting on pre-existing variation. He lists various issues and objections to the theory. He discusses how interspecies hybrids are usually infertile, and how this can contribute to reproductive isolation. He then examines the purpose of animal coloration, including camouflage and mimicry, arguing that these are evidence of natural selection. He gives detailed examples of warning coloration and mimicry, discussing how these are produced by selection. Animal coloration and ornamentation that differs between the sexes are discussed, though he largely disagrees with Darwin's theory of sexual selection. Wallace then explores the co-evolution of flowers with their pollinators including insects and birds. He then describes the geographical distribution of organisms, arguing that this was created by long-distance dispersal of pioneer organisms, such as insects blown across the sea. He explains the geological evidence for evolution, the fossil record in succ
https://en.wikipedia.org/wiki/Hermann%E2%80%93Mauguin%20notation
In geometry, Hermann–Mauguin notation is used to represent the symmetry elements in point groups, plane groups and space groups. It is named after the German crystallographer Carl Hermann (who introduced it in 1928) and the French mineralogist Charles-Victor Mauguin (who modified it in 1931). This notation is sometimes called international notation, because it was adopted as standard by the International Tables for Crystallography since their first edition in 1935. The Hermann–Mauguin notation, compared with the Schoenflies notation, is preferred in crystallography because it can easily be used to include translational symmetry elements, and it specifies the directions of the symmetry axes. Point groups Rotation axes are denoted by a number n — 1, 2, 3, 4, 5, 6, 7, 8 ... (angle of rotation φ = 360°/n). For improper rotations, Hermann–Mauguin symbols show rotoinversion axes, unlike the Schoenflies and Shubnikov notations, which show rotation-reflection axes. The rotoinversion axes are represented by the corresponding number with a macron, n̄ — 1̄, 2̄, 3̄, 4̄, 5̄, 6̄, 7̄, 8̄, ... . 2̄ is equivalent to a mirror plane and usually notated as m. The direction of the mirror plane is defined as the direction perpendicular to it (the direction of the axis). Hermann–Mauguin symbols show non-equivalent axes and planes in a symmetrical fashion. The direction of a symmetry element corresponds to its position in the Hermann–Mauguin symbol. If a rotation axis n and a mirror plane m have the same direction (i.e. the plane is perpendicular to axis n), then they are denoted as a fraction n/m. If two or more axes have the same direction, the axis with higher symmetry is shown. Higher symmetry means that the axis generates a pattern with more points. For example, rotation axes 3, 4, 5, 6, 7, 8 generate 3-, 4-, 5-, 6-, 7-, 8-point patterns, respectively. Improper rotation axes 3̄, 4̄, 5̄, 6̄, 7̄, 8̄ generate 6-, 4-, 10-, 6-, 14-, 8-point patterns, respectively. If a rotation and a rotoinversion axis generate the same n
https://en.wikipedia.org/wiki/MRC%20Cognition%20and%20Brain%20Sciences%20Unit
The Cognition and Brain Sciences Unit is a branch of the UK Medical Research Council, based in Cambridge, England. The CBSU is a centre for cognitive neuroscience, with a mission to improve human health by understanding and enhancing cognition and behaviour in health, disease and disorder. It is one of the largest and most long-lasting contributors to the development of psychological theory and practice. The CBSU has its own magnetic resonance imaging (MRI, 3T) scanner on-site, as well as a 306-channel magnetoencephalography (MEG) system and a 128-channel electroencephalography (EEG) laboratory. The CBSU has close links to clinical neuroscience research in the University of Cambridge Medical School. Over 140 scientists, students, and support staff work in research areas such as Memory, Attention, Emotion, Speech and Language, Development and Aging, Computational Modelling and Neuroscience Methods. With dedicated facilities available on site, the Unit has particular strengths in the application of neuroimaging techniques in the context of well-developed neuro-cognitive theory. History The unit was established in 1944 as the MRC Applied Psychology Unit. In June 2001, the History of Modern Biomedicine Research Group held a witness seminar to gather information on the unit's history. On 1 July 2017, the CBU was merged with the University of Cambridge. Coming under the Clinical School, the unit is still funded by the British government through Research Councils UK but is managed and maintained by Cambridge University. List of directors Kenneth Craik, 1944–1945 Frederic Bartlett, 1945–1951 Norman Mackworth, 1951–1958 Donald Broadbent, 1958–1974 Alan Baddeley, 1974–1997 William Marslen-Wilson, 1997–2010 Susan Gathercole, 2011–2018 Matthew Lambon Ralph, 2018–
https://en.wikipedia.org/wiki/Slip%20%28materials%20science%29
In materials science, slip is the large displacement of one part of a crystal relative to another part along crystallographic planes and directions. Slip occurs by the passage of dislocations on close-packed planes, which are planes containing the greatest number of atoms per area and in close-packed directions (most atoms per length). Close-packed planes are known as slip or glide planes. A slip system describes the set of symmetrically identical slip planes and associated family of slip directions for which dislocation motion can easily occur and lead to plastic deformation. The magnitude and direction of slip are represented by the Burgers vector, b. An external force makes parts of the crystal lattice glide along each other, changing the material's geometry. A critical resolved shear stress is required to initiate a slip. Slip systems Face centered cubic crystals Slip in face centered cubic (fcc) crystals occurs along the close-packed plane. Specifically, the slip plane is of type {111}, and the direction is of type <1̄10>. In the diagram on the right, the specific plane and direction are (111) and [1̄10], respectively. Given the permutations of the slip plane types and direction types, fcc crystals have 12 slip systems. In the fcc lattice, the norm of the Burgers vector, b, can be calculated using the following equation: $|\mathbf{b}| = \frac{a}{2}\sqrt{h^2 + k^2 + l^2}$, where a is the lattice constant of the unit cell. Body centered cubic crystals Slip in body-centered cubic (bcc) crystals occurs along the plane of shortest Burgers vector as well; however, unlike fcc, there are no truly close-packed planes in the bcc crystal structure. Thus, a slip system in bcc requires heat to activate. Some bcc materials (e.g. α-Fe) can contain up to 48 slip systems. There are six slip planes of type {110}, each with two <111> directions (12 systems). There are 24 {123} and 12 {112} planes each with one <111> direction (36 systems, for a total of 48). Although the number of possible slip systems is much higher in bcc cr
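As a worked example of the Burgers-vector relation above, the sketch below evaluates |b| for an fcc metal with the a/2<110> slip direction; the lattice constant is an approximate value for aluminium used only as an illustration.

import math

def burgers_norm_fcc(a, h=1, k=1, l=0):
    # |b| = (a/2) * sqrt(h^2 + k^2 + l^2), evaluated here for a <110>-type direction
    return (a / 2) * math.sqrt(h ** 2 + k ** 2 + l ** 2)

a_aluminium_nm = 0.405                  # assumed lattice constant of Al, in nanometres
print(round(burgers_norm_fcc(a_aluminium_nm), 3), "nm")   # ~0.286 nm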
https://en.wikipedia.org/wiki/Taint%20checking
Taint checking is a feature in some computer programming languages, such as Perl, Ruby or Ballerina designed to increase security by preventing malicious users from executing commands on a host computer. Taint checks highlight specific security risks primarily associated with web sites which are attacked using techniques such as SQL injection or buffer overflow attack approaches. Overview The concept behind taint checking is that any variable that can be modified by an outside user (for example a variable set by a field in a web form) poses a potential security risk. If that variable is used in an expression that sets a second variable, that second variable is now also suspicious. The taint checking tool can then proceed variable by variable forming a list of variables which are potentially influenced by outside input. If any of these variables is used to execute dangerous commands (such as direct commands to a SQL database or the host computer operating system), the taint checker warns that the program is using a potentially dangerous tainted variable. The computer programmer can then redesign the program to erect a safe wall around the dangerous input. Taint checking may be viewed as a conservative approximation of the full verification of non-interference or the more general concept of secure information flow. Because information flow in a system cannot be verified by examining a single execution trace of that system, the results of taint analysis will necessarily reflect approximate information regarding the information flow characteristics of the system to which it is applied. Example The following dangerous Perl code opens a large SQL injection vulnerability by not checking the value of the $name variable: #!/usr/bin/perl my $name = $cgi->param("name"); # Get the name from the browser ... $dbh->{TaintIn} = 1; $dbh->execute("SELECT * FROM users WHERE name = '$name';"); # Execute an SQL query If taint checking is turned on, Perl would refuse to run t
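The idea of marking data that comes from outside and propagating that mark through derived values can be sketched in a few lines. The Python below is only a conceptual illustration of taint propagation, not how Perl's taint mode is implemented; the Tainted class and run_query function are invented for the example.

class Tainted(str):
    """A string marked as coming from an untrusted source."""
    def __add__(self, other):
        return Tainted(str.__add__(self, other))      # concatenation keeps the taint
    def __radd__(self, other):
        return Tainted(str(other) + str(self))        # even when the tainted part is on the right

def run_query(sql):
    if isinstance(sql, Tainted):
        raise RuntimeError("refusing to execute a query built from tainted input")
    print("executing:", sql)

name = Tainted("x' OR '1'='1")                        # pretend this came from a web form
try:
    run_query("SELECT * FROM users WHERE name = '" + name + "'")
except RuntimeError as err:
    print("blocked:", err)                            # the taint propagated through concatenation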
https://en.wikipedia.org/wiki/Hybrid%20neural%20network
The term hybrid neural network can have two meanings: Biological neural networks interacting with artificial neuronal models, and Artificial neural networks with a symbolic part (or, conversely, symbolic computations with a connectionist part). As for the first meaning, the artificial neurons and synapses in hybrid networks can be digital or analog. For the digital variant, voltage clamps are used to monitor the membrane potential of neurons, to computationally simulate artificial neurons and synapses and to stimulate biological neurons by inducing synaptic currents. For the analog variant, specially designed electronic circuits connect to a network of living neurons through electrodes. As for the second meaning, incorporating elements of symbolic computation and artificial neural networks into one model was an attempt to combine the advantages of both paradigms while avoiding the shortcomings. Symbolic representations have advantages with respect to explicit, direct control, fast initial coding, dynamic variable binding and knowledge abstraction. Representations of artificial neural networks, on the other hand, show advantages for biological plausibility, learning, robustness (fault-tolerant processing and graceful decay), and generalization to similar input. Since the early 1990s many attempts have been made to reconcile the two approaches.
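For the second meaning, a minimal sketch of the idea is a learned (connectionist) scorer whose output is post-processed by an explicit, inspectable rule base (the symbolic part). Everything below, including the weights, thresholds and rules, is invented purely for illustration and does not represent any particular published architecture.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))                       # stand-in for learned weights

def neural_score(features):
    hidden = np.tanh(W @ features)                # tiny "network"
    return float(1 / (1 + np.exp(-hidden.sum())))

RULES = [
    ("hard constraint: first feature must be non-negative", lambda s, f: f[0] < 0, "reject"),
    ("score above threshold", lambda s, f: s > 0.8, "accept"),
]

def hybrid_decision(features):
    f = np.asarray(features, dtype=float)
    s = neural_score(f)
    for name, condition, outcome in RULES:        # symbolic part: explicit rules over the output
        if condition(s, f):
            return outcome, name
    return "defer", "no rule fired"

print(hybrid_decision([0.5, 1.0, -0.2, 0.3]))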
https://en.wikipedia.org/wiki/Seed%20dormancy
Seed dormancy is an evolutionary adaptation that prevents seeds from germinating during unsuitable ecological conditions that would typically lead to a low probability of seedling survival. Dormant seeds do not germinate in a specified period of time under a combination of environmental factors that are normally conducive to the germination of non-dormant seeds. An important function of seed dormancy is delayed germination, which allows dispersal and prevents simultaneous germination of all seeds. The staggering of germination safeguards some seeds and seedlings from suffering damage or death from short periods of bad weather or from transient herbivores; it also allows some seeds to germinate when competition from other plants for light and water might be less intense. Another form of delayed seed germination is seed quiescence, which is different from true seed dormancy and occurs when a seed fails to germinate because the external environmental conditions are too dry or warm or cold for germination. Many species of plants have seeds that delay germination for many months or years, and some seeds can remain in the soil seed bank for more than 50 years before germination. Seed dormancy is especially adaptive in fire-prone ecosystems. Some seeds have a very long viability period, and the oldest documented germinating seed was nearly 2000 years old based on radiocarbon dating. Overview True dormancy or inherent (or innate) dormancy is caused by conditions within the seed that prevent germination even if the conditions are favorable. Imposed dormancy is caused by the external conditions that remain unsuitable for germination Seed dormancy can be divided into two major categories based on what part of the seed produces dormancy: exogenous and endogenous. There are three types of inherent dormancy based on their mode of action: physical, physiological and morphological. There have been a number of classification schemes developed to group different dormant seeds, b
https://en.wikipedia.org/wiki/Pattern%20formation
The science of pattern formation deals with the visible, (statistically) orderly outcomes of self-organization and the common principles behind similar patterns in nature. In developmental biology, pattern formation refers to the generation of complex organizations of cell fates in space and time. The role of genes in pattern formation is an aspect of morphogenesis, the creation of diverse anatomies from similar genes, now being explored in the science of evolutionary developmental biology or evo-devo. The mechanisms involved are well seen in the anterior-posterior patterning of embryos from the model organism Drosophila melanogaster (a fruit fly), one of the first organisms to have its morphogenesis studied, and in the eyespots of butterflies, whose development is a variant of the standard (fruit fly) mechanism. Patterns in nature Examples of pattern formation can be found in biology, physics, and chemistry, and can readily be simulated with computer graphics, as described in turn below. Biology Biological patterns such as animal markings, the segmentation of animals, and phyllotaxis are formed in different ways. In developmental biology, pattern formation describes the mechanism by which initially equivalent cells in a developing tissue in an embryo assume complex forms and functions. Embryogenesis, such as of the fruit fly Drosophila, involves coordinated control of cell fates. Pattern formation is genetically controlled, and often involves each cell in a field sensing and responding to its position along a morphogen gradient, followed by short distance cell-to-cell communication through cell signaling pathways to refine the initial pattern. In this context, a field of cells is the group of cells whose fates are affected by responding to the same set of positional information cues. This conceptual model was first described as the French flag model in the 1960s. More generally, the morphology of organisms is patterned by the mechanisms of evolutionary development
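The morphogen-gradient idea mentioned above (the French flag model) can be sketched in a few lines: cells along a one-dimensional field read a decaying morphogen concentration and adopt one of three fates by threshold. The decay length and thresholds below are invented purely for illustration.

import math

def morphogen(x, decay_length=0.3):
    return math.exp(-x / decay_length)         # concentration falls off away from the source at x = 0

def cell_fate(concentration, high=0.5, low=0.2):
    if concentration > high:
        return "blue"                          # nearest the source
    if concentration > low:
        return "white"
    return "red"                               # furthest from the source

field = [cell_fate(morphogen(i / 20)) for i in range(21)]
print("".join(fate[0] for fate in field))      # bbbbbwwwwwrrrrrrrrrrr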
https://en.wikipedia.org/wiki/Occipital%20artery
The occipital artery is a branch of the external carotid artery that provides arterial supply to the back of the scalp, sternocleidomastoid muscles, and deep muscles of the back and neck. Structure Origin The occipital artery arises from (the posterior aspect of) the external carotid artery (some 2 cm distal to the origin of the external carotid artery). Course and relations At its origin, the hypoglossal nerve (CN XII) crosses artery superficially as the nerve passes posteroanteriorly. The artery passes superoposteriorly deep to the posterior belly of the digastricus muscle. It crosses the internal carotid artery and vein, the vagus nerve (CN X), accessory nerve (CN XI), and hypoglossal nerve (CN XII). It next ascends to the interval between the transverse process of the atlas and the mastoid process of the temporal bone, and passes horizontally backward, grooving the surface of the latter bone, being covered by the sternocleidomastoideus, splenius capitis, longissimus capitis, and digastricus, and resting upon the rectus capitis lateralis, the obliquus superior, and semispinalis capitis. It then changes its course and runs vertically upward, pierces the fascia connecting the cranial attachment of the trapezius with the sternocleidomastoideus, and ascends in a tortuous course in the superficial fascia of the scalp, where it divides into numerous branches, which reach as high as the vertex of the skull and anastomose with the posterior auricular and superficial temporal arteries. Distribution Muscular branches: supply the digastric, stylohyoid, splenius, and longus capitis muscles. Sternocleidomastoid branch: This branch divides into upper and lower branches in the carotid triangle. The upper branch accompanies the accessory nerve to the sternocleidomastoid, and the lower branch arises near the origin of the occipital artery before entering the sternocleidomastoid muscle. Occasionally, this branch arises directly from the external carotid artery. Auric
https://en.wikipedia.org/wiki/Anonymous%20post
An anonymous post, is an entry on a textboard, anonymous bulletin board system, or other discussion forums like Internet forum, without a screen name or more commonly by using a non-identifiable pseudonym. Some online forums such as Slashdot do not allow such posts, requiring users to be registered either under their real name or utilizing a pseudonym. Others like JuicyCampus, AutoAdmit, 2channel, and other Futaba-based imageboards (such as 4chan) thrive on anonymity. Users of 4chan, in particular, interact in an anonymous and ephemeral environment that facilitates rapid generation of new trends. History of online anonymity Online anonymity can be traced to Usenet newsgroups in the late 1990s where the notion of using invalid emails for posting to newsgroups was introduced. This was primarily used for discussion on newsgroups pertaining to certain sensitive topics. There was also the introduction of anonymous remailers which were capable of stripping away the sender's address from mail packets before sending them to the receiver. Online services which facilitated anonymous posting sprang up around mid-1992, originating with the cypherpunk group. The precursor to Internet forums like 2channel and 4chan were textboards like Ayashii World and Amezou World that provided the ability for anonymous posts in Japan. These "large-scale anonymous textboards" were inspired by the Usenet culture and were primarily focused on technology, unlike their descendants. Today, image boards receive tremendous Internet traffic from all parts of the world. In 2011, on 4chan's most popular board, /b/, there were roughly 35,000 threads and 400,000 posts created per day. At that time, that level of content was on par with YouTube. Such high traffic suggests a broad demand from Internet users for anonymous content sharing sites. Levels of anonymity Anonymity on the Internet can pertain to both the utilization of pseudonyms or requiring no authentication at all (also called "perfect anonymi
https://en.wikipedia.org/wiki/Private%20IP
PIP in telecommunications and data communications stands for Private Internet Protocol or Private IP. PIP refers to connectivity into a private extranet network which by its design emulates the functioning of the Internet. Specifically, the Internet uses a routing protocol called border gateway protocol (BGP), as do most Multiprotocol Label Switching (MPLS) networks. With this design, there is an ambiguity in the route that a packet can take while traversing the network. Whereas the Internet is a public offering, MPLS PIP networks are private. This lends a known, often used, and comfortable network design model for private implementation. Private IP removes the need for antiquated Frame Relay networks, and even more antiquated point-to-point networks, with the service provider able to offer a private extranet to its customer at an affordable price point.
https://en.wikipedia.org/wiki/Icelandic%20magical%20staves
Icelandic magical staves () are sigils that were credited with supposed magical effect preserved in various Icelandic grimoires, such as the Galdrabók, dating from the 17th century and later. Table of magical staves See also Galdr Hex sign Runic magic
https://en.wikipedia.org/wiki/Autosave
Autosave is a saving function in many computer applications and video games which automatically saves the current changes or progress in the program or game, intending to prevent data loss should the user be otherwise prevented from doing so manually by a crash, freeze or user error. Autosaving is typically done either in predetermined intervals or before, during, and after a complex editing task is begun. Application software It has traditionally been seen as a feature to protect documents in an application or system failure (crash), and autosave backups are often purged whenever the user finishes their work. An alternative paradigm is to have all changes saved continuously (as with pen and paper) and all versions of a document available for review. This would remove the need for saving documents entirely. There are challenges to implementation at the file, application and operating system levels. For example, in Microsoft Office, this option is called AutoRecover and, by default, saves the document every ten minutes in the temporary file directory. Restarting an Office program after crashing prompts the user to save the last recovered version. However, this does not protect users who mistakenly click "No" when asked to save their changes if Excel closes normally (except for Office 2013 and later). Autosave also syncs documents to OneDrive when editing normally. Mac OS 10.7 Lion added an autosave feature that is available to some applications, and works in conjunction with Time Machine-like functionality to periodically save all versions of a document. This eliminates the need for any manual saving, as well as providing versioning support through the same system. A version is saved every five minutes, during any extended periods of idle time, or when the user uses "Save a version," which replaces the former "Save" menu item and takes its Command-S shortcut. Saves are made on snapshots of the document data and occur in a separate thread, so the user is never pa
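A minimal interval-based autosave can be sketched as a background thread that periodically writes a recovery copy whenever the document has unsaved changes. The Document class, file name and one-second interval below are all illustrative choices, not any particular application's implementation.

import os, tempfile, threading, time

class Document:
    def __init__(self, path):
        self.path, self.text, self.dirty = path, "", False
    def edit(self, more):
        self.text += more
        self.dirty = True
    def start_autosave(self, interval_seconds=600):
        def loop():
            while True:
                time.sleep(interval_seconds)
                if self.dirty:                                  # only write when something changed
                    with open(self.path + ".autosave", "w") as f:
                        f.write(self.text)
                    self.dirty = False
        threading.Thread(target=loop, daemon=True).start()

doc = Document(os.path.join(tempfile.gettempdir(), "draft.txt"))
doc.start_autosave(interval_seconds=1)       # short interval so the example finishes quickly
doc.edit("hello, world")
time.sleep(1.5)                              # let the background thread run once
print(open(doc.path + ".autosave").read())   # hello, world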
https://en.wikipedia.org/wiki/5-Methyluridine%20triphosphate
5-Methyluridine triphosphate or m5UTP is one of five nucleoside triphosphates. It is the ribonucleoside triphosphate of thymidine, but the nomenclature with "5-methyluridine" is used because the term thymidine triphosphate is used for the deoxyribonucleoside by convention.
https://en.wikipedia.org/wiki/IBM%20Advanced%20Peer-to-Peer%20Networking
IBM Advanced Peer-to-Peer Networking (APPN) is an extension to the Systems Network Architecture (SNA) "that allows large and small computers to communicate as peers across local and wide-area networks." Goals and features The goals of APPN were: Provide effective routing for SNA traffic Allow sessions to be established without the involvement of a central computer Reduce the requirements to predict resource use Provide prioritization within SNA traffic Support both legacy and APPN traffic To meet these goals it includes features such as these: distributed network control dynamic exchange of network topology information to foster ease of connection, reconfiguration, and route selection dynamic definition of network resources automated resource registration and directory lookup. History APPN was defined around 1986, and was meant to complement IBM's Systems Network Architecture. It was designed as a simplification, but it turned out to be significantly complex, in particular in migration situations. APPN was originally meant to be a "DECNET killer", but DEC actually died before APPN was completed. APPN has been largely superseded by TCP/IP (Internet). APPN evolved to include a more efficient data routing layer which was called High Performance Routing (HPR). HPR was made available across a range of enterprise corporation networking products in the late 1990s, but today is typically used only within IBM's z/OS environments as a replacement for legacy SNA networks. It seems to be still widely used within UDP tunnels, this technology is known as Enterprise Extender. APPN should not be confused with the similarly named APPC (Advanced Program-to-Program Communication). APPN manages communication between machines, including routing, and operates at the transport and network layers. By contrast, APPC manages communication between programs, operating at the application and presentation layers. APPN has nothing to do with peer-to-peer file sharing software such
https://en.wikipedia.org/wiki/Projection%20%28mathematics%29
In mathematics, a projection is an idempotent mapping of a set (or other mathematical structure) into a subset (or sub-structure). In this case, idempotent means that projecting twice is the same as projecting once. The restriction to a subspace of a projection is also called a projection, even if the idempotence property is lost. An everyday example of a projection is the casting of shadows onto a plane (sheet of paper): the projection of a point is its shadow on the sheet of paper, and the projection (shadow) of a point on the sheet of paper is that point itself (idempotency). The shadow of a three-dimensional sphere is a closed disk. Originally, the notion of projection was introduced in Euclidean geometry to denote the projection of the three-dimensional Euclidean space onto a plane in it, like the shadow example. The two main projections of this kind are: The projection from a point onto a plane or central projection: If C is a point, called the center of projection, then the projection of a point P different from C onto a plane that does not contain C is the intersection of the line CP with the plane. The points P such that the line CP is parallel to the plane do not have any image by the projection, but one often says that they project to a point at infinity of the plane (see Projective geometry for a formalization of this terminology). The projection of the point C itself is not defined. The projection parallel to a direction D, onto a plane or parallel projection: The image of a point P is the intersection with the plane of the line parallel to D passing through P. See for an accurate definition, generalized to any dimension. The concept of projection in mathematics is a very old one, and most likely has its roots in the phenomenon of the shadows cast by real-world objects on the ground. This rudimentary idea was refined and abstracted, first in a geometric context and later in other branches of mathematics. Over time different versions of the con
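The central projection defined above amounts to intersecting the line CP with the plane. The sketch below does exactly that for a plane given by a point and a normal vector; the coordinates are made-up example values, with the plane z = 0 playing the role of the sheet of paper.

import numpy as np

def central_projection(C, P, Q, n):
    """Project P from the center C onto the plane through Q with normal n."""
    C, P, Q, n = (np.asarray(v, dtype=float) for v in (C, P, Q, n))
    d = P - C                            # direction of the line CP
    denom = n @ d
    if np.isclose(denom, 0.0):
        return None                      # CP parallel to the plane: "point at infinity"
    t = n @ (Q - C) / denom
    return C + t * d

# center above the plane z = 0, casting a "shadow" of the point (1, 2, 1)
print(central_projection(C=[0, 0, 5], P=[1, 2, 1], Q=[0, 0, 0], n=[0, 0, 1]))   # [1.25 2.5  0. ]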
https://en.wikipedia.org/wiki/Modified%20internal%20rate%20of%20return
The modified internal rate of return (MIRR) is a financial measure of an investment's attractiveness. It is used in capital budgeting to rank alternative investments of equal size. As the name implies, MIRR is a modification of the internal rate of return (IRR) and as such aims to resolve some problems with the IRR. Problems associated with the IRR While there are several problems with the IRR, MIRR resolves two of them. Firstly, IRR is sometimes misapplied, under an assumption that interim positive cash flows are reinvested elsewhere in a different project at the same rate of return offered by the project that generated them. This is usually an unrealistic scenario and a more likely situation is that the funds will be reinvested at a rate closer to the firm's cost of capital. The IRR therefore often gives an unduly optimistic picture of the projects under study. Generally for comparing projects more fairly, the weighted average cost of capital should be used for reinvesting the interim cash flows. Secondly, more than one IRR can be found for projects with alternating positive and negative cash flows, which leads to confusion and ambiguity. MIRR finds only one value. Calculation MIRR is calculated as follows: $\mathrm{MIRR} = \left(\frac{FV(\text{positive cash flows, reinvestment rate})}{-PV(\text{negative cash flows, finance rate})}\right)^{1/n} - 1$, where n is the number of equal periods at the end of which the cash flows occur (not the number of cash flows), PV is present value (at the beginning of the first period), FV is future value (at the end of the last period). The formula adds up the negative cash flows after discounting them to time zero using the external cost of capital, adds up the positive cash flows including the proceeds of reinvestment at the external reinvestment rate to the final period, and then works out what rate of return would cause the magnitude of the discounted negative cash flows at time zero to be equivalent to the future value of the positive cash flows at the final time period. Spreadsheet applications, such as Microsoft Excel, have inbuilt functions to calculate t
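The calculation described above translates directly into code: discount the negative cash flows to time zero at the finance rate, compound the positive ones to the final period at the reinvestment rate, and solve for the single rate that links the two. The sketch below assumes annual periods and uses invented cash flows and rates purely as an example.

def mirr(cash_flows, finance_rate, reinvest_rate):
    n = len(cash_flows) - 1                                  # number of periods, not of cash flows
    pv_negative = sum(cf / (1 + finance_rate) ** t
                      for t, cf in enumerate(cash_flows) if cf < 0)
    fv_positive = sum(cf * (1 + reinvest_rate) ** (n - t)
                      for t, cf in enumerate(cash_flows) if cf > 0)
    return (fv_positive / -pv_negative) ** (1 / n) - 1

flows = [-1000, 400, 400, 400, 400]          # initial outlay followed by four inflows
print(round(mirr(flows, finance_rate=0.10, reinvest_rate=0.08), 4))   # ~0.159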
https://en.wikipedia.org/wiki/Fluoroacetic%20acid
Fluoroacetic acid is an organofluorine compound with formula CH2FCO2H. It is a colorless solid that is noted for its relatively high toxicity. The conjugate base, fluoroacetate, occurs naturally in at least 40 plants in Australia, Brazil, and Africa. It is one of only five known organic fluorine-containing natural products. Toxicity Fluoroacetic acid is a harmful metabolite of some fluorine-containing drugs (median lethal dose, LD50 = 10 mg/kg in humans). The most common metabolic sources of fluoroacetic acid are fluoroamines and fluoroethers. Fluoroacetic acid can disrupt the Krebs cycle. In contrast with monofluoroacetic acid, difluoroacetic acid and trifluoroacetic acid are far less toxic. Its pKa is 2.66, in contrast to 1.24 and 0.23 for the respective di- and trifluorinated acids. Uses Fluoroacetic acid is used to manufacture pesticides, especially rodenticides (see sodium fluoroacetate). See also Difluoroacetic acid Trifluoroacetic acid
https://en.wikipedia.org/wiki/Multiple%20%28mathematics%29
In mathematics, a multiple is the product of any quantity and an integer. In other words, for the quantities a and b, it can be said that b is a multiple of a if b = na for some integer n, which is called the multiplier. If a is not zero, this is equivalent to saying that b/a is an integer. When a and b are both integers, and b is a multiple of a, then a is called a divisor of b. One says also that a divides b. If a and b are not integers, mathematicians prefer generally to use integer multiple instead of multiple, for clarification. In fact, multiple is used for other kinds of product; for example, a polynomial p is a multiple of another polynomial q if there exists a third polynomial r such that p = qr. Examples 14, 49, −21 and 0 are multiples of 7, whereas 3 and −6 are not. This is because there are integers that 7 may be multiplied by to reach the values of 14, 49, 0 and −21, while there are no such integers for 3 and −6. Each of the products listed below, and in particular, the products for 3 and −6, is the only way that the relevant number can be written as a product of 7 and another real number: 14 = 7 × 2; 49 = 7 × 7; −21 = 7 × (−3); 0 = 7 × 0; 3 = 7 × (3/7), but 3/7 is not an integer; −6 = 7 × (−6/7), but −6/7 is not an integer. Properties 0 is a multiple of every number (0 = 0 · b). The product of any integer n and any integer is a multiple of n. In particular, n, which is equal to n · 1, is a multiple of n (every integer is a multiple of itself), since 1 is an integer. If a and b are multiples of x, then a + b and a − b are also multiples of x. Submultiple In some texts, "a is a submultiple of b" has the meaning of "a being a unit fraction of b" (a = b/n) or, equivalently, "b being an integer multiple n of a" (b = na). This terminology is also used with units of measurement (for example by the BIPM and NIST), where a unit submultiple is obtained by prefixing the main unit, defined as the quotient of the main unit by an integer, mostly a power of 10^3. For example, a millimetre is the 1000-fold submultiple of a metre. As another example, one inch may be considered as a 12-fold su
https://en.wikipedia.org/wiki/Projection%20%28set%20theory%29
In set theory, a projection is one of two closely related types of functions or operations, namely: A set-theoretic operation typified by the jth projection map, written proj_j, that takes an element x = (x_1, …, x_j, …, x_k) of the Cartesian product X_1 × ⋯ × X_k to the value proj_j(x) = x_j. A function that sends an element x to its equivalence class under a specified equivalence relation E, or, equivalently, a surjection from a set to another set. The function from elements to equivalence classes is a surjection, and every surjection corresponds to an equivalence relation under which two elements are equivalent when they have the same image. The result of the mapping is written as [x] when E is understood, or written as [x]_E when it is necessary to make E explicit. See also
https://en.wikipedia.org/wiki/Holoprotein
A holoprotein or conjugated protein is an apoprotein combined with its prosthetic group. Some enzymes do not need additional components to show full activity. Others require non-protein molecules called cofactors to be bound for activity. Cofactors can be either inorganic (e.g., metal ions and iron-sulfur clusters) or organic compounds (e.g., flavin and heme). Organic cofactors can be either coenzymes, which are released from the enzyme's active site during the reaction, or prosthetic groups, which are tightly bound to an enzyme. Organic prosthetic groups can be covalently bound (e.g., biotin in enzymes such as pyruvate carboxylase). An example of an enzyme that contains a cofactor is carbonic anhydrase, which has a zinc cofactor bound as part of its active site. These tightly bound ions or molecules are usually found in the active site and are involved in catalysis. For example, flavin and heme cofactors are often involved in redox reactions. Enzymes that require a cofactor but do not have one bound are called apoenzymes or apoproteins. An enzyme together with the cofactor(s) required for activity is called a holoenzyme. The term holoenzyme can also be applied to enzymes that contain multiple protein subunits, such as the DNA polymerases; here the holoenzyme is the complete complex containing all the subunits needed for activity.
https://en.wikipedia.org/wiki/Pervasive%20Software
Pervasive Software was a company that developed software including database management systems and extract, transform and load tools. Pervasive Data Integrator and Pervasive Data Profiler were its integration products, and the Pervasive PSQL relational database management system was its primary data storage product. These embeddable data management products provided integration between corporate data, third-party applications and custom software. Pervasive Software was headquartered in Austin, Texas, and sold its products with partners in other countries. The company was involved in cloud computing through DataSolutions and its DataCloud offering, along with its long-standing relationship with salesforce.com. It was acquired by Actian Corp. in April 2013. History Pervasive started in 1982 as SoftCraft, which developed the Btrieve database management system. SoftCraft was acquired by Novell in 1987, and in January 1994 the business was spun out as Btrieve Technologies. The company name was changed to Pervasive Software in June 1996. Their initial public offering in 1997 raised $18.6 million. Ron R. Harris was chief executive and founder Nancy R. Woodward was chairman of the board of directors (the other co-founder was her husband Douglas Woodward). Its shares were listed on the Nasdaq exchange under the symbol PVSW. Its database product was announced in 1999 as Pervasive.SQL version 7, and later renamed PSQL. PSQL implemented the atomicity, consistency, isolation, durability properties known as ACID using a relational database model. In August 2003, Pervasive agreed to acquire Data Junction Corporation, maker of data and application integration tools later renamed Pervasive Data Integrator, for about $51.7 million in cash and stock shares. Data Junction, founded in 1984, was a privately held company also headquartered in Austin. The merger closed in December 2003. Pervasive also acquired business-to-business data interchange service Channelinx in August 2009. Based in Greenville, South Carolina
https://en.wikipedia.org/wiki/Optimal%20virulence
Optimal virulence is a concept relating to the ecology of hosts and parasites. One definition of virulence is the host's parasite-induced loss of fitness. The parasite's fitness is determined by its success in transmitting offspring to other hosts. For about 100 years, the consensus was that virulence decreased and parasitic relationships evolved toward symbiosis. This was even called the law of declining virulence, despite being a hypothesis rather than an established theory. It has been challenged since the 1980s and has been disproved. A pathogen that is too restrained will lose out in competition to a more aggressive strain that diverts more host resources to its own reproduction. However, the host, being the parasite's resource and habitat in a way, suffers from this higher virulence. This might induce faster host death and act against the parasite's fitness by reducing the probability of encountering another host (killing the host too fast to allow for transmission). Thus, there is a natural force providing pressure on the parasite to "self-limit" virulence. The idea is, then, that there exists an equilibrium point of virulence, where the parasite's fitness is highest. Any movement on the virulence axis, towards higher or lower virulence, will result in lower fitness for the parasite, and thus will be selected against. Mode of transmission Paul W. Ewald has explored the relationship between virulence and mode of transmission. He came to the conclusion that virulence tends to remain especially high in waterborne and vector-borne infections, such as cholera and dengue. Cholera is spread through sewage and dengue through mosquitoes. In the case of respiratory infections, the pathogen depends on an ambulatory host to survive. It must spare the host long enough to find a new host. Water- or vector-borne transmission circumvents the need for a mobile host. Ewald is convinced that the crowding of field hospitals and trench warfare provided an easy route to transmission that evolved the
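The equilibrium argument above is commonly formalized as a transmission-virulence trade-off. The following is a minimal editorial sketch of that standard model, not taken from this article; the symbols and functional assumptions are introduced here only for illustration:

R_0(v) = \frac{\beta(v)}{\mu + \gamma + v}, \qquad \frac{dR_0}{dv}\bigg|_{v = v^*} = 0 \;\Longrightarrow\; \beta'(v^*) = \frac{\beta(v^*)}{\mu + \gamma + v^*}

Here v is virulence (parasite-induced host mortality), β(v) is the transmission rate, μ the background host mortality, and γ the recovery rate. If β(v) increases with v but saturates, R_0 is maximized at an intermediate v*, matching the verbal argument that both too little and too much virulence reduce the parasite's fitness.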
https://en.wikipedia.org/wiki/Systematics%20and%20the%20Origin%20of%20Species
Systematics and the Origin of Species from the Viewpoint of a Zoologist is a book written by zoologist and evolutionary biologist Ernst Mayr, first published in 1942 by Columbia University Press. The book became one of the canonical publications on the modern synthesis and is considered to be exemplary of the original expansion of evolutionary theory. The book is considered one of his greatest and most influential works. Systematics and the Origin of Species from the Viewpoint of a Zoologist contains a reassessment of previous evidence regarding the mechanisms of biological evolution. The points of view of modern systematics are compared with views from other life science fields, attempting to bridge the gap between different biological disciplines. In his book, Mayr attempts to summarize the knowledge within his field of systematics, investigates the main factors involved in taxonomic work, and presents some evidence regarding the origin of species. Species concepts are discussed, and Mayr proposes a definition of the species category in which he considers species to be groups of natural populations that are reproductively isolated from each other. The concept Mayr proposes here is now commonly referred to as the biological species concept. The biological species concept defines a species in terms of biological factors such as reproduction, taking into account ecology, geography, and life history; it remains an important and useful idea in biology, particularly for animal speciation. Despite acceptance and approval of his species definition, his input did little to resolve the long-standing disagreements concerning the issue of species concepts. With the formulation of his species definition, Ernst Mayr was able to express the question of the species definition as a biological rather than typological issue. After the publication of his species concept, Mayr became a major figure in the biological as well as the philosophical components of the debate regarding