https://en.wikipedia.org/wiki/Relativity%20priority%20dispute
Albert Einstein presented the theories of special relativity and general relativity in publications that either contained no formal references to previous literature, or referred only to a small number of his predecessors for fundamental results on which he based his theories, most notably to the work of Henri Poincaré and Hendrik Lorentz for special relativity, and to the work of David Hilbert, Carl F. Gauss, Bernhard Riemann, and Ernst Mach for general relativity. Subsequently, claims have been put forward that both theories were formulated, either wholly or in part, by others before Einstein. At issue is the extent to which Einstein and various other individuals should be credited for the formulation of these theories, based on priority considerations. Various scholars have questioned aspects of the work of Einstein, Poincaré, and Lorentz leading up to the theories' publication in 1905. Questions raised by these scholars include to what degree Einstein was familiar with Poincaré's work, whether Einstein was familiar with Lorentz's 1904 paper or a review of it, and how closely Einstein followed other physicists at the time. It is known that Einstein was familiar with Poincaré's 1902 paper [Poi02], but it is not known to what extent he was familiar with other work of Poincaré in 1905. However, it is known that he knew [Poi00] in 1906, because he quoted it in [Ein06]. Lorentz's 1904 paper [Lor04], which contained the transformations bearing his name, appeared in the Annalen der Physik. Some authors claim that Einstein worked in relative isolation and with restricted access to the physics literature in 1905. Others disagree; a personal friend of Einstein, Maurice Solovine, acknowledged that he and Einstein pored over Poincaré's 1902 book, an experience that kept them "breathless for weeks on end" [Rot06]. The question of whether Einstein's wife Mileva Marić contributed to his work has also been raised, but most scholars on the topic hold that there is no substantive evidence that she made significant contributions.
https://en.wikipedia.org/wiki/Sequential%20walking
Sequential walking is a technique that can be used to assign various 2D NMR spectra. In a 2D experiment, cross peaks must be correlated to the correct nuclei. Using sequential walking, the correct nuclei can be assigned to their crosspeaks. The assigned crosspeaks can give valuable information such as spatial interactions between nuclei. In a NOESY of DNA, for example, each nucleotide has a different chemical shift associated with it. In general, A's are more downfield, T's are more upfield, and C's and G's are intermediate. Each nucleotide has protons on the deoxyribose sugar, which can be assigned using sequential walking. To do this, the first nucleotide in the sequence must be detected. Knowing the DNA sequence helps, but in general the first nucleotide can be determined using the following rules. 1. The 2' and 2" protons of a nucleotide will show up in its column, as well as in the column of the next nucleotide in the sequence. For example, in the sequence CATG, the column for C will show its own 2' and 2" protons, but none from the other nucleotides. The column for A will show its own 2' and 2" protons, as well as those from C. 2. Methyl groups are seen in the column for the nucleotide containing the methyl group, as well as for the nucleotide preceding it. For example, in CATG, the columns for A and T will contain the peak corresponding to the methyl group on T, but the column for G will not. Once the first nucleotide has been found, the nucleotide next to it can be determined, because its column should contain the 2' and 2" protons from the previous nucleotide. This is done by "walking" across the spectrum, and the process is repeated sequentially until all nucleotides have been assigned.
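Below is a minimal sketch of the walking logic in Python, assuming crosspeaks have already been picked and grouped by column; the dict-of-sets peak list is a hypothetical simplification of a real NOESY spectrum.

```python
def sequential_walk(crosspeaks):
    """Order residues using rule 1: residue i's 2'/2'' protons appear in
    its own column and in the column of residue i+1."""
    residues = set(crosspeaks)
    # Rule 1: the first residue's column contains only its own 2'/2'' peaks.
    first = next(r for r in residues if crosspeaks[r] == {r})
    order = [first]
    while len(order) < len(residues):
        prev = order[-1]
        # The next residue's column also contains the previous residue's peaks.
        nxt = next(r for r in residues - set(order) if prev in crosspeaks[r])
        order.append(nxt)
    return order

# Toy peak list for the sequence C-A-T-G used in the text.
peaks = {"C": {"C"}, "A": {"A", "C"}, "T": {"T", "A"}, "G": {"G", "T"}}
print(sequential_walk(peaks))  # ['C', 'A', 'T', 'G']
```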
https://en.wikipedia.org/wiki/FTPmail
FTPmail is the term used for the practice of using an FTPmail server to gain access to various files over the Internet. An FTPmail server is a proxy server which (asynchronously) connects to remote FTP servers in response to email requests, returning the downloaded files as an email attachment. This service might be useful to users who cannot themselves initiate an FTP session—for example, because they are constrained by restrictions on their Internet access. History During the early years of the Internet, Internet access was limited to a few locations. High speed links were not available for most users, and online connectivity was rare and expensive. Downloading large files (then considered to be over a few megabytes) was nearly impossible due to bandwidth limitations, as well as frequent errors and lost connections. The original FTP specification did not allow for a session to be resumed, so the transmission had to restart from the beginning. FTPmail gateways allowed people to retrieve such files. The file was broken into smaller pieces and encoded using a popular format such as uuencode. The receiver of the email messages would later reassemble and decode the original file. As the file was broken into smaller pieces, the chances of losing the transmission were much smaller, and in case of lost connectivity the transmission could be restarted from the affected piece. The process was slower but much more reliable. It also allowed people who accessed the Internet only through email using dial-up lines to download files that were located remotely. Unlike FTP, files could be transferred through FTPmail even if the user did not have an online Internet connection (for example, using BBSes or other specialized e-mail software). Servers located at universities were popular. Some of these servers hosted software archives containing early versions of Linux and other GNU software. Access to these repositories via FTPmail was instrumental in allowing people fr
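A rough sketch of the split-and-encode step such a gateway performed, using the uu-style encoder in Python's standard binascii module; the piece size and helper name are illustrative, not part of any historical implementation.

```python
import binascii

def split_and_uuencode(data: bytes, piece_size: int = 45 * 1000):
    """Split a file into mail-sized pieces and uuencode each one."""
    for i in range(0, len(data), piece_size):
        piece = data[i:i + piece_size]
        # b2a_uu encodes at most 45 bytes per call, one text line each.
        lines = [binascii.b2a_uu(piece[j:j + 45])
                 for j in range(0, len(piece), 45)]
        yield b"".join(lines)

pieces = list(split_and_uuencode(b"example payload" * 1000))
print(len(pieces), "mail-sized pieces")
```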
https://en.wikipedia.org/wiki/Event-related%20optical%20signal
Event-related optical signal (EROS) is a neuroimaging technique that uses infrared light through optical fibers to measure changes in optical properties of active areas of the cerebral cortex. The fast optical signal (EROS) measures changes in infrared light scattering that occur with neural activity. Whereas techniques such as diffuse optical imaging (DOI) and near-infrared spectroscopy (NIRS) measure optical absorption of hemoglobin, and thus are based on cerebral blood flow, EROS takes advantage of the scattering properties of the neurons themselves, and thus provides a much more direct measure of cellular activity. Characteristics EROS can pinpoint activity in the brain within millimeters and milliseconds, providing good spatial and temporal resolution at the same time. Currently, its biggest limitation is the inability to detect activity more than a few centimeters deep, which thus limits this fast optical imaging to the cerebral cortex. EROS can be measured using photon delay or as an intensity signal. EROS can also be measured concurrently with other neuroimaging techniques, such as fMRI, fNIRS, or EEG. History EROS is a relatively new and inexpensive technique that is non-invasive to the test subject. It was developed at the University of Illinois at Urbana–Champaign in the Cognitive Neuroimaging Laboratory of Drs. Gabriele Gratton and Monica Fabiani. EROS was first demonstrated in the visual cortex in 1995, and later in the motor cortex that same year. See also Optical imaging
https://en.wikipedia.org/wiki/Level%20sensor
Level sensors detect the level of liquids and other fluids and fluidized solids, including slurries, granular materials, and powders that exhibit an upper free surface. Substances that flow become essentially horizontal in their containers (or other physical boundaries) because of gravity, whereas most bulk solids pile at an angle of repose to a peak. The substance to be measured can be inside a container or can be in its natural form (e.g., a river or a lake). The level measurement can be either continuous or point values. Continuous level sensors measure level within a specified range and determine the exact amount of substance in a certain place, while point-level sensors only indicate whether the substance is above or below the sensing point. Generally the latter detect levels that are excessively high or low. Many physical and application variables affect the selection of the optimal level monitoring method for industrial and commercial processes. The selection criteria include the physical variables: phase (liquid, solid or slurry), temperature, pressure or vacuum, chemistry, dielectric constant of medium, density (specific gravity) of medium, agitation (action), acoustical or electrical noise, vibration, mechanical shock, and tank or bin size and shape. Also important are the application constraints: price, accuracy, appearance, response rate, ease of calibration or programming, physical size and mounting of the instrument, and monitoring or control of continuous or discrete (point) levels. In short, level sensors play an important role in a wide variety of consumer and industrial applications, and, as with other types of sensors, they are available or can be designed using a variety of sensing principles. Selecting a sensor type appropriate to the application requirements is therefore important. Point and continuous level detection for solids A variety of sensors are available for point level detection of solids
https://en.wikipedia.org/wiki/Perfect%20power
In mathematics, a perfect power is a natural number that is a product of equal natural factors, or, in other words, an integer that can be expressed as a square or a higher integer power of another integer greater than one. More formally, n is a perfect power if there exist natural numbers m > 1 and k > 1 such that $m^k = n$. In this case, n may be called a perfect kth power. If k = 2 or k = 3, then n is called a perfect square or perfect cube, respectively. Sometimes 0 and 1 are also considered perfect powers ($0^k = 0$ for any k > 0, $1^k = 1$ for any k). Examples and sums A sequence of perfect powers can be generated by iterating through the possible values for m and k. The first few ascending perfect powers in numerical order (showing duplicate powers) are: 4, 8, 9, 16, 16, 25, 27, 32, 36, 49, 64, 64, 64, 81, 81, 100, ... The sum of the reciprocals of the perfect powers (including duplicates such as $3^4$ and $9^2$, both of which equal 81) is 1: $\sum_{m=2}^{\infty}\sum_{k=2}^{\infty}\frac{1}{m^k}=1$, which can be proved as follows: $\sum_{m=2}^{\infty}\sum_{k=2}^{\infty}\frac{1}{m^k}=\sum_{m=2}^{\infty}\frac{1}{m^2}\cdot\frac{1}{1-1/m}=\sum_{m=2}^{\infty}\frac{1}{m(m-1)}=\sum_{m=2}^{\infty}\left(\frac{1}{m-1}-\frac{1}{m}\right)=1.$ The first perfect powers without duplicates are: (sometimes 0 and 1), 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 81, 100, 121, 125, 128, 144, 169, 196, 216, 225, 243, 256, 289, 324, 343, 361, 400, 441, 484, 512, 529, 576, 625, 676, 729, 784, 841, 900, 961, 1000, 1024, ... The sum of the reciprocals of the perfect powers p without duplicates is: $\sum_{p}\frac{1}{p}=\sum_{k=2}^{\infty}\mu(k)\left(1-\zeta(k)\right)\approx 0.8744$, where μ(k) is the Möbius function and ζ(k) is the Riemann zeta function. According to Euler, Goldbach showed (in a now-lost letter) that the sum of 1/(p − 1) over the set of perfect powers p, excluding 1 and excluding duplicates, is 1: $\sum_{p}\frac{1}{p-1}=\frac{1}{3}+\frac{1}{7}+\frac{1}{8}+\frac{1}{15}+\frac{1}{24}+\frac{1}{26}+\cdots=1.$ This is sometimes known as the Goldbach–Euler theorem. Detecting perfect powers Detecting whether or not a given natural number n is a perfect power may be accomplished in many different ways, with varying levels of complexity. One of the simplest such methods is to consider all possible values for k across each of the divisors of n, up to $k \leq \log_2 n$. So if the divisors of $n$ are $n_1, n_2, \dots, n_j$ then one of the values $n_1^2, n_2^2, \dots, n_j^2, n_1^3, n_2^3, \dots$ must be equal to n if n is indeed a perfect power. This method can immediately be simplified by instead
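A simple detection routine along these lines can be sketched in Python; the rounding guard compensates for floating-point error in the k-th root and is an implementation detail, not part of the method described above.

```python
def is_perfect_power(n: int) -> bool:
    """True if n = m**k for some m > 1, k > 1 (0 and 1 excluded here)."""
    if n < 4:
        return False
    for k in range(2, n.bit_length() + 1):   # k cannot exceed log2(n)
        m = round(n ** (1.0 / k))
        # Check neighbors of the rounded k-th root against float error.
        for cand in (m - 1, m, m + 1):
            if cand > 1 and cand ** k == n:
                return True
    return False

print([n for n in range(2, 40) if is_perfect_power(n)])
# [4, 8, 9, 16, 25, 27, 32, 36]
```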
https://en.wikipedia.org/wiki/Thermal%20efficiency
In thermodynamics, the thermal efficiency ($\eta$) is a dimensionless performance measure of a device that uses thermal energy, such as an internal combustion engine, steam turbine, steam engine, boiler, furnace, refrigerator, air conditioner, etc. For a heat engine, thermal efficiency is the ratio of the net work output to the heat input; in the case of a heat pump, thermal efficiency (known as the coefficient of performance) is the ratio of net heat output (for heating), or the net heat removed (for cooling), to the energy input (external work). The efficiency of a heat engine is always a fraction, as the output is always less than the input, while the COP of a heat pump is greater than 1. These values are further restricted by the Carnot theorem. Overview In general, energy conversion efficiency is the ratio between the useful output of a device and the input, in energy terms. For thermal efficiency, the input, $Q_{\mathrm{in}}$, to the device is heat, or the heat-content of a fuel that is consumed. The desired output is mechanical work, $W_{\mathrm{out}}$, or heat, $Q_{\mathrm{out}}$, or possibly both. Because the input heat normally has a real financial cost, a memorable, generic definition of thermal efficiency is $\eta \equiv \frac{\text{benefit}}{\text{cost}}$. From the first law of thermodynamics, the energy output cannot exceed the input, and by the second law of thermodynamics it cannot be equal in a non-ideal process, so $0 \leq \eta < 1$. When expressed as a percentage, the thermal efficiency must be between 0% and 100%. Efficiency must be less than 100% because there are inefficiencies such as friction and heat loss that convert the energy into alternative forms. For example, a typical gasoline automobile engine operates at around 25% efficiency, and a large coal-fuelled electrical generating plant peaks at about 46%. However, advances in Formula 1 motorsport regulations have pushed teams to develop highly efficient power units which peak around 45–50% thermal efficiency. The largest diesel engine in the world peaks at 51.7%. In a combined cycle plant, thermal efficiencies approach 60%. Such a real
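As a back-of-envelope illustration in Python (the numbers are illustrative, not measured data):

```python
# Thermal efficiency of a heat engine, and the Carnot limit it cannot exceed.
def thermal_efficiency(work_out: float, heat_in: float) -> float:
    return work_out / heat_in

def carnot_limit(t_cold_k: float, t_hot_k: float) -> float:
    return 1.0 - t_cold_k / t_hot_k

print(thermal_efficiency(work_out=25.0, heat_in=100.0))  # 0.25, like a car engine
print(carnot_limit(t_cold_k=300.0, t_hot_k=900.0))       # ~0.667, the upper bound
```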
https://en.wikipedia.org/wiki/Fair%20queuing
Fair queuing is a family of scheduling algorithms used in some process and network schedulers. The algorithm is designed to achieve fairness when a limited resource is shared, for example to prevent flows with large packets or processes that generate small jobs from consuming more throughput or CPU time than other flows or processes. Fair queuing is implemented in some advanced network switches and routers. History The term fair queuing was coined by John Nagle in 1985 while proposing round-robin scheduling in the gateway between a local area network and the internet to reduce network disruption from badly-behaving hosts. A byte-weighted version was proposed by Alan Demers, Srinivasan Keshav and Scott Shenker in 1989, and was based on the earlier Nagle fair queuing algorithm. The byte-weighted fair queuing algorithm aims to mimic bit-by-bit multiplexing by computing a theoretical departure time for each packet. The concept has been further developed into weighted fair queuing, and the more general concept of traffic shaping, where queuing priorities are dynamically controlled to achieve desired flow quality of service goals or accelerate some flows. Principle Fair queuing uses one queue per packet flow and services them in rotation, such that each flow can "obtain an equal fraction of the resources". The advantage over conventional first in first out (FIFO) or priority queuing is that a high-data-rate flow, consisting of large packets or many data packets, cannot take more than its fair share of the link capacity. Fair queuing is used in routers, switches, and statistical multiplexers that forward packets from a buffer. The buffer works as a queuing system, where the data packets are stored temporarily until they are transmitted. With a link data-rate of R, at any given time the N active data flows (the ones with non-empty queues) are serviced each with an average data rate of R/N. In a short time interval the data rate may fluctuate around this value si
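A minimal Python sketch of byte-weighted fair queuing in this spirit: each arriving packet gets a virtual finish tag, and packets depart in tag order. The virtual-time bookkeeping here is deliberately simplified relative to the full Demers–Keshav–Shenker algorithm.

```python
import heapq

class FairQueue:
    def __init__(self):
        self.finish = {}       # last virtual finish tag per flow
        self.heap = []         # (finish_tag, seq, flow, size)
        self.virtual_time = 0.0
        self.seq = 0           # tie-breaker for equal tags

    def enqueue(self, flow, size):
        start = max(self.virtual_time, self.finish.get(flow, 0.0))
        tag = start + size     # byte-weighted: larger packets finish later
        self.finish[flow] = tag
        heapq.heappush(self.heap, (tag, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        tag, _, flow, size = heapq.heappop(self.heap)
        self.virtual_time = tag
        return flow, size

q = FairQueue()
q.enqueue("A", 1500); q.enqueue("A", 1500); q.enqueue("B", 100)
print([q.dequeue()[0] for _ in range(3)])  # ['B', 'A', 'A']
```

The small packet on flow B departs first even though it arrived last, which is exactly the protection against large-packet flows described above.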
https://en.wikipedia.org/wiki/Universal%20hashing
In mathematics and computing, universal hashing (in a randomized algorithm or data structure) refers to selecting a hash function at random from a family of hash functions with a certain mathematical property (see definition below). This guarantees a low number of collisions in expectation, even if the data is chosen by an adversary. Many universal families are known (for hashing integers, vectors, strings), and their evaluation is often very efficient. Universal hashing has numerous uses in computer science, for example in implementations of hash tables, randomized algorithms, and cryptography. Introduction Assume we want to map keys from some universe $U$ into $m$ bins (labelled $[m] = \{0, \dots, m-1\}$). The algorithm will have to handle some data set $S \subseteq U$ of $|S| = n$ keys, which is not known in advance. Usually, the goal of hashing is to obtain a low number of collisions (keys from $S$ that land in the same bin). A deterministic hash function cannot offer any guarantee in an adversarial setting if $|U| > m(n-1)$, since the adversary may choose $S$ to be precisely the preimage of a bin. This means that all data keys land in the same bin, making hashing useless. Furthermore, a deterministic hash function does not allow for rehashing: sometimes the input data turns out to be bad for the hash function (e.g. there are too many collisions), so one would like to change the hash function. The solution to these problems is to pick a function randomly from a family of hash functions. A family of functions $H = \{h \colon U \to [m]\}$ is called a universal family if $\Pr_{h \in H}[h(x) = h(y)] \leq \frac{1}{m}$ for all $x, y \in U$ with $x \neq y$. In other words, any two different keys of the universe collide with probability at most $1/m$ when the hash function is drawn uniformly at random from $H$. This is exactly the probability of collision we would expect if the hash function assigned truly random hash codes to every key. Sometimes, the definition is relaxed by a constant factor, only requiring collision probability $O(1/m)$ rather than $\leq 1/m$. This concept was introduced by Carter and Wegman in 1977, and has found numerous applications in computer science
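A sketch of one well-known universal family for integers, the Carter–Wegman construction $h(x) = ((ax + b) \bmod p) \bmod m$ with $p$ prime and $a, b$ drawn at random; the particular prime and parameter names are illustrative.

```python
import random

P = (1 << 61) - 1  # a Mersenne prime larger than any key we intend to hash

def random_hash(m: int):
    """Draw one function at random from the universal family."""
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = random_hash(m=16)
print(h(42), h(43))  # any fixed pair collides with probability about 1/16
```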
https://en.wikipedia.org/wiki/Land%20use%2C%20land-use%20change%2C%20and%20forestry
Land use, land-use change, and forestry (LULUCF), also referred to as Forestry and other land use (FOLU) or Agriculture, Forestry and Other Land Use (AFOLU), is defined as a "greenhouse gas inventory sector that covers emissions and removals of greenhouse gases resulting from direct human-induced land use such as settlements and commercial uses, land-use change, and forestry activities." LULUCF has impacts on the global carbon cycle, and as such these activities can add or remove carbon dioxide (or, more generally, carbon) from the atmosphere, influencing climate. LULUCF has been the subject of two major reports by the Intergovernmental Panel on Climate Change (IPCC), but is difficult to measure. Additionally, land use is of critical importance for biodiversity. Development The United Nations Framework Convention on Climate Change (UNFCCC) Article 4(1)(a) requires all Parties to "develop, periodically update, publish and make available to the Conference of the Parties" national inventories of anthropogenic emissions by sources and removals by sinks of all greenhouse gases not controlled by the Montreal Protocol. Under the UNFCCC reporting guidelines, human-induced greenhouse gas emissions must be reported in six sectors: energy (including stationary energy and transport); industrial processes; solvent and other product use; agriculture; waste; and land use, land-use change and forestry (LULUCF). The rules governing accounting and reporting of greenhouse gas emissions from LULUCF under the Kyoto Protocol are contained in several decisions of the Conference of Parties under the UNFCCC. The Kyoto Protocol article 3.3 thus requires mandatory LULUCF accounting for afforestation (no forest for last 50 years), reforestation (n
https://en.wikipedia.org/wiki/White%20noise%20machine
A white noise machine is a device that produces a noise that calms the listener, which in many cases sounds like a rushing waterfall or wind blowing through trees, and other serene or nature-like sounds. Often such devices do not produce actual white noise, which has a harsh sound, but pink noise, whose power rolls off at higher frequencies, or other colors of noise. Use White noise devices are available from numerous manufacturers in many forms, for a variety of different uses, including audio testing, sound masking, sleep-aid, and power-napping. Sleep-aid and nap machine products may also produce other soothing sounds, such as music, rain, wind, highway traffic and ocean waves mixed with—or modulated by—white noise. Electric fans are a common alternative, although some Asian communities historically avoided using fans due to the superstition that a fan could suffocate them while sleeping. White noise generators are often used by people with tinnitus to mask their symptoms. The sounds generated by digital machines are not always truly random; rather, they are short prerecorded audio tracks which repeat continuously. Manufacturers of sound-masking devices recommend that the volume of white noise machines be initially set at a comfortable level, even if it does not provide the desired level of privacy. As the ear becomes accustomed to the new sound and learns to tune it out, the volume can be gradually increased to increase privacy. Manufacturers of sleeping aids and power-napping devices recommend that the volume level be set slightly louder than normal music listening level, but always in a comfortable listening range. Sound and noise have their own measurement and color-coding conventions, which allow specialized users to identify a noise or sound according to their respective needs, which depend on their professions, e.g. a psychiatrist who needs certain sounds for therapies and trea
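For illustration, pink noise is often approximated by shaping a white spectrum so that power falls off toward higher frequencies, as described above; a sketch assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16
white = rng.standard_normal(n)            # flat spectrum: white noise

spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n)
freqs[0] = freqs[1]                       # avoid division by zero at DC
# Dividing amplitude by sqrt(f) makes power fall off as 1/f: pink noise.
pink = np.fft.irfft(spectrum / np.sqrt(freqs), n)
pink /= np.abs(pink).max()                # normalize to a safe playback level
```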
https://en.wikipedia.org/wiki/Transpiration%20stream
In plants, the transpiration stream is the uninterrupted stream of water and solutes which is taken up by the roots and transported via the xylem to the leaves, where it evaporates into the air/apoplast-interface of the substomatal cavity. It is driven by capillary action and in some plants by root pressure. The main driving factor is the difference in water potential between the soil and the substomatal cavity caused by transpiration. Transpiration Transpiration can be regulated through stomatal closure or opening. It allows plants to efficiently transport water up to their highest organs, to regulate the temperature of stem and leaves, and it allows for upstream signaling such as the dispersal of an apoplastic alkalinization during local oxidative stress. Summary of water movement: soil → roots and root hairs → xylem → leaves → stomata → air. Osmosis The water passes from the soil to the root by osmosis. The long and thin shape of root hairs maximizes surface area so that more water can enter. There is greater water potential in the soil than in the cytoplasm of the root hair cells. As the cell's surface membrane of the root hair cell is semi-permeable, osmosis can take place; and water passes from the soil to the root hairs. The next stage in the transpiration stream is water passing into the xylem vessels. The water either goes through the cortex cells (between the root cells and the xylem vessels) or it bypasses them, going through their cell walls. After this, the water moves up the xylem vessels to the leaves through two effects: diffusion, and a pressure change between the top and bottom of the vessel. Diffusion takes place because there is a water potential gradient between water in the xylem vessel and the leaf (as water is transpiring out of the leaf). This means that water diffuses up the leaf. There is also a pressure change between the top and bottom of the xylem vessels, due to water loss from the leaves. This reduces the pressure of water at the top of the vessels. T
https://en.wikipedia.org/wiki/Substomatal%20cavity
In plants, the substomatal cavity is the cavity located immediately proximal to the stoma. It acts as a diffusion chamber connected with intercellular air spaces and allows rapid diffusion of carbon dioxide and other gases (such as plant pheromones) in and out of plant cells.
https://en.wikipedia.org/wiki/Systemic%20acquired%20resistance
Systemic acquired resistance (SAR) is a "whole-plant" resistance response that occurs following an earlier localized exposure to a pathogen. SAR is analogous to the innate immune system found in animals, and although there are many shared aspects between the two systems, it is thought to be a result of convergent evolution. The systemic acquired resistance response is dependent on the plant hormone salicylic acid. Discovery While it has been recognized since at least the 1930s that plants have some kind of induced immunity to pathogens, the modern study of systemic acquired resistance began in the 1980s, when the invention of new tools allowed scientists to probe the molecular mechanisms of SAR. A number of 'marker genes' were characterized in the 80s and 90s which are strongly induced as part of the SAR response. These pathogenesis-related (PR) proteins belong to a number of different protein families. While there is substantial overlap, the spectrum of PR proteins expressed in a particular plant species is variable. It was noticed in the early 1990s that levels of salicylic acid (SA) increased dramatically in tobacco and cucumber upon infection. This pattern has been replicated in many other species since then. Further studies showed that SAR can also be induced by exogenous SA application and that transgenic Arabidopsis plants expressing a bacterial salicylate hydroxylase gene are unable to accumulate SA or mount an appropriate defensive response to a variety of pathogens. The first plant receptors of conserved microbial signatures were identified in rice (XA21, 1995) and in Arabidopsis (FLS2, 2000). Mechanism Plants have several immunity mechanisms to deal with infections and stress. When they are infected with pathogens, the immune system recognizes conserved molecules called pathogen-associated molecular patterns (PAMPs) via pattern recognition receptors (PRRs). This induces PAMP-triggered immunity (PTI). Some pathogens carry effectors that suppress PTI in the plant
https://en.wikipedia.org/wiki/IBM%20LPFK
The Lighted Program Function Keyboard (LPFK) is a computer input device manufactured by IBM that presents an array of buttons, each associated with a light. Each button is associated with a function in supporting software, and according to the availability of that function in the current context of the application, the light is switched on or off, giving the user graphical feedback on the set of available functions. Usually the button-to-function mapping is customizable. External links http://brutman.com/IBM_LPFK/IBM_LPFK.html
https://en.wikipedia.org/wiki/Kaplansky%27s%20conjectures
The mathematician Irving Kaplansky is notable for proposing numerous conjectures in several branches of mathematics, including a list of ten conjectures on Hopf algebras. They are usually known as Kaplansky's conjectures. Group rings Let $K$ be a field, and $G$ a torsion-free group. Kaplansky's zero divisor conjecture states: The group ring $K[G]$ does not contain nontrivial zero divisors, that is, it is a domain. Two related conjectures are known as, respectively, Kaplansky's idempotent conjecture: $K[G]$ does not contain any non-trivial idempotents, i.e., if $a^2 = a$, then $a = 0$ or $a = 1$, and Kaplansky's unit conjecture (which was originally made by Graham Higman and popularized by Kaplansky): $K[G]$ does not contain any non-trivial units, i.e., if $ab = 1$ in $K[G]$, then $a = kg$ for some $k$ in $K \setminus \{0\}$ and $g$ in $G$. The zero-divisor conjecture implies the idempotent conjecture and is implied by the unit conjecture. As of 2021, the zero divisor and idempotent conjectures are open. The unit conjecture, however, was disproved for fields of positive characteristic by Giles Gardam in February 2021: he published a preprint on the arXiv that constructs a counterexample. The field is of characteristic 2. (see also: Fibonacci group) There are proofs of both the idempotent and zero-divisor conjectures for large classes of groups. For example, the zero-divisor conjecture is known for all torsion-free elementary amenable groups (a class including all virtually solvable groups), since their group algebras are known to be Ore domains. It follows that the conjecture holds more generally for all residually torsion-free elementary amenable groups. Note that when $K$ is a field of characteristic zero, then the zero-divisor conjecture is implied by the Atiyah conjecture, which has also been established for large classes of groups. The idempotent conjecture has a generalisation, the Kadison idempotent conjecture, also known as the Kadison–Kaplansky conjecture, for elements in the reduced group C*-algebra. In this setting, it is known that if the Fa
https://en.wikipedia.org/wiki/Reflection%20formula
In mathematics, a reflection formula or reflection relation for a function f is a relationship between f(a − x) and f(x). It is a special case of a functional equation, and it is very common in the literature to use the term "functional equation" when "reflection formula" is meant. Reflection formulas are useful for numerical computation of special functions. In effect, an approximation that has greater accuracy or only converges on one side of a reflection point (typically in the positive half of the complex plane) can be employed for all arguments. Known formulae Even and odd functions satisfy by definition simple reflection relations around a = 0: for all even functions, $f(-x) = f(x)$, and for all odd functions, $f(-x) = -f(x)$. A famous relationship is Euler's reflection formula for the gamma function $\Gamma(z)$, due to Leonhard Euler: $\Gamma(z)\,\Gamma(1-z) = \frac{\pi}{\sin(\pi z)}$. There is also a reflection formula for the general n-th order polygamma function ψ(n)(z), $(-1)^n \psi^{(n)}(1-z) - \psi^{(n)}(z) = \pi \frac{d^n}{dz^n} \cot(\pi z)$, which springs trivially from the fact that the polygamma functions are defined as the derivatives of $\ln \Gamma(z)$ and thus inherit the reflection formula. The Riemann zeta function ζ(z) satisfies $\zeta(1-z) = \frac{2\,\Gamma(z)}{(2\pi)^{z}} \cos\!\left(\frac{\pi z}{2}\right) \zeta(z)$, and the Riemann Xi function ξ(z) satisfies $\xi(z) = \xi(1-z)$.
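Euler's reflection formula is easy to check numerically with the Python standard library; a quick sketch:

```python
import math

# Gamma(z) * Gamma(1 - z) should equal pi / sin(pi * z) for non-integer z.
for z in (0.1, 0.25, 0.7):
    lhs = math.gamma(z) * math.gamma(1.0 - z)
    rhs = math.pi / math.sin(math.pi * z)
    print(z, lhs, rhs)  # the two columns agree to floating-point accuracy
```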
https://en.wikipedia.org/wiki/Relation%20construction
In logic and mathematics, relation construction and relational constructibility have to do with the ways that one relation is determined by an indexed family or a sequence of other relations, called the relation dataset. The relation in the focus of consideration is called the faciendum. The relation dataset typically consists of a specified relation over sets of relations, called the constructor, the factor, or the method of construction, plus a specified set of other relations, called the faciens, the ingredients, or the makings. Relation composition and relation reduction are special cases of relation constructions. See also: Projection, Relation, Relation composition.
https://en.wikipedia.org/wiki/Electroluminescent%20display
Electroluminescent displays (ELDs) are a type of flat panel display created by sandwiching a layer of electroluminescent material such as gallium arsenide between two layers of conductors. When current flows, the layer of material emits radiation in the form of visible light. Electroluminescence (EL) is an optical and electrical phenomenon where a material emits light in response to an electric current passed through it, or to a strong electric field. The term "electroluminescent display" describes displays that use neither LED nor OLED devices, but instead use traditional electroluminescent materials. Beneq is the only manufacturer of TFEL (thin film electroluminescent) and TAESL displays, which are branded as LUMINEQ Displays. The structure of a TFEL display is similar to that of a passive matrix LCD or OLED display, and TAESL displays are essentially transparent TFEL displays with transparent electrodes. TAESL displays can have a transparency of 80%. Both TFEL and TAESL displays use chip-on-glass technology, which mounts the display driver IC directly on one of the edges of the display. TAESL displays can be embedded onto glass sheets. Unlike LCDs, TFELs are much more rugged and can operate at temperatures from −60 to 105 °C, and unlike OLEDs, TFELs can operate for 100,000 hours without considerable burn-in, retaining about 85% of their initial brightness. The electroluminescent material is deposited using atomic layer deposition, a process that deposits one atomic layer at a time. Mechanism EL works by exciting atoms by passing an electric current through them, causing them to emit photons. By varying the material being excited, the colour of the light emitted can be changed. The actual ELD is constructed using flat, opaque electrode strips running parallel to each other, covered by a layer of electroluminescent material, followed by another layer of electrodes, running perpendicular to the bottom layer. This top layer must be transparent in order
https://en.wikipedia.org/wiki/Anomalous%20monism
Anomalous monism is a philosophical thesis about the mind–body relationship. It was first proposed by Donald Davidson in his 1970 paper "Mental Events". The theory is twofold and states that mental events are identical with physical events, and that the mental is anomalous, i.e. under their mental descriptions, relationships between these mental events are not describable by strict physical laws. Hence, Davidson proposes an identity theory of mind without the reductive bridge laws associated with the type-identity theory. Since the publication of his paper, Davidson refined his thesis and both critics and supporters of anomalous monism have come up with their own characterizations of the thesis, many of which appear to differ from Davidson's. Overview Considering views about the relation between the mental and the physical as distinguished first by whether or not mental entities are identical with physical entities, and second by whether or not there are strict psychophysical laws, we arrive at a fourfold classification: (1) nomological monism, which says there are strict correlating laws, and that the correlated entities are identical (this is usually called type physicalism); (2) nomological dualism, which holds that there are strict correlating laws, but that the correlated entities are not identical (parallelism, property dualism and pre-established harmony); (3) anomalous dualism, which holds there are no laws correlating the mental and the physical, that the substances are ontologically distinct, but nevertheless there is interaction between them (i.e. Cartesian dualism); and (4) anomalous monism, which allows only one class of entities, but denies the possibility of definitional and nomological reduction. Davidson put forth his theory of anomalous monism as a possible solution to the mind–body problem. Since (in this theory) every mental event is some physical event or other, the idea is that someone's thinking at a certain time, for example, that snow is
https://en.wikipedia.org/wiki/National%20Food%20Safety%20and%20Quality%20Service
The National Food Safety and Quality Service (Servicio Nacional de Sanidad y Calidad Agroalimentaria, SENASA) is an independent agency of the Argentine government charged with the surveillance, regulation and certification of products of animal and plant origin, and with the prevention, eradication and control of the diseases and plagues that affect them. SENASA formally comes under the Secretariat of Agriculture, Livestock, Fishing and Food, a division of the Ministry of Economy. SENASA has 24 regional and 1 metropolitan supervising offices throughout the country; its head office is located in Buenos Aires. See also Food Administration External links Official website
https://en.wikipedia.org/wiki/Persistence%20length
The persistence length is a basic mechanical property quantifying the bending stiffness of a polymer. The molecule behaves like a flexible elastic rod/beam (beam theory). Informally, for pieces of the polymer that are shorter than the persistence length, the molecule behaves like a rigid rod, while for pieces of the polymer that are much longer than the persistence length, the properties can only be described statistically, like a three-dimensional random walk. Formally, the persistence length, P, is defined as the length over which correlations in the direction of the tangent are lost. In more chemical terms, it can also be defined as the average sum of the projections of all bonds j ≥ i on bond i in an infinitely long chain. Let us define the angle θ between a vector that is tangent to the polymer at position 0 (zero) and a tangent vector at a distance L away from position 0, along the contour of the chain. It can be shown that the expectation value of the cosine of the angle falls off exponentially with distance, $\langle \cos\theta \rangle = e^{-L/P}$, where P is the persistence length and the angled brackets denote the average over all starting positions. The persistence length is considered to be one half of the Kuhn length, the length of hypothetical segments that the chain can be considered as freely joined. The persistence length equals the average projection of the end-to-end vector on the tangent to the chain contour at a chain end in the limit of infinite chain length. The persistence length can also be expressed using the bending stiffness $B_s$, the Young's modulus E and the cross-section of the polymer chain: $P = \frac{B_s}{k_B T}$, with $B_s = EI$, where $k_B$ is the Boltzmann constant and T is the temperature. In the case of a rigid and uniform rod, the area moment of inertia I can be expressed as $I = \frac{\pi a^4}{4}$, where a is the radius. For charged polymers the persistence length depends on the surrounding salt concentration due to electrostatic screening. The persistence length of a charged polymer is described by the OSF (Odijk, Skolnick and Fixman) model. Ex
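A synthetic check of the tangent-correlation definition, assuming NumPy and a simple 2D chain model in which the tangent angle diffuses with variance 2·step/P per step, so that the correlation decays as $e^{-L/P}$ (a modeling assumption, not part of the definition above):

```python
import numpy as np

rng = np.random.default_rng(1)
P_true, step, n = 50.0, 1.0, 200_000
# Tangent angle performs a random walk along the contour.
angles = np.cumsum(rng.normal(0.0, np.sqrt(2 * step / P_true), n))

lag = 25  # contour separation L = lag * step
corr = np.mean(np.cos(angles[lag:] - angles[:-lag]))
print(-lag * step / np.log(corr))  # recovers P, close to 50
```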
https://en.wikipedia.org/wiki/Colony-forming%20unit
In microbiology, colony-forming unit (CFU, cfu or Cfu) is a unit which estimates the number of microbial cells (bacteria or fungi) in a sample that are viable, able to multiply via binary fission under the controlled conditions. Counting with colony-forming units requires culturing the microbes and counts only viable cells, in contrast with microscopic examination which counts all cells, living or dead. The visual appearance of a colony in a cell culture requires significant growth, and when counting colonies, it is uncertain if the colony arose from one cell or a group of cells. Expressing results as colony-forming units reflects this uncertainty. Theory The purpose of plate counting is to estimate the number of cells present based on their ability to give rise to colonies under specific conditions of nutrient medium, temperature and time. Theoretically, one viable cell can give rise to a colony through replication. However, solitary cells are the exception in nature, and most likely the progenitor of the colony was a mass of cells deposited together. In addition, many bacteria grow in chains (e.g. Streptococcus) or clumps (e.g. Staphylococcus). Estimation of microbial numbers by CFU will, in most cases, undercount the number of living cells present in a sample for these reasons. This is because the counting of CFU assumes that every colony is separate and founded by a single viable microbial cell. The plate count is linear for E. coli over the range of 30 to 300 CFU on a standard sized Petri dish. Therefore, to ensure that a sample will yield CFU in this range, the sample must be diluted and several dilutions plated. Typically, ten-fold dilutions are used, and the dilution series is plated in replicates of 2 or 3 over the chosen range of dilutions. Often 100 µl are plated, but larger amounts up to 1 ml are also used. Higher plating volumes increase drying times but often do not result in higher accuracy, since additional dilution steps may be
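The standard calculation from a countable plate back to the original sample is a one-liner; a worked sketch in Python (the function name and numbers are illustrative):

```python
def cfu_per_ml(colonies: int, dilution: float, volume_ml: float) -> float:
    """CFU/mL = colonies / (dilution factor at that plate * volume plated)."""
    return colonies / (dilution * volume_ml)

# 150 colonies counted on the 10^-6 dilution plate, 0.1 mL plated:
print(f"{cfu_per_ml(150, 1e-6, 0.1):.2e} CFU/mL")  # 1.50e+09 CFU/mL
```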
https://en.wikipedia.org/wiki/ATSC%20tuner
An ATSC (Advanced Television Systems Committee) tuner, often called an ATSC receiver or HDTV tuner, is a type of television tuner that allows reception of digital television (DTV) channels that use ATSC standards, as transmitted by television stations in North America, parts of Central America, and South Korea. Such tuners are usually integrated into a television set, VCR, digital video recorder (DVR), or set-top box which provides audio/video output connectors of various types. Another type of television tuner is a digital television adapter (DTA) with an analog passthrough. Technical overview The terms "tuner" and "receiver" are used loosely, and the device is perhaps more appropriately called an ATSC receiver, with the tuner being part of the receiver (see Metonymy). The receiver generates the audio and video (AV) signals needed for television, and performs the following tasks: demodulation; error correction; MPEG transport stream demultiplexing; decompression; AV synchronization; and media reformatting to match what is optimal input for one's TV. Examples of media reformatting include: interlace to progressive scan or vice versa; picture resolution changes; aspect ratio conversions (16:9 to or from 4:3); frame rate conversion; and image scaling. Zooming is an example of resolution change. It is commonly used to convert a low-resolution picture to a high-resolution display. This lets the user eliminate letterboxing or pillarboxing by stretching or cropping the picture. Some ATSC receivers, mostly those in HDTV TV sets, will stretch automatically, either by detecting black bars or by reading the Active Format Descriptor (AFD). Operation An ATSC tuner works by generating audio and video signals that are picked up from over-the-air broadcast television. ATSC tuners provide the following functions: selective tuning; demodulation; transport stream demultiplexing; decompression; error correction; analog-to-digital conversion; AV synchronization; and media reformatting
https://en.wikipedia.org/wiki/Carath%C3%A9odory%27s%20extension%20theorem
In measure theory, Carathéodory's extension theorem (named after the mathematician Constantin Carathéodory) states that any pre-measure defined on a given ring of subsets R of a given set Ω can be extended to a measure on the σ-ring generated by R, and this extension is unique if the pre-measure is σ-finite. Consequently, any pre-measure on a ring containing all intervals of real numbers can be extended to the Borel algebra of the set of real numbers. This is an extremely powerful result of measure theory, and leads, for example, to the Lebesgue measure. The theorem is also sometimes known as the Carathéodory–Fréchet extension theorem, the Carathéodory–Hopf extension theorem, the Hopf extension theorem and the Hahn–Kolmogorov extension theorem. Introductory statement Several very similar statements of the theorem can be given. A slightly more involved one, based on semi-rings of sets, is given further down below. A shorter, simpler statement is as follows. In this form, it is often called the Hahn–Kolmogorov theorem. Let $\Sigma_0$ be an algebra of subsets of a set $\Omega.$ Consider a set function $\mu_0 \colon \Sigma_0 \to [0, \infty]$ which is finitely additive, meaning that $\mu_0\left(\bigcup_{n=1}^{N} A_n\right) = \sum_{n=1}^{N} \mu_0(A_n)$ for any positive integer $N$ and disjoint sets $A_1, \dots, A_N$ in $\Sigma_0.$ Assume that this function satisfies the stronger sigma additivity assumption $\mu_0\left(\bigcup_{n=1}^{\infty} A_n\right) = \sum_{n=1}^{\infty} \mu_0(A_n)$ for any disjoint family $\{A_n : n \in \mathbb{N}\}$ of elements of $\Sigma_0$ such that $\bigcup_{n=1}^{\infty} A_n \in \Sigma_0.$ (Functions obeying these two properties are known as pre-measures.) Then, $\mu_0$ extends to a measure defined on the $\sigma$-algebra $\Sigma$ generated by $\Sigma_0$; that is, there exists a measure $\mu \colon \Sigma \to [0, \infty]$ such that its restriction to $\Sigma_0$ coincides with $\mu_0.$ If $\mu_0$ is $\sigma$-finite, then the extension is unique. Comments This theorem is remarkable for it allows one to construct a measure by first defining it on a small algebra of sets, where its sigma additivity could be easy to verify, and then this theorem guarantees its extension to a sigma-algebra. The proof of this theorem is not trivial, since it requires extending from an algebra of sets to a potentially much bigger sigma-algebra, guaranteeing that
https://en.wikipedia.org/wiki/Generalized%20polygon
In mathematics, a generalized polygon is an incidence structure introduced by Jacques Tits in 1959. Generalized n-gons encompass as special cases projective planes (generalized triangles, n = 3) and generalized quadrangles (n = 4). Many generalized polygons arise from groups of Lie type, but there are also exotic ones that cannot be obtained in this way. Generalized polygons satisfying a technical condition known as the Moufang property have been completely classified by Tits and Weiss. Every generalized n-gon with n even is also a near polygon. Definition A generalized 2-gon (or a digon) is an incidence structure with at least 2 points and 2 lines where each point is incident to each line. For $n \geq 3$, a generalized n-gon is an incidence structure $(P, L, I)$, where $P$ is the set of points, $L$ is the set of lines and $I \subseteq P \times L$ is the incidence relation, such that: It is a partial linear space. It has no ordinary m-gons as subgeometry for $2 \leq m < n$. It has an ordinary n-gon as a subgeometry. For any $\{A, B\} \subseteq P \cup L$ there exists a subgeometry $(P', L', I')$ isomorphic to an ordinary n-gon such that $A, B \in P' \cup L'$. An equivalent but sometimes simpler way to express these conditions is: consider the bipartite incidence graph with the vertex set $P \cup L$ and the edges connecting the incident pairs of points and lines. The girth of the incidence graph is twice the diameter n of the incidence graph. From this it should be clear that the incidence graphs of generalized polygons are Moore graphs. A generalized polygon is of order (s,t) if: all vertices of the incidence graph corresponding to the elements of $L$ have the same degree s + 1 for some natural number s; in other words, every line contains exactly s + 1 points; all vertices of the incidence graph corresponding to the elements of $P$ have the same degree t + 1 for some natural number t; in other words, every point lies on exactly t + 1 lines. We say a generalized polygon is thick if every point (line) is incident with at least three lines (points). All thick generalized polygons have an
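As a small illustration, the diameter in the girth/diameter characterization can be checked on the smallest projective plane, the Fano plane (a generalized 3-gon, whose incidence graph is the Heawood graph); a sketch in Python:

```python
from collections import deque

# The seven lines of the Fano plane over the points 0..6.
lines = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
         {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

# Bipartite incidence graph: point vertices 0..6, line vertices "L0".."L6".
adj = {p: set() for p in range(7)}
for i in range(7):
    adj["L%d" % i] = set()
for i, ln in enumerate(lines):
    for p in ln:
        adj[p].add("L%d" % i)
        adj["L%d" % i].add(p)

def eccentricity(src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

# Diameter n = 3, as required; the girth (shortest cycle) is 2n = 6.
print(max(eccentricity(v) for v in adj))  # 3
```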
https://en.wikipedia.org/wiki/Planar%20ternary%20ring
In mathematics, an algebraic structure consisting of a non-empty set and a ternary mapping may be called a ternary system. A planar ternary ring (PTR) or ternary field is a special type of ternary system used by Marshall Hall to construct projective planes by means of coordinates. A planar ternary ring is not a ring in the traditional sense, but any field gives a planar ternary ring where the operation is defined by $T(a, b, c) = a \cdot b + c$. Thus, we can think of a planar ternary ring as a generalization of a field where the ternary operation takes the place of both addition and multiplication. There is wide variation in the terminology. Planar ternary rings or ternary fields as defined here have been called by other names in the literature, and the term "planar ternary ring" can mean a variant of the system defined here. The term "ternary ring" often means a planar ternary ring, but it can also simply mean a ternary system. Definition A planar ternary ring is a structure $(R, T)$ where $R$ is a set containing at least two distinct elements, called 0 and 1, and $T \colon R^3 \to R$ is a mapping which satisfies these five axioms: $T(0, a, b) = T(a, 0, b) = b$ for all $a, b \in R$; $T(1, a, 0) = T(a, 1, 0) = a$ for all $a \in R$; for all $a, b, c, d \in R$ with $a \neq c$, there is a unique $x \in R$ such that $T(x, a, b) = T(x, c, d)$; for all $a, b, c \in R$, there is a unique $x \in R$ such that $T(a, b, x) = c$; and for all $a, b, c, d \in R$ with $a \neq c$, the equations $T(a, x, y) = b$ and $T(c, x, y) = d$ have a unique solution $(x, y)$. When $R$ is finite, the third and fifth axioms are equivalent in the presence of the fourth. No other pair (0', 1') in $R$ can be found such that $T$ still satisfies the first two axioms. Binary operations Addition Define $a + b = T(a, 1, b)$. The structure $(R, +)$ is a loop with identity element 0. Multiplication Define $a \cdot b = T(a, b, 0)$. The set $R^{*} = R \setminus \{0\}$ is closed under this multiplication. The structure $(R^{*}, \cdot)$ is also a loop, with identity element 1. Linear PTR A planar ternary ring $(R, T)$ is said to be linear if $T(a, b, c) = (a \cdot b) + c$. For example, the planar ternary ring associated to a quasifield is (by construction) linear. Connection with projective planes Given a planar ternary ring $(R, T)$, one can construct a projective plane with point set P and line set L as follows: (Note that $\infty$ is an extra symbol not in $R$.) Let $P = \{(a, b) \mid a, b \in R\} \cup \{(a) \mid a \in R\} \cup \{(\infty)\}$, and $L = \{[m, k] \mid m, k \in R\} \cup \{[m] \mid m \in R\} \cup \{[\infty]\}$. Then define, , th
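A sketch of the field-induced planar ternary ring $T(a, b, c) = ab + c$ over GF(5), with a brute-force check of the fifth (unique-solution) axiom; the choice of prime is arbitrary:

```python
p = 5
R = range(p)

def T(a, b, c):
    """The ternary operation induced by the field GF(p)."""
    return (a * b + c) % p

# Axiom 5: for a != c, T(a,x,y) = b and T(c,x,y) = d have a unique solution.
for a in R:
    for c in R:
        if a == c:
            continue
        for b in R:
            for d in R:
                sols = [(x, y) for x in R for y in R
                        if T(a, x, y) == b and T(c, x, y) == d]
                assert len(sols) == 1
print("unique-solution axiom holds for GF(%d)" % p)
```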
https://en.wikipedia.org/wiki/Assortative%20mixing
In the study of complex networks, assortative mixing, or assortativity, is a bias in favor of connections between network nodes with similar characteristics. In the specific case of social networks, assortative mixing is also known as homophily. The rarer disassortative mixing is a bias in favor of connections between dissimilar nodes. In social networks, for example, individuals commonly choose to associate with others of similar age, nationality, location, race, income, educational level, religion, or language as themselves. In networks of sexual contact, the same biases are observed, but mixing is also disassortative by gender – most partnerships are between individuals of opposite sex. Assortative mixing can have effects, for example, on the spread of disease: if individuals have contact primarily with other members of the same population groups, then diseases will spread primarily within those groups. Many diseases are indeed known to have differing prevalence in different population groups, although other social and behavioral factors affect disease prevalence as well, including variations in quality of health care and differing social norms. Assortative mixing is also observed in other (non-social) types of networks, including biochemical networks in the cell, computer and information networks, and others. Of particular interest is the phenomenon of assortative mixing by degree, meaning the tendency of nodes with high degree to connect to others with high degree, and similarly for low degree. Because degree is itself a topological property of networks, this type of assortative mixing gives rise to more complex structural effects than other types. Empirically it has been observed that most social networks mix assortatively by degree, but most networks of other types mix disassortatively, although there are exceptions. See also: Assortative mating, Assortativity, Complex network, Friendship paradox, Graph theory, Heterophily, Homophily, Preferential attachment
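Mixing by degree is commonly quantified as the Pearson correlation between the degrees at the two ends of each edge (Newman's assortativity coefficient); a minimal sketch on a toy edge list, assuming Python 3.10+ for statistics.correlation:

```python
import statistics

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 4)]
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

# Count each edge in both directions so the correlation is symmetric.
xs = [deg[u] for u, v in edges] + [deg[v] for u, v in edges]
ys = [deg[v] for u, v in edges] + [deg[u] for u, v in edges]
r = statistics.correlation(xs, ys)
print(f"degree assortativity r = {r:.3f}")  # negative here: disassortative
```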
https://en.wikipedia.org/wiki/The%20Institutes%20for%20the%20Achievement%20of%20Human%20Potential
The Institutes for The Achievement of Human Potential (IAHP), founded in 1955 by Glenn Doman and Carl Delacato, provides literature on and teaches a controversial patterning therapy (motor learning), which the Institutes promote as improving the "neurologic organization" of "brain injured" and mentally impaired children through a variety of programs, including diet and exercise. The Institutes also provide extensive early-learning programs for "well" children, including programs focused on reading, mathematics, language, and physical fitness. It is headquartered in Philadelphia, Pennsylvania, with offices and programs offered in several other countries. Pattern therapy for patients with neuromuscular disorders was first developed by neurosurgeon Temple Fay in the 1940s. Patterning has been widely criticized, and multiple studies have found the therapy ineffective. History The Institutes for the Achievement of Human Potential (IAHP, also known as "The Institutes") was founded in 1955. It practices pattern therapy, which was developed by Doman and educational psychologist Carl Delacato. Pattern therapy drew upon the ideas and work of Temple Fay, former head of the Department of Neurosurgery at Temple University School of Medicine and president of the Philadelphia Neurological Society. In 1960, Doman and Delacato published an article in the Journal of the American Medical Association (JAMA) detailing pattern therapy. The methodology of their study was later criticized. Philosophy The philosophy of the Institutes consists of several interrelated beliefs: that every child has genius potential, stimulation is the key to unlocking a child's potential, teaching should commence at birth, the younger the child, the easier the learning process, children naturally love to learn, parents are their child's best teacher, teaching and learning should be joyous, and teaching and learning should never involve testing. This philosophy follows very closel
https://en.wikipedia.org/wiki/Petersson%20inner%20product
In mathematics the Petersson inner product is an inner product defined on the space of entire modular forms. It was introduced by the German mathematician Hans Petersson. Definition Let $M_k$ be the space of entire modular forms of weight $k$ and $S_k$ the space of cusp forms. The mapping $\langle \cdot, \cdot \rangle \colon M_k \times S_k \to \mathbb{C}$, $\langle f, g \rangle = \int_F f(\tau) \overline{g(\tau)} (\operatorname{Im} \tau)^k \, d\nu(\tau)$, is called the Petersson inner product, where $F$ is a fundamental region of the modular group and, for $\tau = x + iy$, $d\nu(\tau) = y^{-2} \, dx \, dy$ is the hyperbolic volume form. Properties The integral is absolutely convergent and the Petersson inner product is a positive definite Hermitian form. For the Hecke operators $T_n$, and for forms $f, g$ of level 1, we have: $\langle T_n f, g \rangle = \langle f, T_n g \rangle$. This can be used to show that the space of cusp forms of level 1 has an orthonormal basis consisting of simultaneous eigenfunctions for the Hecke operators and the Fourier coefficients of these forms are all real.
https://en.wikipedia.org/wiki/Temporomandibular%20ligament
The temporomandibular ligament, also known as the external lateral ligament, is a ligament that connects the lower articular tubercle of the zygomatic arch to the lateral and posterior border of the neck of the mandible. It prevents posterior displacement of the mandible. It also prevents the condyloid process from being driven upward by a blow to the jaw, which would otherwise fracture the base of the skull. Structure The temporomandibular ligament originates from the lower articular tubercle of the zygomatic arch. This usually has a rough surface for the ligament to attach to. It attaches to the lateral and posterior border of the neck of the mandible. It consists of two short, narrow fasciculi, one in front of the other. It is broader above than below, and its fibers are directed obliquely downward and backward. It is covered by the parotid gland, and by the integument. Function The temporomandibular ligament constrains the mandible as it opens, keeping the condyloid process close to the joint. It prevents posterior displacement of the mandible. It also prevents the condyloid process from being driven upward by a blow to the jaw, which would otherwise fracture the base of the skull.
https://en.wikipedia.org/wiki/Stylomandibular%20ligament
The stylomandibular ligament is the thickened posterior portion of the investing cervical fascia around the neck. It extends from near the apex of the styloid process of the temporal bone to the angle and posterior border of the angle of the mandible, between the masseter muscle and medial pterygoid muscle. The stylomandibular ligament limits mandibular movements, such as preventing excessive opening. Structure The stylomandibular ligament extends from near the apex of the styloid process of the temporal bone to the angle and posterior border of the angle of the mandible, between the masseter muscle and medial pterygoid muscle. From its deep surface, some fibers of the styloglossus muscle originate. Although classed among the ligaments of the temporomandibular joint, it can only be considered as accessory to it. Function The stylomandibular ligament, along with the sphenomandibular ligament, limits mandibular movements, such as preventing excessive opening. Clinical significance The stylomandibular ligament is important for maintaining stability of the mandible after maxillofacial surgery.
https://en.wikipedia.org/wiki/Shebang%20%28Unix%29
In computing, a shebang is the character sequence consisting of the characters number sign and exclamation mark (#!) at the beginning of a script. It is also called sharp-exclamation, sha-bang, hashbang, pound-bang, or hash-pling. When a text file with a shebang is used as if it were an executable in a Unix-like operating system, the program loader mechanism parses the rest of the file's initial line as an interpreter directive. The loader executes the specified interpreter program, passing to it as an argument the path that was initially used when attempting to run the script, so that the program may use the file as input data. For example, if a script is named with the path path/to/script, and it starts with the line #!/bin/sh, then the program loader is instructed to run the program /bin/sh, passing path/to/script as the first argument. The shebang line is usually ignored by the interpreter, because the "#" character is a comment marker in many scripting languages; some language interpreters that do not use the hash mark to begin comments may still ignore the shebang line in recognition of its purpose. Syntax The form of a shebang interpreter directive is as follows: #! interpreter [optional-arg] in which interpreter is a path to an executable program. The space between #! and interpreter is optional. There can be any number of spaces or tabs either before or after interpreter. The optional-arg includes any extra spaces up to the end of the line. In Linux, the file specified by interpreter can be executed if it has the execute rights and is one of the following: a native executable, such as an ELF binary; any kind of file for which an interpreter was registered via the binfmt_misc mechanism (such as for executing Microsoft .exe binaries using wine); another script starting with a shebang. On Linux and Minix, an interpreter can also be a script. A chain of shebangs and wrappers yields a directly executable file that gets the encountered scripts as parameters in
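A minimal self-describing example, assuming a Unix-like system where the file is saved (e.g. as path/to/script, a hypothetical path) and marked executable with chmod +x:

```python
#!/usr/bin/env python3
# The loader runs /usr/bin/env, which locates python3 on PATH and receives
# this file's path as its argument; Python then ignores this line as a comment.
import sys

print("interpreter:", sys.executable)
print("script path argument:", sys.argv[0])
```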
https://en.wikipedia.org/wiki/Resistor%20ladder
A resistor ladder is an electrical circuit made from repeating units of resistors, in specific configurations. An R–2R ladder configuration is a simple and inexpensive way to perform digital-to-analog conversion (DAC), using repetitive arrangements of precise resistor networks in a ladder-like configuration. A string resistor ladder configuration implements the non-repetitive reference network. History A 1953 paper "Coding by Feedback Methods" describes "decoding networks" that convert numbers (in any base) represented by voltage sources or current sources connected to resistor networks in a "shunt resistor decoding network" (which in base 2 corresponds to the binary-weighted configuration) or in a "ladder resistor decoding network" (which in base 2 corresponds to R–2R configuration) into a single voltage output. The paper gives an advantage of R–2R that impedances seen by the sources are more equal. Another historic description is in US Patent 3108266, filed in 1955, "Signal Conversion Apparatus". String resistor ladder network A string of many resistors connected between two reference voltages is called a "resistor string". The resistors act as voltage dividers between the referenced voltages. A Kelvin divider or string DAC is a string of equal valued resistors. Analog-to-digital conversion Each tap of the string generates a different voltage, which can be compared with another voltage: this is the basic principle of a flash ADC (analog-to-digital converter). Often a voltage is converted to a current, enabling the possibility to use an R–2R ladder network. Disadvantage: for an n-bit ADC, the number of resistors grows exponentially, as 2^n resistors are required, while the R–2R resistor ladder only increases linearly with the number of bits, as it needs only 2n resistors. Advantage: higher impedance values can be reached using the same number of components. Digital-to-analog conversion A string resistor can function as a DAC by having the bits of the binary
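As a rough sketch of the R–2R principle (an idealized transfer function that ignores resistor tolerances and output loading; the function name is ours):

# Ideal output of an n-bit R-2R ladder DAC: each bit contributes a
# binary-weighted fraction of the reference voltage.
def r2r_dac_output(code: int, n_bits: int, v_ref: float) -> float:
    if not 0 <= code < 2 ** n_bits:
        raise ValueError("code out of range for the given bit width")
    return v_ref * code / (2 ** n_bits)

print(r2r_dac_output(128, 8, 5.0))  # 8-bit DAC, 5 V reference, code 128 -> 2.5 V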
https://en.wikipedia.org/wiki/Advanced%20Resource%20Connector
Advanced Resource Connector (ARC) is a grid computing middleware introduced by NorduGrid. It provides a common interface for submission of computational tasks to different distributed computing systems and thus can enable grid infrastructures of varying size and complexity. The set of services and utilities providing the interface is known as ARC Computing Element (ARC-CE). ARC-CE functionality includes data staging and caching, developed in order to support data-intensive distributed computing. ARC is open source software distributed under the Apache License 2.0. History ARC appeared (and is still often referred to) as the NorduGrid middleware, originally proposed as an architecture on top of the Globus Toolkit optimized for the needs of High-Energy Physics computing for the Large Hadron Collider experiments. The first deployment of ARC on the NorduGrid testbed took place in the summer of 2002, and by 2003 it was used to support complex computations. The first stable release of ARC (version 0.4) came out in April 2004 under the GNU General Public License. The name "Advanced Resource Connector" was introduced for this release to distinguish the middleware from the infrastructure. In the same year, the Swedish national Grid project Swegrid became the first large cross-discipline infrastructure to be based on ARC. In 2005, NorduGrid was formally established as a collaboration to support and coordinate ARC development. In 2006 two closely related projects were launched: the Nordic Data Grid Facility, deploying a pan-Nordic e-Science infrastructure based on ARC, and KnowARC, focused on transforming ARC into a next generation Grid middleware. ARC v0.6 was released in May 2007, becoming the second stable release. Its key feature was the introduction of the client library enabling easy development of higher-level applications. It was also the first ARC release making use of open standards, as it included support for JSDL. Later that year, the first technology preview of the next
https://en.wikipedia.org/wiki/Vasa%20recta%20%28kidney%29
The vasa recta of the kidney (vasa recta renis) are the straight arterioles and the straight venules of the kidney – a series of blood vessels in the blood supply of the kidney that enter the medulla as the straight arterioles, and leave the medulla to ascend to the cortex as the straight venules (Latin: vās, "vessel"; rēctus, "straight"). They lie parallel to the loop of Henle. These vessels branch off the efferent arterioles of juxtamedullary nephrons (those nephrons closest to the medulla). They enter the medulla, and surround the loop of Henle. Whereas the peritubular capillaries surround the cortical parts of the tubules, the vasa recta go into the medulla and are closer to the loop of Henle, and leave to ascend to the cortex. Terminations of the vasa recta form the straight venules, branches from the plexuses at the apices of the medullary pyramids. They run outward in a straight course between the tubes of the medullary substance and join the interlobular veins to form venous arcades. These in turn unite and form veins which pass along the sides of the renal pyramids. The descending vasa recta have a non-fenestrated endothelium that contains a facilitated transporter for urea; the ascending vasa recta have, on the other hand, a fenestrated endothelium. Structure Microanatomy On a histological slide, the straight arterioles can be distinguished from the tubules of the loop of Henle by the presence of blood. Function Each straight arteriole has a hairpin turn in the medulla and carries blood at a very slow rate – two factors crucial in the maintenance of countercurrent exchange that prevent washout of the concentration gradients established in the renal medulla. The maintenance of this concentration gradient is one of the components responsible for the kidney's ability to produce concentrated urine. On the descending portion of the vasa recta, sodium, chloride and urea are reabsorbed into the blood, while water is secreted. On the ascending portion, so
https://en.wikipedia.org/wiki/T%20puzzle
The T puzzle is a tiling puzzle consisting of four polygonal shapes which can be put together to form a capital T. The four pieces are usually one isosceles right triangle, two right trapezoids and an irregularly shaped pentagon. Despite its apparent simplicity, it is a surprisingly hard puzzle whose crux is the positioning of the irregularly shaped piece. The earliest T puzzles date from around 1900 and were distributed as promotional giveaways. From the 1920s wooden specimens were produced and made available commercially. Most T puzzles come with a leaflet with additional figures to be constructed. Which shapes can be formed depends on the relative proportions of the different pieces. Origins and early history The Latin Cross The Latin cross puzzle consists of reassembling a five-piece dissection of the cross with three isosceles right triangles, one right trapezoid and an irregularly shaped six-sided piece. When the pieces of the cross puzzle have the right dimensions, they can also be put together as a rectangle. Of Chinese origin, the oldest examples date from the first half of the nineteenth century. One of the earliest published descriptions of the puzzle appeared in 1826 in the 'Sequel to the Endless Amusement'. Many other references to the cross puzzle can be found in amusement, puzzle and magicians' books throughout the 19th century. The T puzzle is based on the cross puzzle, but without the head, and therefore has only four pieces. Another difference is that in the dissection of the T, one of the triangles is usually elongated into a right trapezoid. These changes make the puzzle more difficult and clever than the cross puzzle. Advertising premiums The T puzzle became very popular in the beginning of the 20th century as a giveaway item, with hundreds of different companies using it to promote their business or product. The pieces were made from paper or cardboard and served as trade cards, with advertisement printed on them. They usually c
https://en.wikipedia.org/wiki/Reciprocal%20difference
In mathematics, the reciprocal difference of a finite sequence of numbers $(x_0, x_1, \ldots, x_n)$ on a function $f(x)$ is defined inductively by the following formulas:

$\rho_1(x_0, x_1) = \frac{x_0 - x_1}{f(x_0) - f(x_1)}$

$\rho_2(x_0, x_1, x_2) = \frac{x_0 - x_2}{\rho_1(x_0, x_1) - \rho_1(x_1, x_2)} + f(x_1)$

$\rho_n(x_0, x_1, \ldots, x_n) = \frac{x_0 - x_n}{\rho_{n-1}(x_0, x_1, \ldots, x_{n-1}) - \rho_{n-1}(x_1, x_2, \ldots, x_n)} + \rho_{n-2}(x_1, \ldots, x_{n-1})$

See also Divided differences
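The inductive definition translates directly into code; the following Python sketch (names are ours) computes $\rho_n$ for a list of points xs and values fs, taking $\rho_0(x_0) = f(x_0)$:

def reciprocal_difference(xs, fs):
    """Compute rho_n(x_0, ..., x_n) recursively, with rho_0(x_0) = f(x_0)."""
    n = len(xs) - 1
    if n == 0:
        return fs[0]
    if n == 1:
        return (xs[0] - xs[1]) / (fs[0] - fs[1])
    left = reciprocal_difference(xs[:-1], fs[:-1])     # rho_{n-1}(x_0, ..., x_{n-1})
    right = reciprocal_difference(xs[1:], fs[1:])      # rho_{n-1}(x_1, ..., x_n)
    lower = reciprocal_difference(xs[1:-1], fs[1:-1])  # rho_{n-2}(x_1, ..., x_{n-1})
    return (xs[0] - xs[-1]) / (left - right) + lower

print(reciprocal_difference([0, 1, 2], [0, 1, 4]))  # f(x) = x**2: prints -2.0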
https://en.wikipedia.org/wiki/Single-loss%20expectancy
Single-loss expectancy (SLE) is the monetary value expected from the occurrence of a risk on an asset. It is related to risk management and risk assessment. Single-loss expectancy is mathematically expressed as: SLE = asset value (AV) × exposure factor (EF), where the exposure factor represents the impact of the risk on the asset, or the percentage of the asset lost. As an example, if the asset value is reduced by two thirds, the exposure factor value is 0.66. If the asset is completely lost, the exposure factor is 1. The result is a monetary value in the same unit as the asset value is expressed (euros, dollars, yen, etc.): the exposure factor is the subjective, potential percentage of loss to a specific asset if a specific threat is realized. The exposure factor is a subjective value that the person assessing risk must define. See also Information assurance Risk assessment Annualized loss expectancy
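The computation is simple enough to state as a one-line function; a minimal sketch (function and variable names are ours, and the 150,000-euro asset is hypothetical):

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    # SLE = AV x EF; exposure_factor is the fraction of the asset lost (0..1).
    if not 0.0 <= exposure_factor <= 1.0:
        raise ValueError("exposure factor must be between 0 and 1")
    return asset_value * exposure_factor

print(single_loss_expectancy(150_000, 0.66))  # 99000.0: losing two thirds of the asset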
https://en.wikipedia.org/wiki/Thiele%27s%20interpolation%20formula
In mathematics, Thiele's interpolation formula is a formula that defines a rational function $f(x)$ from a finite set of inputs $x_i$ and their function values $f(x_i)$. The problem of generating a function whose graph passes through a given set of function values is called interpolation. This interpolation formula is named after the Danish mathematician Thorvald N. Thiele. It is expressed as a continued fraction, where ρ represents the reciprocal difference:

$f(x) = f(x_1) + \cfrac{x - x_1}{\rho(x_1, x_2) + \cfrac{x - x_2}{\rho_2(x_1, x_2, x_3) - f(x_1) + \cfrac{x - x_3}{\rho_3(x_1, x_2, x_3, x_4) - \rho(x_1, x_2) + \cdots}}}$

Be careful that the $n$-th level in Thiele's interpolation formula is $\rho_n(x_1, \ldots, x_{n+1}) - \rho_{n-2}(x_1, \ldots, x_{n-1})$, while the $n$-th reciprocal difference is defined to be $\rho_n(x_1, \ldots, x_{n+1})$. The two terms are different and cannot be cancelled!
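Assuming the sample points are in general position (no zero denominators arise), the continued fraction can be evaluated from the innermost level outward; this Python sketch (names are ours) builds on the recursive reciprocal difference from its definition:

def rho(xs, fs):
    # Reciprocal difference rho_n over the given points, with rho_0 = f.
    n = len(xs) - 1
    if n == 0:
        return fs[0]
    if n == 1:
        return (xs[0] - xs[1]) / (fs[0] - fs[1])
    return (xs[0] - xs[-1]) / (rho(xs[:-1], fs[:-1]) - rho(xs[1:], fs[1:])) + rho(xs[1:-1], fs[1:-1])

def thiele(x, xs, fs):
    # Level k contributes (x - x_k) / (rho_k - rho_{k-2} + next_level).
    val = 0.0
    for k in range(len(xs) - 1, 0, -1):
        level = rho(xs[:k + 1], fs[:k + 1])
        lower = rho(xs[:k - 1], fs[:k - 1]) if k >= 2 else 0.0
        val = (x - xs[k - 1]) / (level - lower + val)
    return fs[0] + val

# Interpolating f(x) = 1/(1 + x) through x = 0, 1, 2 reproduces it exactly:
print(thiele(3.0, [0.0, 1.0, 2.0], [1.0, 0.5, 1.0 / 3.0]))  # 0.25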
https://en.wikipedia.org/wiki/Isotope-ratio%20mass%20spectrometry
Isotope-ratio mass spectrometry (IRMS) is a specialization of mass spectrometry, in which mass spectrometric methods are used to measure the relative abundance of isotopes in a given sample. This technique has two different applications in the earth and environmental sciences. The analysis of 'stable isotopes' is normally concerned with measuring isotopic variations arising from mass-dependent isotopic fractionation in natural systems. On the other hand, radiogenic isotope analysis involves measuring the abundances of decay-products of natural radioactivity, and is used in most long-lived radiometric dating methods. Introduction The isotope-ratio mass spectrometer (IRMS) allows the precise measurement of mixtures of naturally occurring isotopes. Most instruments used for precise determination of isotope ratios are of the magnetic sector type. This type of analyzer is superior to the quadrupole type in this field of research for two reasons. First, it can be set up for multiple-collector analysis, and second, it gives high-quality 'peak shapes'. Both of these considerations are important for isotope-ratio analysis at very high precision and accuracy. The sector-type instrument designed by Alfred Nier was such an advance in mass spectrometer design that this type of instrument is often called the 'Nier type'. In the most general terms the instrument operates by ionizing the sample of interest, accelerating it over a potential in the kilovolt range, and separating the resulting stream of ions according to their mass-to-charge ratio (m/z). Beams with lighter ions bend at a smaller radius than beams with heavier ions. The current of each ion beam is then measured using a 'Faraday cup' or multiplier detector. Many radiogenic isotope measurements are made by ionization of a solid source, whereas stable isotope measurements of light elements (e.g. H, C, O) are usually made in an instrument with a gas source. In a "multicollector" instrument, the ion collector typica
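The bending-radius claim can be made concrete with the standard sector relation $r = \sqrt{2mV/q}\,/B$ for an ion of mass $m$ and charge $q$ accelerated through potential $V$ into magnetic field $B$; the numbers below are illustrative only:

from math import sqrt

E_CHARGE = 1.602e-19  # C
AMU = 1.661e-27       # kg

def sector_radius(mass_amu, charge_units, accel_volts, b_tesla):
    # qV = (1/2) m v**2 gives the speed; r = m v / (q B) then simplifies to:
    m, q = mass_amu * AMU, charge_units * E_CHARGE
    return sqrt(2 * m * accel_volts / q) / b_tesla

# CO2 isotopologues at masses 44 and 45 amu, 3 kV acceleration, 0.5 T field:
print(sector_radius(44, 1, 3000.0, 0.5))  # ~0.105 m
print(sector_radius(45, 1, 3000.0, 0.5))  # slightly larger radius for the heavier ion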
https://en.wikipedia.org/wiki/Ternary%20search
A ternary search algorithm is a technique in computer science for finding the minimum or maximum of a unimodal function. The function Assume we are looking for a maximum of f(x) and that we know the maximum lies somewhere between x_1 and x_2. For the algorithm to be applicable, there must be some value x such that for all a, b with x_1 ≤ a < b ≤ x, we have f(a) < f(b), and for all a, b with x ≤ a < b ≤ x_2, we have f(a) > f(b). Algorithm Let f(x) be a unimodal function on some interval [l; r]. Take any two points m_1 and m_2 in this segment: l < m_1 < m_2 < r. Then there are three possibilities: if f(m_1) < f(m_2), then the required maximum can not be located on the left side – [l; m_1]. It means that it makes sense to look for the maximum only in the interval [m_1; r] if f(m_1) > f(m_2), then the situation is similar to the previous one, up to symmetry. Now, the required maximum can not be on the right side – [m_2; r], so go to the segment [l; m_2] if f(m_1) = f(m_2), then the search should be conducted in [m_1; m_2], but this case can be attributed to either of the previous two (in order to simplify the code). Sooner or later the length of the segment will be a little less than a predetermined constant, and the process can be stopped. A common choice of the points m_1 and m_2 is: m_1 = l + (r − l)/3 and m_2 = r − (r − l)/3. Run time order: Θ(log n). Recursive algorithm

def ternary_search(f, left, right, absolute_precision) -> float:
    """Left and right are the current bounds; the maximum is between them."""
    if abs(right - left) < absolute_precision:
        return (left + right) / 2
    left_third = (2 * left + right) / 3
    right_third = (left + 2 * right) / 3
    if f(left_third) < f(right_third):
        return ternary_search(f, left_third, right, absolute_precision)
    else:
        return ternary_search(f, left, right_third, absolute_precision)

Iterative algorithm

def ternary_search(f, left, right, absolute_precision) -> float:
    """Find the maximum of unimodal function f() within [left, right].
    To find the minimum, reverse the if/else statement or reverse the comparison.
    """
    while abs(right - left) >= absolute_precision:
        left_third = left + (right - left) / 3
        right_third = right - (right - left) / 3
        if f(left_third) < f(right_third):
            left = left_third
        else:
            right = right_third
    # Left and right are the current bounds; the maximum is between them.
    return (left + right) / 2
https://en.wikipedia.org/wiki/Free%20Willy%203%3A%20The%20Rescue
Free Willy 3: The Rescue is a 1997 American family film directed by Sam Pillsbury and written by John Mattson. Released by Warner Bros. under their Warner Bros. Family Entertainment banner, it is the sequel to Free Willy 2: The Adventure Home in addition to being the third film in the Free Willy franchise and the final installment of the original storyline, as well as the last to be released theatrically. Jason James Richter and August Schellenberg reprise their roles from the previous films while Annie Corley, Vincent Berry and Patrick Kilpatrick joined the cast. The story revolves around Jesse and Randolph attempting to stop a group of whalers, led by its ruthless captain, from illegally hunting Willy while secretly receiving help from an unlikely source involving the captain's young son after an accident changed his view on whales. Filming took place in British Columbia, Canada from July 31 to October 10, 1996, where several scenes were shot in Vancouver, Britannia Beach, Squamish and Howe Sound. The film is dedicated to Free Willy co-writer Keith A. Walker, who died two months after production was completed. Free Willy 3: The Rescue premiered on August 8, 1997. It received mixed reviews from critics and was a box office bomb, grossing $3.4 million. Plot Sixteen-year-old Jesse works as an orca-research assistant on a research ship called the Noah alongside his old friend Randolph, who promised Glen and Annie to keep him out of trouble while on the job. Aboard just such a ship, the Botany Bay, ten-year-old Max Wesley takes his first trip to sea with his father, John, a whaler from a long line of whalers, and learns the true unlawful nature of the family business, which includes selling whale meat to an underground Japanese market. During his first hunt, Max accidentally falls overboard and comes face to face with Willy. Jesse and Randolph discover a spear on Willy's fin, leading them to suspect he, his pregnant mate Nicky and their pod are being illegally hunted by Bota
https://en.wikipedia.org/wiki/Materialized%20view
In computing, a materialized view is a database object that contains the results of a query. For example, it may be a local copy of data located remotely, or may be a subset of the rows and/or columns of a table or join result, or may be a summary using an aggregate function. The process of setting up a materialized view is sometimes called materialization. This is a form of caching the results of a query, similar to memoization of the value of a function in functional languages, and it is sometimes described as a form of precomputation. As with other forms of precomputation, database users typically use materialized views for performance reasons, i.e. as a form of optimization. Materialized views that store data based on remote tables were also known as snapshots (deprecated Oracle terminology). In any database management system following the relational model, a view is a virtual table representing the result of a database query. Whenever a query or an update addresses an ordinary view's virtual table, the DBMS converts these into queries or updates against the underlying base tables. A materialized view takes a different approach: the query result is cached as a concrete ("materialized") table (rather than a view as such) that may be updated from the original base tables from time to time. This enables much more efficient access, at the cost of extra storage and of some data being potentially out-of-date. Materialized views find use especially in data warehousing scenarios, where frequent queries of the actual base tables can be expensive. In a materialized view, indexes can be built on any column. In contrast, in a normal view, it's typically only possible to exploit indexes on columns that come directly from (or have a mapping to) indexed columns in the base tables; often this functionality is not offered at all. Implementations Oracle Materialized views were implemented first by the Oracle Database: the Query rewrite feature was added from version 8i. Ex
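The distinction can be sketched in Python terms (the sales table and all names are hypothetical; a real DBMS does this internally):

sales = [("east", 100), ("west", 250), ("east", 75)]  # base table

def view():
    # An ordinary (virtual) view: the aggregate is recomputed on every access.
    totals = {}
    for region, amount in sales:
        totals[region] = totals.get(region, 0) + amount
    return totals

materialized = view()         # materialization: results stored as concrete data

sales.append(("west", 50))    # the base table changes...
print(view())                 # ...the virtual view reflects it immediately
print(materialized)           # ...the materialized copy is stale
materialized = view()         # until it is explicitly refreshed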
https://en.wikipedia.org/wiki/Dijkstra%E2%80%93Scholten%20algorithm
The Dijkstra–Scholten algorithm (named after Edsger W. Dijkstra and Carel S. Scholten) is an algorithm for detecting termination in a distributed system. The algorithm was proposed by Dijkstra and Scholten in 1980. First, consider the case of a simple process graph which is a tree. A distributed computation which is tree-structured is not uncommon. Such a process graph may arise when the computation is strictly a divide-and-conquer type. A node starts the computation and divides the problem into two (or more, usually a multiple of two) roughly equal parts and distributes those parts to other processors. This process continues recursively until the problems are of sufficiently small size to solve on a single processor. Algorithm The Dijkstra–Scholten algorithm is a tree-based algorithm which can be described by the following: The initiator of a computation is the root of the tree. Upon receiving a computational message: If the receiving process is currently not in the computation: the process joins the tree by becoming a child of the sender of the message. (No acknowledgment message is sent at this point.) If the receiving process is already in the computation: the process immediately sends an acknowledgment message to the sender of the message. When a process has no more children and has become idle, the process detaches itself from the tree by sending an acknowledgment message to its tree parent. Termination occurs when the initiator has no children and has become idle. Dijkstra–Scholten algorithm for a tree For a tree, it is easy to detect termination. When a leaf process determines that it has terminated, it sends a signal to its parent. In general, a process waits for all its children to send signals and then it sends a signal to its parent. The program terminates when the root receives signals from all its children. Dijkstra–Scholten algorithm for directed acyclic graphs The algorithm for a tree can be extended to acyclic directed graphs. We add an add
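To make the rules concrete, here is a minimal single-threaded Python sketch of the tree case (class and method names such as Node and send_work are ours, not canonical):

class Node:
    def __init__(self, name, on_terminate=None):
        self.name = name
        self.parent = None        # tree parent (engager); None for the initiator
        self.outstanding = 0      # messages sent but not yet acknowledged
        self.engaged = False
        self.busy = False
        self.on_terminate = on_terminate

    def start(self):              # the initiator becomes the root of the tree
        self.engaged = self.busy = True

    def send_work(self, other):
        self.outstanding += 1
        other.receive(self)

    def receive(self, sender):
        if not self.engaged:      # first message: join the tree under the sender
            self.engaged = self.busy = True
            self.parent = sender
        else:                     # already in the computation: ack immediately
            sender.ack()

    def finish_work(self):        # local work done; try to leave the tree
        self.busy = False
        self.maybe_detach()

    def ack(self):
        self.outstanding -= 1
        self.maybe_detach()

    def maybe_detach(self):
        if self.engaged and not self.busy and self.outstanding == 0:
            self.engaged = False
            if self.parent is not None:
                self.parent.ack()     # detach: acknowledge to the tree parent
            elif self.on_terminate:
                self.on_terminate()   # the root is idle and childless

root = Node("root", on_terminate=lambda: print("terminated"))
child = Node("child")
root.start()
root.send_work(child)
child.finish_work()
root.finish_work()  # prints "terminated"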
https://en.wikipedia.org/wiki/Near-field%20%28mathematics%29
In mathematics, a near-field is an algebraic structure similar to a division ring, except that it has only one of the two distributive laws. Alternatively, a near-field is a near-ring in which there is a multiplicative identity and every non-zero element has a multiplicative inverse. Definition A near-field is a set Q together with two binary operations, + (addition) and · (multiplication), satisfying the following axioms: A1: (Q, +) is an abelian group. A2: (a · b) · c = a · (b · c) for all elements a, b, c of Q (The associative law for multiplication). A3: (a + b) · c = a · c + b · c for all elements a, b, c of Q (The right distributive law). A4: Q contains an element 1 such that 1 · a = a · 1 = a for every element a of Q (Multiplicative identity). A5: For every non-zero element a of Q there exists an element a⁻¹ such that a · a⁻¹ = 1 = a⁻¹ · a (Multiplicative inverse). Notes on the definition The above is, strictly speaking, a definition of a right near-field. By replacing A3 by the left distributive law c · (a + b) = c · a + c · b we get a left near-field instead. Most commonly, "near-field" is taken as meaning "right near-field", but this is not a universal convention. A (right) near-field is called "planar" if it is also a right quasifield. Every finite near-field is planar, but infinite near-fields need not be. It is not necessary to specify that the additive group is abelian, as this follows from the other axioms, as proved by B.H. Neumann and J.L. Zemmer. However, the proof is quite difficult, and it is more convenient to include this in the axioms so that progress with establishing the properties of near-fields can start more rapidly. Sometimes a list of axioms is given in which A4 and A5 are replaced by the following single statement: A4*: The non-zero elements form a group under multiplication. However, this alternative definition includes one exceptional structure of order 2 which fails to satisfy various basic theorems (such as x · 0 = 0 for all x). Thus it is much more convenient, and more usual, to use the axioms in the form given above. The difference is that A4 requires 1 to be a
https://en.wikipedia.org/wiki/Body%20load
Body load is the specific physical or tactile sensations brought on by psychoactive drugs, especially psychedelics. Generally, body load is an unpleasant physical sensation that is difficult to describe objectively either in terms of other sensations or in its specific location. However, it could be likened to an instinct of the body sensing it is about to be placed under exceptional stress, a state of pre-shock. Common symptoms include stomach ache, nausea, dizziness, feelings of being over-stimulated or "wired," shivering, feelings of excessive tension in the torso, or, in more severe cases, shortness of breath or a feeling of suffocation. Different drugs may cause different body load sensations which vary in intensity and duration. In contrast, many drug users, and particularly users of cannabis, entactogens like MDMA or of certain synthetic phenethylamines (most notably the popular 2C-B) and tryptamines, also often report a "body high" or "body rush", which is similar to body load in many respects but is usually considered pleasant. Causes The causes of the experience of body load are unknown. However, one proposed mechanism is the stimulation of serotonergic 5-HT receptors, particularly those involved in tactile sensation and, equally importantly in many cases where nausea is experienced, those located along the lining of the digestive tract. Serotonin is heavily involved in appetite control, and over-stimulation of serotonergic receptors has been shown to cause nausea in overdoses of SSRIs or MDMA. Many psychedelics which can cause body load are partial serotonin agonists, which work by mimicking the structure of serotonin to varying degrees.
https://en.wikipedia.org/wiki/Garbage%20%28computer%20science%29
In computer science, garbage includes data, objects, or other regions of the memory of a computer system (or other system resources), which will not be used in any future computation by the system, or by a program running on it. Because every computer system has a finite amount of memory, and most software produces garbage, it is frequently necessary to deallocate memory that is occupied by garbage and return it to the heap, or memory pool, for reuse. Classification Garbage is generally classified into two types: syntactic garbage, any object or data which is within a program's memory space but unreachable from the program's root set; and semantic garbage, any object or data which is never accessed by a running program for any combination of program inputs. Objects and data which are not garbage are said to be live. Casually stated, syntactic garbage is data that cannot be reached, and semantic garbage is data that will not be reached. More precisely, syntactic garbage is data that is unreachable due to the reference graph (there is no path to it), which can be determined by many algorithms, as discussed in tracing garbage collection, and only requires analyzing the data, not the code. Semantic garbage is data that will not be accessed, either because it is unreachable (hence also syntactic garbage), or is reachable but will not be accessed; this latter requires analysis of the code, and is in general an undecidable problem. Syntactic garbage is a (usually strict) subset of semantic garbage, as it is entirely possible for an object to hold a reference to another object without ever using that object. Example In the following simple stack implementation in Java, each element popped from the stack becomes semantic garbage once there are no outside references to it:

public class Stack {
    private Object[] elements;
    private int size;

    public Stack(int capacity) {
        elements = new Object[capacity];
    }

    public void push(Object e) {
        elements[size++] = e;
    }

    public Object pop() {
        // The array still references the popped element, so it remains
        // reachable (not syntactic garbage) even though the program will
        // never use it again - it has become semantic garbage.
        return elements[--size];
    }
}
https://en.wikipedia.org/wiki/Diode%20logic
Diode logic (or diode-resistor logic) constructs AND and OR logic gates with diodes and resistors. An active device (vacuum tubes in early computers, then transistors in diode–transistor logic) is additionally required to provide logical inversion (NOT) for functional completeness and amplification for voltage level restoration, which diode logic alone can't provide. Since voltage levels weaken with each diode logic stage, multiple stages can't easily be cascaded, limiting diode logic's usefulness. However, diode logic has the advantage of utilizing only cheap passive components. Background Logic gates Logic gates evaluate Boolean algebra, typically using electronic switches controlled by logical inputs connected in parallel or series. Diode logic can only implement OR and AND, because inverters (NOT gates) require an active device. Logic voltage levels Binary logic uses two distinct logic levels of voltage signals that may be labeled high and low. In this discussion, voltages close to +5 volts are high, and voltages close to 0 volts (ground) are low. The exact magnitude of the voltage is not critical, provided that inputs are driven by strong enough sources so that output voltages lie within detectably different ranges. For active-high or positive logic, high represents logic 1 (true) and low represents logic 0 (false). However, the assignment of logical 1 and logical 0 to high or low is arbitrary and is reversed in active-low or negative logic, where low is logical 1 while high is logical 0. The following diode logic gates work in both active-high and active-low logic, however the logical function they implement is different depending on what voltage level is considered active. Switching between active-high and active-low is commonly used to achieve a more efficient logic design. Diode biasing Forward-biased diodes have low impedance approximating a short circuit with a small voltage drop, while reverse-biased diodes have a very high imped
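A crude numerical sketch of the two gates (idealized: a constant 0.7 V diode drop, 5 V supply, no loading between stages; the names are ours) shows both the logic and the level degradation described above:

V_SUPPLY, V_DROP = 5.0, 0.7

def diode_or(*inputs):
    # Diodes point from the inputs to the output; pull-down resistor to ground.
    return max(max(inputs) - V_DROP, 0.0)

def diode_and(*inputs):
    # Diodes point from the output to the inputs; pull-up resistor to V_SUPPLY.
    return min(min(inputs) + V_DROP, V_SUPPLY)

print(diode_or(0.0, 5.0))   # ~4.3 V: a high output, one diode drop weaker
print(diode_and(0.0, 5.0))  # ~0.7 V: a low output, one diode drop higher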
https://en.wikipedia.org/wiki/Method%20ringing
Method ringing (also known as scientific ringing) is a form of change ringing in which the ringers commit to memory the rules for generating each change of sequence, and pairs of bells are affected. This creates a form of bell music which is continually changing, but which cannot be discerned as a conventional melody. It is a way of sounding continually changing mathematical permutations. It is distinct from call changes, where the ringers are instructed on how to generate each new change by calls from a conductor, and strictly, only two adjacent bells swap their position at each change. In method ringing, the ringers are guided from permutation to permutation by following the rules of a method. Ringers typically learn a particular method by studying its "blue line", a diagram which shows its structure. The underlying mathematical basis of method ringing is intimately linked to group theory. The basic building block of method ringing is plain hunt. The first method, Grandsire, was designed around 1650, probably by Robert Roan who became master of the College Youths change ringing society in 1652. Details of the method on five bells appeared in print in 1668 in Tintinnalogia (Fabian Stedman with Richard Duckworth) and Campanalogia (1677 – written solely by Stedman), which are the first two publications on the subject. The practice originated in England and remains most popular there today; in addition to bells in church towers, it is also often performed on handbells. Fundamentals There are thousands of different methods, a few of which are the below. Plain hunt Plain hunt is the simplest form of generating changing permutations continuously, and is a fundamental building-block of change ringing methods. It can be extended to any number of bells. It consists of a plain undeviating course of a bell between the first and last places in the striking order, with two strikes in the first and last position to enable a turn-around. Thus each bell moves one positi
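Plain hunt is easy to generate programmatically; the sketch below (the function name is ours) alternately swaps all adjacent pairs and then only the inner pairs, returning to rounds after 2n changes:

def plain_hunt(n):
    # Rows of plain hunt on n bells, starting and ending at rounds.
    row = list(range(1, n + 1))
    rows = [row[:]]
    for change in range(2 * n):
        start = 0 if change % 2 == 0 else 1   # all pairs, then inner pairs only
        for j in range(start, n - 1, 2):
            row[j], row[j + 1] = row[j + 1], row[j]
        rows.append(row[:])
    return rows

for r in plain_hunt(4):
    print("".join(map(str, r)))  # 1234, 2143, 2413, 4231, ..., 1324, 1234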
https://en.wikipedia.org/wiki/Densely%20packed%20decimal
Densely packed decimal (DPD) is an efficient method for binary encoding decimal digits. The traditional system of binary encoding for decimal digits, known as binary-coded decimal (BCD), uses four bits to encode each digit, resulting in significant wastage of binary data bandwidth (since four bits can store 16 states and are being used to store only 10), even when using packed BCD. Densely packed decimal is a more efficient code that packs three digits into ten bits using a scheme that allows compression from, or expansion to, BCD with only two or three hardware gate delays. The densely packed decimal encoding is a refinement of Chen–Ho encoding; it gives the same compression and speed advantages, but the particular arrangement of bits used confers additional advantages: Compression of one or two digits (into the optimal four or seven bits respectively) is achieved as a subset of the three-digit encoding. This means that arbitrary numbers of decimal digits (not only multiples of three digits) can be encoded efficiently. For example, 38 = 12 × 3 + 2 decimal digits can be encoded in 12 × 10 + 7 = 127 bits – that is, 12 sets of three decimal digits can be encoded using 12 sets of ten binary bits and the remaining two decimal digits can be encoded using a further seven binary bits. The subset encoding mentioned above is simply the rightmost bits of the standard three-digit encoding; the encoded value can be widened simply by adding leading 0 bits. All seven-bit BCD numbers (0 through 79) are encoded identically by DPD. This makes conversions of common small numbers trivial. (This must break down at 80, because that requires eight bits for BCD, but the above property requires that the DPD encoding must fit into seven bits.) The low-order bit of each digit is copied unmodified. Thus, the non-trivial portion of the encoding can be considered a conversion from three base-5 digits to seven binary bits. Further, digit-wise logical values (in which each digit is either
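The digit-count arithmetic above is easy to check in code; a small sketch (the function name is ours):

def dpd_bit_count(num_digits: int) -> int:
    # Three digits per 10-bit group; 1 or 2 leftover digits take 4 or 7 bits.
    groups, rest = divmod(num_digits, 3)
    return groups * 10 + {0: 0, 1: 4, 2: 7}[rest]

print(dpd_bit_count(38))  # 127 bits, as in the example in the text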
https://en.wikipedia.org/wiki/Terrestrial%20locomotion
Terrestrial locomotion has evolved as animals adapted from aquatic to terrestrial environments. Locomotion on land raises different problems than that in water, with reduced friction being replaced by the increased effects of gravity. As viewed from evolutionary taxonomy, there are three basic forms of animal locomotion in the terrestrial environment: legged – moving by using appendages limbless locomotion – moving without legs, primarily using the body itself as a propulsive structure. rolling – rotating the body over the substrate Some terrains and terrestrial surfaces permit or demand alternative locomotive styles. A sliding component to locomotion becomes possible on slippery surfaces (such as ice and snow), where locomotion is aided by potential energy, or on loose surfaces (such as sand or scree), where friction is low but purchase (traction) is difficult. Humans, especially, have adapted to sliding over terrestrial snowpack and terrestrial ice by means of ice skates, snow skis, and toboggans. Aquatic animals adapted to polar climates, such as ice seals and penguins, also take advantage of the slipperiness of ice and snow as part of their locomotion repertoire. Beavers are known to take advantage of a mud slick known as a "beaver slide" over a short distance when passing from land into a lake or pond. Human locomotion in mud is improved through the use of cleats. Some snakes use an unusual method of movement known as sidewinding on sand or loose soil. Animals caught in terrestrial mudflows are subject to involuntary locomotion; this may be beneficial to the distribution of species with limited locomotive range under their own power. There is less opportunity for passive locomotion on land than by sea or air, though parasitism (hitchhiking) is available toward this end, as in all other habitats. Many species of monkeys and apes use a form of arboreal locomotion known as brachiation, with forelimbs as the prime mover. Some elements of the gymnastic sport of une
https://en.wikipedia.org/wiki/Professional%20audio
Professional audio, abbreviated as pro audio, refers to both an activity and a category of high-quality, studio-grade audio equipment. Typically it encompasses sound recording, sound reinforcement system setup and audio mixing, and studio music production by trained sound engineers, audio engineers, record producers, and audio technicians who work in live event support and recording using mixing consoles, recording equipment and sound reinforcement systems. Professional audio is differentiated from consumer- or home-oriented audio, which are typically geared toward listening in a non-commercial environment. Professional audio can include, but is not limited to broadcast radio, audio mastering in a recording studio, television studio, and sound reinforcement such as a live concert, DJ performances, audio sampling, public address system set up, sound reinforcement in movie theatres, and design and setup of piped music in hotels and restaurants. Professional audio equipment is sold at professional audio stores and music stores. Definition The term professional audio has no precise definition, but it typically includes: Operations carried out by trained audio engineers The capturing of sound with one or more microphones Balancing, mixing and adjusting sound signals from multitrack recording devices using a mixing console The control of audio levels using standardized types of metering Sound signals passing through lengthy signal chains involving processes at different times and places, involving a variety of skills Compliance with organizational, national and international practices and standards established by such bodies as the International Telecommunication Union, Audio Engineering Society and European Broadcasting Union Setting up or designing sound reinforcement systems or recording studios Stores A professional audio store is a retail establishment that sells, and in many cases rents, expensive, high-end sound recording equipment (microphones, audio m
https://en.wikipedia.org/wiki/Thermus%20thermophilus
Thermus thermophilus is a Gram-negative bacterium used in a range of biotechnological applications, including as a model organism for genetic manipulation, structural genomics, and systems biology. The bacterium is extremely thermophilic, with an optimal growth temperature of about 65 °C. Thermus thermophilus was originally isolated from a thermal vent within a hot spring in Izu, Japan by Tairo Oshima and Kazutomo Imahori. The organism has also been found to be important in the degradation of organic materials in the thermogenic phase of composting. T. thermophilus is classified into several strains, of which HB8 and HB27 are the most commonly used in laboratory environments. Genome analyses of these strains were independently completed in 2004. Thermus also displays the highest frequencies of natural transformation known to date. Cell structure Thermus thermophilus is a Gram-negative bacterium with an outer membrane that is composed of phospholipids and lipopolysaccharides. This bacterium also has a thin peptidoglycan (also known as murein) layer; in this layer there are 29 muropeptides, which account for more than 85% of the total murein layer. Ala, Glu, Gly, Orn, N-acetylglucosamine and N-acetylmuramic acid were found in the murein layer of this bacterium. Another unique feature of this murein layer is that the N-terminal Gly is substituted with phenylacetic acid. This is the first instance of phenylacetic acid found in the murein of bacterial cells. The composition and peptide cross-bridges found in this murein layer are typical of Gram-positive bacteria, but the amount, the degree of cross-linkage and the length of the glycan chain give this bacterium its Gram-negative properties. Survival mechanisms Thermus thermophilus was originally found within a thermal vent in Japan. These bacteria can be found in a variety of geothermal environments. These thermophiles require a more stringent DNA repair system, as DNA becomes unstable at high temperatures.
https://en.wikipedia.org/wiki/Stercoral%20perforation
Stercoral perforation is the perforation or rupture of the intestine's walls by its internal contents, such as hardened feces or foreign objects. Hardened stools may form in prolonged constipation or other diseases which cause obstruction of transit, such as Chagas disease, Hirschsprung's disease, toxic colitis, hypercalcemia, and megacolon. Symptoms can include abdominal distension, pain, and nausea. Stercoral perforation is a rare and very dangerous, life-threatening situation, as well as a surgical emergency, because the spillage of contaminated intestinal contents into the abdominal cavity leads to peritonitis, a rapid bacteremia (bacterial infection of the blood), with many complications. See also Gastrointestinal perforation Stercoral ulcer, which can lead to stercoral perforation
https://en.wikipedia.org/wiki/Toxic%20oil%20syndrome
Toxic oil syndrome (TOS) or simply toxic syndrome (Spanish: síndrome del aceite tóxico or síndrome tóxico) is a musculoskeletal disease. A 1981 outbreak in Spain which affected about 20,000 people, with over 300 dying within a few months and a few thousand remaining disabled, is thought to have been caused by contaminated colza (rapeseed) oil. It was unique because of its size, the novelty of the clinical condition, and the complexity of its aetiology. Its first appearance was as a lung disease, with unusual features, though the symptoms initially resembled a lung infection. The disease appeared to be restricted to certain geographical localities, and several members of a family could be affected, even while their neighbours had no symptoms. Following the acute phase, a range of other chronic symptoms was apparent. Alternative mechanisms The conclusion of the Joint WHO/CISAT Scientific Committee for the Toxic Oil Syndrome from 2002, that oil was the cause for TOS, is based only on epidemiological evidence, since up to now, experimental studies performed in a variety of laboratory animals have failed to reproduce the symptoms of human TOS. None of the in vivo or in vitro studies performed with toxic-oil-specific components, such as fatty acid anilides, and esters of 3-(N-phenylamino)-1,2-propanediol (abbreviated as PAP), have provided evidence that these markers are causally involved in the pathogenesis of TOS. Specifically, three possible causative agents of TOS are PAP (3-(N-phenylamino)-1,2-propanediol), the 1,2-dioleoyl ester of PAP (abbreviated OOPAP), and the 3-oleoyl ester of PAP (abbreviated OPAP). These three compounds are formed by means of similar chemical processes, and oil that contains one of the three substances is likely to contain the other two. Oil samples that are suspected to have been ingested by people who later developed TOS often contain all three of these contaminants (among other substances), but are most likely to contain OOPAP. However
https://en.wikipedia.org/wiki/Pui%20Ching%20Invitational%20Mathematics%20Competition
Pui Ching Invitational Mathematics Competition (Traditional Chinese: 培正數學邀請賽) has been held yearly by Pui Ching Middle School since 2002. It was formerly named the Pui Ching Middle School Invitational Mathematics Competition for the first three years. At present, more than 130 secondary schools send teams to participate in the competition. See also List of mathematics competitions Education in Hong Kong External links Official website (in Traditional Chinese) Site with past papers (in Traditional Chinese and English)
https://en.wikipedia.org/wiki/Media%20processor
A media processor, mostly used as an image/video processor, is a microprocessor-based system-on-a-chip which is designed to deal with digital streaming data in real-time (e.g. display refresh) rates. These devices can also be considered a class of digital signal processors (DSPs). Unlike graphics processing units (GPUs), which are used for computer displays, media processors are targeted at digital televisions and set-top boxes. The streaming digital media classes include: uncompressed video compressed digital video - e.g. MPEG-1, MPEG-2, MPEG-4 digital audio- e.g. PCM, AAC Such SOCs are composed of: a microprocessor optimized to deal with these media datatypes a memory interface streaming media interfaces specialized functional units to help deal with the various digital media codecs The microprocessor might have these optimizations: vector processing or SIMD functional units to efficiently deal with these media datatypes DSP-like features Previous to media processors, these streaming media datatypes were processed using fixed-function, hardwired ASICs, which could not be updated in the field. This was a big disadvantage when any of the media standards were changed. Since media processors are software programmed devices, the processing done on them could be updated with new software releases. This allowed new generations of systems to be created without hardware redesign. For set-top boxes this even allows for the possibility of in-the-field upgrade by downloading of new software through cable or satellite networks. Companies that pioneered the idea of media processors (and created the marketing term of media processor) included: MicroUnity MediaProcessor - Cancelled in 1996 before introduction IBM Mfast - Described at the Microprocessor Forum in 1995, planned to ship in mid-1997 but was cancelled before introduction Equator Semiconductor BSP line - their processors are used in Hitachi televisions, company acquired by Pixelworks Chromatic
https://en.wikipedia.org/wiki/Social%20facilitation
Social facilitation is a social phenomenon in which being in the presence of others improves individual task performance. That is, people do better on tasks when they are with other people rather than when they are doing the task alone. Situations that elicit social facilitation include coaction and performing for an audience; the effect also appears to depend on task complexity. Norman Triplett's early investigations describe social facilitation as occurring during instances of coaction, which is performing a task in the presence of other people performing a similar task, while not necessarily engaging in direct interactions with each other. Triplett first observed this in cyclists, finding that cyclists rode at faster speeds when competing against other cyclists compared to when cycling alone. Social facilitation has also been known to occur when performing a task in front of an audience, or during periods of observation, sometimes referred to as audience effects. For instance, during exercise Meumann (1904) found that when being watched, individuals could lift heavier weights compared to when they were not being watched. Research on the effects of coaction and audience effects on social facilitation has been mixed. In an attempt to discover why these types of situations do not always trigger social facilitation, Robert Zajonc (1965) theorized that perhaps task complexity, or how simple versus complex a task is, could influence whether or not social facilitation occurs. Zajonc predicted that simple tasks would result in social facilitation within group settings, whereas more complicated tasks would not. According to Zajonc, some tasks are easier to learn and perform than others because they require dominant responses. Dominant responses are behavioral responses at the top of an organism's behavioral repertoire, making them more readily available, or 'dominant', above all other responses. Tasks that elicit dominant responses are typically simpler, less effortful, and easier to
https://en.wikipedia.org/wiki/Unigine
UNIGINE is a proprietary cross-platform game engine developed by UNIGINE Company, used in simulators, virtual reality systems, serious games and visualization. It supports OpenGL 4, Vulkan and DirectX 12. UNIGINE Engine is a core technology for a lineup of benchmarks (CPU, GPU, power supply, cooling system), which are used by overclockers and technical media such as Tom's Hardware, Linus Tech Tips, PC Gamer, and JayzTwoCents. UNIGINE benchmarks are also included as part of the Phoronix Test Suite for benchmarking purposes on Linux and other systems. UNIGINE 1 The first public release was the 0.3 version on May 4, 2005. Platforms UNIGINE 1 supported Microsoft Windows, Linux, OS X, PlayStation 3, Android, and iOS. Experimental support for WebGL existed but was not included in the official SDK. UNIGINE 1 supported DirectX 9, DirectX 10, DirectX 11, OpenGL, OpenGL ES and PlayStation 3, while initial versions (v0.3x) only supported OpenGL. UNIGINE 1 provided C++, C#, and UnigineScript APIs for developers. It also supported the shading languages GLSL and HLSL. Game features UNIGINE 1 had support for large virtual scenarios and specific hardware required by professional simulators and enterprise VR systems, often called serious games. Support for large virtual worlds was implemented via double precision of coordinates (64-bit per axis), zone-based background data streaming, and optional operations in a geographic coordinate system (latitude, longitude, and elevation instead of X, Y, Z). Display output was implemented via multi-channel rendering (network-synchronized generation of a single large image with several computers), which is typical for professional simulators. The same system enabled support of multiple output devices with asymmetric projections (e.g. CAVE). Curved screens with multiple projectors were also supported. UNIGINE 1 had stereoscopic output support for anaglyph rendering, separate images output, Nvidia 3D Vision, and virtual reality headsets
https://en.wikipedia.org/wiki/Hyperaccumulator
A hyperaccumulator is a plant capable of growing in soil or water with very high concentrations of metals, absorbing these metals through its roots, and concentrating extremely high levels of metals in its tissues. The metals are concentrated at levels that are toxic to closely related species not adapted to growing on metalliferous soils. Compared to non-hyperaccumulating species, hyperaccumulator roots extract the metal from the soil at a higher rate, transfer it more quickly to their shoots, and store large amounts in leaves and roots. The ability to hyperaccumulate toxic metals compared to related species has been shown to be due to differential gene expression and regulation of the same genes in both plants. Hyperaccumulating plants are of interest for their ability to extract metals from the soils of contaminated sites (phytoremediation) to return the ecosystem to a less toxic state. The plants also hold potential to be used to mine metals from soils with very high concentrations (phytomining) by growing the plants, then harvesting them for the metals in their tissues. The genetic advantage of hyperaccumulation of metals may be that the toxic levels of heavy metals in leaves deter herbivores or increase the toxicity of other anti-herbivory metabolites. Physiological basis Metals are predominantly accumulated in the roots, causing an unbalanced shoot-to-root ratio of metal concentrations in most plants. In hyperaccumulators, however, this ratio is reversed: metal concentrations are abnormally high in the leaves and much lower in the roots. As this process occurs, metals are efficiently shuttled from the root to the shoot, an enhanced ability that protects the roots from metal toxicity. Delving into tolerance: Throughout the research on hyperaccumulation, there is a conundrum concerning tolerance. There are several different understandings of tolerance associated with accumulation; however, there are a few similarities. Evidence has conveyed
https://en.wikipedia.org/wiki/Track%20and%20trace
In the distribution and logistics of many types of products, track and trace or tracking and tracing concerns a process of determining the current and past locations (and other information) of a unique item or property. Mass serialization is the process that manufacturers go through to assign and mark each of their products with a unique identifier such as an Electronic Product Code (EPC) for track and trace purposes. The marking or "tagging" of products is usually completed within the manufacturing process through the use of various combinations of human readable or machine readable technologies such as DataMatrix barcodes or RFID. The track and trace concept can be supported by means of reckoning and reporting of the position of vehicles and containers with the property of concern, stored, for example, in a real-time database. This approach leaves the task to compose a coherent depiction of the subsequent status reports. Another approach is to report the arrival or departure of the object and recording the identification of the object, the location where observed, the time, and the status. This approach leaves the task to verify the reports regarding consistency and completeness. An example of this method might be the package tracking provided by shippers, such as the United States Postal Service, Deutsche Post, Royal Mail, United Parcel Service, AirRoad, or FedEx. Technology The international standards organization EPCglobal under GS1 has ratified the EPC network standards (esp. the EPC information services EPCIS standard) which codify the syntax and semantics for supply chain events and the secure method for selectively sharing supply chain events with trading partners. These standards for Tracking and Tracing have been used in successful deployments in many industries and there are now a wide range of products that are certified as being compatible with these standards. In response to a growing number of recall incidents (food, pharmaceutical, toys, etc.)
https://en.wikipedia.org/wiki/Genotyping
Genotyping is the process of determining differences in the genetic make-up (genotype) of an individual by examining the individual's DNA sequence using biological assays and comparing it to another individual's sequence or a reference sequence. It reveals the alleles an individual has inherited from their parents. Traditionally, genotyping is the use of DNA sequences to define biological populations by use of molecular tools. It does not usually involve defining the genes of an individual. Techniques Current methods of genotyping include restriction fragment length polymorphism identification (RFLPI) of genomic DNA, random amplified polymorphic detection (RAPD) of genomic DNA, amplified fragment length polymorphism detection (AFLPD), polymerase chain reaction (PCR), DNA sequencing, allele specific oligonucleotide (ASO) probes, and hybridization to DNA microarrays or beads. Genotyping is important in research of genes and gene variants associated with disease. Due to current technological limitations, almost all genotyping is partial. That is, only a small fraction of an individual's genotype is determined, such as with (epi)GBS (genotyping by sequencing) or RADseq. New mass-sequencing technologies promise to provide whole-genome genotyping (or whole genome sequencing) in the future. Applications Genotyping applies to a broad range of individuals, including microorganisms. For example, viruses and bacteria can be genotyped. Genotyping in this context may help in controlling the spreading of pathogens, by tracing the origin of outbreaks. This area is often referred to as molecular epidemiology or forensic microbiology. Human genotyping Humans can also be genotyped. For example, when testing fatherhood or motherhood, scientists typically only need to examine 10 or 20 genomic regions (like single-nucleotide polymorphisms (SNPs)), which represent a tiny fraction of the human genome. When genotyping transgenic organisms, a single genomic region may be all t
https://en.wikipedia.org/wiki/Ascending%20aorta
The ascending aorta (AAo) is a portion of the aorta commencing at the upper part of the base of the left ventricle, on a level with the lower border of the third costal cartilage behind the left half of the sternum. Structure It passes obliquely upward, forward, and to the right, in the direction of the heart's axis, as high as the upper border of the second right costal cartilage, describing a slight curve in its course, and being situated behind the posterior surface of the sternum. The total length is about 5 cm. Components The aortic root is the portion of the aorta beginning at the aortic annulus and extending to the sinotubular junction. It is sometimes regarded as a part of the ascending aorta, and sometimes regarded as a separate entity from the rest of the ascending aorta. Between each commissure of the aortic valve and opposite the cusps of the aortic valve are three small dilatations called the aortic sinuses. The sinotubular junction is the point in the ascending aorta where the aortic sinuses end and the aorta becomes a tubular structure. Size A thoracic aorta diameter greater than 3.5 cm is generally considered dilated, whereas a diameter greater than 4.5 cm is generally considered to be a thoracic aortic aneurysm. Still, the average diameter in the population varies with, for example, age and sex. The upper limit of the standard reference range of the ascending aorta may be up to 4.3 cm among large, elderly individuals. Relations At the union of the ascending aorta with the aortic arch the caliber of the vessel is increased, owing to a bulging of its right wall. This dilatation is termed the bulb of the aorta, and on transverse section presents a somewhat oval figure. The ascending aorta is contained within the pericardium, and is enclosed in a tube of the serous pericardium, common to it and the pulmonary artery. The ascending aorta is covered at its commencement by the trunk of the pulmonary artery and the right auricula, and, higher up, is sepa
https://en.wikipedia.org/wiki/Maupertuis%27s%20principle
In classical mechanics, Maupertuis's principle (named after Pierre Louis Maupertuis) states that the path followed by a physical system is the one of least length (with a suitable interpretation of path and length). It is a special case of the more generally stated principle of least action. Using the calculus of variations, it results in an integral equation formulation of the equations of motion for the system. Mathematical formulation Maupertuis's principle states that the true path of a system described by generalized coordinates $\mathbf{q}$ between two specified states $\mathbf{q}_1$ and $\mathbf{q}_2$ is a stationary point (i.e., an extremum (minimum or maximum) or a saddle point) of the abbreviated action functional $\mathcal{S}_0[\mathbf{q}(t)] = \int \mathbf{p} \cdot d\mathbf{q}$, where $\mathbf{p} = (p_1, p_2, \ldots, p_N)$ are the conjugate momenta of the generalized coordinates, defined by the equation $p_k = \frac{\partial L}{\partial \dot{q}_k}$, where $L(\mathbf{q}, \dot{\mathbf{q}}, t)$ is the Lagrangian function for the system. In other words, any first-order perturbation of the path results in (at most) second-order changes in $\mathcal{S}_0$. Note that the abbreviated action $\mathcal{S}_0$ is a functional (i.e. a function from a vector space into its underlying scalar field), which in this case takes as its input a function (i.e. the paths between the two specified states). Jacobi's formulation For many systems, the kinetic energy is quadratic in the generalized velocities, $T = \frac{1}{2} \dot{\mathbf{q}}^{T} M(\mathbf{q})\, \dot{\mathbf{q}}$, although the mass tensor $M(\mathbf{q})$ may be a complicated function of the generalized coordinates $\mathbf{q}$. For such systems, the simple relation $2T = \mathbf{p} \cdot \dot{\mathbf{q}}$ relates the kinetic energy, the generalized momenta and the generalized velocities, provided that the potential energy $V(\mathbf{q})$ does not involve the generalized velocities. By defining a normalized distance or metric $ds^2 = d\mathbf{q}^{T} M(\mathbf{q})\, d\mathbf{q}$ in the space of generalized coordinates, one may immediately recognize the mass tensor as a metric tensor. The kinetic energy may be written in the massless form $T = \frac{1}{2} \left( \frac{ds}{dt} \right)^2$ or, equivalently, $\sqrt{2T} = \frac{ds}{dt}$. Therefore, the abbreviated action can be written $\mathcal{S}_0 = \int \mathbf{p} \cdot d\mathbf{q} = \int \sqrt{2T}\, ds = \int \sqrt{2(E - V(\mathbf{q}))}\, ds$, since the kinetic energy $T = E - V(\mathbf{q})$ equals the (constant) total energy $E$ minus the potential energy $V(\mathbf{q})$. In particular, if the potential energy is a constant, then Jacobi's principle re
https://en.wikipedia.org/wiki/Aortic%20arches
The aortic arches or pharyngeal arch arteries (previously referred to as branchial arches in human embryos) are a series of six paired embryological vascular structures which give rise to the great arteries of the neck and head. They are ventral to the dorsal aorta and arise from the aortic sac. The aortic arches are formed sequentially within the pharyngeal arches and initially appear symmetrical on both sides of the embryo, but then undergo a significant remodelling to form the final asymmetrical structure of the great arteries. Structure Arches 1 and 2 The first and second arches disappear early. A remnant of the first arch forms part of the maxillary artery, a branch of the external carotid artery. The ventral end of the second develops into the ascending pharyngeal artery, and its dorsal end gives origin to the stapedial artery, a vessel which typically atrophies in humans but persists in some mammals. The stapedial artery passes through the ring of the stapes and divides into supraorbital, infraorbital, and mandibular branches which follow the three divisions of the trigeminal nerve. A remnant of the second arch also forms the hyoid artery. The infraorbital and mandibular branches arise from a common stem, the terminal part of which anastomoses with the external carotid artery. On the obliteration of the stapedial artery, this anastomosis enlarges and forms the internal maxillary artery; branches formerly of the stapedial artery are subsequently considered branches of the internal maxillary artery. The common stem of the infraorbital and mandibular branches passes between the two roots of the auriculotemporal nerve and becomes the middle meningeal artery; the original supraorbital branch of the stapedial is represented by the orbital twigs of the middle meningeal. Note that the external carotid buds from the horns of the aortic sac left behind by the regression of the first two arches. Arch 3 The third aortic arch constitutes the commencement of the internal
https://en.wikipedia.org/wiki/Bregma
The bregma is the anatomical point on the skull at which the coronal suture is intersected perpendicularly by the sagittal suture. Structure The bregma is located at the intersection of the coronal suture and the sagittal suture on the superior middle portion of the calvaria. It is the point where the frontal bone and the two parietal bones meet. Development The bregma is known as the anterior fontanelle during infancy. The anterior fontanelle is membranous and closes in the first 18–36 months of life. Clinical significance Cleidocranial dysostosis In the birth defect cleidocranial dysostosis, the anterior fontanelle never closes to form the bregma. Surgical landmark The bregma is often used as a reference point for stereotactic surgery of the brain. It may be identified by blunt scraping of the surface of the skull and washing to make the meeting point of the sutures clearer. Neonatal examination Examination of an infant includes palpating the anterior fontanelle. It should be flat, soft, and less than 3.5 cm across. A sunken fontanelle indicates dehydration, whereas a very tense or bulging anterior fontanelle indicates raised intracranial pressure. Height assessment Cranial height is defined as the distance between the bregma and the midpoint of the foramen magnum (the basion). This is strongly linked to more general growth. This can be used to assess the general health of a deceased person as part of an archaeological excavation, giving information on the health of a population. Etymology The word "bregma" comes from the Ancient Greek βρέγμα (brégma), meaning the bone directly above the brain.
https://en.wikipedia.org/wiki/Canine%20fossa
In the musculoskeletal anatomy of the human head, lateral to the incisive fossa of the maxilla is a depression called the canine fossa. It is larger and deeper than the comparable incisive fossa, and is separated from it by a vertical ridge, the canine eminence, corresponding to the socket of the canine tooth. See also Fossa
https://en.wikipedia.org/wiki/Oral%20medicine
An oral medicine or stomatology doctor/dentist (or stomatologist) has received additional specialized training and experience in the diagnosis and management of oral mucosal abnormalities (growths, ulcers, infection, allergies, immune-mediated and autoimmune disorders) including oral cancer, salivary gland disorders, temporomandibular disorders (e.g., problems with the TMJ) and facial pain (due to musculoskeletal or neurologic conditions), taste and smell disorders; and recognition of the oral manifestations of systemic and infectious diseases. It lies at the interface between medicine and dentistry. An oral medicine doctor is trained to diagnose and manage patients with disorders of the orofacial region, essentially as a "physician of the mouth". History The importance of the mouth in medicine has been recognized since the earliest known medical writings. For example, Hippocrates, Galen and others considered the tongue to be a "barometer" of health, and emphasized the diagnostic and prognostic importance of the tongue. However, oral medicine as a specialization is a relatively new subject area. It used to be termed "stomatology" (-stomato- + -ology). In some institutions, it is termed "oral medicine and oral diagnosis". The American physician and dentist Thomas E. Bond authored the first book on oral and maxillofacial pathology in 1848, entitled "A Practical Treatise on Dental Medicine". The term "oral medicine" was not used again until 1868. Jonathan Hutchinson is also considered the father of oral medicine by some. Oral medicine grew from a group of New York dentists (primarily periodontists), who were interested in the interactions between medicine and dentistry in the 1940s. Before becoming its own specialty in the United States, oral medicine was once a subset of the specialty of periodontics, with many periodontists achieving board certification in oral medicine as well as periodontics. Scope Oral medicine is concerned with clinical diagnosis a
https://en.wikipedia.org/wiki/Steering%20ratio
Steering ratio refers to the ratio between the turn of the steering wheel or handlebars (in degrees) and the resulting turn of the road wheels (in degrees): the number of degrees the steering wheel is turned divided by the number of degrees the wheel(s) turn as a result. In motorcycles, delta tricycles and bicycles, the steering ratio is always 1:1, because the handlebars are fixed directly to the front wheel. A steering ratio of x:y means that a turn of the steering wheel x degree(s) causes the wheel(s) to turn y degree(s). In most passenger cars, the ratio is between 12:1 and 20:1. For example, if one and a half turns of the steering wheel, 540 degrees, causes the inner and outer wheels to turn 35 and 30 degrees respectively, due to Ackermann steering geometry, the ratio is then 540:((35+30)/2) = 16.6:1. A higher steering ratio means that the steering wheel is turned more to get the wheels turning, but it will be easier to turn the steering wheel. A lower steering ratio means that the steering wheel is turned less to get the wheels turning, but it will be harder to turn the steering wheel. Larger and heavier vehicles will often have a higher steering ratio, which will make the steering wheel easier to turn. If a truck had a low steering ratio, it would be very hard to turn the steering wheel. In normal and lighter cars, the wheels are easier to turn, so the steering ratio doesn't have to be as high. In race cars the ratio is typically very low, because the vehicle must respond to steering input much faster than in normal cars. The steering wheel is therefore harder to turn. Variable-ratio steering Variable-ratio steering is a system that uses different ratios on the rack in a rack and pinion steering system. At the center of the rack, the spaces between the teeth are smaller, becoming larger as the pinion moves down the rack. In the middle of the rack there is a higher ratio and the ratio becomes lower as the steering wheel is turned towards loc
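To make the worked example above concrete, a minimal Python sketch (the function and variable names are illustrative, not from any standard API):

# Steering ratio = steering-wheel angle : mean road-wheel angle.
def steering_ratio(wheel_turn_deg: float, inner_deg: float, outer_deg: float) -> float:
    """Return the steering ratio as a single number x, meaning x:1."""
    mean_road_wheel = (inner_deg + outer_deg) / 2  # average the two Ackermann angles
    return wheel_turn_deg / mean_road_wheel

# 1.5 turns of the wheel (540 degrees) turning the road wheels 35 and 30 degrees:
print(round(steering_ratio(540, 35, 30), 1))  # 16.6, i.e. a 16.6:1 ratio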
https://en.wikipedia.org/wiki/Tributyltin
Tributyltin (TBT) is an umbrella term for a class of organotin compounds which contain the (C4H9)3Sn group, with a prominent example being tributyltin oxide. For 40 years TBT was used as a biocide in anti-fouling paint, commonly known as bottom paint, applied to the hulls of oceangoing vessels. Bottom paint improves ship performance and durability as it reduces the rate of biofouling, the growth of organisms on the ship's hull. The TBT slowly leaches out into the marine environment where it is highly toxic toward nontarget organisms. TBT toxicity can lead to biomagnification or bioaccumulation within such nontarget organisms as invertebrates, vertebrates, and a variety of mammals. TBT is also an obesogen. After it led to the collapse of local populations of organisms, TBT was banned. Chemical properties TBT, or tributyltin, tributylstannyl or tributyl stannic hydride compounds are organotin compounds. They have three butyl groups covalently bonded to a tin(IV) atom. A general formula for these compounds is (C4H9)3Sn−X. The X is typically a chloride (Cl), a hydroxide (OH), or a carboxylate (OOCR), where R is an organyl group. TBT is also known to be an endocrine disrupting compound, which influences biological activities such as growth, reproduction and other physiological processes. TBT compounds have a low water solubility, a property that is ideal for antifouling agents. The toxicity of TBT prevents the growth of algae, barnacles, molluscs and other organisms on ships' hulls. When introduced into a marine or aquatic environment, TBT adheres to bed sediments. TBT has a low log Kow of 3.19–3.84 in distilled water and 3.54 in sea water, which makes TBT moderately hydrophobic. TBT compounds have a high fat solubility and tend to adsorb more readily to organic matter in soils or sediment. The bioaccumulation of TBT in organisms such as molluscs, oysters and dolphins has extreme effects on their reproductive systems, central nervous systems and endocrine systems. However, the adsorption of TBT to sedime
https://en.wikipedia.org/wiki/Referer%20spoofing
In HTTP networking, typically on the World Wide Web, referer spoofing (based on a canonised misspelling of "referrer") sends incorrect referer information in an HTTP request in order to prevent a website from obtaining accurate data on the identity of the web page previously visited by the user. Overview Referer spoofing is typically done for data privacy reasons, in testing, or in order to request information (without genuine authority) which some web servers may only supply in response to requests with specific HTTP referers. To improve their privacy, individual browser users may replace accurate referer data with inaccurate data, though many simply suppress their browser's sending of any referer data. Sending no referrer information is not technically spoofing, though sometimes also described as such. In software, systems and networks testing, and sometimes penetration testing, referer spoofing is often just part of a larger procedure of transmitting both accurate and inaccurate as well as expected and unexpected input to the HTTPD system being tested and observing the results. While many websites are configured to gather referer information and serve different content depending on the referer information obtained, exclusively relying on HTTP referer information for authentication and authorization purposes is not a genuine computer security measure. HTTP referer information is freely alterable and interceptable, and is not a password, though some poorly configured systems treat it as such. Application Some websites, especially many image hosting sites, use referer information to secure their materials: only browsers arriving from their web pages are served images. Additionally, a site may want users to click through pages with advertisements before being able to access a downloadable file directly – using the referring page or referring site information can help a site redirect unauthorized users to the landing page the site would like to use. If attacker
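As an illustration of how easily the header can be set to an arbitrary value, a minimal Python sketch using the third-party requests library (both URLs are placeholders, not real endpoints):

import requests  # third-party HTTP client: pip install requests

# The Referer header is entirely client-controlled: the request below claims
# to come from a gallery page regardless of what the user actually visited.
resp = requests.get(
    "https://img.example.com/protected.jpg",
    headers={"Referer": "https://img.example.com/gallery.html"},
)
print(resp.status_code)

This is exactly why, as the text notes, referer checks can deter casual hotlinking but are not a genuine security measure.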
https://en.wikipedia.org/wiki/X.3
X.3 is an ITU-T standard indicating what functions are to be performed by a Packet Assembler/Disassembler (PAD) when connecting character-mode data terminal equipment (DTE), such as a computer terminal, to a packet switched network such as an X.25 network, and specifying the parameters that control this operation. The following is a list of the X.3 parameters associated with a PAD:
1. PAD recall using a character
2. Echo
3. Selection of data forwarding character
4. Selection of idle timer delay
5. Ancillary device control
6. Control of PAD service signals
7. Operation on receipt of break signal
8. Discard output
9. Padding after carriage return
10. Line folding
11. DTE speed
12. Flow control of the PAD
13. Linefeed insertion after carriage return
14. Padding after linefeed
15. Editing
16. Character delete
17. Line delete
18. Line display
19. Editing PAD service signals
20. Echo mask
21. Parity treatment
22. Page wait
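If one wanted to manipulate such a parameter profile in code, a small Python sketch (the mapping simply mirrors the list above; the numeric values in the example are illustrative, and this is not a real PAD API):

# X.3 PAD parameters, numbered as in the standard (subset of the 22 listed above).
X3_PARAMETERS = {
    1: "PAD recall using a character",
    2: "Echo",
    3: "Selection of data forwarding character",
    4: "Selection of idle timer delay",
    7: "Operation on receipt of break signal",
    13: "Linefeed insertion after carriage return",
}

def describe_profile(profile):
    """Print a PAD profile as 'number (name) = value' lines."""
    for number, value in sorted(profile.items()):
        name = X3_PARAMETERS.get(number, "unknown parameter")
        print(f"{number} ({name}) = {value}")

describe_profile({2: 1, 3: 2})  # e.g. echo enabled plus a forwarding setting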
https://en.wikipedia.org/wiki/Physics%20beyond%20the%20Standard%20Model
Physics beyond the Standard Model (BSM) refers to the theoretical developments needed to explain the deficiencies of the Standard Model, such as the inability to explain the fundamental parameters of the Standard Model, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself: the Standard Model is inconsistent with general relativity, and one or both theories break down under certain conditions, such as spacetime singularities like the Big Bang and black hole event horizons. Theories that lie beyond the Standard Model include various extensions of the Standard Model through supersymmetry, such as the Minimal Supersymmetric Standard Model (MSSM) and Next-to-Minimal Supersymmetric Standard Model (NMSSM), and entirely novel explanations, such as string theory, M-theory, and extra dimensions. As these theories tend to reproduce the entirety of current phenomena, the question of which theory is the right one, or at least the "best step" towards a Theory of Everything, can only be settled via experiments, and is one of the most active areas of research in both theoretical and experimental physics. Problems with the Standard Model Despite being the most successful theory of particle physics to date, the Standard Model is not perfect. A large share of the published output of theoretical physicists consists of proposals for various forms of "Beyond the Standard Model" new physics that would modify the Standard Model in ways subtle enough to be consistent with existing data, yet address its imperfections materially enough to predict non-Standard Model outcomes of new experiments that can be proposed. Phenomena not explained The Standard Model is inherently an incomplete theory. There are fundamental physical phenomena in nature that the Standard Model does not adequately explain: Gravity. The standard mod
https://en.wikipedia.org/wiki/Boil-water%20advisory
A boil-water advisory (BWA), boil-water notice, boil-water warning, boil-water order, or boil order is a public-health advisory or directive issued by governmental or other health authorities to consumers when a community's drinking water is or could be contaminated by pathogens. Under a BWA, the Centers for Disease Control and Prevention recommends that water be brought to a rolling boil for one minute before it is consumed in order to kill protozoa, bacteria, and viruses. At altitudes above , boiling should be extended to 3 minutes, as the lower boiling point at high altitudes requires more time to kill such organisms. A boil-water advisory usually lasts 24 to 48 hours, but sometimes longer. BWAs are typically issued when monitoring of water being served to consumers detects E. coli or other microbiological indicators of sewage contamination. Another reason for a BWA is a failure of distribution system integrity, evidenced by a loss of system pressure. While loss of pressure does not necessarily mean the water has been contaminated, it does mean that pathogens may be able to enter the piped-water system and thus be carried to consumers. In the United States, this has been defined as a drop below . History John Snow's 1849 recommendation that water be "filtered and boiled before it is used" is one of the first practical applications of the germ theory of disease in the area of public health and is the antecedent to the modern boil-water advisory. Snow demonstrated a clear understanding of germ theory in his writings. He first published his theory in an 1849 essay, On the Mode of Communication of Cholera, in which he correctly suggested that the fecal-oral route was the mode of communication, and that the disease replicated itself in the lower intestines. Snow later went so far as to accurately propose in the 1855 edition of the work that the structure of cholera was that of a cell. Snow's ideas were not fully accepted until years after his death in 1858. The
https://en.wikipedia.org/wiki/Higgs%20sector
In particle physics, the Higgs sector is the collection of quantum fields and/or particles that are responsible for the Higgs mechanism, i.e. for the spontaneous symmetry breaking of the Higgs field. The word "sector" refers to a subgroup of the total set of fields and particles. See also Higgs boson Hidden sector
https://en.wikipedia.org/wiki/Spin%E2%80%93charge%20separation
In condensed matter physics, spin–charge separation is an unusual behavior of electrons in some materials in which they 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles. The theory of spin–charge separation originates with the work of Sin-Itiro Tomonaga who developed an approximate method for treating one-dimensional interacting quantum systems in 1950. This was then developed by Joaquin Mazdak Luttinger in 1963 with an exactly solvable model which demonstrated spin–charge separation. In 1981 F. Duncan M. Haldane generalized Luttinger's model to the Tomonaga–Luttinger liquid concept whereby the physics of Luttinger's model was shown theoretically to be a general feature of all one-dimensional metallic systems. Although Haldane treated spinless fermions, the extension to spin-½ fermions and associated spin–charge separation was so clear that the promised follow-up paper did not appear. Spin–charge separation is one of the most unusual manifestations of the concept of quasiparticles. This property is counterintuitive, because neither the spinon, with zero charge and spin half, nor the chargon, with charge minus one and zero spin, can be constructed as combinations of the electrons, holes, phonons and photons that are the constituents of the system. It is an example of fractionalization, the phenomenon in which the quantum numbers of the quasiparticles are not multiples of those of the elementary particles, but fractions. The same theoretical ideas have been applied in the framework of ultracold atoms. In a two-component Bose gas in 1D, strong interactions can produce a maximal form of spin–charge separation. Observation Building on
https://en.wikipedia.org/wiki/Pitch%20class%20space
In music theory, pitch-class space is the circular space representing all the notes (pitch classes) in a musical octave. In this space, there is no distinction between tones that are separated by an integral number of octaves. For example, C4, C5, and C6, though different pitches, are represented by the same point in pitch-class space. Since pitch-class space is a circle, we return to our starting point by taking a series of steps in the same direction: beginning with C, we can move "upward" in pitch-class space, through the pitch classes C♯, D, D♯, E, F, F♯, G, G♯, A, A♯, and B, returning finally to C. By contrast, pitch space is a linear space: the more steps we take in a single direction, the further we get from our starting point. Tonal pitch-class space Lerdahl and Jackendoff (1983) use a "reductional format" to represent the perception of pitch-class relations in tonal contexts. These two-dimensional models resemble bar graphs, using height to represent a pitch class's degree of importance or centricity. Lerdahl's version uses five levels: the first (highest) contains only the tonic, the second contains tonic and dominant, the third contains tonic, mediant, and dominant, the fourth contains all the notes of the diatonic scale, and the fifth contains the chromatic scale. In addition to representing centricity or importance, the individual levels are also supposed to represent "alphabets" that describe the melodic possibilities in tonal music. The model asserts that tonal melodies will be cognized in terms of one of the five levels a–e. Note that Lerdahl's model is meant to be cyclical, with its right edge identical to its left. One could therefore display Lerdahl's graph as a series of five concentric circles representing the five melodic "alphabets." In this way one could unite the circular representation depicted at the beginning of this article with Lerdahl's flat two-dimensional representation depicted above. According to David , "Harmon
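The octave equivalence described above is just arithmetic modulo 12, as a small Python sketch illustrates (using MIDI note numbers, where C4 = 60; the names and layout are conventional, not from any particular library):

# Pitch classes identify notes that differ by whole octaves: MIDI note mod 12.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_class(midi_note: int) -> str:
    """Map a MIDI note number to its point in pitch-class space."""
    return NAMES[midi_note % 12]

# C4, C5, and C6 (MIDI 60, 72, 84) all map to the same point:
print([pitch_class(n) for n in (60, 72, 84)])  # ['C', 'C', 'C']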
https://en.wikipedia.org/wiki/Recursion%20%28computer%20science%29
In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science. Most computer programming languages support recursion by allowing a function to call itself from within its own code. Some functional programming languages (for instance, Clojure) do not define any looping constructs but rely solely on recursion to repeatedly call code. It is proved in computability theory that these recursive-only languages are Turing complete; this means that they are as powerful (they can be used to solve the same problems) as imperative languages based on control structures such as "while" and "for". Repeatedly calling a function from within itself may cause the call stack to have a size equal to the sum of the input sizes of all involved calls. It follows that, for problems that can be solved easily by iteration, recursion is generally less efficient, and, for large problems, it is essential to use optimization techniques such as tail call optimization. Recursive functions and algorithms A common algorithm design tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of previously solved sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization. Base case A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itsel
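A minimal Python sketch of these ideas: a recursive function with an explicit base case, plus memoization (via the standard-library functools.lru_cache) to avoid re-solving the same sub-problems:

from functools import lru_cache

@lru_cache(maxsize=None)  # memoize: cache results of already-solved sub-problems
def fib(n: int) -> int:
    if n < 2:          # base case: result produced trivially, without recurring
        return n
    return fib(n - 1) + fib(n - 2)  # recursive case: smaller instances of the problem

print(fib(30))  # 832040; without the cache this call tree would be exponential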
https://en.wikipedia.org/wiki/Programming%20model
A programming model is an execution model coupled to an API or a particular pattern of code. In this style, there are actually two execution models in play: the execution model of the base programming language and the execution model of the programming model. An example is Spark, where Java is the base language and Spark is the programming model. Execution may be based on what appear to be library calls. Other examples include the POSIX Threads library and Hadoop's MapReduce. In both cases, the execution model of the programming model is different from that of the base language in which the code is written. For example, the C programming language has no behavior in its execution model for input/output or thread behavior. But such behavior can be invoked from C syntax, by making what appears to be a call to a normal C library. What distinguishes a programming model from a normal library is that the behavior of the call cannot be understood in terms of the language the program is written in. For example, the behavior of calls to the POSIX thread library cannot be understood in terms of the C language. The reason is that the call invokes an execution model that is different from the execution model of the language. This invocation of an outside execution model is the defining characteristic of a programming model, in contrast to a programming language. In parallel computing, the execution model often must expose features of the hardware in order to achieve high performance. The large amount of variation in parallel hardware creates a corresponding need for a similarly large number of parallel execution models. It is impractical to make a new language for each execution model; hence it is common practice to invoke the behaviors of the parallel execution model via an API. So, most of the programming effort is done via parallel programming models rather than parallel languages. The terminology around such programming models tends to focus on the details of the hardwar
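To illustrate the point about library calls invoking a separate execution model, a Python sketch using the standard-library threading module (analogous to the POSIX threads case described above, since CPython's threads wrap native OS threads): syntactically these are ordinary function calls, but their behavior, concurrent execution, cannot be explained by the base language's sequential execution model alone.

import threading

def worker(name: str) -> None:
    print(f"{name} running in its own thread")

# Plain-looking library calls that invoke the threading execution model:
# the two workers may run concurrently, interleaved by the operating system.
threads = [threading.Thread(target=worker, args=(f"worker-{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for both threads to finish before continuing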
https://en.wikipedia.org/wiki/Chinese%20yam
Dioscorea polystachya or Chinese yam, also called cinnamon-vine, is a species of flowering plant in the yam family. It is sometimes called Chinese potato or by its Korean name ma. It is also called huaishan in Mandarin and waisan in Cantonese. It is a perennial climbing vine, native to East Asia. The edible tubers are cultivated largely in Asia and sometimes used in alternative medicine. This species of yam is unique in that the tubers can be eaten raw. Range This plant grows throughout East Asia. It is believed to have been introduced to Japan in the 17th century or earlier. Introduced to the United States as early as the 19th century for culinary and cultural uses, it is now considered an invasive plant species. The plant was introduced to Europe in the 19th century during the European Potato Failure, where cultivation continues to this day for the Asian food market. Taxonomy The botanical names Dioscorea opposita and Dioscorea oppositifolia have been consistently misapplied to Chinese yam. The name D. opposita is now an accepted synonym of D. oppositifolia. Botanical works that point out the error may list, e.g., Dioscorea opposita auct. as a synonym of D. polystachya. Furthermore, neither D. oppositifolia nor the prior D. opposita has been found growing in North America, and neither has any historical range in China or East Asia; this grouping is native only to the Indian subcontinent and should not be confused with Dioscorea polystachya. Description Dioscorea polystachya vines typically grow long, and can be longer. They twine clockwise. The leaves are up to long and wide. They are lobed at the base and larger ones may have lobed edges. The arrangement is variable; they may be alternately or oppositely arranged or borne in whorls. In the leaf axils appear warty rounded bulbils under long. The bulbils are sometimes informally referred to as "yam berries" or "yamberries". New plants sprout from the bulbils or parts of them. The flowers of Chinese yam are