https://en.wikipedia.org/wiki/Lowest%20common%20ancestor
In graph theory and computer science, the lowest common ancestor (LCA) (also called least common ancestor) of two nodes v and w in a tree or directed acyclic graph (DAG) T is the lowest (i.e. deepest) node that has both v and w as descendants, where we define each node to be a descendant of itself (so if v has a direct connection from w, w is the lowest common ancestor). The LCA of v and w in T is the shared ancestor of v and w that is located farthest from the root. Computation of lowest common ancestors may be useful, for instance, as part of a procedure for determining the distance between pairs of nodes in a tree: the distance from v to w can be computed as the distance from the root to v, plus the distance from the root to w, minus twice the distance from the root to their lowest common ancestor. In a tree data structure where each node points to its parent, the lowest common ancestor can be easily determined by finding the first intersection of the paths from v and w to the root. In general, the computational time required for this algorithm is O(h), where h is the height of the tree (length of the longest path from a leaf to the root). However, there exist several algorithms for processing trees so that lowest common ancestors may be found more quickly. Tarjan's off-line lowest common ancestors algorithm, for example, preprocesses a tree in linear time to provide constant-time LCA queries. In general DAGs, similar algorithms exist, but with super-linear complexity. History The lowest common ancestor problem was defined by Aho, Hopcroft and Ullman (1973), but Harel and Tarjan (1984) were the first to develop an optimally efficient lowest common ancestor data structure. Their algorithm processes any tree in linear time, using a heavy path decomposition, so that subsequent lowest common ancestor queries may be answered in constant time per query. However, their data structure is complex and difficult to implement. Tarjan also found a simpler but less efficient algorithm, based on the union-find data structure, for computing lowest common
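A minimal sketch of the parent-pointer method described above; the Node class, its depth field, and the example tree are illustrative assumptions rather than anything specified in the article:

```python
# Naive LCA via parent pointers: equalize depths, then climb both nodes together.
class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.depth = 0 if parent is None else parent.depth + 1

def lca(v, w):
    # Walk the deeper node up until both are at the same depth,
    # then climb both until the paths to the root first intersect.
    while v.depth > w.depth:
        v = v.parent
    while w.depth > v.depth:
        w = w.parent
    while v is not w:
        v, w = v.parent, w.parent
    return v  # O(h) time, where h is the height of the tree

# Example tree: root -> a -> b and root -> c; the LCA of b and c is the root.
root = Node(); a = Node(root); b = Node(a); c = Node(root)
assert lca(b, c) is root
```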
https://en.wikipedia.org/wiki/Functionally%20graded%20material
In materials science, Functionally Graded Materials (FGMs) may be characterized by gradual variation in composition and structure over volume, resulting in corresponding changes in the properties of the material. The materials can be designed for specific functions and applications. Various approaches based on bulk (particulate processing), preform processing, layer processing and melt processing are used to fabricate functionally graded materials. History The concept of FGM was first considered in Japan in 1984 during a space plane project, where a combination of materials would serve as a thermal barrier capable of withstanding a surface temperature of 2000 K and a temperature gradient of 1000 K across a 10 mm section. In recent years this concept has become more popular in Europe, particularly in Germany. A transregional collaborative research center (SFB Transregio) has been funded since 2006 in order to exploit the potential of grading monomaterials, such as steel, aluminium and polypropylene, by using thermomechanically coupled manufacturing processes. General information FGMs can vary in composition, in structure (for example, porosity), or in both, to produce the resulting gradient. The gradient can be categorized as either continuous or discontinuous, the latter exhibiting a stepwise gradient. There are several examples of FGMs in nature, including bamboo and bone, which alter their microstructure to create a material property gradient. In biological materials, the gradients can be produced through changes in the chemical composition, structure, and interfaces, and through the presence of gradients spanning multiple length scales. Specifically, within the variation of chemical composition, the manipulation of mineralization, the presence of inorganic ions and biomolecules, and the level of hydration have all been known to cause gradients in plants and animals. The basic structural units of FGMs are elements or material ingredients repr
https://en.wikipedia.org/wiki/International%20Congress%20of%20Actuaries
The International Congress of Actuaries (ICA) is a conference held under the auspices of the International Actuarial Association every four years. The most recent conference was the 31st Congress, held in Berlin, Germany from 4 to 8 June 2018. The 33rd Congress will be held in Sydney, Australia in 2023 and the 34th in Tokyo, Japan in 2026.
Past congresses
1895 Brussels, Belgium
1898 London, United Kingdom
1900 Paris, France
1903 New York, United States
1906 Berlin, Germany
1909 Vienna, Austria
1912 Amsterdam, Netherlands
1915 St. Petersburg, Russia (organised but not held)
1927 London, United Kingdom
1930 Stockholm, Sweden
1934 Rome, Italy
1937 Paris, France
1940 Lucerne, Switzerland (organised but not held; papers published)
1951 Scheveningen, Netherlands
1954 Madrid, Spain
1957 New York, United States and Toronto, Canada
1960 Brussels, Belgium
1964 London and Edinburgh, United Kingdom
1968 Munich, Germany
1972 Oslo, Norway
1976 Tokyo, Japan
1980 Zurich and Lausanne, Switzerland
1984 Sydney, Australia
1988 Helsinki, Finland
1992 Montreal, Canada
1995 Brussels, Belgium
1998 Birmingham, United Kingdom
2002 Cancún, Mexico
2006 Paris, France
2010 Cape Town, South Africa
2014 Washington, D.C., United States
2018 Berlin, Germany
2023 Sydney, Australia
Future congresses
2026 Tokyo, Japan
External links
ICA 2006 Paris
ICA 2010 Cape Town
ICA 2014 Washington
ICA 2018 Berlin
ICA 2023 Sydney
https://en.wikipedia.org/wiki/Monoclonal%20antibody%20therapy
Monoclonal antibody therapy is a form of immunotherapy that uses monoclonal antibodies (mAbs) to bind monospecifically to certain cells or proteins. The objective is that this treatment will stimulate the patient's immune system to attack those cells. Alternatively, in radioimmunotherapy a radioactive dose localizes a target cell line, delivering lethal chemical doses. Antibodies are used to bind to molecules involved in T-cell regulation to remove inhibitory pathways that block T-cell responses. This is known as immune checkpoint therapy. It is possible to create a mAb that is specific to almost any extracellular/cell surface target. Research and development is underway to create antibodies for diseases (such as rheumatoid arthritis, multiple sclerosis, Alzheimer's disease, Ebola and different types of cancers). Antibody structure and function Immunoglobulin G (IgG) antibodies are large heterodimeric molecules, approximately 150 kDa, and are composed of two kinds of polypeptide chain, called the heavy (~50 kDa) and the light chain (~25 kDa). The two types of light chains are kappa (κ) and lambda (λ). By cleavage with the enzyme papain, the Fab (fragment-antigen binding) part can be separated from the Fc (fragment crystallizable region) part of the molecule. The Fab fragments contain the variable domains, which consist of three antibody hypervariable amino acid domains responsible for the antibody specificity embedded into constant regions. The four known IgG subclasses are involved in antibody-dependent cellular cytotoxicity. Antibodies are a key component of the adaptive immune response, playing a central role in both the recognition of foreign antigens and the stimulation of an immune response to them. The advent of monoclonal antibody technology has made it possible to raise antibodies against specific antigens presented on the surfaces of tumors. Monoclonal antibodies can be acquired in the immune system via passive immunity or active immunity. The advantage of
https://en.wikipedia.org/wiki/Circuit%20ID
A circuit ID is a company-specific identifier assigned to a data or voice network connection between two locations. This connection, often called a circuit, may then be leased to a customer and referred to by that ID. In this way, the circuit ID is similar to a serial number on any product sold from a retailer to a customer. Each circuit ID is unique, so a specific customer having many circuit connections sold to them would have many circuit IDs to refer to those connections. As an example of a use of the circuit ID, when a subscriber/customer has an issue (or trouble) with a circuit, they may contact the Controlling Local Exchange Carrier (Controlling LEC) telecommunications provider, identifying the circuit that has the issue by giving the LEC that circuit ID reference. The LEC would refer to their internal records for this circuit ID to take corrective action on the designated circuit. Telecom circuit ID formats Although telecommunication providers are not required to follow any specific standard for circuit IDs, many do. In the United States, LECs typically generate circuit IDs based on Telcordia Technologies' Common Language Information Services. Using the Telcordia standards for circuit naming allows a LEC to build a certain amount of intelligence into the name of a circuit. The way Telcordia has developed circuit IDs, different types of circuit connections require different formats for the circuit ID. In each format, different segments of the ID have very specific meaning. At one time, abbreviations used for circuit types were meaningful (for example, HC for high capacity) but the complexity of the business no longer allows for it. Now, with many different technologies and uses for circuit connections, different types of circuits may use different types of circuit ID formats that provide more meaning for that type of circuit. Below are "examples" of how one telecommunications provider, CenturyLink, has published their choice for circuit IDs for three
https://en.wikipedia.org/wiki/Configuration%20%28geometry%29
In mathematics, specifically projective geometry, a configuration in the plane consists of a finite set of points, and a finite arrangement of lines, such that each point is incident to the same number of lines and each line is incident to the same number of points. Although certain specific configurations had been studied earlier (for instance by Thomas Kirkman in 1849), the formal study of configurations was first introduced by Theodor Reye in 1876, in the second edition of his book Geometrie der Lage, in the context of a discussion of Desargues' theorem. Ernst Steinitz wrote his dissertation on the subject in 1894, and they were popularized by Hilbert and Cohn-Vossen's 1932 book Anschauliche Geometrie, reprinted in English as Geometry and the Imagination. Configurations may be studied either as concrete sets of points and lines in a specific geometry, such as the Euclidean or projective planes (these are said to be realizable in that geometry), or as a type of abstract incidence geometry. In the latter case they are closely related to regular hypergraphs and biregular bipartite graphs, but with some additional restrictions: every two points of the incidence structure can be associated with at most one line, and every two lines can be associated with at most one point. That is, the girth of the corresponding bipartite graph (the Levi graph of the configuration) must be at least six. Notation A configuration in the plane is denoted by (p_γ ℓ_π), where p is the number of points, ℓ the number of lines, γ the number of lines per point, and π the number of points per line. These numbers necessarily satisfy the equation p·γ = ℓ·π, as this product is the number of point-line incidences (flags). Configurations having the same symbol, say (p_γ ℓ_π), need not be isomorphic as incidence structures. For instance, there exist three different (9_3 9_3) configurations: the Pappus configuration and two less notable configurations. In some configurations, p = ℓ and consequently γ = π. These are called symmetric or balanced configurations a
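As a check on the notation just described, the incidence (flag) count can be written out for the Pappus configuration already named in the text:

```latex
% Flag count for a configuration (p_\gamma \; \ell_\pi): each of the p points lies on
% \gamma lines, and each of the \ell lines contains \pi points, so
\[
  p\gamma \;=\; \ell\pi,
  \qquad\text{e.g. for the Pappus configuration } (9_3\,9_3):\;
  9\cdot 3 \;=\; 27 \;=\; 9\cdot 3 .
\]
```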
https://en.wikipedia.org/wiki/Biochemistry%20of%20Alzheimer%27s%20disease
The biochemistry of Alzheimer's disease, the most common cause of dementia, is not yet very well understood. Alzheimer's disease (AD) has been identified as a proteopathy: a protein misfolding disease due to the accumulation of abnormally folded amyloid beta (Aβ) protein in the brain. Amyloid beta is a short peptide that is an abnormal proteolytic byproduct of the transmembrane protein amyloid-beta precursor protein (APP), whose function is unclear but thought to be involved in neuronal development. The presenilins are components of a proteolytic complex involved in APP processing and degradation. Amyloid beta monomers are soluble and contain short regions of beta sheet and polyproline II helix secondary structures in solution, though they are largely alpha helical in membranes; however, at sufficiently high concentration, they undergo a dramatic conformational change to form a beta sheet-rich tertiary structure that aggregates to form amyloid fibrils. These fibrils and oligomeric forms of Aβ deposit outside neurons in formations known as senile plaques. There are different types of plaques, including the diffuse, compact, cored or neuritic plaque types, as well as Aβ deposits in the walls of small blood vessels in the brain, called cerebral amyloid angiopathy. AD is also considered a tauopathy due to abnormal aggregation of the tau protein, a microtubule-associated protein expressed in neurons that normally acts to stabilize microtubules in the cell cytoskeleton. Like most microtubule-associated proteins, tau is normally regulated by phosphorylation; however, in Alzheimer's disease, hyperphosphorylated tau accumulates as paired helical filaments that in turn aggregate into masses inside nerve cell bodies known as neurofibrillary tangles and as dystrophic neurites associated with amyloid plaques. Although little is known about the process of filament assembly, depletion of a prolyl isomerase protein in the parvulin family has been shown to accelerate the accumu
https://en.wikipedia.org/wiki/Reduction%20criterion
In quantum information theory, the reduction criterion is a necessary condition a mixed state must satisfy in order for it to be separable. In other words, the reduction criterion is a separability criterion. It was first proved and independently formulated in 1999. Violation of the reduction criterion is closely related to the distillability of the state in question. Details Let H1 and H2 be Hilbert spaces of finite dimensions n and m respectively. L(Hi) will denote the space of linear operators acting on Hi. Consider a bipartite quantum system whose state space is the tensor product H = H1 ⊗ H2. An (un-normalized) mixed state ρ is a positive linear operator (density matrix) acting on H. A linear map Φ: L(H2) → L(H1) is said to be positive if it preserves the cone of positive elements, i.e. if A is positive then Φ(A) is also. From the one-to-one correspondence between positive maps and entanglement witnesses, we have that a state ρ is entangled if and only if there exists a positive map Φ such that (I ⊗ Φ)(ρ) is not positive. Therefore, if ρ is separable, then for all positive maps Φ, (I ⊗ Φ)(ρ) ≥ 0. Thus every positive, but not completely positive, map Φ gives rise to a necessary condition for separability in this way. The reduction criterion is a particular example of this. Suppose H1 = H2. Define the positive map Φ: L(H2) → L(H1) by Φ(A) = Tr(A)·I − A. It is known that Φ is positive but not completely positive. So a mixed state ρ being separable implies (I ⊗ Φ)(ρ) ≥ 0. Direct calculation shows that the above expression is the same as ρ1 ⊗ I − ρ ≥ 0, where ρ1 is the partial trace of ρ with respect to the second system. The dual relation I ⊗ ρ2 − ρ ≥ 0 is obtained in the analogous fashion. The reduction criterion consists of the above two inequalities. Connection with Fréchet bounds The above last two inequalities together with lower bounds for ρ can be seen as quantum Fréchet inequalities, that is as the quantum analogues of the classical Fréchet probabilistic bounds, that hold for separable quantum states. The upper bounds are the previous ones ρ1 ⊗ I ≥ ρ, I ⊗ ρ2 ≥ ρ, an
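A small numerical illustration, a sketch assuming nothing beyond NumPy: it checks the inequality ρ1 ⊗ I − ρ ≥ 0 for a product state and shows that it fails for a maximally entangled two-qubit state. The function names are ad hoc, not taken from the article or any library.

```python
import numpy as np

def partial_trace_second(rho, n, m):
    # Trace out the second (m-dimensional) subsystem of an (n*m)x(n*m) density matrix.
    return np.trace(rho.reshape(n, m, n, m), axis1=1, axis2=3)

def reduction_criterion_holds(rho, n, m):
    # Reduction criterion: rho1 (x) I - rho must be positive semidefinite if rho is separable.
    rho1 = partial_trace_second(rho, n, m)
    op = np.kron(rho1, np.eye(m)) - rho
    return np.min(np.linalg.eigvalsh(op)) >= -1e-12

# Separable example: the product state |0><0| (x) |+><+|  ->  criterion holds.
ket0 = np.array([1.0, 0.0]); ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho_sep = np.kron(np.outer(ket0, ket0), np.outer(ketp, ketp))
print(reduction_criterion_holds(rho_sep, 2, 2))   # True

# Entangled example: the Bell state (|00> + |11>)/sqrt(2)  ->  criterion is violated.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(bell, bell)
print(reduction_criterion_holds(rho_bell, 2, 2))  # False
```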
https://en.wikipedia.org/wiki/D%27Alembert%27s%20equation
In mathematics, d'Alembert's equation is a first order nonlinear ordinary differential equation, named after the French mathematician Jean le Rond d'Alembert. The equation reads as y = x f(p) + g(p), where p = dy/dx. After differentiating once and rearranging, we have dx/dp − [f'(p)/(p − f(p))] x = g'(p)/(p − f(p)). The above equation is linear in x as a function of p. When f(p) = p, d'Alembert's equation is reduced to Clairaut's equation.
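The reduction just described, written out step by step (this is the standard derivation, stated here for completeness rather than quoted from the article):

```latex
% d'Alembert's equation and its reduction to a linear ODE in x(p):
\[
  y = x\,f(p) + g(p), \qquad p = \frac{dy}{dx}.
\]
% Differentiate with respect to x and collect the dp/dx terms:
\[
  p = f(p) + \bigl(x\,f'(p) + g'(p)\bigr)\frac{dp}{dx}
  \;\;\Longrightarrow\;\;
  \frac{dx}{dp} - \frac{f'(p)}{p - f(p)}\,x = \frac{g'(p)}{p - f(p)},
\]
% which is linear in x as a function of p; setting f(p) = p recovers Clairaut's equation.
```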
https://en.wikipedia.org/wiki/SuperPrime
SuperPrime is a computer program used for calculating the primality of a large set of positive natural numbers. Because of its multi-threaded nature and dynamic load scheduling, it scales excellently when using more than one thread (execution core). It is commonly used as an overclocking benchmark to test the speed and stability of a system. Background information In August 1995, the calculation of Pi up to 4,294,960,000 decimal digits was achieved using a supercomputer at the University of Tokyo. The program used to achieve this was ported to personal computers, for operating systems such as Windows NT and Windows 95, and called Super-PI. SuperPrime is another take on this procedure, replacing the raw floating-point calculations of the value of Pi with more complex instructions that calculate the primality of a set of natural numbers. Landmarks On September 29, 2006, a milestone was reached when bachus_anonym of www.xtremesystems.org broke the 30-second barrier using a highly overclocked Core 2 Duo machine. See also Erodov.com, the 'home forum' for the SuperPrime benchmark.
https://en.wikipedia.org/wiki/Macedonian%20Ecological%20Society
The Macedonian Ecological Society (MES) was founded in 1972 in what was then known as the Socialist Republic of Macedonia. Principal tasks and goals Encourage the development of ecological scientific disciplines in North Macedonia Promote the improvement of ecological and environmental education in the Macedonian education system Raise public environmental awareness Draw attention to environmental problems and, using scientific arguments, stimulate public pressure on institutions and authorities to invest in environmental protection enterprises Provide expert assistance to the Macedonian Government in establishing legislative environmental policy Promote monitoring systems and follow-up for environment quality in North Macedonia Nature conservation with special emphasis on biodiversity Principal activities in recent years Complex ecological investigations in the beech forests in Mavrovo National Park 1997-2000 Balkan Bear Carnivores Conservation Network (BALKAN NET) - (Led by Arcturos, Greece) TEDDY Project (Led by Arcturos, Greece) Symposium "Sustainable development of transboundary Prespa region," Prespa, 2000 "Integrated Preservation of the Balkan Wolf Population by Diminishing the Human-Carnivore Conflict and Altering the Attitudes towards the Wolf", 2000. MES partnered with the Bulgarian Wilderness Fund and the Albanian Society for the Protection of Birds and Mammals (ASPBM) ECO-NET: Creation of a network for legal protection and management of protected areas in the southern Balkans, in cooperation with Arcturos (Greece), Journalist Environmental Center - ERINA (North Macedonia) and NGOs from Bulgaria, Albania and Yugoslavia (Project in the frame of DAC) (unclear), 2001–2002 "BEZFOS" - Use of phosphate-free detergents in the Lake Ohrid regions of Ohrid and Struga (led by Farmahem (Pharmachem, Skopje), 2003. Vulture conservation project, North Macedonia, 2003- Preparation of an Action Plan for sustainable development of the village of Galicn
https://en.wikipedia.org/wiki/Interchange%20of%20limiting%20operations
In mathematics, the study of interchange of limiting operations is one of the major concerns of mathematical analysis, in that two given limiting operations, say L and M, cannot be assumed to give the same result when applied in either order. One of the historical sources for this theory is the study of trigonometric series. Formulation In symbols, the assumption LM = ML, where the left-hand side means that M is applied first, then L, and vice versa on the right-hand side, is not a valid equation between mathematical operators, under all circumstances and for all operands. An algebraist would say that the operations do not commute. The approach taken in analysis is somewhat different. Conclusions that assume limiting operations do 'commute' are called formal. The analyst tries to delineate conditions under which such conclusions are valid; in other words mathematical rigour is established by the specification of some set of sufficient conditions for the formal analysis to hold. This approach justifies, for example, the notion of uniform convergence. It is relatively rare for such sufficient conditions to be also necessary, so that a sharper piece of analysis may extend the domain of validity of formal results. Professionally speaking, therefore, analysts push the envelope of techniques, and expand the meaning of well-behaved for a given context. G. H. Hardy wrote that "The problem of deciding whether two given limit operations are commutative is one of the most important in mathematics". An opinion apparently not in favour of the piece-wise approach, but of leaving analysis at the level of heuristic, was that of Richard Courant. Examples Examples abound, one of the simplest being that for a double sequence a_{m,n}: it is not necessarily the case that the operations of taking the limits as m → ∞ and as n → ∞ can be freely interchanged. For example take a_{m,n} = 2^{m−n}, in which taking the limit first with respect to n gives 0, and with respect to m gives ∞. Man
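The two iterated limits for that double sequence, written out explicitly:

```latex
% Iterated limits of a_{m,n} = 2^{m-n} taken in the two possible orders:
\[
  \lim_{m\to\infty}\Bigl(\lim_{n\to\infty} 2^{\,m-n}\Bigr) = \lim_{m\to\infty} 0 = 0,
  \qquad\text{whereas}\qquad
  \lim_{m\to\infty} 2^{\,m-n} = \infty \text{ for every fixed } n,
\]
% so the limit taken first in n gives 0 while the limit taken first in m diverges:
% the two limiting operations cannot be interchanged here.
```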
https://en.wikipedia.org/wiki/Pfister%20form
In mathematics, a Pfister form is a particular kind of quadratic form, introduced by Albrecht Pfister in 1965. In what follows, quadratic forms are considered over a field F of characteristic not 2. For a natural number n, an n-fold Pfister form over F is a quadratic form of dimension 2^n that can be written as a tensor product of quadratic forms ⟨⟨a1, a2, ..., an⟩⟩ = ⟨1, −a1⟩ ⊗ ⟨1, −a2⟩ ⊗ ... ⊗ ⟨1, −an⟩, for some nonzero elements a1, ..., an of F. (Some authors omit the signs in this definition; the notation here simplifies the relation to Milnor K-theory, discussed below.) An n-fold Pfister form can also be constructed inductively from an (n−1)-fold Pfister form q and a nonzero element a of F, as ⟨1, −a⟩ ⊗ q = q ⊕ (−a)q. So the 1-fold and 2-fold Pfister forms look like: ⟨⟨a⟩⟩ = ⟨1, −a⟩ and ⟨⟨a, b⟩⟩ = ⟨1, −a, −b, ab⟩. For n ≤ 3, the n-fold Pfister forms are norm forms of composition algebras. In that case, two n-fold Pfister forms are isomorphic if and only if the corresponding composition algebras are isomorphic. In particular, this gives the classification of octonion algebras. The n-fold Pfister forms additively generate the n-th power I^n of the fundamental ideal of the Witt ring of F. Characterizations A quadratic form q over a field F is multiplicative if, for vectors of indeterminates x and y, we can write q(x)·q(y) = q(z) for some vector z of rational functions in the x and y over F. Isotropic quadratic forms are multiplicative. For anisotropic quadratic forms, Pfister forms are multiplicative, and conversely. For n-fold Pfister forms with n ≤ 3, this had been known since the 19th century; in that case z can be taken to be bilinear in x and y, by the properties of composition algebras. It was a remarkable discovery by Pfister that n-fold Pfister forms for all n are multiplicative in the more general sense here, involving rational functions. For example, he deduced that for any field F and any natural number n, the set of sums of 2^n squares in F is closed under multiplication, using that the quadratic form x1^2 + ... + x_{2^n}^2 is an n-fold Pfister form (namely, ⟨⟨−1, −1, ..., −1⟩⟩). Another striking feature of
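A concrete instance of the multiplicativity statement for sums of 2^n squares, in the simplest case n = 1 (supplied here as an illustration, not taken from the article):

```latex
% n = 1: the 1-fold Pfister form <<-1>> = <1,1> is x_1^2 + x_2^2, and its
% multiplicativity is the classical two-square (Brahmagupta-Fibonacci) identity,
% showing that sums of two squares are closed under multiplication:
\[
  (x_1^2 + x_2^2)(y_1^2 + y_2^2)
  \;=\; (x_1 y_1 - x_2 y_2)^2 + (x_1 y_2 + x_2 y_1)^2 .
\]
```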
https://en.wikipedia.org/wiki/Norm%20form
In mathematics, a norm form is a homogeneous form in n variables constructed from the field norm of a field extension L/K of degree n. That is, writing N for the norm mapping to K, and selecting a basis e1, ..., en for L as a vector space over K, the form is given by N(x1e1 + ... + xnen) in variables x1, ..., xn. In number theory norm forms are studied as Diophantine equations, where they generalize, for example, the Pell equation. For this application the field K is usually the rational number field, the field L is an algebraic number field, and the basis is taken of some order in the ring of integers OL of L. See also Trace form
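A worked instance of the construction, for a real quadratic extension (the specific field is chosen only as an illustration):

```latex
% Norm form for L = K(\sqrt{d}) over K = \mathbb{Q}, with basis e_1 = 1, e_2 = \sqrt{d}:
\[
  N(x_1 + x_2\sqrt{d}) = (x_1 + x_2\sqrt{d})(x_1 - x_2\sqrt{d}) = x_1^2 - d\,x_2^2 ,
\]
% so the Diophantine equation N(x_1 e_1 + x_2 e_2) = 1 is exactly the Pell equation
% x_1^2 - d x_2^2 = 1 that the norm-form equations generalize.
```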
https://en.wikipedia.org/wiki/Comparison%20of%20business%20integration%20software
This article is a comparison of notable business integration and business process automation software. General Scope Scope of this comparison: Service-oriented architecture implementations; Message-oriented middleware and message brokers; Enterprise service bus implementations; BPEL implementations; Enterprise application integration software. General information Compatibility and interoperability Operating system support Hardware support Supported hardware depends on supported operating systems. Database support Web servers support See also List of application servers List of BPEL engines List of BPMN 2.0 engines Notes Footnotes
https://en.wikipedia.org/wiki/Involuntary%20park
Involuntary park is a neologism coined by science fiction author and environmentalist Bruce Sterling to describe previously inhabited areas that for environmental, economic, or political reasons have, in Sterling's words, "lost their value for technological instrumentalism" and been allowed to return to an overgrown, feral state. Origin of the term Discussing involuntary parks in the context of rising sea levels due to global warming, Sterling writes: While Sterling's original vision of an involuntary park was of places abandoned due to collapse of economy or rising sea-level, the term has come to be used on any land where human inhabitation or use for one reason or other has been stopped, including military exclusion zones, minefields, and areas considered dangerous due to pollution. Existing examples Abandoned human settlements and developments overtaken by foliage and wild animals are known to exist in numerous locations around the world. Ghost towns, disused railways, mines, and airfields, or areas experiencing urban decay or deindustrialization may be subject to a resurgence in ecological proliferation as human presence is reduced. The Chernobyl Exclusion Zone has seen the return of previously extirpated indigenous species such as boars, wolves, and brown bears, as well as a thriving herd of re-introduced Przewalski's horses. While wildlife flourishes in the least affected areas, tumors, infertility, and lower brain weight are reported in many small animals (including mice and birds) living in areas subject to severe contamination. The former Rocky Mountain Arsenal in Denver was abandoned for years due to contamination from production of chemical weapons, yet the wildlife returned and the site was eventually turned into a wildlife refugium. Involuntary parks where human presence is severely limited can host animal species that are otherwise extremely threatened in their range. The Korean Demilitarized Zone is hypothesized to house not only Korean tigers,
https://en.wikipedia.org/wiki/Pumping%20%28computer%20systems%29
Pumping, when referring to computer systems, is an informal term for transmitting a data signal more than one time per clock signal. Overview Early types of system memory (RAM), such as SDRAM, transmitted data on only the rising edge of the clock signal. With the advent of double data rate synchronous dynamic RAM or DDR SDRAM, the data was transmitted on both rising and falling edges. However, quad-pumping has been used for a while for the front-side bus (FSB) of a computer system. This works by transmitting data at the rising edge, peak, falling edge, and trough of each clock cycle. Intel computer systems (and others) use this technology to reach effective FSB speeds of 1600 MT/s (million transfers per second), even though the FSB clock speed is only 400 MHz (cycles per second). A phase-locked loop in the CPU then multiplies the FSB clock by a factor in order to get the CPU speed. Example: A Core 2 Duo E6600 processor is listed as 2.4 GHz with a 1066 MHz FSB. The FSB is known to be quad-pumped, so its clock frequency is 1066/4 = 266 MHz. Therefore, the CPU multiplier is 2400/266, or 9×. The DDR2 RAM that it is compatible with is known to be double-pumped and to have an Input/Output Bus twice that of the true FSB frequency (effectively transferring data 4 times a clock cycle), so to run the system synchronously (see front-side bus) the type of RAM that is appropriate is quadruple 266 MHz, or DDR2-1066 (PC2-8400 or PC2-8500, depending on the manufacturer's labeling.).
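The arithmetic in the example above, spelled out as a short script; the numbers are the ones quoted in the text, and the helper function is purely illustrative:

```python
# Relationship between effective transfer rate, pumping factor, true FSB clock,
# and CPU multiplier, using the Core 2 Duo E6600 figures quoted above.
def fsb_summary(effective_mts, pump_factor, cpu_mhz):
    fsb_clock = effective_mts / pump_factor   # true FSB clock in MHz
    multiplier = cpu_mhz / fsb_clock          # phase-locked-loop multiplier
    return fsb_clock, multiplier

fsb_clock, multiplier = fsb_summary(effective_mts=1066, pump_factor=4, cpu_mhz=2400)
print(f"FSB clock ~ {fsb_clock:.0f} MHz, CPU multiplier ~ {multiplier:.0f}x")
# prints roughly: FSB clock ~ 266 MHz, CPU multiplier ~ 9x
```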
https://en.wikipedia.org/wiki/Sound%20amplification%20by%20stimulated%20emission%20of%20radiation
Sound amplification by stimulated emission of radiation (SASER) refers to a device that emits acoustic radiation. It focuses sound waves in a way that they can serve as accurate and high-speed carriers of information in many kinds of applications—similar to uses of laser light. Acoustic radiation (sound waves) can be emitted by using the process of sound amplification based on stimulated emission of phonons. Sound (or lattice vibration) can be described by a phonon just as light can be considered as photons, and therefore one can state that SASER is the acoustic analogue of the laser. In a SASER device, a source (e.g., an electric field as a pump) produces sound waves (lattice vibrations, phonons) that travel through an active medium. In this active medium, a stimulated emission of phonons leads to amplification of the sound waves, resulting in a sound beam coming out of the device. The sound wave beams emitted from such devices are highly coherent. The first successful SASERs were developed in 2009. Terminology Instead of a feedback-built wave of electromagnetic radiation (i.e., a laser beam), a SASER delivers a sound wave. SASER may also be referred to as phonon laser, acoustic laser or sound laser. Uses and applications SASERs could have wide applications. Apart from facilitating the investigation of terahertz-frequency ultrasound, the SASER is also likely to find uses in optoelectronics (electronic devices that detect and control light—as a method of transmitting a signal from an end to the other of, for instance, fiber optics), as a method of signal modulation and/or transmission. Such devices could be high precision measurement instruments and they could lead to high energy focused sound. Using SASERs to manipulate electrons inside semiconductors could theoretically result in terahertz-frequency computer processors, much faster than the current chips. History This concept can be more conceivable by imagining it in analogy to laser theory. Theodore Maim
https://en.wikipedia.org/wiki/EDINA
EDINA is a centre for digital expertise, based at the University of Edinburgh as a division of the Information Services Group. Services EDINA front-end services (those accessed directly by the user) are available free at the point of use for University of Edinburgh students and academic staff in the UK working on and off campus. Access to services by external universities, colleges or schools involves licence or subscription and requires some form of authentication by end users. Some services are also provided to researchers outside the UK academic sector. A key service, offered since January 2000, is Digimap, with its core Ordnance Survey collection. Since 2017, EDINA has also offered Noteable, an online hosting platform for computational notebooks, which is built from the open-source Jupyter Notebook environment. History Edinburgh University Data Library EDINA has its origin in Edinburgh University Data Library, which was set up in 1983/4. Researchers at the University of Edinburgh working with data from government surveys were looking to the university to provide university-wide provision for files that were too large to be stored on individual computing accounts. Arrangements for the University Library to purchase the small area statistics from the 1981 Population Census became the opportunity to petition action by the Program Library Unit (PLU) - which had both local responsibility for software provision and a national role to convert software for various computing platforms for UK universities. The PLU was also active in the design and implementation of the code for SASPAC, the program used widely for the extraction of census data, as part of a project led by David Rhind of Durham University. In response, the Data Library was formed as a small group within the PLU led by Trevor Jones plus 1.5 staff: use of a programmer and a computing assistant. Peter Burnhill took over full-time responsibility in 1984. Early holdings were the 1981 UK population census,
https://en.wikipedia.org/wiki/Functional%20square%20root
In mathematics, a functional square root (sometimes called a half iterate) is a square root of a function with respect to the operation of function composition. In other words, a functional square root of a function g is a function f satisfying f(f(x)) = g(x) for all x. Notation Notations expressing that f is a functional square root of g include f = g^{1/2} and f = g^{[1/2]}. History The functional square root of the exponential function (now known as a half-exponential function) was studied by Hellmuth Kneser in 1950. The solutions of f(f(x)) = x over the real numbers (the involutions of the real numbers) were first studied by Charles Babbage in 1815, and this equation is called Babbage's functional equation. A particular solution is f(x) = b − x, for any constant b. Babbage noted that for any given solution f, its functional conjugate Ψ⁻¹ ∘ f ∘ Ψ by an arbitrary invertible function Ψ is also a solution. In other words, the group of all invertible functions on the real line acts on the subset consisting of solutions to Babbage's functional equation by conjugation. Solutions A systematic procedure to produce arbitrary functional n-roots (including arbitrary real, negative, and infinitesimal n) of functions relies on the solutions of Schröder's equation. Infinitely many trivial solutions exist when the domain of a root function f is allowed to be sufficiently larger than that of g. Examples f(x) = 2x^2 is a functional square root of g(x) = 8x^4. A functional square root of the nth Chebyshev polynomial, g(x) = Tn(x), is f(x) = cos(√n · arccos(x)), which in general is not a polynomial. f(x) = x/(√2 + x(1 − √2)) is a functional square root of g(x) = x/(2 − x). See also Iterated function Function composition Abel equation Schröder's equation Flow (mathematics) Superfunction Fractional calculus Half-exponential function
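A quick numerical check of the first example listed above; the small helper is ad hoc:

```python
import math

def compose(f, g):
    # Return the composition f∘g.
    return lambda x: f(g(x))

f = lambda x: 2 * x**2   # candidate functional square root
g = lambda x: 8 * x**4   # target function

ff = compose(f, f)
# f(f(x)) = 2*(2x^2)^2 = 8x^4, so f∘f should agree with g everywhere:
for x in [-1.5, -0.3, 0.0, 0.7, 2.0]:
    assert math.isclose(ff(x), g(x))
print("f∘f matches g on the sample points")
```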
https://en.wikipedia.org/wiki/Stellar%20mass%20loss
Stellar mass loss is a phenomenon observed in stars. All stars lose some mass over their lives at widely varying rates. Triggering events can cause the sudden ejection of a large portion of the star's mass. Stellar mass loss can also occur when a star gradually loses material to a binary companion or into interstellar space. Causes A number of factors can contribute to the loss of mass in giant stars, including: Gravitational attraction of a binary companion Coronal mass ejection-type events Ascension to red giant or red supergiant status Solar wind The Sun, a low-mass star, loses mass due to the solar wind at a very small rate, solar masses per year. Gravitational mass loss Often when a star is a member of a pair of close-orbiting binary stars, the tidal attraction of the gasses near the center of mass is sufficient to pull gas from one star onto its partner. This effect is especially prominent when the partner is a white dwarf, neutron star, or black hole. Mass ejection Certain classes of stars, especially Wolf-Rayet stars are sufficiently massive and distended that their hold on their upper layers is rather weak. Often, events such as solar flares and coronal mass ejections will then be sufficiently powerful to blast some of the upper material into space. Red giant mass loss Stars which have entered the red giant phase are notorious for rapid mass loss. As above, the gravitational hold on the upper layers is weakened, and they may be shed into space by violent events such as the beginning of a helium flash in the core. The final stage of a red giant's life will also result in prodigious mass loss as the star loses its outer layers to form a planetary nebula. See also Red giant Red supergiant Betelgeuse Coronal mass ejection Helium flash
https://en.wikipedia.org/wiki/Rostelecom
Rostelecom (Ростелеком) is Russia’s largest provider of digital services for a wide variety of consumers, households, private businesses, government and municipal authorities, and other telecom providers. Rostelecom interconnects all local public operators’ networks into a single national network for long-distance service. In other words, if one makes a long-distance call or originates Internet contact to or from Russia, it is likely that Rostelecom is providing part of the service. The company's stock trades primarily on the Moscow Exchange. History Prior to 1990, responsibility for the provision of telecommunications services lay with the Ministry of Communications of the USSR. On June 26, 1990, the Ministry of Communications of the USSR established a state-owned joint-stock company, Sovtelekom, which was given the rights to operate the telecommunications network of the USSR. On December 30, 1992, by order of the State Property Committee of Russia, the state-owned enterprise Rostelecom was organized; it consisted of 20 state long-distance and international calling enterprises, as well as the communication equipment enterprise Intertelekom. Throughout the 1990s, the company, which was part of Svyazinvest, was the sole long-distance operator in Russia. Alongside it, local companies operated in the different regions of Russia under the umbrella of Svyazinvest while Rostelecom connected between their networks. In 2001, these companies were merged to form a number of regional incumbent telecommunications operators: CentreTelecom, SibirTelecom, Dalsvyaz, Uralsvyazinform, VolgaTelecom, North-West Telecom, Southern Telecommunications Company and Dagsvyazinform. In 2011, Svyazinvest was liquidated, with the regional subsidiaries merged into Rostelecom. On October 18, 2006, Rostelecom received a certificate of quality for its IP-MPLS network and became a backbone ISP. In December 2006, Rostelecom and the telecommunications company KDDI of Japan, under the "Transit Europe - Asia" project, signed an agreement to bui
https://en.wikipedia.org/wiki/International%20Young%20Physicists%27%20Tournament
The International Young Physicists' Tournament (IYPT), sometimes referred to as the “Physics World Cup”, is a scientific competition between teams of secondary school students. It mimics, as close as possible, the real-world scientific research and the process of presenting and defending the results obtained. Participants have almost a year to work on 17 open-ended inquiry problems that are published yearly in late July. A good part of the problems involves easy-to-reproduce phenomena presenting unexpected behaviour. The aim of the solutions is not to calculate or reach “the correct answer” as there is no such notion here. The Tournament is rather conclusions-oriented as participants have to design and perform experiments, and to draw conclusions argued from the experiments’ outcome. The competition itself is not a pen-and-paper competition but an enactment of a scientific discussion (or a defence of a thesis) where participants take the roles of Reporter, Opponent and Reviewer, thus learning about peer review early on in their school years. Discussion-based sessions are called Physics Fights and the performances of the teams are judged by expert physicists. Teams can take quite different routes to tackle the same problem. As long as they stay within the broadly defined statement of the problem, all routes are legitimate and teams will be judged according to the depths reached by their investigations. The IYPT is a week-long event in which currently around 150 international pre-university contestants participate. IYPT is associated with The European Physical Society (EPS) and in 2013, IYPT was awarded the medal of The International Union of Pure and Applied Physics (IUPAP) "in recognition of its inspiring and wide-ranging contribution to physics education that has touched many lives and countries, over the past 25 years". Tournament structure The most important structural parts of the IYPT are the Physics Fights. There are 5 Selective Fights, and one Final F
https://en.wikipedia.org/wiki/Desargues%20configuration
In geometry, the Desargues configuration is a configuration of ten points and ten lines, with three points per line and three lines per point. It is named after Girard Desargues. The Desargues configuration can be constructed in two dimensions from the points and lines occurring in Desargues's theorem, in three dimensions from five planes in general position, or in four dimensions from the 5-cell, the four-dimensional regular simplex. It has a large group of symmetries, taking any point to any other point and any line to any other line. It is also self-dual, meaning that if the points are replaced by lines and vice versa using projective duality, the same configuration results. Graphs associated with the Desargues configuration include the Desargues graph (its graph of point-line incidences) and the Petersen graph (its graph of non-incident lines). The Desargues configuration is one of ten different configurations with ten points and lines, three points per line, and three lines per point, nine of which can be realized in the Euclidean plane. Constructions Two dimensions Two triangles ABC and abc are said to be in perspective centrally if the lines Aa, Bb, and Cc meet in a common point, called the center of perspectivity. They are in perspective axially if the intersection points of the corresponding triangle sides (AB with ab, AC with ac, and BC with bc) all lie on a common line, the axis of perspectivity. Desargues's theorem in geometry states that these two conditions are equivalent: if two triangles are in perspective centrally then they must also be in perspective axially, and vice versa. When this happens, the ten points and ten lines of the two perspectivities (the six triangle vertices, three crossing points, and center of perspectivity, and the six triangle sides, three lines through corresponding pairs of vertices, and axis of perspectivity) together form an instance of the Desargues configuration. Three dimensions Although it may be embedded in two dimensions, the Desargues configuration has
https://en.wikipedia.org/wiki/Kay%20Toliver
Kay Toliver is a teacher specialising in mathematics education. Background Kay Toliver was born and raised in East Harlem and the South Bronx. A product of the New York City public school system, she graduated from Harriet Beecher Stowe Junior High, Walton High School and Hunter College (AB 1967, MA 1971) with graduate work at the City College of New York in mathematics. For more than 30 years, Kay Toliver taught mathematics and communication arts at P.S. 72/East Harlem Tech in Community School District 4. Prior to instructing seventh and eighth grade students, she taught grades one through six for 15 years. "Becoming a teacher was the fulfillment of a childhood dream. My parents always stressed that education was the key to a better life. By becoming a teacher, I hoped to inspire African-American and Hispanic youths to realize their own dreams. I wanted to give something back to the communities I grew up in." At East Harlem Tech, with the support of her principal, she established the "Challenger" program. The program, for grades 4-8, presents the basics of geometry and algebra in an integrated curriculum. This is a program for "gifted" students, but following her belief that all children can learn, she accepted students from all ability levels. Teaching methods The Math Fair These events are similar to science fairs but involve students in creating and displaying projects relating to mathematics. Participants had to be able to explain thoroughly the mathematical theories and concepts behind their projects, which were placed on display at the school so that students from the lower grades could examine the older students' research. Students have created mathematic games such as "Dunking for Prime Numbers," "Fishing for Palindromes," and "Black Jack Geometry." The Math Trail Kay Toliver developed a lesson called the "Math Trail" to give students an appreciation for the community as well as an opportunity to see mathematics at work. To create a Math Trail, t
https://en.wikipedia.org/wiki/Integral%20cryptanalysis
In cryptography, integral cryptanalysis is a cryptanalytic attack that is particularly applicable to block ciphers based on substitution–permutation networks. It was originally designed by Lars Knudsen as a dedicated attack against Square, so it is commonly known as the Square attack. It was also extended to a few other ciphers related to Square: CRYPTON, Rijndael, and SHARK. Stefan Lucks generalized the attack to what he called a saturation attack and used it to attack Twofish, which is not at all similar to Square, having a radically different Feistel network structure. Forms of integral cryptanalysis have since been applied to a variety of ciphers, including Hierocrypt, IDEA, Camellia, Skipjack, MISTY1, MISTY2, SAFER++, KHAZAD, and FOX (now called IDEA NXT). Unlike differential cryptanalysis, which uses pairs of chosen plaintexts with a fixed XOR difference, integral cryptanalysis uses sets or even multisets of chosen plaintexts of which part is held constant, and another part varies through all possibilities. For example, an attack might use 256 chosen plaintexts that have all but 8 of their bits the same, but all differ in those 8 bits. Such a set necessarily has an XOR sum of 0, and the XOR sums of the corresponding sets of ciphertexts provide information about the cipher's operation. This contrast between the differences of pairs of texts and the sums of larger sets of texts inspired the name "integral cryptanalysis", borrowing the terminology of calculus.
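A tiny illustration of the kind of structured plaintext set described above, assuming nothing beyond the standard library: one byte position takes all 256 values while every other byte is held constant, and the XOR sum over every position of the set is zero.

```python
from functools import reduce

# Build 256 chosen 4-byte plaintexts: bytes 0, 2, 3 are constant, byte 1 is "active"
# and runs through all 256 possible values.
plaintexts = [bytes([0xAB, v, 0xCD, 0xEF]) for v in range(256)]

# XOR-sum each byte position over the whole set.
xor_sums = [reduce(lambda a, b: a ^ b, (p[i] for p in plaintexts)) for i in range(4)]
print(xor_sums)  # [0, 0, 0, 0] -- the constant positions cancel in pairs, and the
                 # active position covers 0..255, whose XOR is also 0.
```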
https://en.wikipedia.org/wiki/Species%20Survival%20Plan
The American Species Survival Plan or SSP program was developed in 1981 by the (American) Association of Zoos and Aquariums to help ensure the survival of selected species in zoos and aquariums, most of which are threatened or endangered in the wild. SSP program SSP programs focus on animals that are near threatened, threatened, endangered, or otherwise in danger of extinction in the wild, when zoo and zoology conservationists believe captive breeding programs will aid in their chances of survival. These programs help maintain healthy and genetically diverse animal populations within the Association of Zoos and Aquariums-accredited zoo community. AZA accredited zoos and AZA conservation partners that are involved in SSP programs engage in cooperative population management and conservation efforts that include research, conservation genetics, public education, reintroduction, and in situ or field conservation projects. The process for selecting recommended species is guided by Taxon Advisory Groups, whose sole objective is to curate Regional Collection Plans for the conservation needs of a species and how AZA institutions will cooperate to reach those needs. Today, there are almost 300 existing SSP programs. The SSP has been met with widespread success in ensuring that, should a species population become functionally extinct in its natural habitat, a viable population still exists within a zoological setting. This has also led to AZA species reintroduction programs, examples of which include the black-footed ferret, the California condor, the northern riffleshell, the golden lion tamarin, the Karner blue butterfly, the Oregon spotted frog, the palila finch, the red wolf, and the Wyoming toad. SSP master plan An SSP master plan is a document produced by the SSP coordinator (generally a zoo professional under the guidance of an elected management committee) for a certain species. This document sets ex situ population goals and other management recommendations to ac
https://en.wikipedia.org/wiki/Crystallization%20adjutant
A crystallization adjutant is a material used to promote crystallization, normally in a context where a material does not crystallize naturally from a pure solution. Additives in Macromolecular Crystallization In macromolecular crystallography, the term additive is used instead of adjutant. An additive can either interact directly with the protein, and become incorporated at a fixed position in the resulting crystal, or have a role within the disordered solvent, which in protein crystals constitutes roughly 50% of the lattice volume. Polyethylene glycols of various molecular weights and high-ionic-strength salts such as ammonium sulfate and sodium citrate that induce protein precipitation when used in high concentrations are classified as precipitants, while certain other salts such as zinc sulfate or calcium sulfate that may cause a protein to precipitate vigorously even when used in small amounts are considered adjutants. Crystallization adjutants are considered additives when they are effective at relatively low concentrations. The distinction between buffers and adjutants is also fuzzy. Buffer molecules can become part of the lattice (for example, HEPES becomes incorporated in crystals of human neutrophil collagenase) but their main use is to maintain the rather precise pH requirements for crystallization that many proteins have. Commonly used buffers such as citrate have a high ionic strength and at the typical buffer concentrations they also act as precipitants. Various species such as Ca2+ and Zn2+ are a biological requirement for certain proteins to fold correctly and certain co-factors are needed to maintain a well defined conformation. Certain strategies, like replacing precipitants and buffers with others intended to have a similar effect, have been used to differentiate between the roles played in protein crystallization by the various components in the crystallization solution. Additives for Membrane Protein Crystallization For membrane proteins, the
https://en.wikipedia.org/wiki/Ohio%20Supercomputer%20Center
The Ohio Supercomputer Center (OSC) is a supercomputer facility located on the western end of the Ohio State University campus, just north of Columbus. Established in 1987, the OSC partners with Ohio universities, labs and industries, providing students and researchers with high performance computing, advanced cyberinfrastructure, research and computational science education services. OSC is a member organization of the Ohio Technology Consortium, the technology and information division of the Ohio Department of Higher Education. OSC works with an array of statewide/regional/national communities, including education, academic research, industry, and state government. The Center's research programs are primarily aligned with three of several key areas of research identified by the state to be well positioned for growth and success, such as the biosciences, advanced materials and energy/environment. OSC is funded through the Ohio Department of Higher Education by the state operating and capital budgets of the Ohio General Assembly. History OSC was established by the Ohio Board of Regents (now the Ohio Department of Higher Education) in 1987 as a statewide resource designated to place Ohio's research universities and private industry in the forefront of computational research. Also in 1987, the OSC networking initiative — known today as OARnet — provided the first network access to the Center’s first Cray supercomputer. In 1988, OSC launched the Center’s Industrial Interface Program to serve businesses interested in accessing the supercomputer. Battelle Memorial Institute, located just south of Ohio State, became OSC’s first industrial user. Today, the Center continues to offer HPC services to researchers in industry, primarily through its AweSim industrial engagement program. In the summer of 1989, 20 talented high school students attended the first Governor’s Summer Institute. Today, OSC offers summer STEM education programs through Summer Institute and Young Wome
https://en.wikipedia.org/wiki/Decision%20field%20theory
Decision field theory (DFT) is a dynamic-cognitive approach to human decision making. It is a cognitive model that describes how people actually make decisions rather than a rational or normative theory that prescribes what people should or ought to do. It is also a dynamic model of decision making rather than a static model, because it describes how a person's preferences evolve across time until a decision is reached rather than assuming a fixed state of preference. The preference evolution process is mathematically represented as a stochastic process called a diffusion process. It is used to predict how humans make decisions under uncertainty, how decisions change under time pressure, and how choice context changes preferences. This model can be used to predict not only the choices that are made but also decision or response times. The paper "Decision Field Theory" was published by Jerome R. Busemeyer and James T. Townsend in 1993. The DFT has been shown to account for many puzzling findings regarding human choice behavior including violations of stochastic dominance, violations of strong stochastic transitivity, violations of independence between alternatives, serial-position effects on preference, speed accuracy tradeoff effects, inverse relation between probability and decision time, changes in decisions under time pressure, as well as preference reversals between choices and prices. The DFT also offers a bridge to neuroscience. Recently, the authors of decision field theory also have begun exploring a new theoretical direction called Quantum Cognition. Introduction The name decision field theory was chosen to reflect the fact that the inspiration for this theory comes from an earlier approach – avoidance conflict model contained in Kurt Lewin's general psychological theory, which he called field theory. DFT is a member of a general class of sequential sampling models that are commonly used in a variety of fields in cognition. The basic ideas underlying the
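An illustrative toy simulation in the spirit of the sequential-sampling idea described above: preference for two options accumulates noisily over time until the preference difference crosses a decision threshold, which yields both a choice and a response time. The parameter values and the function are invented for this sketch and are not taken from Busemeyer and Townsend's model specification.

```python
import random

def simulate_choice(drift_a=0.12, drift_b=0.08, noise=1.0, threshold=25.0, seed=None):
    """Toy preference-accumulation race: returns (chosen_option, decision_time)."""
    rng = random.Random(seed)
    pref_a = pref_b = 0.0
    t = 0
    while True:
        t += 1
        # Each option's preference drifts toward its mean valence plus random noise.
        pref_a += drift_a + rng.gauss(0, noise)
        pref_b += drift_b + rng.gauss(0, noise)
        if pref_a - pref_b >= threshold:
            return "A", t
        if pref_b - pref_a >= threshold:
            return "B", t

choices = [simulate_choice(seed=i) for i in range(1000)]
p_a = sum(1 for c, _ in choices if c == "A") / len(choices)
mean_t = sum(t for _, t in choices) / len(choices)
print(f"P(choose A) ~ {p_a:.2f}, mean decision time ~ {mean_t:.0f} steps")
```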
https://en.wikipedia.org/wiki/Global%20distance%20test
The global distance test (GDT), also written as GDT_TS to represent "total score", is a measure of similarity between two protein structures with known amino acid correspondences (e.g. identical amino acid sequences) but different tertiary structures. It is most commonly used to compare the results of protein structure prediction to the experimentally determined structure as measured by X-ray crystallography, protein NMR, or, increasingly, cryoelectron microscopy. The metric was developed by Adam Zemla at Lawrence Livermore National Laboratory and originally implemented in the Local-Global Alignment (LGA) program. It is intended as a more accurate measurement than the common root-mean-square deviation (RMSD) metric - which is sensitive to outlier regions created, for example, by poor modeling of individual loop regions in a structure that is otherwise reasonably accurate. The conventional GDT_TS score is computed over the alpha carbon atoms and is reported as a percentage, ranging from 0 to 100. In general, the higher the GDT_TS score, the more closely a model approximates a given reference structure. GDT_TS measurements are used as major assessment criteria in the production of results from the Critical Assessment of Structure Prediction (CASP), a large-scale experiment in the structure prediction community dedicated to assessing current modeling techniques. The metric was first introduced as an evaluation standard in the third iteration of the biennial experiment (CASP3) in 1998. Various extensions to the original method have been developed; variations that account for the positions of the side chains are known as global distance calculations (GDC). Calculation The GDT score is calculated as the largest set of amino acid residues' alpha carbon atoms in the model structure falling within a defined distance cutoff of their position in the experimental structure, after iteratively superimposing the two structures. By the original design the GDT algorithm calculat
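A minimal sketch of the conventional GDT_TS score under the usual four distance cutoffs (1, 2, 4 and 8 Å). The superposition step is assumed to have been done already, and the function simply averages the per-cutoff percentages; this follows the commonly stated definition rather than the full iterative search performed by the LGA program.

```python
def gdt_ts(ca_distances):
    """ca_distances: per-residue alpha-carbon distances (in angstroms) between the
    model and the reference structure after superposition. Returns a percentage 0-100."""
    n = len(ca_distances)
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    # Fraction of residues within each cutoff, averaged over the four cutoffs.
    fractions = [sum(d <= c for d in ca_distances) / n for c in cutoffs]
    return 100.0 * sum(fractions) / len(cutoffs)

# Toy example with five residues and assumed distances in angstroms.
print(round(gdt_ts([0.5, 1.5, 3.0, 6.0, 12.0]), 1))  # 50.0
```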
https://en.wikipedia.org/wiki/Mucous%20gland
Mucous glands, also known as muciparous glands, are found in several different parts of the body, and they typically stain lighter than serous glands during standard histological preparation. Most are multicellular, but goblet cells are single-celled glands. Mucous salivary glands The mucous salivary glands are similar in structure to the buccal and labial glands. They are found especially at the back part behind the vallate papillae, but are also present at the apex and marginal parts. In this connection the anterior lingual glands require special notice. They are situated on the under surface of the apex of the tongue, one on either side of the frenulum, where they are covered by a fascicle of muscular fibers derived from the styloglossus and inferior longitudinal muscles. They produce a glycoprotein, mucin, which absorbs water to form a sticky secretion called mucus. They are from 12 to 25 mm long and about 8 mm broad, and each opens by three or four ducts on the under surface of the apex. The Weber's glands are an example of muciparous glands located along the tongue. See also Mucus Gland Exocrine gland Weber's glands
https://en.wikipedia.org/wiki/Common%20Criteria%20Testing%20Laboratory
The Common Criteria model provides for the separation of the roles of evaluator and certifier. Product certificates are awarded by national schemes on the basis of evaluations carried out by independent testing laboratories. A Common Criteria testing laboratory is a third-party commercial security testing facility that is accredited to conduct security evaluations for conformance to the Common Criteria international standard. Such a facility must be accredited according to ISO/IEC 17025 with its national certification body. Examples List of laboratory designations by country: In the US they are called Common Criteria Testing Laboratory (CCTL) In Canada they are called Common Criteria Evaluation Facility (CCEF) In the UK they are called Commercial Licensed Evaluation Facilities (CLEF) In France they are called Centres d’Evaluation de la Sécurité des Technologies de l’Information (CESTI) In Germany they are called IT Security Evaluation Facility (ITSEF) Common Criteria Recognition Arrangement Common Criteria Recognition Arrangement (CCRA) or Common Criteria Mutual Recognition Arrangement (MRA) is an international agreement that recognizes evaluations against the Common Criteria standard performed in all participating countries. There are some limitations to this agreement and, in the past, only evaluations up to EAL4+ were recognized. With the ongoing transition away from EAL levels and the introduction of the NDPP, evaluations that “map” to assurance components up to EAL4 continue to be recognized. United States In the United States the National Institute of Standards and Technology (NIST) National Voluntary Laboratory Accreditation Program (NVLAP) accredits CCTLs to meet National Information Assurance Partnership (NIAP) Common Criteria Evaluation and Validation Scheme requirements and conduct IT security evaluations for conformance to the Common Criteria. CCTL requirements These laboratories must meet the following requirements: NIST Handbook 150, NVLAP Procedures and General
https://en.wikipedia.org/wiki/Enriched%20Xenon%20Observatory
The Enriched Xenon Observatory (EXO) is a particle physics experiment searching for neutrinoless double beta decay of xenon-136 at WIPP near Carlsbad, New Mexico, U.S. Neutrinoless double beta decay (0νββ) detection would prove the Majorana nature of neutrinos and impact the neutrino mass values and ordering. These are important open topics in particle physics. EXO currently has a 200-kilogram liquid xenon time projection chamber (EXO-200) with R&D efforts on a ton-scale experiment (nEXO). Xenon double beta decay was detected and limits have been set for 0νββ. Overview EXO measures the rate of neutrinoless decay events above the expected background of similar signals, to find or limit the double beta decay half-life, which relates to the effective neutrino mass using nuclear matrix elements. A limit on effective neutrino mass below 0.01 eV would determine the neutrino mass order. The effective neutrino mass is dependent on the lightest neutrino mass in such a way that that bound indicates the normal mass hierarchy. The expected rate of 0νββ events is very low, so background radiation is a significant problem. WIPP has of rock overburden—equivalent to of water—to screen incoming cosmic rays. Lead shielding and a cryostat also protect the setup. The neutrinoless decays would appear as a narrow spike in the energy spectrum around the xenon Q-value (Qββ = 2457.8 keV), which is fairly high and above most gamma decays. EXO-200 History EXO-200 was designed with a goal of less than 40 events per year within two standard deviations of expected decay energy. This background was achieved by selecting and screening all materials for radiopurity. Originally the vessel was to be made of Teflon, but the final design of the vessel uses thin, ultra-pure copper. EXO-200 was relocated from Stanford to WIPP in the summer of 2007. Assembly and commissioning continued until the end of 2009 with data taking beginning in May 2011. Calibration was done using 228Th, 137Cs, a
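For orientation, the relation generally used to convert a measured 0νββ half-life (or a limit on it) into an effective Majorana neutrino mass has the standard textbook form below, where G0ν is a phase-space factor and M0ν the nuclear matrix element; this is the generic expression, not a statement of EXO's specific analysis.

```latex
\left(T_{1/2}^{0\nu}\right)^{-1} \;=\; G^{0\nu}\,\left|M^{0\nu}\right|^{2}\,
\frac{\langle m_{\beta\beta}\rangle^{2}}{m_{e}^{2}}
```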
https://en.wikipedia.org/wiki/Unidirectional%20network
A unidirectional network (also referred to as a unidirectional gateway or data diode) is a network appliance or device that allows data to travel in only one direction. Data diodes can be found most commonly in high security environments, such as defense, where they serve as connections between two or more networks of differing security classifications. Given the rise of industrial IoT and digitization, this technology can now be found at the industrial control level for such facilities as nuclear power plants, power generation and safety critical systems like railway networks. After years of development, data diodes have evolved from being only a network appliance or device allowing raw data to travel only in one direction, used in guaranteeing information security or protection of critical digital systems, such as industrial control systems, from inbound cyber attacks, to combinations of hardware and software running in proxy computers in the source and destination networks. The hardware enforces physical unidirectionality, and the software replicates databases and emulates protocol servers to handle bi-directional communication. Data diodes are now capable of transferring multiple protocols and data types simultaneously, and they incorporate a broader range of cybersecurity features such as secure boot, certificate management, data integrity, forward error correction (FEC), and secure communication via TLS, among others. A unique characteristic is that data is transferred deterministically (to predetermined locations) with a protocol "break" that allows the data to be transferred through the data diode. Data diodes are commonly found in high security military and government environments, and are now becoming widely spread in sectors like oil & gas, water/wastewater, airplanes (between flight control units and in-flight entertainment systems), manufacturing and cloud connectivity for industrial IoT. New regulations have increased demand and with increased capacity, major techno
https://en.wikipedia.org/wiki/Journal%20of%20High%20Energy%20Physics
The Journal of High Energy Physics is a monthly peer-reviewed open access scientific journal covering the field of high energy physics. It is published by Springer Science+Business Media on behalf of the International School for Advanced Studies. The journal is part of the SCOAP3 initiative. According to the Journal Citation Reports, the journal has a 2020 impact factor of 5.810.
https://en.wikipedia.org/wiki/Contact%20order
The contact order of a protein is a measure of the locality of the inter-amino acid contacts in the protein's native state tertiary structure. It is calculated as the average sequence distance between residues that form native contacts in the folded protein divided by the total length of the protein. Higher contact orders indicate longer folding times, and low contact order has been suggested as a predictor of potential downhill folding, or protein folding that occurs without a free energy barrier. This effect is thought to be due to the lower loss of conformational entropy associated with the formation of local as opposed to nonlocal contacts. Relative contact order (CO) is formally defined as CO = (1/(L·N)) · Σ ΔSi,j, where N is the total number of contacts, ΔSi,j is the sequence separation, in residues, between contacting residues i and j, and L is the total number of residues in the protein. The value of contact order typically ranges from 5% to 25% for single-domain proteins, with lower contact order belonging to mainly helical proteins, and higher contact order belonging to proteins with a high beta-sheet content. Protein structure prediction methods are more accurate in predicting the structures of proteins with low contact orders. This may be partly because low contact order proteins tend to be small, but is likely to be explained by the smaller number of possible long-range residue-residue interactions to be considered during global optimization procedures that minimize an energy function. Even successful structure prediction methods such as the Rosetta method overproduce low-contact-order structure predictions compared to the distributions observed in experimentally determined protein structures. The percentage of the natively folded contact order can also be used as a measure of the "nativeness" of folding transition states. Phi value analysis in concert with molecular dynamics has produced transition-state models whose contact order is close to that of the folded state in pr
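A minimal Python sketch of this calculation, assuming the native contacts are already available as residue-index pairs (how contacts are defined, for example by a heavy-atom distance cutoff, is left out here):

```python
def relative_contact_order(num_residues, contacts):
    """Relative contact order: mean sequence separation of native contacts,
    normalised by chain length, reported here as a percentage.

    num_residues: total number of residues L in the protein.
    contacts:     iterable of (i, j) residue-index pairs in native contact.
    """
    contacts = list(contacts)
    if not contacts or num_residues == 0:
        return 0.0
    total_separation = sum(abs(j - i) for i, j in contacts)
    return 100.0 * total_separation / (num_residues * len(contacts))

# Mostly local contacts (helix-like) versus longer-range contacts (sheet-like).
local_contacts    = [(i, i + 4) for i in range(1, 60)]
nonlocal_contacts = [(i, i + 16) for i in range(1, 49)]
print(round(relative_contact_order(64, local_contacts), 1))     # low CO (~6%)
print(round(relative_contact_order(64, nonlocal_contacts), 1))  # higher CO (~25%)
```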
https://en.wikipedia.org/wiki/Armorial%20of%20the%20Netherlands
This is an armorial of the Kingdom of the Netherlands. Kingdom of the Netherlands The different versions of Coat of arms of the Netherlands are shown here: Countries The coats of arms of the countries of the Kingdom of the Netherlands are shown here: Provinces The coats of arms of the twelve provinces of the Netherlands are shown here: Public bodies The coats of arms of the three public bodies of the Caribbean Netherlands are shown here: See also Flags of provinces of the Netherlands Dutch coats of arms Netherlands .Coats of arms nl:Lijst van wapens van Nederlandse deelgebieden#Wapens van provincies van Nederland
https://en.wikipedia.org/wiki/Coastal%E2%80%93Karst%20Statistical%20Region
The Coastal–Karst Statistical Region is a statistical region in southwest Slovenia. It covers the traditional and historical regions of Slovenian Istria and most of the Karst Plateau, which traditionally belonged to the County of Gorizia and Gradisca. The region has a sub-Mediterranean climate and is Slovenia's only statistical region bordering the sea. Its natural features enable the development of tourism, transport, and special agricultural crops. More than two-thirds of gross value added are generated by services (trade, accommodation, and transport); most was generated by activities at the Port of Koper and through seaside and spa tourism. The region recorded almost a quarter of all tourist nights in the country in 2013; slightly less than half by domestic tourists. Among foreign tourists, Italians, Austrians, and Germans predominated. In 2012 the region was one of four regions with a positive annual population growth rate (8.1‰). However, the age structure of the population was less favourable: in mid-2013 the ageing index was 133.3, which means that for every 100 inhabitants under 15 there were 133 inhabitants 65 or older. The farms in this region are among the smallest in Slovenia in terms of average utilised agricultural area per farm and in terms of the number of livestock on farms. Cities and towns The Coastal–Karst Statistical Region includes four cities and towns, the largest of which is Koper. Municipalities The Coastal–Karst Statistical Region comprises the following eight municipalities: Ankaran Divača Hrpelje-Kozina Izola Komen Koper Piran Sežana Demographics It has an area of 1,044 km² and an estimated 112,942 inhabitants (at 1 July 2015)—of whom almost half live in the coastal city of Koper—and the second-highest GDP per capita of the Slovenian regions. It has a high percentage of foreigners, at 10% (after the Central Slovenia Statistical Region with 33%, the Drava Statistical Region with 12.6%, and the Savinja Statistical Region
https://en.wikipedia.org/wiki/Aurophilicity
In chemistry, aurophilicity refers to the tendency of gold complexes to aggregate via formation of weak metallophilic interactions. The main evidence for aurophilicity is from the crystallographic analysis of Au(I) complexes. The aurophilic bond has a length of about 3.0 Å and a strength of about 7–12 kcal/mol, which is comparable to the strength of a hydrogen bond. The effect is greatest for gold as compared with copper or silver—the higher elements in its periodic table group—due to increased relativistic effects. Observations and theory show that, on average, 28% of the binding energy in the aurophilic interaction can be attributed to relativistic expansion of the gold d orbitals. An example of aurophilicity is the propensity of gold centres to aggregate. While both intramolecular and intermolecular aurophilic interactions have been observed, only intramolecular aggregation has been observed at such nucleation sites. Role in self-assembly The similarity in strength between hydrogen bonding and aurophilic interaction has proven to be a convenient tool in the field of polymer chemistry. Much research has been conducted on self-assembling supramolecular structures, both those that aggregate by aurophilicity alone and those that contain both aurophilic and hydrogen-bonding interactions. An important and exploitable property of aurophilic interactions relevant to their supramolecular chemistry is that while both inter- and intramolecular interactions are possible, intermolecular aurophilic linkages are comparatively weak and easily broken by solvation; most complexes that exhibit intramolecular aurophilic interactions retain such moieties in solution.
https://en.wikipedia.org/wiki/Core%20binding%20factor
The Core binding factor (CBF) is a group of heterodimeric transcription factors. Core binding factors are composed of: a non-DNA-binding CBFβ chain (CBFB) a DNA-binding CBFα chain (RUNX1, RUNX2, RUNX3)
https://en.wikipedia.org/wiki/New%20Math%20%28song%29
New Math is a 1965 song by American musician Tom Lehrer. Found on his album That Was the Year That Was, the song is a satire of the then-contemporary educational concept of New Math. Composition The song is composed in the key of C major in a 2/4 time signature. It correctly describes the step-by-step process for subtracting 173 from 342 in decimal and then subtracting the same digits in octal (173₈ from 342₈). The song features a spoken-word intro by Lehrer, followed by "piano played at a quick tempo and brisk lines". Context Lehrer, at the time a doctoral student of mathematics at Harvard University, used the song to satirize the then-new educational concept of New Math, introduced in American schools in the late 1950s and early 1960s as an attempt to reform mathematics education. According to the book The New Math: A Political History, the song "purported to be a lesson for parents confused by recent changes in their children's arithmetic textbook". The same book states that by the time of the song's release in 1965, the concept was at its peak in American education. Lehrer's song has been described as "well-informed and literate ... enjoyed by new math proponents and critics alike". Historian Christopher J. Phillips writes that, by including this song among other songs of great political and social import on That Was the Year That Was, Lehrer "seamlessly—and accurately—placed the new math among the major events of the mid-twentieth-century United States".
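The two subtractions the song walks through can be checked directly; here is a small, purely illustrative Python snippet doing both:

```python
# The song's example: 342 - 173 carried out in base 10 and then in base 8.
decimal_result = 342 - 173                     # plain decimal subtraction
octal_result = int("342", 8) - int("173", 8)   # same digits, interpreted in octal

print(decimal_result)       # 169
print(oct(octal_result))    # 0o147, i.e. the digits 1 4 7 in base 8
```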
https://en.wikipedia.org/wiki/Aperiodic%20graph
In the mathematical area of graph theory, a directed graph is said to be aperiodic if there is no integer k > 1 that divides the length of every cycle of the graph. Equivalently, a graph is aperiodic if the greatest common divisor of the lengths of its cycles is one; this greatest common divisor for a graph G is called the period of G. Graphs that cannot be aperiodic In any directed bipartite graph, all cycles have a length that is divisible by two. Therefore, no directed bipartite graph can be aperiodic. In any directed acyclic graph, it is a vacuous truth that every k divides all cycles (because there are no directed cycles to divide) so no directed acyclic graph can be aperiodic. And in any directed cycle graph, there is only one cycle, so every cycle's length is divisible by n, the length of that cycle. Testing for aperiodicity Suppose that G is strongly connected and that k divides the lengths of all cycles in G. Consider the results of performing a depth-first search of G, starting at any vertex, and assigning each vertex v to a set Vi where i is the length (taken mod k) of the path in the depth-first search tree from the root to v. It can be shown that this partition into sets Vi has the property that each edge in the graph goes from a set Vi to another set V(i + 1) mod k. Conversely, if a partition with this property exists for a strongly connected graph G, k must divide the lengths of all cycles in G. Thus, we may find the period of a strongly connected graph G by the following steps: Perform a depth-first search of G For each e in G that connects a vertex on level i of the depth-first search tree to a vertex on level j, let ke = j - i - 1. Compute the greatest common divisor of the set of numbers ke. The graph is aperiodic if and only if the period computed in this fashion is 1. If G is not strongly connected, we may perform a similar computation in each strongly connected component of G, ignoring the edges that pass from one strongly connecte
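A minimal Python sketch of this procedure for a strongly connected graph, using an iterative depth-first search to assign levels (the adjacency-dict representation and the function name are illustrative choices). Tree edges contribute a value of zero, which drops out of the greatest common divisor.

```python
from math import gcd

def period(adj):
    """Period of a strongly connected directed graph, following the recipe above:
    assign each vertex its depth in a DFS tree, give each edge (u, v) the value
    depth(v) - depth(u) - 1, and take the gcd of the absolute values.
    adj: dict mapping every vertex to an iterable of its successors.
    The graph is aperiodic if and only if the returned period is 1.
    """
    root = next(iter(adj))
    depth = {root: 0}
    stack = [root]
    while stack:                       # iterative DFS to assign tree depths
        u = stack.pop()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                stack.append(v)
    g = 0
    for u in adj:
        for v in adj[u]:
            g = gcd(g, abs(depth[v] - depth[u] - 1))
    return g

# Cycles of length 2 and 3 through vertex 0: gcd(2, 3) = 1, so aperiodic.
print(period({0: [1, 2], 1: [0], 2: [3], 3: [0]}))   # 1
# A single directed 3-cycle has period 3.
print(period({0: [1], 1: [2], 2: [0]}))              # 3
```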
https://en.wikipedia.org/wiki/Tannakian%20formalism
In mathematics, a Tannakian category is a particular kind of monoidal category C, equipped with some extra structure relative to a given field K. The role of such categories C is to approximate, in some sense, the category of linear representations of an algebraic group G defined over K. A number of major applications of the theory have been made, or might be made in pursuit of some of the central conjectures of contemporary algebraic geometry and number theory. The name is taken from Tadao Tannaka and Tannaka–Krein duality, a theory about compact groups G and their representation theory. The theory was developed first in the school of Alexander Grothendieck. It was later reconsidered by Pierre Deligne, and some simplifications made. The pattern of the theory is that of Grothendieck's Galois theory, which is a theory about finite permutation representations of groups G which are profinite groups. The gist of the theory is that the fiber functor Φ of the Galois theory is replaced by a tensor functor T from C to K-Vect. The group of natural transformations of Φ to itself, which turns out to be a profinite group in the Galois theory, is replaced by the group (a priori only a monoid) of natural transformations of T into itself, that respect the tensor structure. This is by nature not an algebraic group, but an inverse limit of algebraic groups (pro-algebraic group). Formal definition A neutral Tannakian category is a rigid abelian tensor category, such that there exists a K-tensor functor to the category of finite dimensional K-vector spaces that is exact and faithful. Applications The construction is used in cases where a Hodge structure or l-adic representation is to be considered in the light of group representation theory. For example, the Mumford–Tate group and motivic Galois group are potentially to be recovered from one cohomology group or Galois module, by means of a mediating Tannakian category it generates. Those areas of application are closely connect
https://en.wikipedia.org/wiki/Wind%20direction
Wind direction is generally reported by the direction from which the wind originates. For example, a north or northerly wind blows from the north to the south; the exceptions are onshore winds (blowing onto the shore from the water) and offshore winds (blowing off the shore to the water). Wind direction is usually reported in cardinal (or compass) direction, or in degrees. Consequently, a wind blowing from the north has a wind direction referred to as 0° (360°); a wind blowing from the east has a wind direction referred to as 90°, etc. Weather forecasts typically give the direction of the wind along with its speed, for example a "northerly wind at 15 km/h" is a wind blowing from the north at a speed of 15 km/h. If wind gusts are present, their speed may also be reported. Measurement techniques A variety of instruments can be used to measure wind direction, such as the anemoscope, windsock, and wind vane. All these instruments work by moving to minimize air resistance. The way a weather vane is pointed by prevailing winds indicates the direction from which the wind is blowing. The larger opening of a windsock faces the direction that the wind is blowing from; its tail, with the smaller opening, points in the same direction as the wind is blowing. Modern instruments used to measure wind speed and direction are called anemoscopes, anemometers and wind vanes. These types of instruments are used by the wind energy industry, both for wind resource assessment and turbine control. When a high measurement frequency is needed (such as in research applications), wind can be measured by the propagation speed of ultrasound signals or by the effect of ventilation on the resistance of a heated wire. Another type of anemometer uses pitot tubes that take advantage of the pressure differential between an inner tube and an outer tube that is exposed to the wind to determine the dynamic pressure, which is then used to compute the wind speed. In situations where modern instrumen
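As a small illustration of the reporting convention, the following Python helper maps a direction in degrees (the direction the wind blows from, with 0° = north and 90° = east) onto a 16-point compass rose; the 16-point rose and the rounding rule are common choices rather than a formal standard cited here.

```python
COMPASS_16 = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
              "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]

def degrees_to_compass(direction_deg):
    """Map a wind direction in degrees (direction the wind blows FROM)
    onto a 16-point compass rose, each sector spanning 22.5 degrees."""
    index = int((direction_deg % 360) / 22.5 + 0.5) % 16
    return COMPASS_16[index]

print(degrees_to_compass(0))      # N
print(degrees_to_compass(90))     # E
print(degrees_to_compass(202.5))  # SSW
```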
https://en.wikipedia.org/wiki/Earthpark
Earthpark is a proposed best-in-class educational facility with indoor rain forest and aquarium elements, and a mission of "inspiring generations to learn from the natural world." It was previously called the Environmental Project. Earthpark was to be located around Lake Red Rock, near the town of Pella, Iowa. In December 2007, federal funding from the U.S. Department of Energy for the Pella location was rescinded. Proposals from a variety of potential hosts, some outside of Iowa are being considered. Location Talks between Earthpark and the city of Coralville, Iowa, the original planned location of Earthpark, ended in 2006. The project then found a new home in Pella, Iowa. but that did not become a reality. A similar project exists in Cornwall, England, called The Eden Project. Size and scope The total project cost for Earthpark is estimated to be US$155 million. The complex was planned to be in area with a 600,000 gallon aquarium and outdoor wetland and prairie exhibits. The Earthpark project was expected to employ 150 people directly and create an additional 2000 indirect jobs. The economic impact was estimated to be US$130 million annually. The park was projected to draw 1 million visitors annually to the Pella area. Funding In August 2008, when asked if any efforts would be made to get additional federal money for the project, U.S. Senator Chuck Grassley said "Not by this senator, and I don't think there will be any by other senators."
https://en.wikipedia.org/wiki/Cryptographic%20Module%20Testing%20Laboratory
Cryptographic Module Testing Laboratory (CMTL) is an information technology (IT) computer security testing laboratory that is accredited to conduct cryptographic module evaluations for conformance to the FIPS 140-2 U.S. Government standard. The National Institute of Standards and Technology (NIST) National Voluntary Laboratory Accreditation Program (NVLAP) accredits CMTLs to meet Cryptographic Module Validation Program (CMVP) standards and procedures. The earlier FIPS 140-1 standard has been superseded by FIPS 140-2, with validation carried out under the Cryptographic Module Validation Program (CMVP). CMTL requirements These laboratories must meet the following requirements: NIST Handbook 150, NVLAP Procedures and General Requirements NIST Handbook 150-17 Information Technology Security Testing - Cryptographic Module Testing NVLAP Specific Operations Checklist for Cryptographic Module Testing FIPS 140-2 in relation to the Common Criteria A CMTL can also be a Common Criteria (CC) Testing Laboratory (CCTL). The CC and FIPS 140-2 are different in the abstractness and focus of evaluation. FIPS 140-2 testing is against a defined cryptographic module and provides a suite of conformance tests to four FIPS 140 security levels. FIPS 140-2 describes the requirements for cryptographic modules and includes such areas as physical security, key management, self tests, roles and services, etc. The standard was initially developed in 1994 - prior to the development of the CC. The CC is an evaluation against a Protection Profile (PP), or security target (ST). Typically, a PP covers a broad range of products. A CC evaluation does not supersede or replace a validation to either FIPS 140-1, FIPS 140-2, or FIPS 140-3. The four security levels in FIPS 140-1 and FIPS 140-2 do not map directly to specific CC EALs or to CC functional requirements. A CC certificate cannot be a substitute for a FIPS 140-1 or FIPS 140-2 certificate. If the operational environment is a modifiable operational environment, the operating system requirements of th
https://en.wikipedia.org/wiki/Superdistribution
Superdistribution is an approach to distributing digital products such as software, videos, and recorded music in which the products are made publicly available and distributed in encrypted form instead of being sold in retail outlets or online shops. Such products can be passed freely among users on physical media, over the Internet or other networks, or using mobile technologies such as Bluetooth, IrDA or MMS (Multimedia Messaging Service). Over 280 models of telephones support superdistribution based on OMA DRM; companies such as Vodafone and Deutsche Telekom have been exploring it. Superdistribution allows and indeed encourages digital products to be distributed freely in encrypted form, even as the product's owner retains control over the ability to use and modify the product. Superdistribution is a highly efficient means of distribution because distribution is not impeded by any barriers and anyone can become a distributor. A product made available through superdistribution may be free, in which case the user can use it immediately and without restriction, or restricted by means of Digital Rights Management (DRM). Restricted products generally require a license that the user must purchase either immediately or after a trial period (in the case of so-called demoware). Superdistribution was invented in 1983 by the Japanese engineer Ryoichi Mori and patented by him in 1990. Mori's prototype, which he called the Software Service System (SSS), took the form of a peer-to-peer-architecture with the following components: a cryptographic wrapper for digital products that cannot be removed and remains in place whenever the product is copied. a digital rights management system for tracking usage of the product and assuring that any usage of the product or access to its code conforms to the terms set by the product's owner. an arrangement for secure payments from the product's users to its owner. See also Friend-to-friend shareware peer-to-peer
https://en.wikipedia.org/wiki/Futoshiki
Futoshiki, or More or Less, is a logic puzzle game from Japan. Its name means "inequality". It is also spelled hutosiki (using Kunrei-shiki romanization). Futoshiki was developed by Tamaki Seto in 2001. The puzzle is played on a square grid. The objective is to place the digits so that each row and column contains each digit exactly once. Some digits may be given at the start. Inequality constraints are initially specified between some of the squares, such that one must be higher or lower than its neighbor. These constraints must be honored in order to complete the puzzle. Strategy Solving the puzzle requires a combination of logical techniques. Numbers in each row and column restrict the number of possible values for each position, as do the inequalities. Once the table of possibilities has been determined, a crucial tactic to solve the puzzle involves "AB elimination", in which subsets are identified within a row whose range of values can be determined. Another important technique is to work through the range of possibilities in open inequalities. A value on one side of an inequality determines others, which then can be worked through the puzzle until a contradiction is reached and the first value is excluded. A solved futoshiki puzzle is a Latin square. Futoshiki in the United Kingdom A futoshiki puzzle is published in the following UK newspapers: The Daily Telegraph — Saturdays Dundee Courier — daily i — Mondays through Fridays The Guardian — Saturdays The Times — daily Notes Logic puzzles Latin squares
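A minimal Python sketch of the two rule families (Latin-square rows and columns, plus the inequality constraints) applied to a completed grid; the grid and constraint representations are illustrative assumptions.

```python
def is_valid_futoshiki(grid, inequalities):
    """Check a completed Futoshiki grid.

    grid:         n x n list of lists containing the digits 1..n.
    inequalities: list of ((r1, c1), (r2, c2)) pairs meaning
                  grid[r1][c1] < grid[r2][c2].
    """
    n = len(grid)
    target = set(range(1, n + 1))
    rows_ok = all(set(row) == target for row in grid)
    cols_ok = all({grid[r][c] for r in range(n)} == target for c in range(n))
    ineqs_ok = all(grid[r1][c1] < grid[r2][c2]
                   for (r1, c1), (r2, c2) in inequalities)
    return rows_ok and cols_ok and ineqs_ok

solution = [[2, 1, 3],
            [3, 2, 1],
            [1, 3, 2]]
# One constraint: the top-left cell must be less than the cell below it.
print(is_valid_futoshiki(solution, [((0, 0), (1, 0))]))  # True
```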
https://en.wikipedia.org/wiki/Watermarking%20attack
In cryptography, a watermarking attack is an attack on disk encryption methods where the presence of a specially crafted piece of data can be detected by an attacker without knowing the encryption key. Problem description Disk encryption suites generally operate on data in 512-byte sectors which are individually encrypted and decrypted. These 512-byte sectors alone can use any block cipher mode of operation (typically CBC), but since arbitrary sectors in the middle of the disk need to be accessible individually, they cannot depend on the contents of their preceding/succeeding sectors. Thus, with CBC, each sector has to have its own initialization vector (IV). If these IVs are predictable by an attacker (and the filesystem reliably starts file content at the same offset to the start of each sector, and files are likely to be largely contiguous), then there is a chosen plaintext attack which can reveal the existence of encrypted data. The problem is analogous to that of using block ciphers in the electronic codebook (ECB) mode, but instead of whole blocks, only the first blocks in different sectors are identical. The problem can be relatively easily eliminated by making the IVs unpredictable with, for example, ESSIV. Alternatively, one can use modes of operation specifically designed for disk encryption (see disk encryption theory). This weakness affected many disk encryption programs, including older versions of BestCrypt as well as the now-deprecated cryptoloop. To carry out the attack, a specially crafted plaintext file is created for encryption in the system under attack, to "NOP-out" the IV such that the first ciphertext block in two or more sectors is identical. This requires that the input to the cipher (plaintext XOR initialisation vector) for each block must be the same; i.e., P1 XOR IV1 = P2 XOR IV2. Thus, we must choose plaintexts P1 and P2 such that P1 XOR P2 = IV1 XOR IV2. The ciphertext block patterns generated in this way give away the existence of the file, without any need for the disk to be
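The XOR condition above can be illustrated numerically. In the toy Python sketch below the IV is simply the sector number (an assumed, deliberately predictable scheme); the attacker crafts the first plaintext block of the second sector so that both cipher inputs coincide, which makes the first ciphertext blocks identical.

```python
# Toy illustration: in CBC, the first ciphertext block is E(key, P XOR IV).
# With predictable IVs (here: the sector number), an attacker can pick the
# first plaintext blocks of two sectors so the cipher inputs are identical;
# the identical first ciphertext blocks then betray the crafted file.

def first_cipher_input(plaintext_block: int, iv: int) -> int:
    return plaintext_block ^ iv          # what actually enters the block cipher

iv_sector_10, iv_sector_11 = 10, 11      # assumed predictable IV scheme
p1 = 0xDEADBEEF                          # arbitrary first block for sector 10
p2 = p1 ^ iv_sector_10 ^ iv_sector_11    # crafted so P2 XOR IV2 == P1 XOR IV1

assert first_cipher_input(p1, iv_sector_10) == first_cipher_input(p2, iv_sector_11)
print(hex(p1), hex(p2))                  # the two blocks differ only slightly
```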
https://en.wikipedia.org/wiki/Darwinian%20literary%20studies
Darwinian literary studies (also known as literary Darwinism) is a branch of literary criticism that studies literature in the context of evolution by means of natural selection, including gene-culture coevolution. It represents an emerging trend of neo-Darwinian thought in intellectual disciplines beyond those traditionally considered as evolutionary biology: evolutionary psychology, evolutionary anthropology, behavioral ecology, evolutionary developmental psychology, cognitive psychology, affective neuroscience, behavioural genetics, evolutionary epistemology, and other such disciplines. History and scope Interest in the relationship between Darwinism and the study of literature began in the nineteenth century, for example, among Italian literary critics. For example, Ugo Angelo Canello argued that literature was the history of the human psyche, and as such, played a part in the struggle for natural selection, while Francesco de Sanctis argued that Emile Zola "brought the concepts of natural selection, struggle for existence, adaptation and environment to bear in his novels". Modern Darwinian literary studies arose in part as a result of its proponents' dissatisfaction with the poststructuralist and postmodernist philosophies that had come to dominate literary study during the 1970s and 1980s. In particular, the Darwinists took issue with the argument that discourse constructs reality. The Darwinists argue that biologically grounded dispositions constrain and inform discourse. This argument runs counter to what evolutionary psychologists assert is the central idea in the "Standard Social Science Model": that culture wholly constitutes human values and behaviors. Literary Darwinists use concepts from evolutionary biology and the evolutionary human sciences to formulate principles of literary theory and interpret literary texts. They investigate interactions between human nature and the forms of cultural imagination, including literature and its oral antecedent
https://en.wikipedia.org/wiki/Mauisaurus
Mauisaurus ("Māui lizard") is a dubious genus of plesiosaur that lived during the Late Cretaceous period in what is now New Zealand. Numerous specimens have been attributed to this genus in the past, but a 2017 paper restricts Mauisaurus to the lectotype and declares it a nomen dubium. History of discovery Mauisaurus remains have all been found in New Zealand's South Island, in Canterbury. Mauisaurus haasti was described by Hector in 1874 based on eight specimens and diagnosed by its cervical vertebrae and a humerus with large tuberosities. However, of these eight specimens, two, consisting of ribs and paddle, were lost, while another, the cast of a jaw fragment (the original fossil of which was also lost) was found to be a mosasaur. The most substantial specimen, 8a (DM R1529), consisted of fragmentary pubes, a partial ilium and hindlimbs, originally misidentified as part of the pectoral girdle. Mauisaurus gets its name from the New Zealand Māori mythological demigod, Māui. Māui is said to have pulled New Zealand up from the seabed using a fish hook, thus creating the country. Thus, Mauisaurus means "Māui lizard". Mauisaurus gets its scientific last name from its original finder, Julius von Haast, who found the first Mauisaurus fossil in 1870 around Gore Bay, New Zealand. The specimen was then first described in 1874. A second species was also named by Hector, Mauisaurus brachiolatus, based on the proximal end of a very large humerus as well as a humerus together with radius and radiale. There was some confusion regarding this species, as the description named it M. latibrachialis, while the specimen list included it under the name M. brachiolatus. In 1962 specimen 8a was declared the lectotype of Mauisaurus haasti by Welles who further suggested that M. brachiolatus should be deemed a nomen vanum in an overview of Cretaceous plesiosaurs. Later in 1971 Welles & Gregg revised the diagnosis of M. haasti and produced a detailed description of the lectotype, ass
https://en.wikipedia.org/wiki/Semantic%20interoperability
Semantic interoperability is the ability of computer systems to exchange data with unambiguous, shared meaning. Semantic interoperability is a requirement to enable machine computable logic, inferencing, knowledge discovery, and data federation between information systems. Semantic interoperability is therefore concerned not just with the packaging of data (syntax), but the simultaneous transmission of the meaning with the data (semantics). This is accomplished by adding data about the data (metadata), linking each data element to a controlled, shared vocabulary. The meaning of the data is transmitted with the data itself, in one self-describing "information package" that is independent of any information system. It is this shared vocabulary, and its associated links to an ontology, which provides the foundation and capability of machine interpretation, inference, and logic. Syntactic interoperability (see below) is a prerequisite for semantic interoperability. Syntactic interoperability refers to the packaging and transmission mechanisms for data. In healthcare, HL7 has been in use for over thirty years (which predates the internet and web technology), and uses the pipe character (|) as a data delimiter. The current internet standard for document markup is XML, which uses "< >" as a data delimiter. The data delimiters convey no meaning to the data other than to structure the data. Without a data dictionary to translate the contents of the delimiters, the data remains meaningless. While there are many attempts at creating data dictionaries and information models to associate with these data packaging mechanisms, none have been practical to implement. This has only perpetuated the ongoing "babelization" of data and inability to exchange data with meaning. Since the introduction of the Semantic Web concept by Tim Berners-Lee in 1999, there has been growing interest and application of the W3C (World Wide Web Consortium) standards to provide web-scale semantic data
https://en.wikipedia.org/wiki/WEA%20Manufacturing
WEA Manufacturing was the record, tape, and compact disc manufacturing arm of WEA International Inc. from 1978 to 2003, when it was sold and merged into Cinram International, a previous competitor. The last owner when the plant closed was Technicolor. History WEA Manufacturing Inc. was created in 1978–1979 when Warner Communications Inc. purchased two of its longtime suppliers: the record pressing plants Specialty Records Corporation (Olyphant, Pennsylvania) and Allied Record Company (Los Angeles). The company was headquartered in Olyphant, where the original plant was replaced in late 1981 by a new facility which retained the name Specialty Records Corporation. The Specialty Records Corporation name was dropped in 1996 in favor of WEA Manufacturing. The company invested in CD manufacturing in 1986, matching a $247,000 contribution by economic development corporation Ben Franklin Technology Partners to develop and implement new processes of manufacturing audio CDs and CD-ROMs. BFTP assembled a team of experts in physics, electrical engineering, and thin film technology from the University of Scranton and Lehigh University to carry out the research and development. The Olyphant plant and another plant in Alsdorf, Germany, were expanded to support CD pressing that year, with the Olyphant facility's production commencing first in September 1986. WEA Manufacturing grew to become one of the largest manufacturers of recorded media in the world. The company began manufacturing Laserdiscs in July 1991. The company's DVD division, Warner Advanced Media Operations (WAMO), helped design the high-density format used in DVDs, and manufactured some of the first DVDs in the late 1990s. The company was sold to Cinram International in October 2003 and no longer exists under the name WEA Manufacturing, but the Olyphant plant continued to operate under its new ownership. In 2005, the company was Lackawanna County's largest employer, with over 2,300 people working at the Olyphant
https://en.wikipedia.org/wiki/Greater%20palatine%20canal
The greater palatine canal (or pterygopalatine canal) is a passage in the skull that transmits the descending palatine artery, vein, and greater and lesser palatine nerves between the pterygopalatine fossa and the oral cavity. Structure The greater palatine canal starts on the inferior aspect of the pterygopalatine fossa. It goes through the maxilla and palatine bones to reach the palate, ending at the greater palatine foramen. From this canal, accessory canals branch off; these are known as the lesser palatine canals. The canal is formed by a vertical groove on the posterior part of the maxillary surface of the palatine bone; it is converted into a canal by articulation with the maxilla. The canal transmits the descending palatine vessels, the greater palatine nerve, and the lesser palatine nerve. See also Greater palatine foramen Pterygopalatine fossa Additional images
https://en.wikipedia.org/wiki/Heuristic%20analysis
Heuristic analysis is a method employed by many computer antivirus programs designed to detect previously unknown computer viruses, as well as new variants of viruses already in the "wild". Heuristic analysis is an expert based analysis that determines the susceptibility of a system towards particular threat/risk using various decision rules or weighing methods. MultiCriteria analysis (MCA) is one of the means of weighing. This method differs from statistical analysis, which bases itself on the available data/statistics. Operation Most antivirus programs that utilize heuristic analysis perform this function by executing the programming commands of a questionable program or script within a specialized virtual machine, thereby allowing the anti-virus program to internally simulate what would happen if the suspicious file were to be executed while keeping the suspicious code isolated from the real-world machine. It then analyzes the commands as they are performed, monitoring for common viral activities such as replication, file overwrites, and attempts to hide the existence of the suspicious file. If one or more virus-like actions are detected, the suspicious file is flagged as a potential virus, and the user alerted. Another common method of heuristic analysis is for the anti-virus program to decompile the suspicious program, then analyze the machine code contained within. The source code of the suspicious file is compared to the source code of known viruses and virus-like activities. If a certain percentage of the source code matches with the code of known viruses or virus-like activities, the file is flagged, and the user alerted. Effectiveness Heuristic analysis is capable of detecting many previously unknown viruses and new variants of current viruses. However, heuristic analysis operates on the basis of experience (by comparing the suspicious file to the code and functions of known viruses). This means it is likely to miss new viruses that contain previousl
https://en.wikipedia.org/wiki/SDS%20930
The SDS 930 was a commercial 24-bit computer using bipolar junction transistors sold by Scientific Data Systems. It was announced in December 1963, with first installations in June 1964. Description An SDS 930 system consists of at least three standard () cabinets, weighing about . It is composed of an arithmetic and logic unit, at least 8,192 words (24-bit + simple parity bit) magnetic-core memory, and the IO unit. Two's complement integer arithmetic is used. The machine has integer multiply and divide, but no floating-point hardware. An optional correlation and filtering unit (CFE) can be added, which is capable of very fast floating-point multiply-add operations (primarily intended for digital signal processing applications). A free-standing console is also provided, which includes binary displays of the machine's registers and switches to boot and debug programs. User input is by a Teletype Model 35 ASR unit and a high-speed paper-tape reader (300 cps). Most systems include at least two magnetic-tape drives, operating at up to 75 in/s at 800 bpi. The normal variety of peripherals is also available, including magnetic-drum units, card readers and punches, and an extensive set of analog-digital/digital-analog conversion devices. A (vector mode) graphic display unit is also available, but it does not include a means of keyboard input. The SDS 930 is a typical small- to medium-scale scientific computer of the 1960s. Speed is good for its cost, but with an integer add time of 3.5 microseconds, it is not in the same league as the scientific workhorses of the day (the CDC 6600, for example). A well equipped 930 can easily exceed 10 cabinets and require a climate-controlled room. The price of such a system in 1966 would be in the neighborhood of $500K. Programming languages available include FORTRAN II, ALGOL 60, and the assembly language known as Meta-Symbol. The FORTRAN system is very compact, having been designed and implemented by Digitek for SDS to co
https://en.wikipedia.org/wiki/Dry%20needling
Dry needling, also known as trigger point dry needling and intramuscular stimulation, is a treatment technique used by various healthcare practitioners, including physical therapists, physicians, and chiropractors, among others. Acupuncturists usually maintain that dry needling is adapted from acupuncture, but others consider dry needling as a variation of trigger point injections. It involves the use of either solid filiform needles or hollow-core hypodermic needles for therapy of muscle pain, including pain related to myofascial pain syndrome. Dry needling is mainly used to treat myofascial trigger points, but it is also used to target connective tissue, neural ailments, and muscular ailments. The American Physical Therapy Association defines dry needling as a technique used to treat dysfunction of skeletal muscle and connective tissue, minimize pain, and improve or regulate structural or functional damage. There is conflicting evidence regarding the effectiveness of dry needling. Some results suggest that it is an effective treatment for certain kinds of muscle pain, while other studies have shown no benefit compared to a placebo; however, not enough high-quality, long-term, and large-scale studies have been done on the technique to draw clear conclusions about its efficacy. Currently, dry needling is being practiced in the United States, Canada, Europe, Australia, and other parts of the world. Origin Etymology and terminology The origin of the term dry needling is attributed to Janet G. Travell. In her 1983 book, Myofascial Pain and Dysfunction: Trigger Point Manual, Travell uses the term dry needling to differentiate between two hypodermic needle techniques when performing trigger point therapy. However, Travell did not elaborate on the details on the techniques of dry needling; the current techniques of dry needling were based on the traditional and western medical acupuncture. Initial techniques The two techniques Travell described are the injection of a
https://en.wikipedia.org/wiki/Free%20viewpoint%20television
Free viewpoint television (FTV) is a system for viewing natural video, allowing the user to interactively control the viewpoint and generate new views of a dynamic scene from any 3D position. The equivalent system for computer-simulated video is known as virtual reality. With FTV, the focus of attention can be controlled by the viewers rather than a director, meaning that each viewer may be observing a unique viewpoint. It remains to be seen how FTV will affect television watching as a group activity. History Systems for rendering arbitrary views of natural scenes have been well known in the computer vision community for a long time but only in recent years has the speed and quality reached levels that are suitable for serious consideration as an end-user system. Professor Masayuki Tanimoto from Nagoya University (Japan) has done much to promote the use of the term "free viewpoint television" and has published many papers on the ray space representation, although other techniques can be, and are used for FTV. QuickTime VR might be considered a predecessor to FTV. Capture and display In order to acquire the views necessary to allow a high-quality rendering of the scene from any angle, several cameras are placed around the scene; either in a studio environment or an outdoor venue, such as a sporting arena for example. The output Multiview Video (MVV) must then be packaged suitably so that the data may be compressed and also so that the users' viewing device may easily access the relevant views to interpolate new views. It is not enough to simply place cameras around the scene to be captured. The geometry of the camera set up must be measured by a process known in computer vision as "camera calibration." Manual alignment would be too cumbersome so typically a "best effort" alignment is performed prior to capturing a test pattern that is used to generate calibration parameters. Restricted free viewpoint television views for large environments can be captured from
https://en.wikipedia.org/wiki/Subharmonic%20synthesizer
A subharmonic synthesizer is a device or system that generates subharmonics of an input signal. The nth subharmonic of a signal of fundamental frequency F is a signal with frequency F/n. This differs from ordinary harmonics, where the nth harmonic of fundamental frequency F is a signal of frequency nF. Subharmonic synthesizers can be used in professional audio applications as bass enhancement devices during the playback of recorded music. Other uses for subharmonic synthesizers include the application in bandwidth extension. A subharmonic synthesizer can be used to extend low frequency response due to bandwidth limitations of telephone systems. Subharmonic synthesizers are used extensively in dance clubs in certain genres of music such as disco and house music. They are often implemented to enhance the lower frequencies, in an attempt to gain a "heavier" or more vibrant sound. Various harmonics can be amplified or modulated, although it is most common to boost the fundamental frequency's lower octave. The kick drum can benefit greatly from this type of processing. A subharmonic synthesizer (or "synth" as it is known in the industry) creates a bigger presence and can give the music that much sought-after "punch". History During the disco era, sound engineers aimed to create a more powerful, deep bass sound in dance clubs and nightclubs. A key approach used by engineers to get heavier, deeper bass sound was to add huge subwoofer cabinets to reproduce the sub-bass frequencies. The Paradise Garage discotheque in New York City, which operated from 1977 to 1987, had "custom designed 'sub-bass' speakers" developed by Alex Rosner's disciple, sound engineer Richard ("Dick") Long that were called "Levan Horns" (in honor of resident DJ Larry Levan). By the end of the 1970s, subwoofers were used in dance venue sound systems to enable the playing of "[b]ass-heavy dance music" that we "do not 'hear' with our ears but with our entire body". One challenge with getting deep sub-
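As a minimal illustration of the frequency relationship, the Python sketch below synthesises a tone and mixes in its nth subharmonic (frequency F/n, the sub-octave for n = 2). Real units track the fundamental of programme material rather than generating it, and the gain, duration, and sample rate here are arbitrary illustrative values.

```python
import math

def tone_with_subharmonic(freq, n=2, sub_gain=0.5, sample_rate=44_100, seconds=0.1):
    """Generate a sine at `freq` mixed with its nth subharmonic (freq / n).

    Returns a list of samples; keeping the input a single synthesized tone
    makes the F versus F/n relationship explicit.
    """
    samples = []
    for k in range(int(sample_rate * seconds)):
        t = k / sample_rate
        fundamental = math.sin(2 * math.pi * freq * t)
        subharmonic = math.sin(2 * math.pi * (freq / n) * t)
        samples.append(fundamental + sub_gain * subharmonic)
    return samples

out = tone_with_subharmonic(110.0)   # 110 Hz plus a 55 Hz sub-octave
print(len(out), round(max(out), 3))
```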
https://en.wikipedia.org/wiki/On%20the%20Track%20of%20Unknown%20Animals
On the Track of Unknown Animals is a cryptozoological book by the Belgian-French zoologist Bernard Heuvelmans that was first published in 1955 under the title Sur la Piste des Bêtes Ignorées. The English translation by Richard Garnett was published in 1958 with some updating by the author and with a foreword by Gerald Durrell. A revised and abridged edition was published in 1965, and a further edition in 1995. It is credited with introducing the term cryptozoology and established its author as the "Father of Cryptozoology." Subject As one reviewer explained, it is a book "about animals that might exist." On the Track of Unknown Animals cites animals that had only been discovered relatively recently, such as the pygmy chimpanzee, coelacanth, Komodo dragon and giant panda; and those that are believed to have become extinct relatively recently, such as the moa and Tasmanian tiger. A major theme is that these animals were generally known to local peoples, but their stories were dismissed by visiting zoologists, particularly the okapi. The author then discusses evidence for mystery animals from all over the world including the Mokele-mbembe, sea serpents and the Yeti, with an extensive bibliography. He begins by complaining that "The Press has made such a laughing-stock of the Loch Ness Monster... that no scientific commission has ever dared tackle the problem" and ends with the wish that any new species are not merely slaughtered for trophies: "Have pity on them all, for it is we who are the real monsters." Reviewers praised the breadth of study, careful citation and the author's knowledge but it was criticized for being somewhat shallow and "overly long and rambling." Contents Part 1: The Great Days of Zoology are Not Done Part 2: The Man-Faced Animals of South-East Asia Part 3: The Living Fossils of Oceania Part 4: Riddles of the Green Continent Part 5: The Giants of the Far North Part 6: The Lesson of the Malagasy Ghosts Bibliography Sur la Piste des Bête
https://en.wikipedia.org/wiki/Endoplasmic-reticulum-associated%20protein%20degradation
Endoplasmic-reticulum-associated protein degradation (ERAD) designates a cellular pathway which targets misfolded proteins of the endoplasmic reticulum for ubiquitination and subsequent degradation by a protein-degrading complex, called the proteasome. Mechanism The process of ERAD can be divided into three steps: Recognition of misfolded or mutated proteins in the endoplasmic reticulum The recognition of misfolded or mutated proteins depends on the detection of substructures within proteins such as exposed hydrophobic regions, unpaired cysteine residues and immature glycans. In mammalian cells for example, there exists a mechanism called glycan processing. In this mechanism, the lectin-type chaperones calnexin/calreticulin (CNX/CRT) provide immature glycoproteins the opportunity to reach their native conformation. They can do this by way of reglucosylating these glycoproteins by an enzyme called UDP-glucose-glycoprotein glucosyltransferase also known as UGGT. Terminally misfolded proteins, however, must be extracted from CNX/CRT. This is carried out by members of the EDEM (ER degradation-enhancing α-mannosidase-like protein) family (EDEM1-3) and ER mannosidase I. This mannosidase removes one mannose residue from the glycoprotein and the latter is recognized by EDEM. Eventually EDEM will target the misfolded glycoproteins for degradation by facilitating binding of ERAD lectins OS9 and XTP3-B. Retro-translocation into the cytosol Because the ubiquitin–proteasome system (UPS) is located in the cytosol, terminally misfolded proteins have to be transported from the endoplasmic reticulum back into cytoplasm. Most evidence suggest that the Hrd1 E3 ubiquitin-protein ligase can function as a retrotranslocon or dislocon to transport substrates into the cytosol. Hrd1 is not required for all ERAD events, so it is likely that other proteins contribute to this process. For example, glycosylated substrates are recognized by the E3 Fbs2 lectin. Further, this translocati
https://en.wikipedia.org/wiki/Authorize.Net
Authorize.Net, A Visa Solution is a United States-based payment gateway service provider, allowing merchants to accept credit card and electronic check payments through their website and over an Internet Protocol (IP) connection. Founded in 1996 as Authorize.Net, Inc., the company is now a subsidiary of Visa Inc. Its service permits customers to enter credit card and shipping information directly onto a web page, in contrast to some alternatives that require the customer to sign up for a payment service before performing a transaction. History Authorize.Net was founded in 1996, in Utah, by Jeff Knowles. As of 2004, it had about 90,000 customers. Authorize.Net was one of several companies acquired by Go2Net, a company backed by Microsoft founder Paul Allen, in 1999, for 90.5 million in cash and stock. Go2Net was acquired by InfoSpace in 2000 for about $4 billion; Authorize.Net was acquired by Lightbridge in 2004 for $82 million and then by CyberSource in 2007. Visa Inc. acquired CyberSource in 2010 for $2 billion. Visa has maintained Authorize.Net and Cybersource as separate services, with Authorize.Net concentrating on small- to medium-sized businesses and Cybersource concentrating on international and large-scale payment processing. At the time of the 2010 acquisition, the company's CEO identified three priorities: expanding the ecommerce market, enhancing fraud detection and prevention, and improving data security. As of 2014, along with parent CyberSource, it had about 450,000 customers. Outages In September 2004, Authorize.Net's servers were hit by a distributed denial-of-service (DDoS) attack. The DDoS attack lasted for over one week and caused a virtual shut down of the payment gateway's service. The attackers demanded money from Authorize.net in exchange for stopping the attack. On July 2, 2009, at 11:00 pm PST, the entire web infrastructure for Authorize.Net (main website, merchant gateway website, etc.) went offline and stayed down all morning July 3
https://en.wikipedia.org/wiki/Power%20cycling
Power cycling is the act of turning a piece of equipment, usually a computer, off and then on again. Reasons for power cycling include having an electronic device reinitialize its set of configuration parameters or recover from an unresponsive state of its mission critical functionality, such as in a crash or hang situation. Power cycling can also be used to reset network activity inside a modem. It can also be among the first steps for troubleshooting an issue. Overview Power cycling can be done manually, usually using a switch on the device to be cycled; automatically, through some type of device, system, or network management monitoring and control; or by remote control; through a communication channel. In the data center environment, remote control power cycling can usually be done through a power distribution unit, over TCP/IP. In the home environment, this can be done through home automation powerline communications or IP protocols. Most Internet Service Providers publish a "how-to" on their website showing their customers the correct procedure to power cycle their devices. Power cycling is a standard diagnostic procedure usually performed first when the computer freezes. However, frequently power cycling a computer can cause thermal stress. Reset has an equal effect on the software but may be less problematic for the hardware as power is not interrupted. Historical uses On all Apollo missions to the moon, the landing radar was required to acquire the surface before a landing could be attempted. But on Apollo 14, the landing radar was unable to lock on. Mission control told the astronauts to cycle the power. They did, the radar locked on just in time, and the landing was completed. During the Rosetta mission to comet 67P/Churyumov–Gerasimenko, the Philae lander did not return the expected telemetry on awakening after arrival at the comet. The problem was diagnosed as "somehow a glitch in the electronics", engineers cycled the power, and the lander aw
https://en.wikipedia.org/wiki/Potential%20space
In anatomy, a potential space is a space between two adjacent structures that are normally pressed together (directly apposed). Many anatomic spaces are potential spaces, which means that they are potential rather than realized (with their realization being dynamic according to physiologic or pathophysiologic events). In other words, they are like an empty plastic bag that has not been opened (two walls collapsed against each other; no interior volume until opened) or a balloon that has not been inflated. The pleural space, between the visceral and parietal pleura of the lung, is a potential space. Though it only contains a small amount of fluid normally, it can sometimes accumulate fluid or air that widens the space. The pericardial space is another potential space that may fill with fluid (effusion) in certain disease states (e.g. pericarditis; a large pericardial effusion may result in cardiac tamponade). Examples costodiaphragmatic recess pericardial cavity epidural space (within the skull) subdural space peritoneal cavity buccal space See also Fascial spaces of the head and neck
https://en.wikipedia.org/wiki/Jaikoz
Jaikoz is a Java program used for editing and mass-tagging music file tags. Jaikoz generates acoustic fingerprints from music files using the AcoustId service; it can then look up metadata from MusicBrainz using the AcoustId, and it can additionally match against MusicBrainz or Discogs based on existing metadata. Matching is first attempted at album level, falling back to track level where an album-level match cannot be made. This allows Jaikoz to automatically fix most of a user's song collection. Jaikoz uses a relatively unusual spreadsheet metaphor for both viewing and editing data, and allows editing of over fifty fields through this spreadsheet interface. The underlying jaudiotagger tag library is released under the LGPL and is used by various Java applications. Jaikoz is commercially licensed software, written in Java 1.5 by Paul Taylor. A shareware version, in which changes can only be saved to 20 files during one use, is also available as a 30-day free trial. 10% of every sale is paid to the MetaBrainz Foundation to support MusicBrainz development. History Jaikoz was originally released in 2006 as a standalone music tagger without any MusicBrainz support, but support for MusicBrainz was soon added. Changes in Jaikoz have always reflected changes in MusicBrainz; for example, Jaikoz was the first application to make use of the new web service released as part of the MusicBrainz NGS release in 2011, and the first application to use the MusicBrainz seeding mechanism for adding new releases. Summary of features Acoustic matching using MusicBrainz and AcoustId to match songs based on the actual music MetaData matching using MusicBrainz and Discogs to match tracks from the metadata in your files, either automatically or manually Fixes artwork Supports multiple audio formats, all of which can be edited in the same way Export/Import metadata to/from a spreadsheet Delete duplicates based on MusicBrainz Id and/or acoustic fingerprint Find And Replace feature that can
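The following is a minimal sketch (not Jaikoz's own code) of the general fingerprint-then-lookup workflow described above, using the third-party pyacoustid library; the API key is a placeholder, and the library requires the Chromaprint fpcalc tool to be installed.

```python
# Sketch of the AcoustId fingerprint lookup workflow (illustrative only, not Jaikoz code).
# Requires the "pyacoustid" package and the Chromaprint "fpcalc" binary.
import acoustid

API_KEY = "your-acoustid-api-key"  # placeholder; register with AcoustId to obtain one

def identify(path):
    """Fingerprint one audio file and print candidate MusicBrainz recordings."""
    # acoustid.match() computes the fingerprint and queries the AcoustId web service,
    # yielding (score, musicbrainz_recording_id, title, artist) tuples.
    for score, recording_id, title, artist in acoustid.match(API_KEY, path):
        print(f"{score:.2f}  {artist} - {title}  (MBID: {recording_id})")

if __name__ == "__main__":
    identify("example.flac")
```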
https://en.wikipedia.org/wiki/Tektronix%204050
The Tektronix 4050 is a series of three desktop computers produced by Tektronix in the late 1970s through the early 1980s. The display technology is similar to that of the Tektronix 4010 terminal, using a storage tube display to avoid the need for video RAM. They are all-in-one designs with the display, keyboard, CPU and DC300 tape drive in a single desktop case. They also include a GPIB parallel bus interface for controlling lab and test equipment as well as connecting to external peripherals. A simple operating system and BASIC interpreter are included in ROM. A key concept of the systems is the use of a storage tube for the display. This allows the screen to retain images drawn to it, eliminating the need for a framebuffer, that is, computer memory devoted to the display. Most systems of the era had limited resolution due to the expense of the buffer needed to hold higher-resolution images, but this cost is eliminated in the 4050s, allowing the resolution to be as high as the hardware can handle: ostensibly 1024 by 1024, but limited by the physical layout of the screen to 1024 by 780. It also allows the machine to dedicate all of its memory to the programs running on it, as opposed to partitioning off a section for the buffer. Models The first model, the 4051, was based on the 8-bit Motorola 6800 running at 1 MHz. It normally shipped with 8 KB of RAM and was expandable using 8 KB modules to 32 KB. The remaining 32 KB of address space was reserved for ROM, which could be expanded using two external ROM cartridges of 8 KB each. It included six character sets in ROM and an extended dialect of BASIC that included various vector drawing commands. The 4051 was released in 1975 for the base price of . Adding the optional RS-232 interface allowed it to emulate a Tektronix 4012 terminal. The second model was the 4052, which in spite of the similar name was a very different system. It had a CPU based on four AMD 2901 4-bit bit-slice processors used together to make a single 16-bit p
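As a rough illustration of the memory saved by the storage-tube approach (my arithmetic, not a figure from the article), even a 1-bit-deep framebuffer at the 4050's effective resolution would have dwarfed the machine's maximum RAM:

```python
# Back-of-envelope framebuffer cost at the 4050's effective resolution (illustrative arithmetic).
width, height = 1024, 780          # addressable points actually visible on screen
bits_per_pixel = 1                 # even a bare 1-bit monochrome buffer
framebuffer_bytes = width * height * bits_per_pixel // 8
max_ram_bytes = 32 * 1024          # maximum RAM of the 4051

print(f"1-bit framebuffer: {framebuffer_bytes / 1024:.1f} KB")   # ~97.5 KB
print(f"Maximum system RAM: {max_ram_bytes / 1024:.0f} KB")      # 32 KB
# The storage tube retains the image on the phosphor itself, so none of this
# memory is needed and all RAM remains available to BASIC programs.
```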
https://en.wikipedia.org/wiki/Green%20report
The Green report was written by Andrew Conway Ivy, a medical researcher and vice president of the University of Illinois at Chicago. Ivy was in charge of the medical school and its hospitals. The report justified the testing of malaria vaccines on prisoners at Statesville Prison in Joliet, Illinois, in the 1940s. Ivy mentioned the report in the 1946 Nuremberg Medical Trial for Nazi war criminals, using it to refute any similarity between human experimentation in the United States and that conducted by the Nazis. Background Malaria experiments at Statesville Prison were publicized in the June 1945 edition of LIFE, in an article entitled "Prisoners Expose Themselves to Malaria". When Ivy testified at the 1946 Nuremberg Medical Trial for Nazi war criminals, he misled the trial about the report in order to strengthen the prosecution's case. Ivy stated that the committee had debated and issued the report, when in fact the committee had not yet met. It was only formed when Ivy departed for Nuremberg, after he requested that then-Illinois Governor Dwight Green convene a group that would advise on ethical considerations concerning medical experimentation. One account states that he wrote the report on his own after he cited its existence in the trial. It was later published in the Journal of the American Medical Association (JAMA). Notes Further reading Biological warfare United States Nuremberg Military Tribunals Human subject research in the United States
https://en.wikipedia.org/wiki/Eparterial%20bronchus
The eparterial bronchus (right superior lobar bronchus) is a branch of the right main bronchus given off about 2.5 cm from the bifurcation of the trachea. This branch supplies the superior lobe of the right lung and is the most superior of all secondary bronchi. It arises above the level of the right pulmonary artery, and for this reason is named the eparterial bronchus. All other distributions falling below the pulmonary artery are termed hyparterial. The eparterial bronchus is the only secondary bronchus with a specific name apart from the name of its corresponding lobe. Name The classification of eparterial and hyparterial is attributed to Swiss anatomist and anthropologist Christoph Theodor Aeby, and is central to his model of the anatomical lung. He presented this model in a monograph titled, "Der Bronchialbaum der Säugethiere und des Menschen, nebst Bemerkungen über den Bronchialbaum der Vögel und Reptilien".
https://en.wikipedia.org/wiki/T.%20Hasegawa
T. Hasegawa Co., Ltd. is a major producer of flavors and fragrances headquartered in Japan. As of 2021, it is one of the world's top ten flavor and fragrance companies. History T. Hasegawa was established in 1903 as Hasegawa Totaro Shoten in Tokyo, Japan by Totaro Hasegawa. In 1941, Shozo Hasegawa succeeded his father at the company. In 1961 T. Hasegawa Co., Ltd. was founded with Shozo Hasegawa as president and took over all business of Hasegawa Totaro Shoten. The new company established headquarters in the prestigious Nihon-bashi district of Tokyo. Strong demand for Hasegawa products led to the establishment of the company's first overseas production facilities in Lawndale, California in 1978 and the formation of T. Hasegawa USA. In 1989 this production facility moved to the city of Cerritos, California, where it is located to this day. On December 18, 1998, Tokujiro Hasegawa, a grandson of Totaro Hasegawa, was appointed president. Shozo Hasegawa was designated chairman of the board, and Ryoshiro Hayashi vice chairman. In March 2000 T. Hasegawa was listed on the Second Section of the Tokyo Stock Exchange, and by March 2001 it had moved to the First Section.
https://en.wikipedia.org/wiki/NFAT
Nuclear factor of activated T-cells (NFAT) is a family of transcription factors shown to be important in immune response. One or more members of the NFAT family is expressed in most cells of the immune system. NFAT is also involved in the development of cardiac, skeletal muscle, and nervous systems. NFAT was first discovered as an activator for the transcription of IL-2 in T cells (as a regulator of T cell immune response) but has since been found to play an important role in regulating many more body systems. NFAT transcription factors are involved in many normal body processes as well as in development of several diseases, such as inflammatory bowel diseases and several types of cancer. NFAT is also being investigated as a drug target for several different disorders. Family members The NFAT transcription factor family consists of five members NFATc1, NFATc2, NFATc3, NFATc4, and NFAT5. NFATc1 through NFATc4 are regulated by calcium signalling, and are known as the classical members of the NFAT family. NFAT5 is a more recently discovered member of the NFAT family that has special characteristics that differentiate it from other NFAT members. Calcium signalling is critical to activation of NFATc1-4 because calmodulin (CaM), a well-known calcium sensor protein, activates the serine/threonine phosphatase calcineurin (CN). Activated CN binds to its binding site located in the N-terminal regulatory domain of NFATc1-4 and rapidly dephosphorylates the serine-rich region (SRR) and SP-repeats which are also present in the N-terminus of the NFAT proteins. This dephosphorylation results in a conformational change that exposes a nuclear localization signal which promotes nuclear translocation. On the other hand, NFAT5 lacks a crucial part of the N-terminal regulatory domain which in the aforementioned group harbours the essential CN binding site. This makes NFAT5 activation completely independent of calcium signalling. It is, however, controlled by MAPK during osmotic str
https://en.wikipedia.org/wiki/Ombient
Ombient is the moniker under which Mike Hunter performs his completely improvised ambient/drone music. Being live and improvisational in nature, the music is representative of the feeling of the moment in which it is performed and of the subtle feedback between the audience and the performer. It features amplified guitar that is processed and layered using digital looping equipment, enabling a single guitar to produce symphonic levels of density. Over the last three years Ombient has delved deeply into the world of analog synthesis and, more recently, analog modular synthesis. Hunter also plays in the progressive/world/ambient/space rock band Brainstatik, works as a fractal artist, and hosts the long-running FM radio program "Music With Space" on WPRB 103.3 FM in Princeton.
https://en.wikipedia.org/wiki/Sankofa
(pronounced SAHN-koh-fah) is a word in the Twi language of Ghana meaning “to retrieve" (literally "go back and get"; - to return; - to go; - to fetch, to seek and take) and also refers to the Bono Adinkra symbol represented either with a stylized heart shape or by a bird with its head turned backwards while its feet face forward carrying a precious egg in its mouth. Sankofa is often associated with the proverb, “Se wo were fi na wosankofa a yenkyi," which translates as: "It is not wrong to go back for that which you have forgotten." The sankofa bird appears frequently in traditional Akan art, and has also been adopted as an important symbol in an African-American and African Diaspora context to represent the need to reflect on the past to build a successful future. It is one of the most widely dispersed adinkra symbols, appearing in modern jewelry, tattoos, and clothing. Akan symbolism The Akan people of Ghana use an adinkra symbol to represent the same concept. One version of it is similar to the eastern symbol of a heart, and another is that of a bird with its head turned backwards to symbolically capture an egg depicted above its back. It symbolizes taking from the past what is good and bringing it into the present in order to make positive progress through the benevolent use of knowledge. Adinkra symbols are used by the Akan people to express proverbs and other philosophical ideas. The sankofa bird also appears on carved wooden Akan stools, in Akan goldweights, on some ruler's state umbrella or parasol (ntuatire) finials and on the staff finials of some court linguists. It functions to foster mutual respect and unity in tradition. Use in North America and the United Kingdom During a building excavation in Lower Manhattan in 1991, a cemetery for free and enslaved Africans was discovered. Over 400 remains were identified, but one coffin in particular stood out. Nailed into its wooden lid were iron tacks, 51 of which formed an enigmatic, heart-shaped de
https://en.wikipedia.org/wiki/Hallerian%20physiology
Hallerian physiology was a theory competing with galvanism in Italy in the late 18th century. It is named after Albrecht von Haller, a Swiss physician who is considered the father of neurology. The Hallerians' fundamental tenet held that muscular movements were produced by a mechanical force, different from life and from the nervous system, and which operated beyond consciousness. The activity of this function could be controlled in dead and dissected animals by touching a metal knife to the muscle fiber or by a spark being discharged on them. The electricity operated only as a stimulus of irritability, and it was irritability which was the one, true cause of the contractions. Sources The Controversy on Animal Electricity in Eighteenth-Century Italy: Galvani, Volta and Others by Walter Bernardi Neurophysiology
https://en.wikipedia.org/wiki/Quantum%20dot%20solar%20cell
A quantum dot solar cell (QDSC) is a solar cell design that uses quantum dots as the absorbing photovoltaic material. It attempts to replace bulk materials such as silicon, copper indium gallium selenide (CIGS) or cadmium telluride (CdTe). Quantum dots have bandgaps that are adjustable across a wide range of energy levels by changing their size. In bulk materials, the bandgap is fixed by the choice of material(s). This property makes quantum dots attractive for multi-junction solar cells, where a variety of materials are used to improve efficiency by harvesting multiple portions of the solar spectrum. As of 2022, efficiency exceeds 18.1%. Quantum dot solar cells have the potential to increase the maximum attainable thermodynamic conversion efficiency of solar photon conversion up to about 66% by utilizing hot photogenerated carriers to produce higher photovoltages or higher photocurrents. Background Solar cell concepts In a conventional solar cell, light is absorbed by a semiconductor, producing an electron-hole (e-h) pair; the pair may be bound and is referred to as an exciton. This pair is separated by an internal electrochemical potential (present in p-n junctions or Schottky diodes) and the resulting flow of electrons and holes creates an electric current. The internal electrochemical potential is created by doping one part of the semiconductor interface with atoms that act as electron donors (n-type doping) and another with electron acceptors (p-type doping), resulting in a p-n junction. The generation of an e-h pair requires that the photons have energy exceeding the bandgap of the material. Effectively, photons with energies lower than the bandgap do not get absorbed, while those that are higher can quickly (within about 10⁻¹³ s) thermalize to the band edges, reducing output. The former limitation reduces current, while the thermalization reduces the voltage. As a result, semiconductor cells suffer a trade-off between voltage and current (which can be
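As a small numerical illustration of the bandgap cutoff described above (standard physics, not a result specific to any particular quantum dot material), the longest wavelength a cell can absorb follows directly from the bandgap energy:

```python
# Cutoff wavelength for absorption: photons with energy below the bandgap pass through.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # joules per electronvolt

def cutoff_wavelength_nm(bandgap_ev):
    """Longest absorbable wavelength (nm) for a given bandgap (eV)."""
    return H * C / (bandgap_ev * Q) * 1e9

# Tuning the dot size changes the effective bandgap, and with it the absorbed band.
for eg in (0.7, 1.1, 1.5, 2.0):
    print(f"Eg = {eg:.1f} eV -> absorbs wavelengths shorter than {cutoff_wavelength_nm(eg):.0f} nm")
```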
https://en.wikipedia.org/wiki/Firing%20squad%20synchronization%20problem
The firing squad synchronization problem is a problem in computer science and cellular automata in which the goal is to design a cellular automaton that, starting with a single active cell, eventually reaches a state in which all cells are simultaneously active. It was first proposed by John Myhill in 1957 and published (with a solution by John McCarthy and Marvin Minsky) in 1962 by Edward F. Moore. Problem statement The name of the problem comes from an analogy with real-world firing squads: the goal is to design a system of rules according to which an officer can command an execution detail to fire so that its members fire their rifles simultaneously. More formally, the problem concerns cellular automata, arrays of finite state machines called "cells" arranged in a line, such that at each time step each machine transitions to a new state as a function of its previous state and the states of its two neighbors in the line. For the firing squad problem, the line consists of a finite number of cells, and the rule according to which each machine transitions to the next state should be the same for all of the cells interior to the line, but the transition functions of the two endpoints of the line are allowed to differ, as these two cells are each missing a neighbor on one of their two sides. The states of each cell include three distinct states: "active", "quiescent", and "firing", and the transition function must be such that a cell that is quiescent and whose neighbors are quiescent remains quiescent. Initially, at time , all states are quiescent except for the cell at the far left (the general), which is active. The goal is to design a set of states and a transition function such that, no matter how long the line of cells is, there exists a time such that every cell transitions to the firing state at time , and such that no cell belongs to the firing state prior to time . Solutions The first solution to the FSSP was found by John McCarthy and Marvin Minsky and
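The sketch below illustrates only the machinery of the problem statement, a finite line of identical cells updated synchronously from a shared transition table, with special handling at the two ends; the transition table shown is a placeholder, and constructing one that actually makes every line fire simultaneously is the substance of the problem.

```python
# Skeleton of a firing-squad-style cellular automaton (the rule table here is a placeholder,
# NOT a solution to the synchronization problem).
QUIESCENT, ACTIVE, FIRING = "q", "a", "f"
BOUNDARY = "#"  # stands in for the missing neighbor at each end of the line

def step(cells, rule):
    """Apply one synchronous update; every cell uses the same rule table."""
    new = []
    for i, state in enumerate(cells):
        left = cells[i - 1] if i > 0 else BOUNDARY
        right = cells[i + 1] if i < len(cells) - 1 else BOUNDARY
        # Unlisted neighborhoods keep their state, so a quiescent cell with
        # quiescent neighbors remains quiescent, as the problem requires.
        new.append(rule.get((left, state, right), state))
    return new

def run(n, rule, max_steps=100):
    cells = [ACTIVE] + [QUIESCENT] * (n - 1)   # the "general" is the leftmost cell
    for t in range(max_steps):
        if all(c == FIRING for c in cells):
            return t                            # every cell fired at the same step
        cells = step(cells, rule)
    return None

# Placeholder rule that only handles a one-cell "squad", just to exercise the machinery.
toy_rule = {(BOUNDARY, ACTIVE, BOUNDARY): FIRING}
print(run(1, toy_rule))   # 1; longer lines need a genuine solution rule set
```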
https://en.wikipedia.org/wiki/Security%20and%20safety%20features%20new%20to%20Windows%20Vista
There are a number of security and safety features new to Windows Vista, most of which are not available in any prior Microsoft Windows operating system release. Beginning in early 2002 with Microsoft's announcement of its Trustworthy Computing initiative, a great deal of work has gone into making Windows Vista a more secure operating system than its predecessors. Internally, Microsoft adopted a "Security Development Lifecycle" with the underlying ethos of "Secure by design, secure by default, secure in deployment". New code for Windows Vista was developed with the SDL methodology, and all existing code was reviewed and refactored to improve security. Some specific areas where Windows Vista introduces new security and safety mechanisms include User Account Control, parental controls, Network Access Protection, a built-in anti-malware tool, and new digital content protection mechanisms. User Account Control User Account Control is a new infrastructure that requires user consent before allowing any action that requires administrative privileges. With this feature, all users, including users with administrative privileges, run in a standard user mode by default, since most applications do not require higher privileges. When some action is attempted that needs administrative privileges, such as installing new software or changing system or security settings, Windows will prompt the user whether to allow the action or not. If the user chooses to allow, the process initiating the action is elevated to a higher privilege context to continue. While standard users need to enter a username and password of an administrative account to get a process elevated (Over-the-shoulder Credentials), an administrator can choose to be prompted just for consent or ask for credentials. If the user doesn't click Yes, after 30 seconds the prompt is denied. UAC asks for credentials in a Secure Desktop mode, where the entire screen is faded out and temporarily disabled, to present only the
https://en.wikipedia.org/wiki/Nesfatin-1
Nesfatin-1 is a neuropeptide produced in the hypothalamus of mammals. It participates in the regulation of hunger and fat storage. Increased nesfatin-1 in the hypothalamus contributes to diminished hunger, a 'sense of fullness', and a potential loss of body fat and weight. A study of metabolic effects of nesfatin-1 in rats was done in which subjects administered nesfatin-1 ate less, used more stored fat and became more active. Nesfatin-1-induced inhibition of feeding may be mediated through the inhibition of orexigenic neurons. In addition, the protein stimulated insulin secretion from the pancreatic beta cells of both rats and mice. Biochemistry Nesfatin-1 is a polypeptide encoded in the N-terminal region of the protein precursor, Nucleobindin-2 (NUCB2). Recombinant human Nesfatin-1 is a 9.7 kDa protein containing 82 amino acid residues. Nesfatin-1 is expressed in the hypothalamus, in other areas of the brain, and in pancreatic islets, gastric endocrine cells and adipocytes. Satiety Nesfatin/NUCB2 is expressed in the appetite-control hypothalamic nuclei such as paraventricular nucleus (PVN), arcuate nucleus (ARC), supraoptic nucleus (SON) of hypothalamus, lateral hypothalamic area (LHA), and zona incerta in rats. Nesfatin-1 immunoreactivity was also found in the brainstem nuclei such as nucleus of the solitary tract (NTS) and Dorsal nucleus of vagus nerve. Brain Nesfatin-1 can cross the blood–brain barrier without saturation. The receptors within the brain are in the hypothalamus and the solitary nucleus, where nesfatin-1 is believed to be produced via peroxisome proliferator-activated receptors (PPARs). It appears there is a relationship between nesfatin-1 and cannabinoid receptors. Nesfatin-1-induced inhibition of feeding may be mediated through the inhibition of orexigenic NPY neurons. Nesfatin/NUCB2 expression has been reported to be modulated by starvation and re-feeding in the Paraventricular nucleus (PVN) and supraoptic nucleus (SON) of the brain. Ne
https://en.wikipedia.org/wiki/Roemer%20model%20of%20political%20competition
The Roemer model of political competition is a game between political parties in which each party announces a multidimensional policy vector. Since Nash equilibria do not normally exist when the policy space is multidimensional, John Roemer introduced the concept of party-unanimity Nash equilibrium (PUNE), which can be considered a generalization of the concept of Nash equilibrium in models of political competition. It is also a generalization of the Wittman model of political competition. In Roemer's model, all political parties are assumed to consist of three types of factions—opportunists, militants, and reformers. Opportunists seek solely to maximize the party's vote share in an election; militants seek to announce (and implement) the preferred policy of the average party member; and reformers have an objective function that is a convex combination of the objective functions of the opportunists and militants. It has been shown that the existence of reformers has no effect on what policies the party announces. With two parties, a pair of policy announcements constitute a PUNE if and only if the reformers and militants of any given party do not unanimously agree to deviate from their announced policy, given the policy put forth by the other party. In other words, if a pair of policies constitute a PUNE, then it should not be the case that both factions of a party can be made weakly better off (and one faction strictly better off) by deviating from the policy that they put forward. Such unanimity to deviate can be rare, and thus PUNEs are more likely to exist than regular Nash equilibria. Although there are no known cases where PUNEs do not exist, no simple necessary and sufficient conditions for the existence of non-trivial PUNEs have yet been offered. (A nontrivial PUNE is one in which no party offers the ideal policy of either its militants or opportunists.) The question of the existence of non-trivial PUNEs remains an important open question in the the
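In notation of my own choosing (the excerpt above does not fix symbols), the three factions' objectives in a two-party version can be written as follows, with party 1 announcing policy t1 against party 2's announcement t2:

```latex
% Illustrative formalization; the symbols are chosen here, not taken from the excerpt.
% \pi_1(t_1, t_2): party 1's vote share;  v_1(\cdot): utility of party 1's average member.
\begin{aligned}
\text{Opportunists:}\quad & \max_{t_1}\ \pi_1(t_1, t_2) \\
\text{Militants:}\quad    & \max_{t_1}\ v_1(t_1) \\
\text{Reformers:}\quad    & \max_{t_1}\ \alpha\,\pi_1(t_1, t_2) + (1-\alpha)\,v_1(t_1),
    \qquad \alpha \in (0,1)
\end{aligned}
```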
https://en.wikipedia.org/wiki/Pappus%20graph
In the mathematical field of graph theory, the Pappus graph is a bipartite, 3-regular, undirected graph with 18 vertices and 27 edges, formed as the Levi graph of the Pappus configuration. It is named after Pappus of Alexandria, an ancient Greek mathematician who is believed to have discovered the "hexagon theorem" describing the Pappus configuration. All the cubic, distance-regular graphs are known; the Pappus graph is one of the 13 such graphs. The Pappus graph has rectilinear crossing number 5, and is the smallest cubic graph with that crossing number . It has girth 6, diameter 4, radius 4, chromatic number 2, chromatic index 3 and is both 3-vertex-connected and 3-edge-connected. It has book thickness 3 and queue number 2. The Pappus graph has a chromatic polynomial equal to: The name "Pappus graph" has also been used to refer to a related nine-vertex graph, with a vertex for each point of the Pappus configuration and an edge for every pair of points on the same line; this nine-vertex graph is 6-regular, is the complement graph of the union of three disjoint triangle graphs, and is the complete tripartite graph K3,3,3. The first Pappus graph can be embedded in the torus to form a self-Petrie dual regular map with nine hexagonal faces; the second, to form a regular map with 18 triangular faces. The two regular toroidal maps are dual to each other. Algebraic properties The automorphism group of the Pappus graph is a group of order 216. It acts transitively on the vertices, on the edges and on the arcs of the graph. Therefore the Pappus graph is a symmetric graph. It has automorphisms that take any vertex to any other vertex and any edge to any other edge. According to the Foster census, the Pappus graph, referenced as F018A, is the only cubic symmetric graph on 18 vertices. The characteristic polynomial of the Pappus graph is . It is the only graph with this characteristic polynomial, making it a graph determined by its spectrum. Gallery
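As a quick sanity check of the basic parameters listed above, the graph is available as a built-in generator in the networkx library (assuming a reasonably recent networkx version):

```python
# Verify some of the Pappus graph's stated parameters using networkx's built-in generator.
import networkx as nx

G = nx.pappus_graph()
degrees = {d for _, d in G.degree()}

print(G.number_of_nodes(), G.number_of_edges())  # expected: 18 27
print(degrees == {3})                            # 3-regular -> True
print(nx.is_bipartite(G))                        # True
print(nx.diameter(G), nx.radius(G))              # expected: 4 4
```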
https://en.wikipedia.org/wiki/Atomic%20Runner%20Chelnov
Atomic Runner Chelnov is a Japanese runner arcade video game developed and published by Data East in 1988. Gameplay The player controls Chelnov's movements with the eight-way joystick, and the three buttons to attack, jump, or turn around. Six types of weapons can be obtained during the game: laser, fire rings, boomerangs, spike bola balls, spike ball whip, missiles. By collecting power-ups you can improve Chelnov's attack power, rapid-firing capability, attack range or jumping height. The game is a forced side-scrolling game where the screen continually scrolls to the left at a constant speed unless the player is fighting a boss, in which the screen will stop scrolling. Chelnov will continue to run with the screen even if the player lets go of the joystick. Though the player can move to the left or right of the scrolling screen by entering the corresponding direction on the joystick, it is impossible to stop or move backwards except when fighting a boss (Chelnov can turn backwards while jumping). The main character's sprite animation is highly detailed and smooth for its time, comparable to the level of Karateka and the early Prince of Persia games. The ending screen appears when the player finishes all seven levels of the game. Plot The player takes the role of Chelnov (Челнов), a coal miner who miraculously survives the malfunction and explosion of a nuclear power plant. Chelnov's body gains superhuman abilities due to the massive amount of radiation given off by the explosion, and a secret organization seeks to harness those abilities for its own evil purposes. Chelnov must battle and defeat the secret organization using his newfound abilities. Development Atomic Runner Chelnov was controversial at the time of release. The setting, where a coal miner is caught in a nuclear accident, a hammer and sickle visible on the game's opening screen, and the game's title (Chernobyl is written チェルノブイリ in Japanese) led many to interpret the game as a parody of the Chern
https://en.wikipedia.org/wiki/Proteinoplast
Proteinoplasts (sometimes called proteoplasts, aleuroplasts, and aleuronaplasts) are specialized organelles found only in plant cells. Proteinoplasts belong to a broad category of organelles known as plastids. Plastids are specialized double-membrane organelles found in plant cells that perform a variety of functions, such as energy metabolism and other biological reactions. Multiple types of plastids are recognized, including leucoplasts, chromoplasts, and chloroplasts; they are divided into these categories based on characteristics such as size, function and physical traits. Chromoplasts help to synthesize and store large amounts of carotenoids. Chloroplasts are photosynthesizing structures that convert light energy into chemical energy for the plant. Leucoplasts are a colorless type of plastid, which means that no photosynthesis occurs there. The colorless appearance of the leucoplast is due to its lacking the structural components of the thylakoids that give chloroplasts and chromoplasts their pigmentation. Proteinoplasts are a subtype of leucoplast that contains proteins for storage. They contain crystalline bodies of protein and can be the sites of enzyme activity involving those proteins. Proteinoplasts are found in many seeds, such as brazil nuts, peanuts and pulses. Although all plastids contain high concentrations of protein, proteinoplasts were identified in the 1960s and 1970s as having large protein inclusions that are visible with both light microscopes and electron microscopes. Other subtypes of leucoplast include amyloplasts and elaioplasts. Amyloplasts store and synthesize the starch found in plants, while elaioplasts synthesize and store lipids in plant cells. See also Chloroplast and etioplast Chromoplast Leucoplast Amyloplast Elaioplast
https://en.wikipedia.org/wiki/Principle%20of%20covariance
In physics, the principle of covariance emphasizes the formulation of physical laws using only those physical quantities the measurements of which the observers in different frames of reference could unambiguously correlate. Mathematically, the physical quantities must transform covariantly, that is, under a certain representation of the group of coordinate transformations between admissible frames of reference of the physical theory. This group is referred to as the covariance group. The principle of covariance does not require invariance of the physical laws under the group of admissible transformations although in most cases the equations are actually invariant. However, in the theory of weak interactions, the equations are not invariant under reflections (but are, of course, still covariant). Covariance in Newtonian mechanics In Newtonian mechanics the admissible frames of reference are inertial frames with relative velocities much smaller than the speed of light. Time is then absolute and the transformations between admissible frames of reference are Galilean transformations which (together with rotations, translations, and reflections) form the Galilean group. The covariant physical quantities are Euclidean scalars, vectors, and tensors. An example of a covariant equation is Newton's second law, m dv/dt = F, where the covariant quantities are the mass m of a moving body (scalar), the velocity v of the body (vector), the force F acting on the body, and the invariant time t. Covariance in special relativity In special relativity the admissible frames of reference are all inertial frames. The transformations between frames are the Lorentz transformations which (together with the rotations, translations, and reflections) form the Poincaré group. The covariant quantities are four-scalars, four-vectors etc., of the Minkowski space (and also more complicated objects like bispinors and others). An example of a covariant equation is the Lorentz force equation of motion of a cha
https://en.wikipedia.org/wiki/Semisimple%20operator
In mathematics, a linear operator T : V → V on a vector space V is semisimple if every T-invariant subspace has a complementary T-invariant subspace. If T is a semisimple linear operator on V, then V is a semisimple representation of T. Equivalently, a linear operator is semisimple if its minimal polynomial is a product of distinct irreducible polynomials. A linear operator on a finite dimensional vector space over an algebraically closed field is semisimple if and only if it is diagonalizable. Over a perfect field, the Jordan–Chevalley decomposition expresses an endomorphism as a sum of a semisimple endomorphism s and a nilpotent endomorphism n such that both s and n are polynomials in x. See also Jordan–Chevalley decomposition Notes
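A standard small example, routine linear algebra rather than something taken from the excerpt above, shows how the minimal-polynomial criterion works: the nilpotent Jordan block below is not semisimple.

```latex
% A non-semisimple operator on k^2 (any field k):
T = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},
\qquad
\text{minimal polynomial } m_T(x) = x^2 .
% The line spanned by e_1 is T-invariant, but no complementary line is T-invariant,
% so T is not semisimple; consistently, m_T is not a product of distinct irreducibles.
```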
https://en.wikipedia.org/wiki/Ophthalmia
Ophthalmia (also called ophthalmitis) is inflammation of the eye. It results in congestion of the eyeball, often eye-watering, redness and swelling, itching and burning, and a general feeling of irritation under the eyelids. Ophthalmia can have different causes, such as infection from bacteria, viruses, fungi, or may result from a physical trauma to the eye, chemical irritation, and allergies. A bacteria infection can result in a mucus and pus secretion. Severe cases of ophthalmia can cause blindness if not treated, especially in newborns, who contract it from the environment in the womb. Treatments vary according to the nature of the cause, with minor irritations going away on their own. Types Types include sympathetic ophthalmia (inflammation of both eyes following trauma to one eye), gonococcal ophthalmia, trachoma or "Egyptian" ophthalmia, ophthalmia neonatorum (a conjunctivitis of the newborn due to either of the two previous pathogens), photophthalmia and actinic conjunctivitis (inflammation resulting from prolonged exposure to ultraviolet rays), and others. Noted historical cases Aristodemus, a Spartan captain during the Second Persian invasion of Greece, was affected by ophthalmia and was thus unable to fight at the Battle of Thermopylae (of the famous Spartan 300). However, he fought bravely and died at the Battle of Plataea. Due to the ophthalmia, and his absence from the first battle, he was not buried with proper funeral rites of a Spartan captain. Cicero, on 1 March 49 BC wrote to Atticus that he had ophthalmia. Eratosthenes, who among other things was a Greek geographer and mathematician, contracted ophthalmia as he aged, becoming blind around 195 BC, depressing him and causing him to voluntarily starve himself to death. He died in 194 BC at the age of 82. Hannibal's sight was lost in his right eye in 217 BC by what was likely ophthalmia. He lost the sight while crossing a swamp area on a four-day march through water early in his Italian campaign.
https://en.wikipedia.org/wiki/Constant%20false%20alarm%20rate
Constant false alarm rate (CFAR) detection refers to a common form of adaptive algorithm used in radar systems to detect target returns against a background of noise, clutter and interference. Principle In the radar receiver, the returning echoes are typically received by the antenna, amplified, down-converted to an intermediate frequency, and then passed through detector circuitry that extracts the envelope of the signal, known as the video signal. This video signal is proportional to the power of the received echo. It comprises the desired echo signal as well as the unwanted signals from internal receiver noise and external clutter and interference. The term video refers to the resulting signal being appropriate for display on a cathode ray tube, or "video screen". The role of the constant false alarm rate circuitry is to determine the power threshold above which any return can be considered to probably originate from a target as opposed to one of the spurious sources. If this threshold is too low, more real targets will be detected, but at the expense of increased numbers of false alarms. Conversely, fewer targets will be detected if the threshold is too high, but the number of false alarms will also be low. In most radar detectors, the threshold is set to achieve a required probability of false alarm (equivalently, false alarm rate or time between false alarms). Suppose the background against which targets are to be detected is constant with time and space. In that case, a fixed threshold level can be chosen that provides a specified probability of false alarm, governed by the probability density function of the noise, which is usually assumed to be Gaussian. The probability of detection is then a function of the signal-to-noise ratio of the target return. However, in most fielded systems, unwanted clutter and interference sources mean that the noise level changes both spatially and temporally. In this case, a changing threshold can be used, where the thresho
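A common way to realize such an adaptive threshold is cell-averaging CFAR, in which the noise level for each cell under test is estimated from neighbouring range cells; the sketch below is a simplified one-dimensional version with parameter values chosen purely for illustration.

```python
# Simplified 1-D cell-averaging CFAR (illustrative parameter values).
import numpy as np

def ca_cfar(power, num_train=16, num_guard=2, scale=5.0):
    """Return a boolean detection mask for a vector of received power samples.

    For each cell under test, the noise level is estimated as the mean power of
    `num_train` training cells on each side, skipping `num_guard` guard cells,
    and the threshold is that estimate times `scale` (which a real design sets
    from the required probability of false alarm).
    """
    n = len(power)
    detections = np.zeros(n, dtype=bool)
    half = num_train + num_guard
    for i in range(half, n - half):
        leading = power[i - half : i - num_guard]
        trailing = power[i + num_guard + 1 : i + half + 1]
        noise_est = np.mean(np.concatenate([leading, trailing]))
        detections[i] = power[i] > scale * noise_est
    return detections

# Toy example: exponential noise floor plus one strong return.
rng = np.random.default_rng(0)
signal = rng.exponential(scale=1.0, size=200)
signal[120] += 30.0                             # injected target echo
print(np.flatnonzero(ca_cfar(signal)))          # expected to include index 120
```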
https://en.wikipedia.org/wiki/Constant%20amplitude%20zero%20autocorrelation%20waveform
In signal processing, a Constant Amplitude Zero AutoCorrelation waveform (CAZAC) is a periodic complex-valued signal with modulus one and out-of-phase periodic (cyclic) autocorrelations equal to zero. CAZAC sequences find application in wireless communication systems, for example in 3GPP Long Term Evolution for synchronization of mobile phones with base stations. Zadoff–Chu sequences are well-known CAZAC sequences with special properties. Example CAZAC Sequence For a CAZAC (Chu) sequence of length N, where M is relatively prime to N, the k-th symbol u(k), for k = 0, 1, ..., N − 1, is given by: u(k) = exp(jπMk²/N) for even N, and u(k) = exp(jπMk(k+1)/N) for odd N. Power Spectrum of CAZAC Sequence The power spectrum of a CAZAC sequence is flat. If we have a CAZAC sequence, its time-domain cyclic autocorrelation is an impulse, and the discrete Fourier transform of an impulse is flat. Since the power spectrum is the discrete Fourier transform of the autocorrelation, the power spectrum is also flat.
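A short numerical check of these properties, using the even-length Chu construction given above and only numpy:

```python
# Generate a length-N Chu sequence (N even, gcd(M, N) = 1) and check the CAZAC properties.
import numpy as np

N, M = 64, 3
k = np.arange(N)
u = np.exp(1j * np.pi * M * k**2 / N)             # constant-amplitude sequence, |u[k]| = 1

# Constant amplitude
print(np.allclose(np.abs(u), 1.0))                # True

# Zero cyclic autocorrelation at all nonzero lags (computed via the DFT)
r = np.fft.ifft(np.abs(np.fft.fft(u))**2)         # cyclic autocorrelation
print(np.allclose(r[1:], 0.0, atol=1e-9))         # True: impulse-like autocorrelation

# Flat power spectrum
print(np.allclose(np.abs(np.fft.fft(u))**2, N))   # True: |U[f]|^2 is constant
```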
https://en.wikipedia.org/wiki/Comparison%20of%20firewalls
This is a comparison of firewalls. Firewall software Firewall appliances Firewall rule-set Appliance-UTM filtering features comparison Notes Firewall rule-set advanced features comparison Firewall's other features comparison Notes Non-Firewall extra features comparison These are not strictly firewall features, but are sometimes bundled with firewall software or appliance. Features are also marked "yes" if an external module can be installed that meets the criteria. Notes See also Internet security Comparison of antivirus software Next-generation firewall
https://en.wikipedia.org/wiki/Tracheobronchial%20lymph%20nodes
The tracheobronchial lymph nodes are lymph nodes that are located around the division of the trachea and main bronchi. Structure These lymph nodes form four main groups: paratracheal, tracheobronchial, bronchopulmonary and pulmonary nodes. Paratracheal nodes are located on either side of the trachea. Tracheobronchial nodes can be divided into three groups: the left and right superior tracheobronchial nodes and the inferior tracheobronchial node. The two superior tracheobronchial nodes are located on either side of the trachea just before its bifurcation. The inferior tracheobronchial node is located just below the bifurcation, in the angle between the two bronchi. Bronchopulmonary nodes are situated in the hilum of each lung. Pulmonary nodes are embedded in the lung substance on the larger branches of the bronchi. The afferents of the tracheobronchial glands drain the lungs and bronchi, the thoracic part of the trachea and the heart; some of the efferents of the posterior mediastinal glands also end in this group. Their efferent vessels ascend upon the trachea and unite with efferents of the internal mammary and anterior mediastinal glands to form the right and left bronchomediastinal trunks.
https://en.wikipedia.org/wiki/Lamellar%20bodies
In cell biology, lamellar bodies (otherwise known as lamellar granules, membrane-coating granules (MCGs), keratinosomes or Odland bodies) are secretory organelles found in type II alveolar cells in the lungs, and in keratinocytes in the skin. They are oblong structures, appearing about 300-400 nm in width and 100-150 nm in length in transmission electron microscopy images. Lamellar bodies in the alveoli of the lungs fuse with the cell membrane and release pulmonary surfactant into the extracellular space. Role in lungs In alveolar cells the phosphatidylcholines (choline-based phospholipids) that are stored in the lamellar bodies serve as pulmonary surfactant after being released from the cell. In 1964, using transmission electron microscopy, which at that time was a relatively new tool for ultrastructural elucidation, John Balis identified the presence of lamellar bodies in type II alveolar cells, and further noted that upon their exocytotic migration to the alveolar surface, lamellar contents would uniformly unravel and spread along the circumference of the alveolus, thus lowering surface tension and similarly, the required alveolar inflation force. Role in epidermis In the upper stratum spinosum and stratum granulosum layers of the epidermis, lamellar bodies are secreted from keratinocytes, resulting in the formation of an impermeable, lipid-containing membrane that serves as a water barrier and is required for correct skin barrier function. These bodies release components that are required for skin shedding (desquamation) in the uppermost epidermal layer, the stratum corneum. These components include lipids (e.g. glucosylceramides), hydrolytic enzymes (e.g. proteases, acid phosphatases, glucosidases, lipases) and proteins (e.g. corneodesmosin). Lamellar bodies have been observed to contain distinct aggregates of the secreted components glucosylceramide, cathepsin D, KLK7, KLK8 and corneodesmosin. Transportation of molecules via lamellar bodies is thought to
https://en.wikipedia.org/wiki/Tricholoma%20equestre
Tricholoma equestre or Tricholoma flavovirens, commonly known as the man on horseback or yellow knight, is a widely eaten but arguably toxic fungus of the genus Tricholoma that forms ectomycorrhiza with pine trees. Known as Grünling in German, gąska zielonka in Polish, míscaro in Portuguese and canari in French, it has been treasured as an edible mushroom worldwide and is especially abundant in France and Central Portugal. Although it is regarded as quite tasty, cases of poisoning from eating T. equestre have been reported. Research has revealed it to have poisonous properties, but these claims are disputed. Taxonomy and naming Tricholoma equestre was known to Linnaeus, who officially described it in Volume Two of his Species Plantarum in 1753, giving it the name Agaricus equestris, predating a description of Agaricus flavovirens by Persoon in 1793. Thus this specific name, meaning "of or pertaining to horses" in Latin, takes precedence over Tricholoma flavovirens, the other scientific name by which this mushroom has been known. It was placed in the genus Tricholoma by the German mycologist Paul Kummer in his 1871 work Der Führer in die Pilzkunde. The generic name derives from the Greek trichos/τριχος 'hair' and loma/λωμα 'hem', 'fringe' or 'border'. Common names include the man-on-horseback, yellow knight, and saddle-shaped tricholoma. Description The cap ranges from in width and is usually yellow with brownish areas, particularly at the centre. The stem is 4–10 cm long and 1–4 cm wide, is yellow, and brownish at the base. The gills are also yellow in colour and the spores are white. The skin layer covering the cap is sticky and can be peeled off. Toxicity This species was for a long time highly regarded as one of the tastier edible species (and in some guides still is), and sold in European markets; medieval French knights allegedly reserved this species for themselves, leaving the lowly bovine bolete (Suillus bovinus) for the peasants. Concern was first raised in southwestern Fr
https://en.wikipedia.org/wiki/Luigi%20Fantappi%C3%A8
Luigi Fantappiè (15 September 1901 – 28 July 1956) was an Italian mathematician, known for work in mathematical analysis and for creating the theory of analytic functionals; he was a student and follower of Vito Volterra. Later in life, he proposed scientific theories of sweeping scope. Biography Luigi Fantappiè was born in Viterbo, and studied at the University of Pisa, graduating in mathematics in 1922. After time spent abroad, he was offered a chair by the University of Florence in 1926, and a year later by the University of Palermo. He spent the years 1934 to 1939 at the University of São Paulo, Brazil, collaborating with Benedito Castrucci, a noted Italian-Brazilian mathematician. In 1939 he was offered a chair at the University of Rome. In 1941 he concluded that negative entropy has qualities that are associated with life: the cause of processes driven by negative entropy lies in the future, just as living beings work toward a better tomorrow. A process that is driven by negative entropy will increase order with time, as all forms of life tend to do. This was a very controversial view at the time and not at all accepted by his colleagues. In his view, negative entropy is associated with life in the same way as consciousness is, and consciousness could be a process based on negative entropy. In 1942 he put forth a unified theory of physics and biology, and the syntropy concept. In 1952 he started to work on a unified physical theory called projective relativity, for which, he asserted, special relativity was a limiting case. Giuseppe Arcidiacono worked with him on this theory. See also Analytic functional Andreotti–Norguet formula de Sitter invariant special relativity Negentropy Books Notes
https://en.wikipedia.org/wiki/Sandstorm%20Enterprises
Sandstorm Enterprises was an American computer security software vendor founded in 1998 by Simson Garfinkel, James van Bokkelen, Gene Spafford, and Dan Geer. In January 2010, it was purchased by NIKSUN, Inc. Sandstorm was located in the greater Boston area. Sandstorm's major products were PhoneSweep, the first commercial multi-line telephone scanner (a war dialer), introduced in 1998, and NetIntercept, a commercial network forensics tool, introduced in 2001. Designed as a second-generation network analysis tool, NetIntercept operated primarily at the level of TCP and UDP data streams and the application-layer objects they transport. In 2002 Sandstorm purchased LanWatch, a commercial packet-oriented LAN monitor originally developed by FTP Software. LanWatch was sold as a separate product, but much of its functionality was used by NetIntercept to display individual packets. As of 2019, the PhoneSweep product is still sold and supported by NIKSUN. Core parts of the NetIntercept product also still exist, as incorporated into NIKSUN's own NetDetector network forensics product line.
https://en.wikipedia.org/wiki/Spectral%20flux
Spectral flux is a measure of how quickly the power spectrum of a signal is changing, calculated by comparing the power spectrum for one frame against the power spectrum from the previous frame. More precisely, it is usually calculated as the L2-norm (also known as the Euclidean distance) between the two normalised spectra. Calculated this way, the spectral flux is not dependent upon overall power (since the spectra are normalised), nor on phase considerations (since only the magnitudes are compared). The spectral flux can be used to determine the timbre of an audio signal, or in onset detection, among other things. Variations Some implementations use the L1-norm rather than the L2-norm (i.e. the sum of absolute differences rather than the sum of squared differences). Some implementations do not normalise the spectra. For onset detection, increases in energy are important (not decreases), so some algorithms only include values calculated from bins in which the energy is increasing.
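A compact sketch of the calculation as described above (L2 norm between successive normalised magnitude spectra), using numpy; the frame and hop sizes are arbitrary illustrative choices:

```python
# Spectral flux: L2 distance between successive normalised magnitude spectra.
import numpy as np

def spectral_flux(signal, frame_size=1024, hop=512):
    """Return one flux value per pair of consecutive frames."""
    window = np.hanning(frame_size)
    frames = [signal[i:i + frame_size] * window
              for i in range(0, len(signal) - frame_size, hop)]
    # Magnitude spectra, each normalised so overall power does not affect the flux.
    spectra = []
    for frame in frames:
        mag = np.abs(np.fft.rfft(frame))
        norm = np.linalg.norm(mag)
        spectra.append(mag / norm if norm > 0 else mag)
    return np.array([np.linalg.norm(curr - prev)
                     for prev, curr in zip(spectra, spectra[1:])])

# Toy input: a sine wave whose frequency jumps halfway through produces a spike in flux.
sr = 8000
t = np.arange(sr) / sr
tone = np.concatenate([np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t)])
print(spectral_flux(tone).round(3))
```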
https://en.wikipedia.org/wiki/Ekman%20current%20meter
The Ekman current meter is a mechanical flowmeter invented by Vagn Walfrid Ekman, a Swedish oceanographer, in 1903. It comprises a propeller with a mechanism to record the number of revolutions, a compass and a recorder with which to record the direction, and a vane that orients the instrument so the propeller faces the current. It is mounted on a free-swinging vertical axis suspended from a wire and has a weight attached below. The balanced propeller, with four to eight blades, rotates inside a protective ring. The position of a lever controls the propeller. In the down position the propeller is stopped and the instrument is lowered; after reaching the desired depth, a weight called a messenger is dropped to move the lever into the middle position, which allows the propeller to turn freely. When the measurement has been taken, another weight is dropped to push the lever to its highest position, at which the propeller is again stopped. The propeller revolutions are counted via a simple mechanism that gears down the revolutions and counts them on an indicator dial. The direction is indicated by a device connected to the directional vane that drops a small metal ball about every 100 revolutions. The ball falls into one of thirty-six compartments in the bottom of the compass box that indicate direction in increments of 10 degrees. If the direction changes while the measurement is being performed, the balls will drop into separate compartments and a weighted mean is taken to determine the average current direction. This is a simple and reliable instrument whose main disadvantage is that it must be hauled up to be read and reset after each measurement. Ekman solved this problem by designing a repeating current meter which could take up to forty-seven measurements before needing to be hauled up and reset. This device used a more complicated system of dropping small numbered metal balls at regular intervals to record the separate measurements. Bibliography Harald U. Sverdrup,
https://en.wikipedia.org/wiki/Multi-attribute%20global%20inference%20of%20quality
Multi-attribute global inference of quality (MAGIQ) is a multi-criteria decision analysis technique. MAGIQ is based on a hierarchical decomposition of comparison attributes and rating assignment using rank order centroids. Description The MAGIQ technique is used to assign a single, overall measure of quality to each member of a set of systems where each system has an arbitrary number of comparison attributes. The MAGIQ technique has features similar to the analytic hierarchy process and the simple multi-attribute rating technique exploiting ranks (SMARTER) technique. The MAGIQ technique was first published by James D. McCaffrey. The MAGIQ process begins with an evaluator determining which system attributes are to be used as the basis for system comparison. These attributes are ranked by importance to the particular problem domain, and the ranks are converted to ratings using rank order centroids. Each system under analysis is ranked against each comparison attribute and the ranks are transformed into rank order centroids. The final overall quality metric for each system is the weighted (by comparison attribute importance) sum of each attribute rating. The references provide specific examples of the process. There is little direct research on the theoretical soundness and effectiveness of the MAGIQ technique as a whole, however the use of hierarchical decomposition and the use of rank order centroids in multi-criteria decision analyses have been studied, with generally positive results. Anecdotal evidence suggests that the MAGIQ technique is both practical and useful. See also Multi-attribute utility Multi-attribute auction
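A minimal sketch of the computation described above, with made-up attribute ranks and system-versus-attribute ranks purely for illustration (the weighting uses the standard rank-order-centroid formula):

```python
# MAGIQ-style scoring sketch: ranks -> rank order centroids -> weighted sum (toy data).
def rank_order_centroids(n):
    """Weight for rank i (1 = most important) out of n items: w_i = (1/n) * sum_{j=i..n} 1/j."""
    return [sum(1.0 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)]

# Three comparison attributes, ranked by importance (index 0 = most important).
attribute_weights = rank_order_centroids(3)          # [0.611, 0.278, 0.111]

# For each attribute, the rank of each system (1 = best of the two systems here).
system_ranks = {
    "System A": [1, 2, 2],
    "System B": [2, 1, 1],
}
per_attribute_ratings = rank_order_centroids(2)       # [0.75, 0.25] for two systems

for name, ranks in system_ranks.items():
    score = sum(w * per_attribute_ratings[r - 1]
                for w, r in zip(attribute_weights, ranks))
    print(f"{name}: overall quality = {score:.3f}")
```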
https://en.wikipedia.org/wiki/Michael%20Denton
Michael John Denton (born 25 August 1943) is a British proponent of intelligent design and a Senior Fellow at the Discovery Institute's Center for Science and Culture. He holds a PhD degree in biochemistry. Denton's book, Evolution: A Theory in Crisis, inspired intelligent design proponents Phillip Johnson and Michael Behe. Biography Denton gained a medical degree from Bristol University in 1969 and a PhD in biochemistry from King's College London in 1974. He was a senior research fellow in the Biochemistry Department at the University of Otago, Dunedin, New Zealand from 1990 to 2005. He later became a scientific researcher in the field of genetic eye diseases. He has spoken worldwide on genetics, evolution and the anthropic argument for design. Denton's current interests include defending the "anti-Darwinian evolutionary position" and the design hypothesis formulated in his book Nature’s Destiny. Denton described himself as an agnostic. He is currently a senior fellow at the Discovery Institute's Center for Science and Culture. Books Evolution: A Theory in Crisis In 1985 Denton wrote the book Evolution: A Theory in Crisis, presenting a systematic critique of neo-Darwinism ranging from paleontology, fossils, homology, molecular biology, genetics and biochemistry, and argued that evidence of design exists in nature. Some book reviews criticized his arguments. He describes himself as an evolutionist and he has rejected biblical creationism. The book influenced both Phillip E. Johnson, the father of intelligent design, Michael Behe, a proponent of irreducible complexity, and George Gilder, co-founder of the Discovery Institute, the hub of the intelligent design movement. Since writing the book Denton has changed many of his views on evolution; however, he still believes that the existence of life is a matter of design. Nature's Destiny Denton still accepts design and embraces a non-Darwinian evolutionary theory. He denies that randomness accounts for the biology
https://en.wikipedia.org/wiki/Ehrling%27s%20lemma
In mathematics, Ehrling's lemma, also known as Lions' lemma, is a result concerning Banach spaces. It is often used in functional analysis to demonstrate the equivalence of certain norms on Sobolev spaces. It was named after Gunnar Ehrling. Statement of the lemma Let (X, ||·||X), (Y, ||·||Y) and (Z, ||·||Z) be three Banach spaces. Assume that: X is compactly embedded in Y: i.e. X ⊆ Y and every ||·||X-bounded sequence in X has a subsequence that is ||·||Y-convergent; and Y is continuously embedded in Z: i.e. Y ⊆ Z and there is a constant k so that ||y||Z ≤ k||y||Y for every y ∈ Y. Then, for every ε > 0, there exists a constant C(ε) such that, for all x ∈ X, ||x||Y ≤ ε||x||X + C(ε)||x||Z. Corollary (equivalent norms for Sobolev spaces) Let Ω ⊂ Rn be open and bounded, and let k ∈ N. Suppose that the Sobolev space Hk(Ω) is compactly embedded in Hk−1(Ω). Then the following two norms on Hk(Ω) are equivalent: the usual Sobolev norm ||u||Hk(Ω), built from the L2 norms of all derivatives of u up to order k, and the norm ||u||L2(Ω) + |u|Hk(Ω), where |u|Hk(Ω) is the seminorm built from the L2 norms of only the order-k derivatives of u. For the subspace of Hk(Ω) consisting of those Sobolev functions with zero trace (those that are "zero on the boundary" of Ω), the L2 norm of u can be left out to yield another equivalent norm.
https://en.wikipedia.org/wiki/List%20of%20Ukrainian%20flags
The following is a list of flags of Ukraine: State flag Presidential Standard Military flags Flags of service branches Command Standards Maritime flags Former flags Personal naval flags Former flags Government and non-military security forces Flags of Ukrainian regions Flags of oblasts Flags of cities with special status Flags of other cities Regional and minority flags Historical flags Kingdom of Galicia–Volhynia Kingdom of Galicia and Lodomeria Cossack Hetmanate Crimean Khanate (1441–1478) Ukrainian People's Republic and Ukrainian State Maritime flags Royal Family standards Organization of Ukrainian Nationalists Flags of occupational powers Ottoman Empire Poland and Lithuania Russian Empire (1654–1917) Maritime flags Habsburg Monarchy, the Austrian Empire, and Austria–Hungary from 1772 to 1918 Makhnovshchina Soviet Union and Ukrainian SSR Kingdom of Romania ending flag Nazi Germany Miscellaneous External links Flags of Ukraine from Vexillographia (in Russian) Flags Ukraine Flag
https://en.wikipedia.org/wiki/London%20equations
The London equations, developed by brothers Fritz and Heinz London in 1935, are constitutive relations for a superconductor relating its superconducting current to electromagnetic fields in and around it. Whereas Ohm's law is the simplest constitutive relation for an ordinary conductor, the London equations are the simplest meaningful description of superconducting phenomena, and form the genesis of almost any modern introductory text on the subject. A major triumph of the equations is their ability to explain the Meissner effect, wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold. Description There are two London equations when expressed in terms of measurable fields: ∂js/∂t = (ns e²/m) E and ∇ × js = −(ns e²/m) B. Here js is the (superconducting) current density, E and B are respectively the electric and magnetic fields within the superconductor, e is the charge of an electron or proton, m is the electron mass, and ns is a phenomenological constant loosely associated with a number density of superconducting carriers. The two equations can be combined into a single "London Equation" in terms of a specific vector potential As which has been gauge fixed to the "London gauge", giving: js = −(ns e²/m) As. In the London gauge, the vector potential obeys the following requirements, ensuring that it can be interpreted as a current density: ∇ · As = 0; As = 0 in the superconductor bulk; and As · n̂ = 0, where n̂ is the normal vector at the surface of the superconductor. The first requirement, also known as the Coulomb gauge condition, leads to the constant superconducting electron density as expected from the continuity equation. The second requirement is consistent with the fact that supercurrent flows near the surface. The third requirement ensures no accumulation of superconducting electrons on the surface. These requirements do away with all gauge freedom and uniquely determine the vector potential. One can also write the London equation in terms of an arbitrary gauge by simply defining , where is a scal
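The exponential decay mentioned above is set by the London penetration depth. The excerpt does not give its expression, but the standard result, obtained by combining the second London equation with Ampère's law, is λL = sqrt(m / (μ0 ns e²)); the sketch below evaluates it for an assumed, order-of-magnitude carrier density.

```python
# London penetration depth, lambda_L = sqrt(m / (mu_0 * n_s * e^2)).
# The carrier density n_s is an assumed, order-of-magnitude value for illustration.
import math

MU_0 = 4e-7 * math.pi      # vacuum permeability, H/m
E = 1.602e-19              # elementary charge, C
M_E = 9.109e-31            # electron mass, kg

def london_penetration_depth(n_s):
    """Depth (m) over which an applied magnetic field decays inside the superconductor."""
    return math.sqrt(M_E / (MU_0 * n_s * E**2))

n_s = 1e28                 # assumed superconducting carrier density, m^-3
lam = london_penetration_depth(n_s)
print(f"lambda_L ~ {lam * 1e9:.0f} nm")   # tens of nanometres for this density
```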